
The future of Cloud-Native infrastructure is Resilient and Flexible
Mon, 13 Dec 2021 18:40:31 -0000
Next-generation infrastructures that support Cloud-Native workloads must be resilient and flexible to satisfy workload requirements while also reducing the management burden on IT staff.
While much of the emphasis on the benefits of Cloud-Native infrastructure is focused on speed and agility from development to deployment, the rise of stateful containerized applications will force organizations to take resiliency, storage performance, and data services more seriously. In the Voice of the Enterprise: DevOps, Workloads & Projects 2020 study, 56% of organizations reported that more than half of their applications are stateful, and this share will rise as more production workloads run on containers.
The need for persistent storage also raises the stakes for data protection capabilities such as snapshots, replication, backup and disaster recovery. Even when it comes to non-mission critical and non-business critical workloads such as test/dev, organizations have minimal tolerance for downtime or data loss. The rising customer expectations for resiliency will only increase pressure on organizations to implement storage systems with rich data protection capabilities and the ability to automate the deployment of these features based on the importance of a particular workload.
Data placement and optimization continue to be key concerns in large-scale environments, and it is important for next-generation systems to provide intelligent load balancing to position data across nodes in a manner that makes optimal use of resources. These data placement capabilities need to be automated, since many of these operations will occur in the background when workloads are less active.
Though it is tempting to go with a clean-sheet approach when designing next-generation infrastructures for emerging Cloud-Native workloads, workloads that are branded as “legacy” do not disappear, even if they are not top of mind in planning discussions. In interactions with organizations building out Cloud-Native infrastructures, it is far more common to find them running containerized workloads on top of, or inside of, VMs today than building a new infrastructure silo dedicated to Cloud-Native.
Just as VMs have not completely displaced workloads running on non-virtualized physical systems, we are still a long way from seeing all of the applications currently running in VMs shift over completely to containers. Infrastructures that have the flexibility to provide compute and storage resources for physical, virtualized, and containerized workloads simultaneously will be necessary for many years.
For more information, please read the 451 Research Special Report:
Infrastructure Requirements for a Cloud-Native World.
Author: Henry Baltazar
Copyright © 2021 S&P Global Market Intelligence.
The content of this artifact is for educational purposes only. 451 Research, S&P Global Market Intelligence does not endorse any companies, technologies, products, services, or solutions.
Related Blog Posts

PowerProtect Cyber Recovery – Abilities and Improvements in the Cloud
Thu, 26 Jan 2023 21:42:07 -0000
As organizations progress on their cloud journey, their presence in the cloud increases: they are running some or all of their development and production environments there.
Although running in the cloud has its own benefits, the need for cyber recovery capabilities doesn’t change when moving from on-premises to the cloud, because the dangers remain the same.
Organizations understand the benefits and contributions of a working cyber recovery solution, and PowerProtect Cyber Recovery provides exactly that. It is already supported on AWS.
The design is simple: organizations run their Cyber Recovery vault in their AWS cloud account, and their production site is also in the cloud. The production environment and the vault can be deployed in different regions, or even in different cloud accounts. (It is also possible to run the Cyber Recovery vault in the cloud in conjunction with an on-premises production environment, but this option is less recommended.) The production data could be protected with PowerProtect Data Manager (for example) and PowerProtect DDVE, and replicated to the vault DDVE.
Until the recent PowerProtect Cyber Recovery 19.11 release, these environments were missing an important component that completes the solution in the cloud: CyberSense.
CyberSense will soon be supported on AWS, so you will be able to deploy it as an EC2 instance and analyze your copies.
Also new in the PowerProtect Cyber Recovery 19.11 release is the “Secure Copy Analyze” action, which saves you from having to run “Secure Copy” first, followed by “Analyze” on the copy after it’s created.
You can now simply select “Secure Copy Analyze” to combine both actions.
You can also replace your separate “Secure Copy” and “Analyze” schedules with a single “Secure Copy Analyze” schedule. This means that if you have multiple sites replicating to the vault, or multiple policies, you can reduce the number of schedules that you need to maintain.
These new features are exciting on their own, but why stop there?
PowerProtect Cyber Recovery 19.11 also allows you to deploy the Cyber Recovery solution on Azure! Here’s a simplified view:
In this figure, notice that the Cyber Recovery components are the same as those in the on-premises and AWS deployments. The Cyber Recovery host and a PowerProtect DD system are on an isolated subnet, and the Jump host is on another isolated subnet providing access to the Cyber Recovery server and the DDVE. (CyberSense is not yet supported on Azure.)
Note: Deploying PowerProtect Cyber Recovery in the cloud must be performed by Dell Technologies Professional Services.
Author: Eli Persin

Driving Innovation with the Dell Validated Platform for Red Hat OpenShift and IBM Instana
Wed, 14 Dec 2022 21:20:39 -0000
“There is no innovation and creativity without failure. Period.” – Brené Brown
In the Information Technology field today, it seems impossible to go five minutes without someone using some variation of the word innovate. We are constantly told we need to innovate to stay competitive and remain relevant. I don’t want to spend time arguing the importance of innovation, because if you’re reading this, you probably already understand it.
What I do want to focus on is the role that failure plays in innovation. One of the biggest barriers to innovation is the fear of failure. We have all experienced some level of failure in our lives, and the costly mistakes can be particularly memorable. To create a culture that fosters innovation, we need to create an environment that reduces the costs associated with failure – these can be financial costs, time costs, or reputation costs. This is why one of the core tenets of modern application architecture is “fail fast”. Put simply, it means to identify mistakes quickly and adjust. The idea is that a flawed process or assumption will cost more to fix the longer it is present in the system. With traditional waterfall processes, that flaw could be present and undetected for months during the development process, and in some cases, even make it through to production.
While the benefits of fail fast can be easy to see, implementing it can be a bit harder. It involves streamlining not just the development process, but also the build process, the release process, and having proper instrumentation all the way through from dev to production. This last part, instrumentation, is the focus of this article. Instrumentation means monitoring a system to allow the operators to:
- See the current state
- Measure application performance
- Detect when something is not operating as expected
While the need for instrumentation has always been present, developers often face difficult timelines, and the first feature areas that tend to be cut are testing and instrumentation. Cutting them can help in the short term, but it often ends up costing more down the road, both financially and in the end-user experience.
IBM Instana is a tool that provides observability of complete systems, with support for over 250 different technologies. This means that you can deploy Instana into the environment and start seeing valuable information without requiring any code changes. If you are supporting web-based applications, you can take things further by including basic script references in the code to gain insights from client statistics as well.
Announcing Support for Instana on the Dell Validated Platform for Red Hat OpenShift
IBM Instana can be installed into the Dell Validated Platform for Red Hat OpenShift by using an Operator, a Helm chart, or YAML files.
The simplest way is to use the Operator. This consists of the following steps, sketched below:
- Create the instana-agent project
- Set the policy permissions for the instana-agent service account
- Install the Operator
- Apply the Operator Configuration using a custom resource YAML file
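As a rough illustration, here is what these steps can look like using the oc CLI. This is a minimal sketch rather than the official procedure: the agent key, endpoint host, zone name, cluster name, and file name are placeholders, and the exact custom resource fields should be taken from your Instana tenant and the IBM Instana documentation for your agent version.

```bash
# 1. Create the instana-agent project (namespace)
oc new-project instana-agent

# 2. Set the policy permissions for the instana-agent service account
oc adm policy add-scc-to-user privileged -z instana-agent -n instana-agent

# 3. Install the Instana agent Operator (for example, from OperatorHub in the
#    OpenShift console, or by applying a Subscription manifest with oc apply)

# 4. Apply the Operator configuration using a custom resource YAML file.
#    All values below are illustrative placeholders.
cat <<'EOF' > instana-agent-cr.yaml
apiVersion: instana.io/v1
kind: InstanaAgent
metadata:
  name: instana-agent
  namespace: instana-agent
spec:
  zone:
    name: dvp-openshift-zone          # placeholder zone name
  cluster:
    name: dvp-openshift-cluster       # placeholder cluster name
  agent:
    key: <YOUR-AGENT-KEY>             # agent key from your Instana tenant
    endpointHost: <INSTANA-ENDPOINT>  # IBM cloud endpoint or a private endpoint
    endpointPort: "443"
EOF

oc apply -f instana-agent-cr.yaml
```

The endpointHost value is where you choose between IBM’s cloud endpoint and a privately hosted one, as described next.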
You can configure IBM Instana to point to IBM’s cloud endpoint or, for high-security environments, to a private IBM Instana endpoint hosted internally.
Figure 1: Infrastructure view of the OpenShift Cluster
Once configured, the IBM Instana agent starts sending data to the endpoint for analysis. The graphical view in Figure 1 shows the overall health of the Kubernetes cluster, and the node on which each resource is located. The resources in a normal state are gray: any resource requiring attention would appear in a different color.
Figure 2: Cluster View
We can also see the metrics across the cluster, including CPU and Memory statistics. The charts are kept in time sync, so if you highlight a given area or narrow the time period, all of the charts remain in the same context. This makes it easy to identify correlations between different metrics and events.
Figure 3: Application Calls View
Looking at the application calls allows you to see how a given application is performing over time. Being able to narrow down to one-second granularity means that you can actually follow individual calls through the system and see things like the parameters passed in the call. This can be incredibly helpful for troubleshooting intermittent application issues.
Figure 4: Application Dependencies View
The dependencies view gives you a graphical representation of all the components within a system and how they relate to each other, in a dependency diagram. This is critically important in modern application design because as you implement a larger number of more focused services, often created by different DevOps teams, it can be difficult to keep track of what services are being composed together.
Figure 5: Application Stack Traces
The application stack trace allows you to walk the stack of an application to see what calls were made, and how much time each call took to complete. Knowing that a page load took five seconds can help indicate a problem, but being able to walk the stack and identify that 4.8 seconds was spent running a database query (and exactly what query that was) means that you can spend less time troubleshooting, because you already know exactly what needs to be fixed.
For more information about the Dell Validated Platform for Red Hat OpenShift, see our launch announcement: Accelerate DevOps and Cloud Native Apps with the Dell Validated Platform for Red Hat OpenShift | Dell Technologies Info Hub.
Author: Michael Wells, PowerFlex Engineering Technologist
Twitter: @SqlTechMike
LinkedIn