
Blogs

Blogs on various topics exploring Dell Technologies APEX


Tags: APEX, APEX Private Cloud

Serverless Workload and APEX Private Cloud

Juan Carlos Reyes

Wed, 11 May 2022 17:45:02 -0000


What is a serverless service?

To begin answering this question, let’s build upon my previous blog in which I walked through how a developer can deploy a machine-learning workload on APEX Private Cloud Services. Now, I’ll expand on this workload example and demonstrate how to deploy it as a serverless service.

A serverless service is constructed in a serverless architecture, a development model that allows developers to build and run applications without managing the infrastructure. Combining serverless architecture and APEX Cloud Services can provide developers with a robust environment for their application development.

Knative Serving and Eventing

Knative is a popular open-source Kubernetes-based platform to deploy and manage modern serverless workloads. It consists of two main components: Serving and Eventing.

Knative Serving builds on Kubernetes and a network layer to support deploying and serving serverless applications/functions. Serving is easy to get started with, and it scales to support complex scenarios.

The Knative Serving project provides middleware components that enable the following (a minimal Service manifest is sketched after the list):

  • Rapid deployment of serverless containers
  • Autoscaling, including scaling pods down to zero
  • Support for multiple networking layers such as Ambassador, Contour, Kourier, Gloo, and Istio for integration into existing environments
  • Point-in-time snapshots of deployed code and configurations
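
To make the Serving model concrete, here is a minimal Service manifest adapted from the Knative hello-world sample; the name, image, and environment variable are illustrative, not part of my deployment:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                        # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go   # public sample image
          env:
            - name: TARGET
              value: "World"

Applying this single manifest gives you a revisioned deployment and a routable URL, with autoscaling (including scale to zero) handled by Serving.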

Knative Eventing enables developers to use an event-driven architecture with serverless applications. An event-driven architecture is based on the concept of decoupled relationships between event producers that create events and event consumers, or sinks, that receive events.

Examples of event sources for applications include Slack, Zendesk, and VMware.
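
As a sketch of how Eventing wires producers to sinks, the following hypothetical PingSource emits a small JSON event once a minute and delivers it to the Service defined above:

apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: ping-demo                    # hypothetical source name
spec:
  schedule: "*/1 * * * *"            # cron syntax: once per minute
  contentType: "application/json"
  data: '{"message": "ping"}'
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: hello                    # the event consumer (sink)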

Deployment demo

Following the Knative installation instructions, I configured Knative in my cluster. Next, I configured real DNS in my environment.

I also installed the Knative CLI (kn) through Homebrew to make deploying Knative services easier. Using the kn CLI, I wrapped my Flask server in the Serving framework.
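A kn invocation for that step might look like the following; the service name, image, and port are illustrative, since the original command isn't shown:

kn service create hotdog-ui --image myregistry/hotdog-flask:latest --port 5000

After a successful deployment, I used the following command to view the current Knative services: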

kubectl get ksvc

You can see from the screenshots how the pods get created and destroyed as the service receives traffic.

 

Now, the serverless user interface can request predictions from my model.

KServe

My first attempt to wrap the TensorFlow service with Knative wasn't effective. The service dropped the first requests, and response times were slower: the spinning up and down of the pods was causing the delays and the dropped requests. I fixed these issues by keeping a constant heartbeat so that the pods would stay active. Unfortunately, this workaround defeats some of the benefits of Knative, so it was not the way forward.

In my quest to have the model in a serverless framework, I came across Kubeflow.

Kubeflow is a free and open-source machine-learning platform that uses machine-learning pipelines to orchestrate complicated workflows running on Kubernetes.

Kubeflow integrates with Knative to deploy and train ML models. KServe is the part of Kubeflow used for serving machine-learning models on arbitrary frameworks. KServe recently graduated from the Kubeflow project, and you can configure it by itself without installing the whole Kubeflow suite.

Following the KServe installation guide, I configured it in my cluster.

 

Creating the YAML file for this service is straightforward enough. However, the tricky part was entering the correct storageUri for my APEX Private Cloud environment. This parameter is the path to the model's location, and it looks a little different depending on the storage used. For APC, we need to save the model in a persistent volume claim (PVC).

Here is the YAML code snippet I used to create the PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim-1
spec:
  storageClassName: vsan-default-storage-policy   # vSAN storage class in the APC cluster
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Once the PVC is created, we need to copy the model into it. I achieved this by creating a pod that mounts the volume. After the pod is created, we can copy the model to the PVC directory.

#pv-model-store.yaml
apiVersion: v1
kind: Pod
metadata:
  name: model-store-pod
spec:
  volumes:
    - name: model-store
      persistentVolumeClaim:
        claimName: task-pv-claim-1        # the PVC created above
  containers:
    - name: model-store
      image: ubuntu
      command: [ "sleep" ]
      args: [ "infinity" ]                # keep the pod alive so files can be copied in
      volumeMounts:
        - mountPath: "/pv"
          name: model-store
      resources:
        limits:
          memory: "1Gi"
          cpu: "1"
  imagePullSecrets:
    - name: regcred

By running the following command, we can copy the model to the PVC:

kubectl cp [model folder location] [name of pod with PVC]:[new location within PVC] -c model-store
kubectl cp /home/jreyes/HotDog_App/hotdog_image_classification_model/new_model model-store-pod:/pv/hotdog/1 -c model-store

The critical part is not forgetting to add a version number to the model path; in this case, I added version number 1 at the end of the path.

Once the model is stored, we can log in to the pod to verify the contents using the following command:

kubectl exec -it model-store-pod -- bash
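
Inside the pod, a quick listing confirms the copy (assuming the path used above); a TensorFlow SavedModel directory should contain saved_model.pb and a variables folder:

ls /pv/hotdog/1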

After verification, we need to delete the pod to free up the PVC.
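
Deleting the helper pod is a one-liner:

kubectl delete pod model-store-pod

Because the claim was created as ReadWriteOnce, releasing it here lets the inference service mount it next.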

We can now create the KServe InferenceService from the following YAML file, which uses the PVC.

apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
  name: "hotdog"
spec:
  predictor:
    tensorflow:
      storageUri: "pvc://task-pv-claim-1/hotdog"

The TensorFlow Serving container automatically looks for the version inside the folder, so there is no need to add the version number to the storageUri path.

After applying the YAML file, we can find the address of our KServe service with the following command:

kubectl get isvc

With this address, we can update the ResNet client to test the model.
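
Alternatively, for a quick smoke test before touching the client, the TensorFlow predictor answers the TensorFlow Serving REST API. Something like the following should work; the host header, payload file, and ingress address are illustrative:

curl -s -H "Host: hotdog.default.example.com" -d @./image_payload.json http://<ingress-ip>/v1/models/hotdog:predict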

Here are the predictions when we run the client with two different images:

We have successfully made our user interface and model use a serverless framework. The final step is to update the Flask server to point to the new address.

Note: I could not get an inference service to listen on two ports at a time (REST and gRPC). My solution was to create two inference services and adjust the Flask code as necessary.

Conclusion

Now we have a complete image-recognition application on a serverless architecture. The serverless architecture grants us greater resource flexibility through autoscaling and facilitates canary deployments for the machine-learning model. Furthermore, combining this architecture with APEX Private Cloud Services provides a powerful and flexible environment for many edge application deployments. In my next blog, I will cover migrating the application to the public cloud to compare the differences and provide a cost analysis.

Until next time!

 Author: Juan Carlos Reyes

Tags: PowerScale, APEX, aaS

Introducing File Services for APEX Data Storage Services

Vincent Shen

Thu, 16 Dec 2021 16:41:35 -0000


This is the first blog in a series introducing the file services portion of APEX Data Storage Services (DSS). It focuses on the look and feel of subscribing to File Services in APEX DSS.

APEX Data Storage Services is an on-premises, as-a-service portfolio of scalable and elastic storage resources designed for OpEx treatment. This offering enables you to optimize for simplicity by eliminating over- and under-provisioning and by eliminating complex procurement and migration cycles.

You can easily manage your as-a-service experience through a single interface — the APEX Console. You can increase agility by scaling up and down to respond dynamically to customer and workload requirements, and only pay for what you use at a single rate with no overage fees.

You begin by specifying your order name and selecting the data service that is best suited for your workload. As of this writing, you can select Block Services or File Services. In this example, choose File Services.

Now select a performance tier. Depending on your workload type and requirements, Dell Technologies offers three performance tiers.

If your needs change, you can always increase your performance level. You can also view more granularity by clicking Show Details.

In the next screen, specify your base capacity, which is the minimum amount of storage that you are committing to.

Besides the base capacity, we also provide on-demand capacity. Anything you use above the base capacity is on-demand capacity, which is measured every five minutes. The great advantage of APEX Data Storage Services is that customers pay a single dollar-per-terabyte rate for both base and on-demand usage. For example, if you commit to a 100 TB base capacity and consume 120 TB in a given period, the extra 20 TB is simply billed as on-demand usage at the same rate.

If on-demand usage runs well above the base capacity, you have the option to raise the base capacity at any time without affecting the length of the term. At the end of the term, the base capacity can be lowered again.

There is an additional buffer available for incremental growth, ensuring that you don’t run out of available capacity. If your utilization grows, additional capacity is deployed and the buffer increases.  

If capacity requirements decrease, the unused infrastructure is removed in coordination with the Customer Success Manager (CSM), whose role we'll touch on in a moment.

The next step is to select the subscription term length. We offer two options: a one-year term and a three-year term. On this screen, you can also check the base capacity rate and the on-demand capacity rate by clicking Show Details.

After you enter your site information and fill out the survey, everything is pretty much done. APEX Data Storage Services will be delivered with a time to value (TTV) of 14 days.  Capacity expansions also require up to 14 days to deploy. Your CSM will coordinate the expansions.

Note: TTV is the time measured between order acceptance and activation. It is subject to acceptance of APEX terms by the required parties, site qualification (which must be completed before order placement), and participation in pre-order planning. Product availability, international shipping, holidays, and other factors may affect deployment time.

In the next blog, I will explain how the APEX DSS file service and its components work. Hope you enjoy it!

Author: Vincent Shen


Tags: PowerStore, PowerScale, APEX

Dell APEX Data Storage Service (DSS) in Colocation

Vincent Shen

Wed, 19 Jan 2022 21:03:40 -0000


With the December 2021 update to APEX DSS, Dell Technologies now offers a colocation option for APEX DSS customers. This article walks you through the new feature in three parts:

  • APEX DSS in Colocation: Overview
  • APEX DSS in Colocation: Architecture
  • APEX DSS in Colocation: Shared responsibility model

APEX DSS in Colocation: Overview

Dell Technologies APEX Data Storage Services is an as-a-Service portfolio of scalable, elastic, outcome-based storage resources. Customers pay only for what they use, can scale up and down, and receive the service level they need, on infrastructure that is owned and maintained by Dell Technologies.

APEX Data Storage Services in colocation are storage services deployed in Dell-managed colocation facilities hosted by Dell Technologies' colocation data center partners. Dell Technologies offers leading storage services for file, block, and object storage, backed by proven, best-in-class Dell storage technologies: file and object storage are provided with Dell PowerScale appliances, and block storage is provided with Dell PowerStore appliances.

The storage services include a core set of infrastructure management capabilities, from deployment to ongoing monitoring, operations, optimization, and support, plus a clearly defined process for renewals and decommissioning at the end of service. A self-service portal, the APEX Console, allows customers to identify, configure, deploy, monitor, and expand the solutions quickly. As with non-colocation APEX deployments, you can file a service ticket for advanced operations.

APEX DSS in Colocation: Architecture

The following figure shows the overall architecture of the APEX DSS in Colocation.

Dell Technologies data centers host the APEX Console and the APEX backend systems. The APEX Console is a secure portal where customers manage and monitor their storage in APEX Data Storage Services in colocation.

The Management Zone in the colocation is used for managing the service components in the management and customer zones, including availability management, patch management, and logging and monitoring.

The Customer Zone is where the storage appliances reside, and where customer data is stored. Customers have three options for accessing the customer zone and its data:

  • (A) From the customer's own colocation environment: a direct connection from a customer-provided instance, such as a bare-metal server or virtual machine, hosted with the same colocation provider/partner that Dell Technologies uses; it connects over a dedicated fabric port and VLAN.
  • (B) Through the customer's cloud service provider (CSP): a direct connection from a cloud (a hyperscaler or another cloud connection available in the same colocation facility, from the same provider that Dell Technologies uses); it connects over a dedicated fabric port and VLAN.
  • (C) From the customer's on-premises data center: replication from on premises over MPLS or direct Internet; it connects into the Dell Technologies colocation partner fabric with a dedicated cross-connect.

APEX DSS in Colocation: Shared responsibility model

The security of APEX Data Storage Services in colocation is a shared responsibility between Dell Technologies and the customer:

  • Dell Technologies is responsible for securing the data storage service and protecting the infrastructure that runs the service. This infrastructure is composed of hardware, software, networking, and facilities.
  • The customer is responsible for securing their data within the storage service. Ensuring data security and maintaining security controls for accessing the data are always the responsibility of the customer.

The following figure shows the areas of responsibility between Dell Technologies and customers.

The overall security of this storage service is achieved through the shared responsibilities of Dell Technologies and customers.

Conclusion

To recap: for customer-owned storage inside a customer's on-premises data center, the whole stack is owned, maintained, and paid for by the customer.

The difference is that when consuming Dell APEX Data Storage Services in colocation, many responsibilities shift from you to Dell Technologies, relieving you of the operational burden of securing your infrastructure.

Author: Vincent Shen
Tags: PowerScale, AWS, object storage, APEX

How to Create Object Storage in Dell APEX Data Storage Services (DSS)

Vincent Shen

Wed, 19 Jan 2022 21:18:36 -0000


As of the December 2021 release of APEX DSS, Dell now supports creating object storage! APEX File Services provides multi-protocol data access and includes support for the S3 (Simple Storage Service) Object protocol.

During activation of APEX File deployments (or subsequently, in response to a Service Request), Dell Services will enable the specific data access protocols (SMB, NFS, and S3) as requested by the customer.

Object capabilities are a good fit for file users with complex application designs that demand file and object access to the same data, expanding file storage to cloud-native workloads without the need to make a copy of the data set.

Here is a walkthrough of how to create S3 object storage in APEX DSS:

  1. Launch the OneFS web UI. Make sure the S3 object service is enabled by clicking Protocols > Object storage (S3) > Global settings.

  2. Create the secret key for the end user. In this case, I will create the key for the user vince. Under the Key management tab in the Object storage (S3) panel, click Select user, select the user vince, and click Create a key. Note the Access id and the corresponding secret key for future use. In my case they are:

     Access id: 1_vince_accid

     Secret key: yHVUjcEJR1u1wq3glGJleAqXyVh6

  3. Create the S3 bucket. Select the Buckets tab under the Object storage (S3) panel and click Create bucket. In my example, I will create a bucket with the following parameters:

     Bucket name: vince

     Owner: vince

     Path: /ifs/vince

  4. Test your S3 object storage. You can use any S3 client tool for this purpose. In my case, I am using CloudBerry Explorer to set up the connection.

Note: By default, the connection is encrypted with an SSL certificate. The default port for HTTPS is 9021, which you can configure in the OneFS web UI under Global settings.
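
If you prefer a scripted check over a GUI client, the AWS CLI works as well. This is a sketch assuming the access id and secret key from step 2 and a placeholder OneFS node address; --no-verify-ssl skips validation of a self-signed certificate:

aws configure set aws_access_key_id 1_vince_accid
aws configure set aws_secret_access_key yHVUjcEJR1u1wq3glGJleAqXyVh6
aws s3 ls s3://vince --endpoint-url https://<onefs-node>:9021 --no-verify-ssl
aws s3 cp ./hello.txt s3://vince/hello.txt --endpoint-url https://<onefs-node>:9021 --no-verify-ssl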

Conclusion

Using APEX DSS, you can easily deploy your S3 object storage in minutes. With this capability, clients can efficiently access APEX DSS file-based data as objects. OneFS S3 in APEX DSS is designed as a first-class protocol, with features for bucket and object operations, security implementation, and a management interface.

In our next blog, we will go through the colocation feature in APEX DSS for file.

Author: Vincent Shen



Tags: cloud, video analytics, APEX

Cloud-Native Workloads: Object Detection in Video Streams

Bob Ganley

Wed, 02 Mar 2022 22:13:52 -0000


See containers and Kubernetes in action with a streaming video analysis Advanced Driver Assistance System on APEX Cloud Services.

Initially published on November 11, 2021 at https://www.dell.com/en-us/blog/cloud-native-workloads-object-detection-in-video-streams/.

A demo may be the best way to make a concept real in the IT world. This blog describes one of a series of recorded demonstrations illustrating the use of VMware Tanzu on APEX Cloud Services as the platform for open-source, cloud-native applications that leverage containers with Kubernetes orchestration.

This week we're showcasing an object detection application for an Advanced Driver-Assistance System (ADAS) that monitors road traffic in video sources, leveraging several open-source projects to analyze streaming data with an artificial intelligence and machine learning (AI/ML) algorithm.

The base platform is VMware Tanzu running on APEX Private Cloud Services. APEX Private Cloud simplifies VMware cloud adoption as a platform for application modernization. It is based on Dell EMC VxRail with VMware vSphere Enterprise Plus and vSAN Enterprise, available as a 1- or 3-year subscription, with hardware, software, and services (deployment, rack integration, support, and asset recovery) included in a single monthly price. VMware Tanzu Basic Edition was added post-deployment to create the Container-as-a-Service (CaaS) platform, with Kubernetes running integrated in the vSphere hypervisor.

Object detection in video sources requires managing streaming data for analysis, both in real time and by storing that data for later analysis. This demo includes the newly announced Dell EMC ObjectScale object storage platform, which was designed for Kubernetes, as well as the innovative Dell EMC Streaming Data Platform for ingesting, storing, and analyzing continuously streaming data in real time.

The image recognition application leverages several open-source components:

  • Pravega, software that implements a storage abstraction called a “stream,” which is at the heart of the Streaming Data Platform.
  • Apache Flink, the real-time analytics engine for the object detection computations.
  • TensorFlow, for the object detection model.
  • Jupyter, as the development environment for data flow and visualization.

The demo shows these components running in Tanzu Kubernetes Grid clusters that host the pieces of the object detection pipeline. It is presented from the perspective of a data scientist, who configures the projects and data flows in the Streaming Data Platform and sets up the Jupyter notebooks to push the data into the Pravega stream and display the video with the detected objects.

 You can view the demo here.

Demos like these are a great way to see how Dell Technologies components can be combined to create a modern application environment. Please view this demo and give us feedback on other demos you'd like to see in the future.

 You can find more information on Dell Technologies solutions with VMware Tanzu here.

Author: Bob Ganley

