New and Updated Terraform Providers for Dell Infrastructure in Q2 2023 Release
Thu, 29 Jun 2023 12:35:34 -0000
Last quarter we announced the first release of Terraform providers for Dell infrastructure. Now Terraform providers are also part of the Q2 release cadence of Dell infrastructure as code (IaC) integrations. We are excited to announce the following new features for the Terraform integrations for Dell infrastructure:
- v1.0 of the provider for OpenManage Enterprise
- v1.0 of the provider for PowerMax
- v1.1 of the provider for PowerFlex
- v1.1 of the provider for PowerStore
Terraform provider for OpenManage Enterprise v1.0
OpenManage Enterprise simplifies large-scale PowerEdge infrastructure management. You can define templates to manage the configuration of different groups of servers based on the workloads running on them. You can also create baseline versions for things like firmware and immediately get a report of noncompliance with the baseline. Now, as the scale of deployment increases—for example, in edge use cases—the configuration management can itself become arduous. This is where Terraform can manage the state of all the configurations and baselines in OpenManage Enterprise and deploy them to the server inventory as well.
The following resources and data sources are available in v1.0 of the OpenManage Enterprise provider:
Resources:
- Configuration baseline resource
- Configuration compliance resource
- Template resource
- Deployment resource
Data sources:
- Baseline compliance data source
- Group device data source
- Template data source
- VLAN network data source
Here are some examples of how to use OpenManage Enterprise resources and data sources to create and manage objects, and to query information from them:
Creating baselines
- Create baseline using service tags:
resource "ome_configuration_baseline" "baseline_name" {
  baseline_name      = "Baseline Name"
  device_servicetags = ["MXL1234", "MXL1235"]
}
- Create baseline using device IDs:
resource "ome_configuration_baseline" "baseline1" {
  baseline_name   = "baseline1"
  ref_template_id = 745
  device_ids      = [10001, 10002]
  description     = "baseline description"
}
- Create baseline using device service tag with daily notification scheduled:
resource "ome_configuration_baseline" "baseline2" {
  baseline_name      = "baseline2"
  ref_template_id    = 745
  device_servicetags = ["MXL1234", "MXL1235"]
  description        = "baseline description"
  schedule           = true
  notify_on_schedule = true
  email_addresses    = ["test@testmail.com"]
  cron               = "0 30 11 * * ? *"
  output_format      = "csv"
}
- Create baseline using device IDs with daily notification on status changing to noncompliant:
resource "ome_configuration_baseline" "baseline3" {
  baseline_name   = "baseline3"
  ref_template_id = 745
  device_ids      = [10001, 10002]
  description     = "baseline description"
  schedule        = true
  email_addresses = ["test@testmail.com"]
  output_format   = "pdf"
}
Compliance against baseline
- Remediate baseline for the specified target devices:
resource "ome_configuration_compliance" "remediation0" {
  baseline_name = "baseline_name"
  target_devices = [
    {
      device_service_tag = "MX12345"
      compliance_status  = "Compliant"
    }
  ]
}
- Remediate baseline for the specified target devices with scheduling:
resource "ome_configuration_compliance" "remediation1" {
  baseline_name = "baseline_name"
  target_devices = [
    {
      device_service_tag = "MX12345"
      compliance_status  = "Compliant"
    }
  ]
  run_later = true
  cron      = "0 00 11 14 02 ? 2032"
}
Template creation and management
- Create a template with reference device id:
resource "ome_template" "template_1" {
  name         = "template_1"
  refdevice_id = 10001
}
- Create a template with reference device servicetag:
resource "ome_template" "template_2" {
  name                 = "template_2"
  refdevice_servicetag = "MXL1234"
}
- Create a template with fqdds as NIC:
resource "ome_template" "template_3" {
  name         = "template_3"
  refdevice_id = 10001
  fqdds        = "NIC"
}
Data source examples
- Fetch the main data source objects:
# Get the configuration compliance report for a baseline
data "ome_configuration_report_info" "cr" {
  baseline_name = "BaselineName"
}

# Get device IDs and service tags of all devices that belong to a specified list of groups
data "ome_groupdevices_info" "gd" {
  device_group_names = ["WINDOWS"]
}

# Get the template details
data "ome_template_info" "data-template-1" {
  name = "template_1"
}

# Get details of all the VLAN networks
data "ome_vlannetworks_info" "data-vlans" {
}
The following set of examples uses locals heavily. Locals in Terraform is a way to assign a name to an expression, allowing it to be used multiple times within a module without repeating it. These named expressions are evaluated once and can then be referenced multiple times in other parts of a module configuration. This makes your configurations easier to read and maintain. Check out the Local Values topic in the HashiCorp documentation to learn more.
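Before moving on, here is a minimal illustration of the idea (the resource and names below are invented purely for this example): a local value is declared once and then referenced anywhere in the module with the `local.` prefix.

```hcl
# Hypothetical example: declare a value once, reuse it below
locals {
  default_description = "managed by Terraform"
}

resource "ome_template" "example" {
  name        = "example_template"
  # The expression is evaluated once and can be referenced many times
  description = local.default_description
}
```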
Let us continue with the examples:
- Get VLAN and template objects:
data "ome_vlannetworks_info" "vlans" {
}

data "ome_template_info" "template_data" {
  name = "template_4"
}
- Fetch VLAN network ID from VLAN name for updating VLAN template attributes:
locals {
  vlan_network_map = {
    for vlan_network in data.ome_vlannetworks_info.vlans.vlan_networks :
    vlan_network.name => vlan_network.vlan_id
  }
}
- Modify the attributes required for updating a template for assigning identity pool:
locals {
  attributes_value = tomap({
    "iDRAC,IO Identity Optimization,IOIDOpt 1 Initiator Persistence Policy" : "WarmReset, ColdReset, ACPowerLoss"
    "iDRAC,IO Identity Optimization,IOIDOpt 1 Storage Target Persistence Policy" : "WarmReset, ColdReset, ACPowerLoss"
    "iDRAC,IO Identity Optimization,IOIDOpt 1 Virtual Address Persistence Policy Auxiliary Powered" : "WarmReset, ColdReset, ACPowerLoss"
    "iDRAC,IO Identity Optimization,IOIDOpt 1 Virtual Address Persistence Policy Non Auxiliary Powered" : "WarmReset, ColdReset, ACPowerLoss"
    "iDRAC,IO Identity Optimization,IOIDOpt 1 IOIDOpt Enable" : "Enabled"
  })
  attributes_is_ignored = tomap({
    "iDRAC,IO Identity Optimization,IOIDOpt 1 Initiator Persistence Policy" : false
    "iDRAC,IO Identity Optimization,IOIDOpt 1 Storage Target Persistence Policy" : false
    "iDRAC,IO Identity Optimization,IOIDOpt 1 Virtual Address Persistence Policy Auxiliary Powered" : false
    "iDRAC,IO Identity Optimization,IOIDOpt 1 Virtual Address Persistence Policy Non Auxiliary Powered" : false
    "iDRAC,IO Identity Optimization,IOIDOpt 1 IOIDOpt Enable" : false
  })
  template_attributes = data.ome_template_info.template_data.attributes != null ? [
    for attr in data.ome_template_info.template_data.attributes : tomap({
      attribute_id = attr.attribute_id
      is_ignored   = lookup(local.attributes_is_ignored, attr.display_name, attr.is_ignored)
      display_name = attr.display_name
      value        = lookup(local.attributes_value, attr.display_name, attr.value)
    })
  ] : null
}
- Create a template in a resource and uncomment the lines as shown here to update the template to attach the identity pool and VLAN:
resource "ome_template" "template_4" {
  name                 = "template_4"
  refdevice_servicetag = "MXL1234"
  # attributes         = local.template_attributes
  # identity_pool_name = "IO1"
  # vlan = {
  #   propogate_vlan     = true
  #   bonding_technology = "NoTeaming"
  #   vlan_attributes = [
  #     {
  #       untagged_network = lookup(local.vlan_network_map, "VLAN1", 0)
  #       tagged_networks  = [0]
  #       is_nic_bonded    = false
  #       port             = 1
  #       nic_identifier   = "NIC in Mezzanine 1A"
  #     },
  #     {
  #       untagged_network = 0
  #       tagged_networks  = [lookup(local.vlan_network_map, "VLAN1", 0), lookup(local.vlan_network_map, "VLAN2", 0), lookup(local.vlan_network_map, "VLAN3", 0)]
  #       is_nic_bonded    = false
  #       port             = 1
  #       nic_identifier   = "NIC in Mezzanine 1B"
  #     },
  #   ]
  # }
}
- Modify the attributes required for updating a template using attribute IDs:
# Get the template details
data "ome_template_info" "template_data1" {
  name = "template_5"
}

locals {
  attributes_map = tomap({
    2740260 : "One Way"
    2743100 : "Disabled"
  })
  template_attributes = data.ome_template_info.template_data1.attributes != null ? [
    for attr in data.ome_template_info.template_data1.attributes : tomap({
      attribute_id = attr.attribute_id
      is_ignored   = attr.is_ignored
      display_name = attr.display_name
      value        = lookup(local.attributes_map, attr.attribute_id, attr.value)
    })
  ] : null
}
- Create a template and update the attributes of the template:
# Attributes are only updatable and are not applicable during the create operation.
# The existing list of attributes can be fetched from a template with the ome_template_info data source, as defined above.
# The modified attributes list should be passed to update the attributes for a template.
resource "ome_template" "template_5" {
  name                 = "template_5"
  refdevice_servicetag = "MXL1234"
  attributes           = local.template_attributes
}
- Create multiple templates with template names and reference devices:
resource "ome_template" "templates" {
  count                = length(var.ome_template_names)
  name                 = var.ome_template_names[count.index]
  refdevice_servicetag = var.ome_template_servicetags[count.index]
}
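The example above assumes that two list variables are declared elsewhere in the configuration. A minimal sketch of those declarations might look like this (the default values are illustrative only):

```hcl
# Hypothetical variable declarations assumed by the count-based example above
variable "ome_template_names" {
  type    = list(string)
  default = ["template_a", "template_b"]
}

variable "ome_template_servicetags" {
  type    = list(string)
  default = ["MXL1234", "MXL1235"]
}
```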
- Clone a deploy template to create compliance template:
resource "ome_template" "template_6" {
  name             = "template_6"
  reftemplate_name = "template_5"
  view_type        = "Compliance"
}
- Create a deployment template from an XML:
resource "ome_template" "template_7" {
  name    = "template_7"
  content = file("../testdata/test_acc_template.xml")
}
- Create a compliance template from an XML:
resource "ome_template" "template_8" {
  name      = "template_8"
  content   = file("../testdata/test_acc_template.xml")
  view_type = "Compliance"
}
Flexible and granular deployment of templates
- Deploy template using device service tags:
resource "ome_deployment" "deploy-template-1" {
  template_name      = "deploy-template-1"
  device_servicetags = ["MXL1234", "MXL1235"]
  job_retry_count    = 30
  sleep_interval     = 10
}
- Deploy template using device IDs:
resource "ome_deployment" "deploy-template-2" {
  template_name = "deploy-template-2"
  device_ids    = [10001, 10002]
}
- Get device IDs or service tags from a specified list of groups:
data "ome_groupdevices_info" "gd" {
  device_group_names = ["WINDOWS"]
}
- Deploy template for group by fetching the device IDs using data sources:
resource "ome_deployment" "deploy-template-3" {
  template_name = "deploy-template-3"
  device_ids    = data.ome_groupdevices_info.gd.device_ids
}
- Deploy template using device service tags with schedule:
resource "ome_deployment" "deploy-template-4" {
  template_name      = "deploy-template-4"
  device_servicetags = ["MXL1234"]
  run_later          = true
  cron               = "0 45 12 19 10 ? 2022"
}
- Deploy template using device IDs and deploy device attributes:
resource "ome_deployment" "deploy-template-5" {
  template_name = "deploy-template-5"
  device_ids    = [10001, 10002]
  device_attributes = [
    {
      device_servicetags = ["MXL12345", "MXL23456"]
      attributes = [
        {
          attribute_id = 1197967
          display_name = "ServerTopology 1 Aisle Name"
          value        = "aisle updated value"
          is_ignored   = false
        }
      ]
    }
  ]
}
- Deploy template using device IDs and boot to network ISO:
resource "ome_deployment" "deploy-template-6" {
  template_name = "deploy-template-6"
  device_ids    = [10001, 10002]
  boot_to_network_iso = {
    boot_to_network = true
    share_type      = "CIFS"
    iso_timeout     = 240
    iso_path        = "/cifsshare/unattended/unattended_rocky8.6.iso"
    share_detail = {
      ip_address = "192.168.0.2"
      share_name = ""
      work_group = ""
      user       = "username"
      password   = "password"
    }
  }
  job_retry_count = 30
}
- Deploy template using device IDs by changing the job_retry_count and sleep_interval, and ignore the same during updates:
resource "ome_deployment" "deploy-template-7" {
  device_servicetags = ["MXL1234"]
  job_retry_count    = 30
  sleep_interval     = 10
  lifecycle {
    ignore_changes = [
      job_retry_count,
      sleep_interval
    ]
  }
}
- Deploy template using device service tags and group names:
resource "ome_deployment" "deploy-template-8" {
  template_id        = 614
  device_servicetags = concat(data.ome_groupdevices_info.gd.device_servicetags, ["MXL1235"])
}
Terraform provider for PowerMax v1.0
My colleagues Paul and Florian wrote a great blog post on the Terraform provider for PowerMax when we announced the beta release last quarter. I am adding the details of the provider here for the sake of completeness:
PowerMax resources
- PowerMax storage group:
resource "powermax_storagegroup" "test" {
  name   = "terraform_sg"
  srp_id = "SRP_1"
  slo    = "Gold"
  host_io_limit = {
    host_io_limit_io_sec = "1000"
    host_io_limit_mb_sec = "1000"
    dynamic_distribution = "Never"
  }
  volume_ids = ["0008F"]
}
- PowerMax host:
resource "powermax_host" "host_1" {
  name      = "host_1"
  initiator = ["10000000c9fc4b7e"]
  host_flags = {
    volume_set_addressing = {
      override = true
      enabled  = true
    }
    openvms = {
      override = true
      enabled  = false
    }
  }
}
- PowerMax host group:
resource "powermax_hostgroup" "test_host_group" {
  # Optional
  host_flags = {
    avoid_reset_broadcast = {
      enabled  = true
      override = true
    }
  }
  host_ids = ["testHost"]
  name     = "host_group"
}
- PowerMax port group:
resource "powermax_portgroup" "portgroup_1" {
  name     = "tfacc_pg_test_1"
  protocol = "SCSI_FC"
  ports = [
    {
      director_id = "OR-1C"
      port_id     = "0"
    }
  ]
}
- PowerMax masking view:
resource "powermax_maskingview" "test" {
  name             = "terraform_mv"
  storage_group_id = "terraform_sg"
  host_id          = "terraform_host"
  host_group_id    = ""
  port_group_id    = "terraform_pg"
}
PowerMax data sources
- Storage group with dot operations:
data "powermax_storagegroup" "test" {
  filter {
    names = ["esa_sg572"]
  }
}

output "storagegroup_data" {
  value = data.powermax_storagegroup.test
}

data "powermax_storagegroup" "testall" {
}

output "storagegroup_data_all" {
  value = data.powermax_storagegroup.testall
}
- PowerMax host data source with dot operations to query the information needed:
data "powermax_host" "HostDsAll" {
}

data "powermax_host" "HostDsFiltered" {
  filter {
    # Optional list of IDs to filter
    names = [
      "Host124",
      "Host173",
    ]
  }
}

output "hostDsResultAll" {
  value = data.powermax_host.HostDsAll
}

output "hostDsResult" {
  value = data.powermax_host.HostDsFiltered
}
- Host group data source:
data "powermax_hostgroup" "all" {}

output "all" {
  value = data.powermax_hostgroup.all
}

# List specific host groups
data "powermax_hostgroup" "groups" {
  filter {
    names = ["host_group_example_1", "host_group_example_2"]
  }
}

output "groups" {
  value = data.powermax_hostgroup.groups
}
- Port groups data source and dot operations to output information:
# List fibre portgroups.
data "powermax_portgroups" "fibreportgroups" {
  # Optional filter to list specified portgroup names and/or type
  filter {
    # Type of portgroups to be listed - fibre or iscsi
    type = "fibre"
    # Optional list of IDs to filter
    names = [
      "tfacc_test1_fibre",
      #"test2_fibre",
    ]
  }
}

data "powermax_portgroups" "scsiportgroups" {
  filter {
    type = "iscsi"
    # Optional filter to list specified portgroup names
  }
}

# List all portgroups.
data "powermax_portgroups" "allportgroups" {
  #filter {
  #  # Optional list of IDs to filter
  #  names = [
  #    "test1",
  #    "test2",
  #  ]
  #}
}

output "fibreportgroups" {
  value = data.powermax_portgroups.fibreportgroups
}

output "scsiportgroups" {
  value = data.powermax_portgroups.scsiportgroups
}

output "allportgroups" {
  value = data.powermax_portgroups.allportgroups.port_groups
}
- Masking view data source and dot operations to output information:
# List specific masking views
data "powermax_maskingview" "maskingViewFilter" {
  filter {
    names = ["terraform_mv_1", "terraform_mv_2"]
  }
}

output "maskingViewFilterResult" {
  value = data.powermax_maskingview.maskingViewFilter.masking_views
}

# List all masking views
data "powermax_maskingview" "allMaskingViews" {}

output "allMaskingViewsResult" {
  value = data.powermax_maskingview.allMaskingViews.masking_views
}
Terraform provider for PowerStore v1.1.0
v1.1 of the PowerStore provider introduces the following new resources and data sources.
New resources for PowerStore
- Volume group resource:
resource "powerstore_volumegroup" "terraform-provider-test1" {
  # (resource arguments)
  description               = "Creating Volume Group"
  name                      = "test_volume_group"
  is_write_order_consistent = "false"
  protection_policy_id      = "01b8521d-26f5-479f-ac7d-3d8666097094"
  volume_ids                = ["140bb395-1d85-49ae-bde8-35070383bd92"]
}
- Host resource:
resource "powerstore_host" "test" {
  name              = "new-host1"
  os_type           = "Linux"
  description       = "Creating host"
  host_connectivity = "Local_Only"
  initiators = [
    {
      port_name = "iqn.1994-05.com.redhat:88cb605"
    }
  ]
}
- Host group resource:
resource "powerstore_hostgroup" "test" {
  name        = "test_hostgroup"
  description = "Creating host group"
  host_ids    = ["42c60954-ea71-4b50-b172-63880cd48f99"]
}
- Volume snapshot resource:
resource "powerstore_volume_snapshot" "test" {
  name                  = "test_snap"
  description           = "powerstore volume snapshot"
  volume_id             = "01d88dea-7d71-4a1b-abd6-be07f94aecd9"
  performance_policy_id = "default_medium"
  expiration_timestamp  = "2023-05-06T09:01:47Z"
}
- Volume group snapshot resource:
resource "powerstore_volumegroup_snapshot" "test" {
  name                 = "test_snap"
  volume_group_id      = "075aeb23-c782-4cce-9372-5a2e31dc5138"
  expiration_timestamp = "2023-05-06T09:01:47Z"
}
New data sources for PowerStore
- Volume group data source:
data "powerstore_volumegroup" "test1" {
  name = "test_volume_group1"
}

output "volumeGroupResult" {
  value = data.powerstore_volumegroup.test1.volume_groups
}
- Host data source:
data "powerstore_host" "test1" {
  name = "tf_host"
}

output "hostResult" {
  value = data.powerstore_host.test1.hosts
}
- Host group data source:
data "powerstore_hostgroup" "test1" {
  name = "test_hostgroup1"
}

output "hostGroupResult" {
  value = data.powerstore_hostgroup.test1.host_groups
}
- Volume snapshot data source:
data "powerstore_volume_snapshot" "test1" {
  name = "test_snap"
  #id = "adeeef05-aa68-4c17-b2d0-12c4a8e69176"
}

output "volumeSnapshotResult" {
  value = data.powerstore_volume_snapshot.test1.volumes
}
- Volume group snapshot data source:
data "powerstore_volumegroup_snapshot" "test1" {
  # name = "test_volumegroup_snap"
}

output "volumeGroupSnapshotResult" {
  value = data.powerstore_volumegroup_snapshot.test1.volume_groups
}
- Snapshot rule data source:
data "powerstore_snapshotrule" "test1" {
  name = "test_snapshotrule_1"
}

output "snapshotRule" {
  value = data.powerstore_snapshotrule.test1.snapshot_rules
}
- Protection policy data source:
data "powerstore_protectionpolicy" "test1" {
  name = "terraform_protection_policy_2"
}

output "policyResult" {
  value = data.powerstore_protectionpolicy.test1.policies
}
Terraform provider for PowerFlex v1.1.0
We announced the very first provider for Dell PowerFlex last quarter, and here we have the next version with new functionality. In this release, we are introducing new resources and data sources to support the following activities:
- Create and manage SDCs
- Create and manage protection domains
- Create and manage storage pools
- Create and manage devices
Following are the details of the new resources and corresponding data sources.
Host mapping with PowerFlex SDCs
Storage Data Client (SDC) is the PowerFlex host-side software component that can be deployed on Windows, Linux, IBM AIX, ESXi, and other operating systems. In this release of the PowerFlex provider, a new resource is introduced to map multiple volumes to a single SDC. Here is an example of volumes being mapped using their ID or name:
resource "powerflex_sdc_volumes_mapping" "mapping-test" {
  id = "e3ce1fb600000001"
  volume_list = [
    {
      volume_id        = "edb2059700000002"
      limit_iops       = 140
      limit_bw_in_mbps = 19
      access_mode      = "ReadOnly"
    },
    {
      volume_name      = "terraform-vol"
      access_mode      = "ReadWrite"
      limit_iops       = 120
      limit_bw_in_mbps = 25
    }
  ]
}
To unmap all the volumes mapped to SDC, the following configuration can be used:
resource "powerflex_sdc_volumes_mapping" "mapping-test" {
  id          = "e3ce1fb600000001"
  volume_list = []
}
Data sources for storage data client and server components:
- PowerFlex SDC data source:
data "powerflex_sdc" "selected" {
  #id = "e3ce1fb500000000"
  name = "sdc_01"
}

# Returns all SDCs matching the criteria
output "allsdcresult" {
  value = data.powerflex_sdc.selected
}
- PowerFlex SDS data source:
data "powerflex_sds" "example2" {
  # The required field is either protection_domain_name or protection_domain_id
  protection_domain_name = "domain1"
  # protection_domain_id = "202a046600000000"
  sds_names = ["SDS_01_MOD", "sds_1", "node4"]
  # sds_ids = ["6adfec1000000000", "6ae14ba900000006", "6ad58bd200000002"]
}

output "allsdcresult" {
  value = data.powerflex_sds.example2
}
PowerFlex protection domain resource and data source
Here is the resource definition of the protection domain:
resource "powerflex_protection_domain" "pd" {
  # Required parameters
  name = "domain_1"

  # Optional parameters
  active = true

  # SDS IOPS throttling
  # overall_io_network_throttling_in_kbps must be greater than the rest of the parameters
  # 0 indicates unlimited IOPS
  protected_maintenance_mode_network_throttling_in_kbps = 10 * 1024
  rebuild_network_throttling_in_kbps                    = 10 * 1024
  rebalance_network_throttling_in_kbps                  = 10 * 1024
  vtree_migration_network_throttling_in_kbps            = 10 * 1024
  overall_io_network_throttling_in_kbps                 = 20 * 1024

  # Fine-granularity metadata caching
  fgl_metadata_cache_enabled      = true
  fgl_default_metadata_cache_size = 1024

  # Read Flash cache
  rf_cache_enabled          = true
  rf_cache_operational_mode = "ReadAndWrite"
  rf_cache_page_size_kb     = 16
  rf_cache_max_io_size_kb   = 32
}
All this information for an existing protection domain can be fetched with the corresponding data source, and individual values can be queried using the dot operator:
data "powerflex_protection_domain" "pd" {
  name = "domain1"
  # id = "202a046600000000"
}

output "inputPdID" {
  value = data.powerflex_protection_domain.pd.id
}

output "inputPdName" {
  value = data.powerflex_protection_domain.pd.name
}

output "pdResult" {
  value = data.powerflex_protection_domain.pd.protection_domains
}
PowerFlex storage pool resource and data source
Storage resources in PowerFlex are grouped into storage pools based on attributes such as performance characteristics, the types of disks used, and so on. Here is the resource definition of the storage pool resource:
resource "powerflex_storage_pool" "sp" {
  name = "storagepool3"
  #protection_domain_id = "202a046600000000"
  protection_domain_name = "domain1"
  media_type             = "HDD"
  use_rmcache            = false
  use_rfcache            = true
  #replication_journal_capacity = 34
  capacity_alert_high_threshold     = 66
  capacity_alert_critical_threshold = 77
  zero_padding_enabled              = false

  protected_maintenance_mode_io_priority_policy               = "favorAppIos"
  protected_maintenance_mode_num_of_concurrent_ios_per_device = 7
  protected_maintenance_mode_bw_limit_per_device_in_kbps      = 1028

  rebalance_enabled                          = false
  rebalance_io_priority_policy               = "favorAppIos"
  rebalance_num_of_concurrent_ios_per_device = 7
  rebalance_bw_limit_per_device_in_kbps      = 1032

  vtree_migration_io_priority_policy               = "favorAppIos"
  vtree_migration_num_of_concurrent_ios_per_device = 7
  vtree_migration_bw_limit_per_device_in_kbps      = 1030

  spare_percentage              = 66
  rm_cache_write_handling_mode  = "Passthrough"
  rebuild_enabled               = true
  rebuild_rebalance_parallelism = 5
  fragmentation                 = false
}
And the corresponding data source to get this information from existing storage pools is as follows:
data "powerflex_storage_pool" "example" {
  //protection_domain_name = "domain1"
  protection_domain_id = "202a046600000000"
  //storage_pool_ids = ["c98ec35000000002", "c98e26e500000000"]
  storage_pool_names = ["pool2", "pool1"]
}

output "allsdcresult" {
  value = data.powerflex_storage_pool.example.storage_pools
}
Author: Parasar Kodati
Related Blog Posts
Q3 2023: New and Updated Terraform Providers for Dell Infrastructure
Mon, 02 Oct 2023 12:49:02 -0000
We just concluded three quarters of Terraform provider development for Dell infrastructure, and we have some exciting updates to existing providers as well as two brand new providers for PowerScale and PowerEdge node (Redfish-interface) workflows! You can check out the first two releases of Terraform providers here: Q1-2023 and Q2-2023.
We are excited to announce the following new features for the Terraform integrations for Dell infrastructure:
- NEW provider! v1.0 of the provider for PowerScale
- v1.2 of the provider for PowerFlex
- v1.0 of the provider for PowerMax
- NEW provider! v1.0 of the Terraform Provider for Redfish
- v1.1 Terraform Provider for OME
Terraform Provider for PowerScale v1.0
The first version of the PowerScale provider has a lot of net new capabilities in the form of new resources and data sources. Add to that a set of examples and utilities for AWS deployment, and there is enough great material for its own blog post. Please see that post, Introducing Terraform Provider for Dell PowerScale, for all the details.
Terraform Provider for PowerFlex v1.2: it’s all about day-1 deployment
Day-1 deployment refers to the initial provisioning and configuration of hardware and software resources before any production workloads are deployed. A successful Day-1 deployment sets the foundation for the entire infrastructure's performance, scalability, and reliability. However, Day-1 deployment can be complex and time-consuming, often involving manual tasks, potential errors, and delays. This is where automation and the Dell PowerFlex Terraform Provider come into play.
Dell PowerFlex is the software-defined storage leader of the industry, providing the foundational technology of Dell's multicloud infrastructure as well as the APEX Cloud Platforms variants for OpenShift and Azure. PowerFlex was the first platform in Dell's ISG portfolio to have a Terraform provider. In the latest v1.2 release, the provider leapt forward in day-1 deployment operations of a PowerFlex cluster, now providing:
- New resource and data source for Cluster
- New resource and data source for MDM Cluster
- New resource and data source for User Management
- New data source for vTree (PowerFlex Volume Tree)
Now we’ll get into the details pertaining to these new features.
New resource and data source for Cluster
The cluster resource and data source are at the heart of day-1 deployment as well as ongoing cluster expansion and management. The cluster resource can be used to deploy or destroy 3- or 5-node clusters. Please refer to the more detailed PowerFlex deployment guide here. The resource deploys all the foundational components of the PowerFlex architecture:
- Storage Data Client (SDC) -- consumes storage from the PowerFlex appliance
- Storage Data Server (SDS) -- contributes node storage to PowerFlex appliance
- Metadata Manager (MDM) -- manages the storage blocks and tracks data location across the system
- Storage Data Replication (SDR) -- enables native asynchronous replication on PowerFlex nodes
Following are the key elements of this resource:
- cluster for Cluster Installation Details
- lia_password for Lia Password
- mdm_password for MDM Password
- allow_non_secure_communication_with_lia to allow Non-Secure Communication With Lia
- allow_non_secure_communication_with_mdm to Allow Non-Secure Communication With MDM
- disable_non_mgmt_components_auth to Disable Non Mgmt Components Auth
- storage_pools for Storage Pool Details
- mdm_list for Cluster MDM Details
- protection_domains for Cluster Protection Domain Details
- sdc_list for Cluster SDC Details
- sdr_list for Cluster SDR Details
- sds_list for Cluster SDS Details
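Pulling the key elements above together, a cluster definition has roughly the following shape. This is an abbreviated, hypothetical sketch: the nested field names and values are illustrative assumptions, and many required details are omitted; refer to the linked complete example for a working configuration.

```hcl
# Hypothetical, abbreviated sketch of a powerflex_cluster resource.
# Field names inside the nested objects are assumptions, not verified schema.
resource "powerflex_cluster" "demo" {
  mdm_password = "Password123"
  lia_password = "Password123"

  allow_non_secure_communication_with_lia = false
  allow_non_secure_communication_with_mdm = false
  disable_non_mgmt_components_auth        = false

  # One entry per node, declaring its roles (MDM/tie-breaker, SDS, SDC, SDR)
  cluster = [
    {
      ips              = "10.10.10.1"
      username         = "root"
      password         = "node-password"
      operating_system = "linux"
      is_mdm_or_tb     = "primary"
      is_sds           = "yes"
    },
    # ... additional node entries for a 3- or 5-node cluster ...
  ]

  storage_pools = [
    {
      media_type = "HDD"
    }
  ]
}
```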
You can destroy a cluster but cannot update it. You can also import an existing cluster using the following command:
terraform import "powerflex_cluster.resource_block_name" "MDM_IP,MDM_Password,LIA_Password"
You can find an example of a complete cluster resource definition here.
New resource and data source for MDM Setup
Out of the core architecture components of PowerFlex, we already have resources for SDC and SDS. The MDM resource is for the ongoing management of the MDM cluster and has the following key parameters for the Primary, Secondary, Tie-breaker, and Standby nodes:
- Node ID
- Node name
- Node port
- IPs of the MDM type
- The management IPs for the MDM node type
- While the Standby MDM is optional, it does require the role parameter to be set to one of ['Manager', 'TieBreaker']
You can find multiple examples of using MDM cluster resource here.
New resource and data source for User Management
With the User resource, you can perform all Create, Read, Update, and Delete (CRUD) operations as well as import existing users that are part of a PowerFlex cluster.
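As a quick sketch of creating a user with this resource (the attribute names and values below are illustrative assumptions, not verified provider schema):

```hcl
# Hypothetical sketch: create a PowerFlex user with a monitoring role.
# Names and values are placeholders for illustration only.
resource "powerflex_user" "operator" {
  name     = "tf_operator"
  role     = "Monitor"
  password = "Password123!"
}
```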
To import users, you can use any one of the following import formats:

terraform import powerflex_user.resource_block_name "<id>"

or

terraform import powerflex_user.resource_block_name "id:<id>"

or, by username:

terraform import powerflex_user.resource_block_name "name:<user_name>"
New data source for vTree (PowerFlex Volume Tree)
Wouldn’t it be great to get all the storage details in one shot? The vTree data source is a comprehensive collection of the required storage volumes and their respective snapshot trees that can be queried using an array of the volume ids, volume names, or the vTree ids themselves. The data source returns vTree migration information as well.
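A query against this data source might be sketched as follows. The filter attribute names here are assumptions based on the description above (volume IDs, volume names, or vTree IDs), so treat them as illustrative rather than verified schema:

```hcl
# Hypothetical sketch: query vTree details by volume ID.
# Any one of the three filter arrays described above could be used instead.
data "powerflex_vtree" "example" {
  volume_ids = ["4570761d00000024"]
  # volume_names = ["terraform-vol"]
  # vtree_ids    = ["6b2c4b6c00000000"]
}

output "vtreeResult" {
  value = data.powerflex_vtree.example
}
```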
You can find examples of specifying the query details for vTree data source here.
Terraform Provider for PowerMax v1.0
The PowerMax provider went through two beta versions, and we now have the official v1.0. While it’s a small release for the PowerMax provider, there is no arguing the importance of creating, scheduling, and managing snapshots on the World’s most secure mission-critical storage for demanding enterprise applications[1].
Following are the new PowerMax resources and data sources for this release:
- CRUD operations for snapshots, including support for Secure snapshots.
- Here are examples of the new resource and data source.
- CRUD operations for snapshot policies, to help ensure operational SLAs and data protection and retention compliance.
- Here are examples of the new resource and data source.
- CRUD operations for port group objects, enabling end-to-end provisioning workflow automation in Terraform together with the existing resources for storage groups, host groups, and masking views.
- Here are examples of how to use the new resource and the data source for port groups.
New Terraform Provider for PowerEdge nodes (Redfish interface)
In addition to the comprehensive fleet management capabilities of OpenManage Enterprise UI, REST API, Ansible collections, and Terraform Provider, Dell has an extensive programmable interface at the node level with the iDRAC interface, Redfish-compliant API, and Ansible collections.
We are also introducing a Terraform provider called redfish to manage individual servers:
terraform {
required_providers {
redfish = {
version = "1.0.0"
source = "registry.terraform.io/dell/redfish"
}
}
}
With this introduction, we now have the complete programmatic interface matrix for PowerEdge server management:
| | OpenManage Enterprise | iDRAC/Redfish |
| --- | --- | --- |
| REST API | ✔ | ✔ |
| Ansible collections | ✔ | ✔ |
| Terraform providers | ✔ | ✔ |
With the new Terraform Provider for Redfish interface for Dell PowerEdge servers, you can automate and manage server power cycles, iDRAC attributes, BIOS attributes, virtual media, storage volumes, user support, and firmware updates on individual servers. This release adds support for these functionalities and is the first major release of the Redfish provider.
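As a hedged illustration of what node-level management looks like with this provider, a power operation against a single node might be declared as follows. The resource and attribute names below are assumptions based on common provider conventions, not verified schema; the endpoint and credentials are placeholders:

```hcl
# Hypothetical sketch: gracefully restart one PowerEdge node through its iDRAC.
# Resource/attribute names and all values are illustrative assumptions.
resource "redfish_power" "restart" {
  redfish_server {
    endpoint     = "https://192.168.0.100"
    user         = "admin"
    password     = "password"
    ssl_insecure = true  # placeholder lab setting; use valid certificates in production
  }
  desired_power_action = "GracefulRestart"
}
```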
The following resources and data resources are available to get and set the attributes related to the particular attribute groups:
- Power management resource
- iDRAC Attributes resource
- BIOS resource
- Storage Volume resource
- Virtual Media resource
- User account resource
- Simple Update resource
- In addition to the data sources corresponding to the attribute groups, two new data sources for Firmware Inventory and System Boot have also been added. Here you can find examples of all the data sources for the Redfish provider.
Terraform Provider for OME v1.1
In this release of the Terraform Provider for OpenManage Enterprise (OME), multiple resources have been added for device management and security. Following is a list of resources in the Terraform provider for Dell OME:
Device discovery and management
New resources under device discovery and management:
- New Discovery resource for automated discovery of devices to be managed.
- New Devices resource to maintain the state of individual devices that are under OME management. Removing the device from the state will take the device out of OME management. The release includes the corresponding data source for devices.
- New Device Action resource.
- New Static Group resource to group devices for easier deployment and compliance.
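For illustration, a Static Group resource might be declared as follows. The group name and device IDs are placeholders, and the exact attribute names should be checked against the provider documentation.

```hcl
# Illustrative example: group two discovered devices for deployment and compliance.
resource "ome_static_group" "edge_servers" {
  name        = "Edge-Servers"
  description = "PowerEdge nodes at the edge sites"
  device_ids  = [10001, 10002] # placeholder device IDs from the OME inventory
}
```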
Security
- New Application CSR resource for Certificate Signing Requests.
- New Application Certificate resource for providing an authentication certificate.
- New User resource for performing CRUD operations for OME users.
- New OME Appliance Network resource.
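As a sketch, a User resource could look like the following; the attribute names and role ID are illustrative and should be verified against the provider documentation.

```hcl
# Illustrative example: create an OME user with a read-only role.
resource "ome_user" "auditor" {
  user_name = "auditor"    # placeholder account name
  password  = "changeme"   # placeholder password
  role_id   = "16"         # placeholder role ID from OME
  enabled   = true
}
```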
Check out the corresponding data sources for these resources for more information.
Resources
Here are links to key resources for each of the Dell Terraform providers:
- v1.0 of the provider for PowerScale
- v1.0 of the provider for PowerMax
- v1.2 of the provider for PowerFlex
- v1.1 of the provider for PowerStore
- Terraform Provider for Redfish v1.0.0
- Terraform Provider for OME v1.1
[1] Based on Dell internal analysis of cybersecurity capabilities of Dell PowerMax versus cybersecurity capabilities of competitive mainstream arrays supporting open systems and mainframe storage, April 2023
Author: Parasar Kodati, Engineering Technologist, Dell ISG
CSM 1.8 Release is Here!
Fri, 22 Sep 2023 21:29:12 -0000
|Read Time: 0 minutes
Introduction
This is already the third release of Dell Container Storage Modules (CSM) this year!
The official changelog is available in the CHANGELOG directory of the CSM repository.
CSI Features
Supported Kubernetes distributions
The newly supported Kubernetes distributions are:
- Kubernetes 1.28
- OpenShift 4.13
SD-NAS support for PowerMax and PowerFlex
Historically, PowerMax and PowerFlex have been, respectively, Dell's high-end array and software-defined storage (SDS) platform for block storage. Both of these backends recently introduced support for software-defined NAS.
This means that the respective CSI drivers can now provision PVCs with the ReadWriteMany access mode for volumes of type file. In other words, thanks to the NFS protocol, different nodes in the Kubernetes cluster can access the same volume concurrently. This feature is particularly useful for applications that need to process logs coming from multiple Pods, such as log management tools like Splunk or Elasticsearch.
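As an example, a ReadWriteMany claim against such a backend could look like the following; the storageClass name is hypothetical and depends on how the driver is installed.

```yaml
# Illustrative PVC: a shared NFS-backed volume consumable by multiple Pods.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-logs
spec:
  accessModes:
    - ReadWriteMany           # possible thanks to the NFS volume type
  resources:
    requests:
      storage: 8Gi
  storageClassName: powermax-nfs  # hypothetical storageClass name
```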
CSI Specification compliance
Storage capacity tracking
Like PowerScale in v1.7.0, PowerMax and Dell Unity now let you check the available storage capacity before provisioning a volume for a node. This isn't especially relevant for shared storage, which generally presents the same capacity to every node in the cluster, but it can prove useful when the array is running low on available storage.
With this feature, the CSI driver creates an object of type CSIStorageCapacity in its own namespace, one per storageClass.
An example:
kubectl get csistoragecapacities -n unity # This shows one object per storageClass.
Volume Limits
The Volume Limits feature has been added to both PowerStore and PowerFlex; all Dell storage platforms now implement it.
This option limits the maximum number of volumes to which a Kubernetes worker node can connect. It can be configured on a per-node basis or cluster-wide. Setting it to zero disables the limit.
Here are some PowerStore examples.
Per node:
kubectl label node <node name> max-powerstore-volumes-per-node=<volume_limit>
For the entire cluster (all worker nodes):
Specify maxPowerstoreVolumesPerNode or maxVxflexVolumesPerNode in the values.yaml file during Helm installation.
If you opted in to the CSM Operator deployment, you can control it by specifying X_CSI_MAX_VOLUMES_PER_NODES in the CRD.
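For instance, a Helm values fragment limiting each worker node to two PowerStore volumes might look like this (a sketch; confirm the key name in the chart's values.yaml):

```yaml
# Illustrative Helm values fragment for the PowerStore CSI driver.
# A value of 0 (the default) disables the limit.
maxPowerstoreVolumesPerNode: 2
```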
Useful links
Stay informed of the latest updates to the Dell CSM ecosystem by subscribing to:
- The Dell CSM Github repository
- Our DevOps & Automation YouTube playlist
- Slack (under the Dell Infrastructure namespace)
- Live streaming on Twitch
Author: Florian Coulombel