Q3 2023: Updated Ansible Collections for Dell Portfolio
Fri, 29 Sep 2023 17:33:34 -0000
The Ansible collection release schedule for the storage platforms is now monthly, just like the openmanage collection. So, starting this quarter, I will roll up the features we released for the storage modules over the three months of the quarter. Over the past quarter, we made major enhancements to the Ansible collections for PowerScale and PowerFlex.
Roll out PowerFlex with Roles!
We previously introduced Ansible Roles in the openmanage Ansible collection to gather and package multiple steps into a single, small block of Ansible code. In releases v1.8 and v1.9 of Ansible Collections for PowerFlex, we are introducing roles for PowerFlex, targeting day-1 deployment as well as ongoing day-2 cluster expansion and management. This is a huge milestone for PowerFlex deployment automation.
Here is a complete list of the different roles and the tasks available under each role:
| Role | Workflows |
| --- | --- |
| SDC | |
| SDS | |
| MDM | |
| Tie Breaker (TB) | |
| Gateway | |
| SDR | |
| WebUI | |
| PowerFlex Common | This role contains installation tasks that run on a node and are common to all the components (SDC, SDS, MDM, and LIA) across various Linux distributions. All other roles call these tasks with the appropriate Ansible environment variables. The vars folder of this role also holds the dependency installations for different Linux distros. |
My favorite roles are the installation-related ones, where a role task reduces the Ansible code required by an order of magnitude. For example, this MDM installation role replaces roughly 140 lines of Ansible automation:
- name: "Install and configure powerflex mdm"
  ansible.builtin.import_role:
    name: "powerflex_mdm"
  vars:
    powerflex_common_file_install_location: "/opt/scaleio/rpm"
    powerflex_mdm_password: password
    powerflex_mdm_state: present
Other tasks under the role follow a similar definition. Following the Ansible module pattern, simply flipping the powerflex_mdm_state parameter to absent uninstalls MDM. For the sake of completeness, we also provide separate tasks for configure and uninstall as part of every role.
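As a quick sketch (reusing the variables from the example above), uninstalling MDM is just a matter of flipping that one parameter:

```yaml
# Sketch only: same role as above, with the state flipped to absent
- name: "Uninstall powerflex mdm"
  ansible.builtin.import_role:
    name: "powerflex_mdm"
  vars:
    powerflex_mdm_password: password
    powerflex_mdm_state: absent
```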
Complete PowerFlex deployment
Now here is where all the roles come together. A complete PowerFlex install playbook is remarkably concise:
---
- name: "Install PowerFlex Common"
  hosts: all
  roles:
    - powerflex_common

- name: Install and configure PowerFlex MDM
  hosts: mdm
  roles:
    - powerflex_mdm

- name: Install and configure PowerFlex gateway
  hosts: gateway
  roles:
    - powerflex_gateway

- name: Install and configure PowerFlex TB
  hosts: tb
  vars_files:
    - vars_files/connection.yml
  roles:
    - powerflex_tb

- name: Install and configure PowerFlex Web UI
  hosts: webui
  vars_files:
    - vars_files/connection.yml
  roles:
    - powerflex_webui

- name: Install and configure PowerFlex SDC
  hosts: sdc
  vars_files:
    - vars_files/connection.yml
  roles:
    - powerflex_sdc

- name: Install and configure PowerFlex LIA
  hosts: lia
  vars_files:
    - vars_files/connection.yml
  roles:
    - powerflex_lia

- name: Install and configure PowerFlex SDS
  hosts: sds
  vars_files:
    - vars_files/connection.yml
  roles:
    - powerflex_sds

- name: Install PowerFlex SDR
  hosts: sdr
  roles:
    - powerflex_sdr
You can define your inventory based on the exact PowerFlex node setup:
node0 ansible_host=10.1.1.1 ansible_port=22 ansible_ssh_pass=password ansible_user=root
node1 ansible_host=10.x.x.x ansible_port=22 ansible_ssh_pass=password ansible_user=root
node2 ansible_host=10.x.x.y ansible_port=22 ansible_ssh_pass=password ansible_user=root
[mdm]
node0
node1
[tb]
node2
[sdc]
node2
[lia]
node0
node1
node2
[sds]
node0
node1
node2
Note: You can also change the defaults of each component installation by updating the corresponding /defaults/main.yml, which looks like this for SDC:
---
powerflex_sdc_driver_sync_repo_address: 'ftp://ftp.emc.com/'
powerflex_sdc_driver_sync_repo_user: 'QNzgdxXix'
powerflex_sdc_driver_sync_repo_password: 'Aw3wFAwAq3'
powerflex_sdc_driver_sync_repo_local_dir: '/bin/emc/scaleio/scini_sync/driver_cache/'
powerflex_sdc_driver_sync_user_private_rsa_key_src: ''
powerflex_sdc_driver_sync_user_private_rsa_key_dest: '/bin/emc/scaleio/scini_sync/scini_key'
powerflex_sdc_driver_sync_repo_public_rsa_key_src: ''
powerflex_sdc_driver_sync_repo_public_rsa_key_dest: '/bin/emc/scaleio/scini_sync/scini_repo_key.pub'
powerflex_sdc_driver_sync_module_sigcheck: 1
powerflex_sdc_driver_sync_emc_public_gpg_key_src: ../../../files/RPM-GPG-KEY-powerflex_2.0.*.0
powerflex_sdc_driver_sync_emc_public_gpg_key_dest: '/bin/emc/scaleio/scini_sync/emc_key.pub'
powerflex_sdc_driver_sync_sync_pattern: .*
powerflex_sdc_state: present
powerflex_sdc_name: sdc_test
powerflex_sdc_performance_profile: Compact
file_glob_name: sdc
i_am_sure: 1
powerflex_role_environment:
Please review the structure of the repo folder when setting up your Ansible project, so that you don't miss the different levels of variables, for example. I personally can't wait to redeploy my PowerFlex lab setup, both on-prem and on AWS, with these roles, and I plan to share insights from that in a separate blog.
Ansible collection for PowerScale v2.0, 2.1, and 2.2
Following are the enhancements for Ansible Collection for PowerScale v2.0, 2.1, and 2.2:
- PowerScale is known for its extensive multi-protocol support, and the S3 protocol enables use cases like application access to object storage through the S3 API and use as a data protection target. The new s3_bucket Ansible module now allows you to perform CRUD operations for S3 buckets on PowerScale. You can find examples here.
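To give a feel for the module, here is a minimal sketch of creating a bucket. The connection parameters follow the usual pattern of the collection; the bucket-specific parameter names (s3_bucket_name, path, access_zone) are assumptions on my part, so check the module documentation for the exact spelling:

```yaml
# Hypothetical sketch: create an S3 bucket on PowerScale
- name: Create S3 bucket
  dellemc.powerscale.s3_bucket:
    onefs_host: "{{ onefs_host }}"
    api_user: "{{ api_user }}"
    api_password: "{{ api_password }}"
    verify_ssl: "{{ verify_ssl }}"
    s3_bucket_name: "ansible-s3-bucket"
    path: "/ifs/ansible-s3-bucket"
    access_zone: "System"
    state: present
```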
- New modules for more granular NFS settings:
 - nfs_default_settings
 - nfs_global_settings
 - nfs_zone_settings
- The Info module has also been updated to fetch the above NFS settings.
- New map_root and map_non_root parameters in the existing NFS export (nfs) module control root and non-root access to the share. New examples have been added to the NFS module examples.
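As a sketch of the new parameters, an export that squashes root to nobody might look like the following; the exact suboption names under map_root are an assumption, so verify them against the nfs module documentation:

```yaml
# Hypothetical sketch: NFS export with root access squashed to nobody
- name: Create NFS export with root mapping
  dellemc.powerscale.nfs:
    onefs_host: "{{ onefs_host }}"
    api_user: "{{ api_user }}"
    api_password: "{{ api_password }}"
    verify_ssl: "{{ verify_ssl }}"
    path: "/ifs/ansible-nfs-export"
    access_zone: "System"
    map_root:
      enabled: true
      user: "nobody"
    state: present
```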
- Enhanced AccessZone module with:
- Ability to reorder Access Zone Auth providers using the priority parameter of the auth_providers field elements, as shown in the following example:
auth_providers:
  - provider_name: "System"
    provider_type: "file"
    priority: 2
  - provider_name: "ansildap"
    provider_type: "ldap"
    priority: 1
- ADS module
- The ADS module for Active Directory integration is updated to support Service Principal Names (SPN). An SPN is a unique identifier for a service instance in a network, typically used within Windows environments and associated with the Kerberos authentication protocol. Learn more about SPNs here.
- Adding an SPN from AD looks like this:
- name: Add an SPN
  dellemc.powerscale.ads:
    onefs_host: "{{ onefs_host }}"
    api_user: "{{ api_user }}"
    api_password: "{{ api_password }}"
    verify_ssl: "{{ verify_ssl }}"
    domain_name: "{{ domain_name }}"
    spns:
      - spn: "HOST/test1"
    state: "{{ state_present }}"
- As you would expect, state: absent will remove the SPN. There is also a command parameter that takes two values: check and fix.
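As a sketch, a check run might look like the following; I am assuming the parameter is spelled spn_command, so verify the exact name in the ads module documentation:

```yaml
# Hypothetical sketch: check for missing SPNs without modifying anything
- name: Check SPNs against AD
  dellemc.powerscale.ads:
    onefs_host: "{{ onefs_host }}"
    api_user: "{{ api_user }}"
    api_password: "{{ api_password }}"
    verify_ssl: "{{ verify_ssl }}"
    domain_name: "{{ domain_name }}"
    spn_command: "check"
    state: "present"
```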
- Network Pool module
- The SmartConnect feature of PowerScale OneFS simplifies network configuration of a PowerScale cluster by enabling intelligent client connection load-balancing and failover capabilities. Learn more here.
- The Network Pool module has been updated to support specifying SmartConnect Zone aliases (DNS names). Here is an example:
- name: Network pool operations on PowerScale
  hosts: localhost
  connection: local
  vars:
    onefs_host: '10.**.**.**'
    verify_ssl: false
    api_user: 'user'
    api_password: 'Password'
    state_present: 'present'
    state_absent: 'absent'
    access_zone: 'System'
    access_zone_modify: "test"
    groupnet_name: 'groupnet0'
    subnet_name: 'subnet0'
    description: "pool Created by Ansible"
    new_pool_name: "rename_Test_pool_1"
    additional_pool_params_mod:
      ranges:
        - low: "10.**.**.176"
          high: "10.**.**.178"
      range_state: "add"
      ifaces:
        - iface: "ext-1"
          lnn: 1
        - iface: "ext-2"
          lnn: 1
      iface_state: "add"
      static_routes:
        - gateway: "10.**.**.**"
          prefixlen: 21
          subnet: "10.**.**.**"
    sc_params_mod:
      sc_dns_zone: "10.**.**.169"
      sc_connect_policy: "round_robin"
      sc_failover_policy: "round_robin"
      rebalance_policy: "auto"
      alloc_method: "static"
      sc_auto_unsuspend_delay: 0
      sc_ttl: 0
      aggregation_mode: "roundrobin"
      sc_dns_zone_aliases:
        - "Test"
Ansible collection for PowerStore v2.1
This release of the Ansible collection for PowerStore brings updates to two modules for managing and operating NAS on PowerStore:
- Filesystem - support for clone, refresh and restore. Example tasks can be found here.
- NAS server - support for creation and deletion. You can find examples of various Ansible tasks using the module here.
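As a sketch of the clone workflow, a task might look like the following; the clone_filesystem parameter name and its suboptions are assumptions on my part, so check the filesystem module documentation for the exact shape:

```yaml
# Hypothetical sketch: clone a filesystem on PowerStore
- name: Clone a filesystem
  dellemc.powerstore.filesystem:
    array_ip: "{{ array_ip }}"
    user: "{{ user }}"
    password: "{{ password }}"
    validate_certs: "{{ verify_ssl }}"
    filesystem_name: "ansible_fs"
    nas_server: "ansible_nas_server"
    clone_filesystem:
      name: "ansible_fs_clone"
    state: present
```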
Ansible collection for OpenManage Enterprise
Here are the features that have become available over the last three monthly releases of the Ansible Collections for OpenManage Enterprise.
V8.1
- Support for subject alternative names while generating certificate signing requests on OME.
- Create a user on iDRAC using custom privileges.
- Create a firmware baseline on OME with the filter option of no reboot required.
- Retrieve all server items in the output for ome_device_info.
- Enhancement to add detailed job information for ome_discovery and ome_job_info.
V8.2
- The redfish_firmware and ome_firmware_catalog modules are enhanced to support IPv6 addresses.
- Module to support firmware rollback of server components.
- Support for retrieving alert policies, actions, categories, and message id information of alert policies for OME and OME Modular.
- The ome_diagnostics module is enhanced to report the changed flag status in its response.
V8.3
- Module to manage OME alert policies.
- Support for RAID6 and RAID60 for module redfish_storage_volume.
- Support for reboot type options for module ome_firmware.
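As a sketch of the new RAID6 support, a volume creation task might look like the following; the controller and drive IDs are placeholders, and I am assuming the RAID level is passed through a raid_type parameter, so verify against the redfish_storage_volume documentation:

```yaml
# Hypothetical sketch: create a RAID6 volume through the Redfish storage API
- name: Create a RAID6 volume
  dellemc.openmanage.redfish_storage_volume:
    baseuri: "{{ idrac_ip }}"
    username: "{{ idrac_user }}"
    password: "{{ idrac_password }}"
    state: present
    controller_id: "RAID.Slot.1-1"
    raid_type: "RAID6"
    drives:
      - Disk.Bay.0:Enclosure.Internal.0-1:RAID.Slot.1-1
      - Disk.Bay.1:Enclosure.Internal.0-1:RAID.Slot.1-1
      - Disk.Bay.2:Enclosure.Internal.0-1:RAID.Slot.1-1
      - Disk.Bay.3:Enclosure.Internal.0-1:RAID.Slot.1-1
```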
Conclusion
Ansible is one of the most extensively used automation platforms for IT operations, and Dell provides an exhaustive set of modules and roles to easily deploy and manage server and storage infrastructure on-prem as well as in the cloud. With the monthly release cadence for both storage and server modules, you get access to our latest feature additions even faster. Enjoy coding your Dell infrastructure!
Author: Parasar Kodati, Engineering Technologist, Dell ISG