ontap98::> set diag
Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y
ontap98::*> systemshell -node localhost -command ls /
(system node systemshell)
BUILD mroot_late
COMPAT.TXT mroot_late.tgz
COPYRIGHT netapp_nodar_build_check
INSTALL nfsroot
README.TXT ontap
VERSION ovl
bin partner
boot platform
cap.xml proc
cfcard root
clus sbin
dev sim
etc sldiag
fw tmp
kmip usr
lib var
libexec varfs.tgz
mnt vs_conf_files.tgz
mroot webjail
mroot.tgz
ontap98::*>
netapp9101::> set diag
Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y
netapp9101::*> systemshell -node localhost -command ls /
(system node systemshell)
Error: command failed: Error: Account currently locked. Contact the storage
administrator to unlock it.
netapp9101::*>
netapp9101::> security login unlock -username diag
netapp9101::> security login show -username diag
Vserver: netapp9101
Second
User/Group Authentication Acct Authentication
Name Application Method Role Name Locked Method
-------------- ----------- ------------- ---------------- ------ --------------
diag console password admin no none
netapp9101::>
Once the account had been unlocked, the command could be run as before.
netapp9101::> set diag
Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y
netapp9101::*> systemshell -node localhost -command ls /
(system node systemshell)
BUILD mroot_late
COMPAT.TXT mroot_late.tgz
COPYRIGHT netapp_nodar_build_check
INSTALL nfsroot
README.TXT ontap
VERSION ovl
bin partner
boot platform
cap.xml proc
cfcard root
clus sbin
dev sim
etc sldiag
fw tmp
kmip usr
lib var
libexec varfs.tgz
mnt vs_conf_files.tgz
mroot webjail
mroot.tgz
netapp9101::*>
Note that in MySQL 8, creating a database user and granting privileges is no longer done with the single statement "grant all on DB名.* to wordpress@localhost identified by 'パスワード';" that older versions accepted; it is now split into two statements, "create user ..." and "grant ...".
$ sudo mysql -u root
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.32 Source distribution
Copyright (c) 2000, 2023, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> create database DB名 character set utf8;
Query OK, 1 row affected, 1 warning (0.01 sec)
mysql> create user wordpress@localhost identified by 'パスワード';
Query OK, 0 rows affected (0.01 sec)
mysql> grant all privileges on DB名.* to wordpress@localhost;
Query OK, 0 rows affected (0.00 sec)
mysql> quit
Bye
$
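To double-check the result, the grants for the new user can be listed afterwards. A minimal verification sketch, assuming the same wordpress@localhost user created above:
$ sudo mysql -u root -e "SHOW GRANTS FOR 'wordpress'@'localhost';"
$ sudo mysql -u root -e "SELECT user, host FROM mysql.user WHERE user = 'wordpress';"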
Step 7: Web server configuration
Step 7-1: Install httpd
Install httpd.
Oracle Linux 9.2 offers Apache (httpd) 2.4.53, nginx 1.20.1, and nginx 1.22.1 as web servers; Apache is used here.
$ sudo dnf install httpd -y
Last metadata expiration check: 0:05:50 ago on Tue 12 Sep 2023 11:38:07 AM JST.
Package httpd-2.4.53-11.0.1.el9_2.5.x86_64 is already installed.
Dependencies resolved.
Nothing to do.
Complete!
$
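httpd is only installed at this point. If the service is not yet running, it can be enabled and the HTTP/HTTPS ports opened; a minimal sketch, assuming firewalld is in use (these steps are not part of the original excerpt):
$ sudo systemctl enable --now httpd
$ sudo firewall-cmd --permanent --add-service=http
$ sudo firewall-cmd --permanent --add-service=https
$ sudo firewall-cmd --reload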
$ sudo dehydrated --register
# INFO: Using main config file /etc/dehydrated/config
# INFO: Using additional config file /etc/dehydrated/conf.d/local.sh
To use dehydrated with this certificate authority you have to agree to their terms of service which you can find here: https://letsencrypt.org/documents/LE-SA-v1.3-September-21-2022.pdf
To accept these terms of service run "/bin/dehydrated --register --accept-terms".
$ sudo /bin/dehydrated --register --accept-terms
# INFO: Using main config file /etc/dehydrated/config
# INFO: Using additional config file /etc/dehydrated/conf.d/local.sh
+ Generating account key...
+ Registering account key with ACME server...
+ Fetching account URL...
+ Done!
$
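For reference, dehydrated takes its target host names from domains.txt; the first name on a line becomes the certificate CN and any further names become the alternative names seen in the next step. A minimal sketch, assuming the default /etc/dehydrated/domains.txt location and the same placeholder host names used below:
$ cat /etc/dehydrated/domains.txt
ホスト1名.ドメイン名 ホスト2名.ドメイン名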
Run the initial SSL certificate issuance.
$ sudo dehydrated --cron
# INFO: Using main config file /etc/dehydrated/config
# INFO: Using additional config file /etc/dehydrated/conf.d/local.sh
+ Creating chain cache directory /etc/dehydrated/chains
Processing ホスト1名.ドメイン名 with alternative names: ホスト2名.ドメイン名
+ Creating new directory /etc/dehydrated/certs/ホスト1名.ドメイン名 ...
+ Signing domains...
+ Generating private key...
+ Generating signing request...
+ Requesting new certificate order from CA...
+ Received 2 authorizations URLs from the CA
+ Handling authorization for ホスト1名.ドメイン名
+ Handling authorization for ホスト2名.ドメイン名
+ 2 pending challenge(s)
+ Deploying challenge tokens...
+ Responding to challenge for ホスト1名.ドメイン名 authorization...
+ Challenge is valid!
+ Responding to challenge for ホスト2名.ドメイン名 authorization...
+ Challenge is valid!
+ Cleaning challenge tokens...
+ Requesting certificate...
+ Checking certificate...
+ Done!
+ Creating fullchain.pem...
+ Done!
+ Running automatic cleanup
$
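Renewal is just another "dehydrated --cron" run, so it is usually scheduled. A minimal sketch using root's crontab ("sudo crontab -e"); this is an assumption, and a systemd timer would work just as well:
# check for certificates needing renewal once a day at 04:00
0 4 * * * /bin/dehydrated --cron > /dev/null 2>&1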
Step 7-3: Configure the SSL certificate on the web server
First, add mod_ssl to httpd.
$ sudo dnf install mod_ssl -y
Last metadata expiration check: 0:13:55 ago on Tue 12 Sep 2023 11:38:07 AM JST.
Dependencies resolved.
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
mod_ssl x86_64 1:2.4.53-11.0.1.el9_2.5 ol9_appstream 119 k
Transaction Summary
================================================================================
Install 1 Package
<snip>
$
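The ssl.conf edits are not shown in this excerpt, but mod_ssl needs to be pointed at the certificate and key that dehydrated wrote under /etc/dehydrated/certs/. A minimal sketch of the relevant directives, using the same placeholder host name as above (the file names are dehydrated's defaults):
$ sudo vi /etc/httpd/conf.d/ssl.conf
(set the following lines)
SSLCertificateFile /etc/dehydrated/certs/ホスト1名.ドメイン名/fullchain.pem
SSLCertificateKeyFile /etc/dehydrated/certs/ホスト1名.ドメイン名/privkey.pem
$ sudo systemctl restart httpd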
$ cd /var/www/html
$ ls
$ sudo curl -O https://wordpress.org/latest.tar.gz
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 22.3M 100 22.3M 0 0 17.6M 0 0:00:01 0:00:01 --:--:-- 17.6M
$ ls
latest.tar.gz
$ sudo tar xfz latest.tar.gz
$ ls -l
total 22904
-rw-r--r--. 1 root root 23447259 Sep 12 11:57 latest.tar.gz
drwxr-xr-x. 5 nobody nobody 4096 Aug 29 23:14 wordpress
$ sudo rm latest.tar.gz
$
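Depending on how WordPress updates and uploads will be handled, the extracted tree is often handed over to the httpd user at this point. A minimal sketch, assuming the default apache user on Oracle Linux (not part of the original steps):
$ sudo chown -R apache:apache /var/www/html/wordpress
$ sudo restorecon -R /var/www/html/wordpress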
Check the current values with "sudo getsebool -a | grep httpd_can_network", then enable the boolean with "sudo setsebool -P httpd_can_network_connect on".
$ sudo getsebool -a |grep httpd_can_network
httpd_can_network_connect --> off
httpd_can_network_connect_cobbler --> off
httpd_can_network_connect_db --> off
httpd_can_network_memcache --> off
httpd_can_network_relay --> off
$ sudo setsebool -P httpd_can_network_connect on
$ sudo getsebool -a |grep httpd_can_network
httpd_can_network_connect --> on
httpd_can_network_connect_cobbler --> off
httpd_can_network_connect_db --> off
httpd_can_network_memcache --> off
httpd_can_network_relay --> off
$
$ sudo vi /etc/dnf/automatic.conf
$ cat /etc/dnf/automatic.conf
[commands]
# What kind of upgrade to perform:
# default = all available upgrades
# security = only the security upgrades
upgrade_type = default
random_sleep = 0
# Maximum time in seconds to wait until the system is on-line and able to
# connect to remote repositories.
network_online_timeout = 60
# To just receive updates use dnf-automatic-notifyonly.timer
# Whether updates should be downloaded when they are available, by
# dnf-automatic.timer. notifyonly.timer, download.timer and
# install.timer override this setting.
download_updates = yes
# Whether updates should be applied when they are available, by
# dnf-automatic.timer. notifyonly.timer, download.timer and
# install.timer override this setting.
apply_updates = yes
[emitters]
# Name to use for this system in messages that are emitted. Default is the
# hostname.
# system_name = my-host
# How to send messages. Valid options are stdio, email and motd. If
# emit_via includes stdio, messages will be sent to stdout; this is useful
# to have cron send the messages. If emit_via includes email, this
# program will send email itself according to the configured options.
# If emit_via includes motd, /etc/motd file will have the messages. if
# emit_via includes command_email, then messages will be send via a shell
# command compatible with sendmail.
# Default is email,stdio.
# If emit_via is None or left blank, no messages will be sent.
emit_via = stdio
[email]
# The address to send email messages from.
email_from = root@example.com
# List of addresses to send messages to.
email_to = root
# Name of the host to connect to to send email messages.
email_host = localhost
[command]
# The shell command to execute. This is a Python format string, as used in
# str.format(). The format function will pass a shell-quoted argument called
# `body`.
# command_format = "cat"
# The contents of stdin to pass to the command. It is a format string with the
# same arguments as `command_format`.
# stdin_format = "{body}"
[command_email]
# The shell command to use to send email. This is a Python format string,
# as used in str.format(). The format function will pass shell-quoted arguments
# called body, subject, email_from, email_to.
# command_format = "mail -Ssendwait -s {subject} -r {email_from} {email_to}"
# The contents of stdin to pass to the command. It is a format string with the
# same arguments as `command_format`.
# stdin_format = "{body}"
# The address to send email messages from.
email_from = root@example.com
# List of addresses to send messages to.
email_to = root
[base]
# This section overrides dnf.conf
# Use this to filter DNF core messages
debuglevel = 1
$
Then enable and start dnf-automatic.timer.
$ sudo systemctl enable dnf-automatic.timer
Created symlink /etc/systemd/system/timers.target.wants/dnf-automatic.timer → /usr/lib/systemd/system/dnf-automatic.timer.
$ sudo systemctl status dnf-automatic
○ dnf-automatic.service - dnf automatic
Loaded: loaded (/usr/lib/systemd/system/dnf-automatic.service; static)
Active: inactive (dead)
TriggeredBy: ○ dnf-automatic.timer
$ sudo systemctl start dnf-automatic.timer
$ sudo systemctl status dnf-automatic.timer
● dnf-automatic.timer - dnf-automatic timer
Loaded: loaded (/usr/lib/systemd/system/dnf-automatic.timer; enabled; pres>
Active: active (waiting) since Tue 2023-09-12 13:11:00 JST; 5s ago
Until: Tue 2023-09-12 13:11:00 JST; 5s ago
Trigger: Wed 2023-09-13 06:44:33 JST; 17h left
Triggers: ● dnf-automatic.service
Sep 12 13:11:00 ホスト名 systemd[1]: Started dnf-automatic timer.
$
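To see when the next automatic run will fire and what the last run did, the timer and service can be inspected; a quick check sketch:
$ systemctl list-timers dnf-automatic.timer
$ journalctl -u dnf-automatic.service --no-pager | tail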
netapp9101::> vserver cifs create -vserver svm3 -cifs-server svm3 -domain adosakana.local
In order to create an Active Directory machine account for the CIFS server, you must supply the name and password of a Windows account with
sufficient privileges to add computers to the "CN=Computers" container within the "ADOSAKANA.LOCAL" domain.
Enter the user name: administrator
Enter the password:
Error: Machine account creation procedure failed
[ 47] Loaded the preliminary configuration.
[ 130] Created a machine account in the domain
[ 130] SID to name translations of Domain Users and Admins
completed successfully
[ 131] Successfully connected to ip 172.17.44.49, port 88 using
TCP
[ 142] Successfully connected to ip 172.17.44.49, port 464 using
TCP
[ 233] Kerberos password set for 'SVM3$@ADOSAKANA.LOCAL' succeeded
[ 233] Set initial account password
[ 244] Successfully connected to ip 172.17.44.49, port 445 using
TCP
[ 276] Successfully connected to ip 172.17.44.49, port 88 using
TCP
[ 311] Successfully authenticated with DC
adserver.adosakana.local
[ 324] Unable to connect to NetLogon service on
adserver.adosakana.local (Error:
RESULT_ERROR_GENERAL_FAILURE)
**[ 324] FAILURE: Unable to make a connection
** (NetLogon:ADOSAKANA.LOCAL), result: 3
[ 324] Unable to make a NetLogon connection to
adserver.adosakana.local using the new machine account
[ 346] Deleted existing account
'CN=SVM3,CN=Computers,DC=adosakana,DC=local'
Error: command failed: Failed to create the Active Directory machine account "SVM3". Reason: general failure.
netapp9101::>
Part 2: The "AES session key enabled for NetLogon channel" setting
Even after configuring the above, the following error occurred.
netapp9101::> vserver cifs create -vserver svm3 -cifs-server svm3 -domain vm2.adosakana.local
In order to create an Active Directory machine account for the CIFS server, you must supply the name and password of
a Windows account with sufficient privileges to add computers to the "CN=Computers" container within the
"ADOSAKANA.LOCAL" domain.
Enter the user name: administrator
Enter the password:
Error: Machine account creation procedure failed
[ 43] Loaded the preliminary configuration.
[ 133] Created a machine account in the domain
[ 133] SID to name translations of Domain Users and Admins
completed successfully
[ 134] Successfully connected to ip 172.17.44.49, port 88 using
TCP
[ 144] Successfully connected to ip 172.17.44.49, port 464 using
TCP
[ 226] Kerberos password set for 'SVM3$@ADOSAKANA.LOCAL' succeeded
[ 226] Set initial account password
[ 253] Successfully connected to ip 172.17.44.49, port 445 using
TCP
[ 284] Successfully connected to ip 172.17.44.49, port 88 using
TCP
[ 316] Successfully authenticated with DC
adserver.adosakana.local
[ 323] Encountered NT error (NT_STATUS_PENDING) for SMB command
Read
[ 327] Unable to connect to NetLogon service on
adserver.adosakana.local (Error:
RESULT_ERROR_GENERAL_FAILURE)
**[ 327] FAILURE: Unable to make a connection
** (NetLogon:ADOSAKANA.LOCAL), result: 3
[ 327] Unable to make a NetLogon connection to
adserver.adosakana.local using the new machine account
[ 344] Deleted existing account
'CN=SVM3,CN=Computers,DC=ADOSAKANA,DC=local'
Error: command failed: Failed to create the Active Directory machine account "SVM3". Reason: general failure.
netapp9101::>
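The excerpt does not show the change applied before the successful retry below. On the ONTAP side, the option named in this section's heading is exposed under "vserver cifs security", so the change was presumably something along the following lines (an assumption based on the option name, not taken from the original log):
netapp9101::> vserver cifs security modify -vserver svm3 -aes-netlogon-enabled true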
netapp9101::> vserver cifs create -vserver svm3 -cifs-server svm3 -domain adosakana.local
In order to create an Active Directory machine account for the CIFS server, you must supply the name and password of
a Windows account with sufficient privileges to add computers to the "CN=Computers" container within the
"ADOSAKANA.LOCAL" domain.
Enter the user name: administrator
Enter the password:
Notice: SMB1 protocol version is obsolete and considered insecure. Therefore it is deprecated and disabled on this
CIFS server. Support for SMB1 might be removed in a future release. If required, use the (privilege: advanced)
"vserver cifs options modify -vserver svm3 -smb1-enabled true" to enable it.
netapp9101::>
Updating an ONTAP 9.5P5 simulator environment to ONTAP 9.7 had gone fine, but when I tried to update a production ONTAP 9.5P10 environment, the firmware upload step failed with the error "The request body must have content type multipart/form-data with a field named file".
netappcluster::> set diag
Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y
netappcluster::*> system services web file-uploads config show
Node Size
----------------- ------------
netappcluster-01 2GB
netappcluster-02 2GB
2 entries were displayed.
netappcluster::*>
Next, make the change.
netappcluster::*> system services web file-uploads config modify -node * -size 4GB
Warning: Files already uploaded or are being uploaded will be lost. Starting a
file upload before the resize operation is finished will cause the
uploaded file to be unavailable.
Do you want to continue? {y|n}: y
[Job 14002] Job is queued: Web File Upload Resize Node Job.
[Job 14003] Job is queued: Web File Upload Resize Node Job.
2 entries were modified.
netappcluster::*>
The change is not applied immediately, so check the status of the job IDs shown in the output above.
netappcluster::*> job show -id 14002
Owning
Job ID Name Vserver Node State
------ -------------------- ---------- -------------- ----------
14002 Web File Upload Resize Node Job netappcluster netappcluster-01 Success
Description: Web File Upload Resize Node Job
netappcluster::*> job show -id 14003
Owning
Job ID Name Vserver Node State
------ -------------------- ---------- -------------- ----------
14003 Web File Upload Resize Node Job netappcluster netappcluster-02 Success
Description: Web File Upload Resize Node Job
netappcluster::*>
If the output includes "Success", the change is complete (while it is in progress the state shows "Running").
Note that after the change finishes, the displayed size is "0B" rather than 4GB; this is reportedly normal.
netappcluster::*> system services web file-uploads config show
Node Size
----------------- ------------
netappcluster-01 0B
netappcluster-02 0B
2 entries were displayed.
netappcluster::*>
For SAN Transport, I wanted to verify whether it would work with a perfectly ordinary iSCSI storage array simply connected to both an ESXi server and a Windows server.
(The image above is reproduced from VMware.)
Since no physical hardware was available, I built the setup by connecting a Windows server running as a vSphere virtual machine to the iSCSI storage, but with Commvault the backup was refused with the message "SAN access is only supported for physical machines."
Event Code: 91:248
Severity: Minor
Program: vsbkp
Description:
Unable to open the disks for virtual machine [仮想マシン名] for SAN access. SAN access is only supported for physical machines.
After switching to diag mode with "set diag", run "system node systemshell -node localhost -command ls /" and see what happens.
::> set diag
Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y
::*> system node systemshell -node localhost -command ls /
(system node systemshell)
Error: command failed: Error: Account currently locked. Contact the storage
administrator to unlock it.
::*>
With ONTAP simulator 9.12.1 and later, you will most likely get an "Error" like the one above.
5) Unlock the diag account
The error above occurs because the diag account is locked, so unlock it.
Run "security login show -user-or-group-name diag" to check the current state (on some ONTAP versions the command is "security login show -username diag").
::*> security login show -user-or-group-name diag
Vserver: Default
Second
User/Group Authentication Acct Authentication
Name Application Method Role Name Locked Method
-------------- ----------- ------------- ---------------- ------ --------------
diag console password admin yes none
::*>
Because "Acct Locked" is "yes", you can confirm that the account is locked.
Run "security login unlock -username diag".
::*> security login unlock -username diag
Error: command failed: The admin password is not set. Use the "security login
password" command to set the password, then try the command again.
::*>
::*> security login password -username admin
Enter your current password: <press Enter here if the password has never been set>
Enter a new password: <new password>
Enter it again: <new password>
::*> security login unlock -username diag
::*>
If the unlock succeeds, nothing is displayed, so run "security login show -user-or-group-name diag" again to confirm.
::*> security login show -user-or-group-name diag
Vserver: Default
Second
User/Group Authentication Acct Authentication
Name Application Method Role Name Locked Method
-------------- ----------- ------------- ---------------- ------ --------------
diag console password admin no none
::*>
6) Check the current disk configuration
To check the current disk configuration, run "system node systemshell -node local -command "ls -l /sim/dev/,disks"".
After the deletion, run "system node systemshell -node local -command "ls -l /sim/dev/,disks"" again and confirm that the files are gone.
::*> system node systemshell -node local -command "cd /sim/dev/,disks; sudo rm *"
::*> system node systemshell -node local -command "ls -l /sim/dev/,disks"
total 0
::*>
ontap9121::> storage aggregate add-disks -aggregate aggr0_ontap9121_01 -diskcount 1
Warning: Aggregate "aggr0_ontap9121_01" is a root aggregate. Adding disks to
the root aggregate is not recommended. Once added, disks cannot be
removed without re-initializing the node.
Do you want to continue? {y|n}: y
Info: Disks would be added to aggregate "aggr0_ontap9121_01" on node
"ontap9121-01" in the following manner:
First Plex
RAID Group rg0, 4 disks (block checksum, raid_dp)
Usable Physical
Position Disk Type Size Size
---------- ------------------------- ---------- -------- --------
data NET-1.11 FCAL 8.79GB 8.82GB
Aggregate capacity available for volume use would be increased by 7.91GB.
Do you want to continue? {y|n}: y
ontap9121::> storage aggregate show
Aggregate Size Available Used% State #Vols Nodes RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_ontap9121_01
15.03GB 7.88GB 48% online 1 ontap9121-01 raid_dp,
normal
ontap9121::> df -A -h
Aggregate total used avail capacity
aggr0_ontap9121_01 15GB 7320MB 8069MB 48%
aggr0_ontap9121_01/.snapshot 810MB 0B 810MB 0%
2 entries were displayed.
ontap9121::>
Manual checks that can be done using Upgrade ONTAP documentation
Message
Manual validation checks need to be performed. Refer to the Upgrade Advisor Plan or the "What should I verify before I upgrade with or without Upgrade Advisor" section in the "Upgrade ONTAP" documentation for the remaining validation checks that need to be performed before update. Failing to do so can result in an update failure or an I/O disruption.
Solution
Refer to the Upgrade Advisor Plan or the "What should I verify before I upgrade with or without Upgrade Advisor" section in the "Upgrade ONTAP" documentation for the remaining validation checks that need to be performed before update.
ONTAP API to REST transition warning
With ONTAP 9.12.1 or later:
ONTAP API to REST transition warning
Message
NetApp ONTAP API has been used on this cluster for ONTAP data storage management within the last 30 days. NetApp ONTAP API is approaching end of availability.
Solution
Transition your automation tools from ONTAP API to ONTAP REST API. CPC-00410 - End of availability: ONTAPI : https://mysupport.netapp.com/info/communications/ECMLP2880232.html
LIFs on home node status
This appears when a LIF (network interface) is running on a port other than its home port.
The warning clears once the LIF is reverted to its home port. If it cannot be reverted to its home port, the ONTAP OS update cannot proceed.
One or more LIFs are not on the node, verify that all LIFs are on the home node before attempting NDU.
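To find and revert such LIFs before the update, the usual commands are along these lines (a sketch; vserver and LIF names depend on the environment):
::> network interface show -is-home false
::> network interface revert -vserver * -lif *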
Ensure that the NX-OS (cluster network switches), IOS (management network switches), and reference configuration file (RCF) software versions are compatible with the target Data ONTAP release.
Refer to http://mysupport.netapp.com/NOW/download/software/cm_switches/ and http://mysupport.netapp.com/NOW/download/software/cm_switches_ntap/ for more details.
Name Service Configuration DNS Check
Message
None of the configured DNS servers are reachable for the following Vservers: 名前. There might be other Vservers for which DNS servers are not reachable.
Solution
Delete the DNS server, or verify that the DNS status is "up". Delete the DNS configuration for the Vservers which do not have "dns" as a configured source in the ns-switch database.
NFS mounts
Message
This cluster is serving NFS clients. If NFS soft mounts are used, there is a possibility of frequent NFS timeouts and race conditions that can lead to data corruption during the upgrade
Solution
Use NFS hard mounts, if possible
CIFS status
Message
CIFS is currently in use. Any unprotected sessions may be affected with possible loss of data.
Solution
Stop all unprotected CIFS workloads before performing the update. To list the unprotected CIFS workloads, run the command: vserver cifs session show -continuously-available No, Partial
ontap9121::> cluster image show-update-progress
Estimated Elapsed
Update Phase Status Duration Duration
-------------------- ----------------- --------------- ---------------
Pre-update checks completed 00:10:00 00:00:38
Details:
Pre-update Check Status Error-Action
-------------------- ----------------- --------------------------------------
AMPQ Router and OK N/A
Broker Config
Cleanup
Aggregate online OK N/A
status and parity
check
Application OK N/A
Provisioning Cleanup
Autoboot Bootargs OK N/A
Status
Backend OK N/A
Configuration Status
Boot Menu Status OK N/A
CIFS compatibility OK N/A
status check
Capacity licenses OK N/A
install status check
Check For SP/BMC OK N/A
Connectivity To
Nodes
Check LDAP fastbind OK N/A
users using
unsecure connection.
Cloud keymanager OK N/A
connectivity check
Cluster health and OK N/A
eligibility status
Cluster/management OK N/A
switch check
Compatible New OK N/A
Image Check
Current system OK N/A
version check if it
is susceptible to
possible outage
during NDU
Data ONTAP Version OK N/A
and Previous
Upgrade Status
Data aggregates HA OK N/A
policy check
Disk status check OK N/A
for failed, broken
or non-compatibility
Duplicate Initiator OK N/A
Check
Encryption key OK N/A
migration status
check
External OK N/A
key-manager with
legacy KMIP client
check
External keymanager OK N/A
key server status
check
Infinite Volume OK N/A
availibility check
Logically over OK N/A
allocated DP
volumes check
Manual checks that Warning Warning: Manual validation checks
can be done using need to be performed. Refer to the
Upgrade ONTAP Upgrade Advisor Plan or the "What
documentation should I verify before I upgrade with
or without Upgrade Advisor" section
in the "Upgrade ONTAP" documentation
for the remaining validation checks
that need to be performed before
update. Failing to do so can result
in an update failure or an I/O
disruption.
Action: Refer to the Upgrade Advisor
Plan or the "What should I verify
before I upgrade with or without
Upgrade Advisor" section in the
"Upgrade ONTAP" documentation for the
remaining validation checks that need
to be performed before update.
MetroCluster OK N/A
configuration
status check for
compatibility
Minimum number of OK N/A
aggregate disks
check
NAE Aggregate and OK N/A
NVE Volume
Encryption Check
NDMP sessions check OK N/A
NFS mounts status OK N/A
check
NVMe over Fabrics OK N/A
license check
Name Service OK N/A
Configuration DNS
Check
Name Service OK N/A
Configuration LDAP
Check
Node to SP/BMC OK N/A
connectivity check
OKM/KMIP enabled OK N/A
systems - Missing
keys check
ONTAP API to REST Warning Warning: NetApp ONTAP API has been
transition warning used on this cluster for ONTAP data
storage management within the last 30
days. NetApp ONTAP API is approaching
end of availability.
Action: Transition your automation
tools from ONTAP API to ONTAP REST
API. For more details, refer to
CPC-00410 - End of availability:
ONTAPI
https://mysupport.netapp.com/info/
communications/ECMLP2880232.html
ONTAP Image OK N/A
Capability Status
OpenSSL 3.0.x OK N/A
upgrade validation
check
Openssh 7.2 upgrade OK N/A
validation check
Pre-Update OK N/A
Configuration
Verification
RDB Replica Health OK N/A
Check
Replicated database OK N/A
schema consistency
check
Running Jobs Status OK N/A
SAN and NVMe LIF OK N/A
Online Check
SAN compatibility OK N/A
for manual
configurability
check
SAN kernel agent OK N/A
status check
Secure Purge OK N/A
operation Check
Shelves and Sensors OK N/A
check
SnapLock Version OK N/A
Check
SnapMirror OK N/A
Synchronous
relationship status
check
SnapMirror OK N/A
compatibility
status check
Supported platform OK N/A
check
Target ONTAP OK N/A
release support for
FiberBridge 6500N
check
Upgrade Version OK N/A
Compatibility Status
Verify all bgp OK N/A
peer-groups are in
the up state
Verify that e0M is OK N/A
home to no LIFs
with high speed
services.
Volume Conversion OK N/A
In Progress Check
Volume move OK N/A
progress status
check
Volume online OK N/A
status check
iSCSI target portal OK N/A
groups status check
Overall Status Warning Warning
61 entries were displayed.
ontap9121::>
Summary
Client certificates for VM <仮想マシン名> with uuid 33dd4645-3dcf-4efc-b837-0b1cb3a8d7d9 are expiring in -19717 days and need to be regenerated.Upon Certificate expiry, the CVM-Guest VM communication will be broken.
Possible Cause
Description
NGT Client certificates have definite expiry period of 1000 days based on ISO standards.
Recommendation
NGT Client certificates need to be regenerated on the guest VMs. Refer to KB 10075 for further details
In my environment, however, "NGT Enabled: true" and "Communication Link Active: true" show that communication with NGT appears to be working.
nutanix@NTNX-20190b89-A-CVM:172.17.44.22:~$ ncli ngt get vm-id=<UUID>
VM Id : <UUID>
VM Name : <仮想マシン名>
NGT Enabled : true
Tools ISO Mounted : false
Vss Snapshot : true
File Level Restore : false
Communication Link Active : true
nutanix@NTNX-20190b89-A-CVM:172.17.44.22:~$
Check the firewall profiles with "netsh advfirewall show allprofiles".
Check the currently active profile with "netsh advfirewall show currentprofile".
Dump the rule settings with "netsh advfirewall firewall show rule name=all".
Check the shadow copies that currently exist with "vssadmin list shadows".
Check where the shadow copies are stored with "vssadmin list shadowstorage".
Check the volume IDs with "vssadmin list volumes".
Output the task list in XML format with "schtasks /query /xml" and examine the entries whose names start with "ShadowCopyVolume".
@echo off
set TIME2=%TIME: =0%
set LOGDATE=%DATE:~0,4%%DATE:~5,2%%DATE:~8,2%-%TIME2:~0,2%%TIME2:~3,2%
robocopy \\旧FS\share \\新FS\share /mir /copyall /R:0 /W:0 /LOG+:D:\LOGS\share-%LOGDATE%.txt
Applicable only to the installation of management nodes of the Enterprise edition.
Community Management Node Mode
Applicable only to the installation of management nodes of the Community edition.
Compute Node Mode
Applicable to the installation of all nodes except the management nodes, such as: Compute node; ImageStore, Ceph backup storage node, and Ceph backup storage mount node; Ceph primary storage node and Ceph primary storage mount node; PXE deployment server node, local backup server node, and remote backup server node in another data center.
Expert Mode
The difference between Enterprise Management Node Mode and Community Management Node Mode is described as being "terms of the license type, which decides the features available on the Cloud".
root@zstack137:~# ceph -v
ceph version 18.2.2 (e9fe820e7fffd1b7cde143a9f77653b73fcec748) reef (stable)
root@zstack137:~# pveversion
pve-manager/8.1.4/ec5affc9e41f1d79 (running kernel: 6.5.11-8-pve)
root@zstack137:~# pveceph pool ls
│ Name │ Size │ Min Size │ PG Num │ min. PG Num │ Optimal PG Num │ PG Autoscale Mode │ PG Autoscale Target Size │ PG Autoscale Target Ratio │ Crush Rule Name │ %-Used │ Used │
│ .mgr │ 3 │ 2 │ 1 │ 1 │ 1 │ on │ │ │ replicated_rule │ 3.08950029648258e-06 │ 1388544 │
│ cephfs_data │ 3 │ 2 │ 32 │ │ 32 │ on │ │ │ replicated_rule │ 0 │ 0 │
│ cephfs_metadata │ 3 │ 2 │ 32 │ 16 │ 16 │ on │ │ │ replicated_rule │ 4.41906962578287e-07 │ 198610 │
│ storagepool │ 3 │ 2 │ 128 │ │ 32 │ warn │ │ │ replicated_rule │ 0.0184257291257381 │ 8436679796 │
root@zstack137:~#
Check the autoscaler status with "ceph osd pool autoscale-status".
root@zstack137:~# ceph osd pool autoscale-status
POOL SIZE TARGET SIZE RATE RAW CAPACITY RATIO TARGET RATIO EFFECTIVE RATIO BIAS PG_NUM NEW PG_NUM AUTOSCALE BULK
.mgr 452.0k 3.0 449.9G 0.0000 1.0 1 on False
cephfs_data 0 3.0 449.9G 0.0000 1.0 32 on False
cephfs_metadata 66203 3.0 449.9G 0.0000 4.0 32 on False
storagepool 2681M 3.0 449.9G 0.0175 1.0 128 warn False
root@zstack137:~#
root@zstack137:~# ceph health
HEALTH_WARN 1 pools have too many placement groups
root@zstack137:~# ceph health detail
HEALTH_WARN 1 pools have too many placement groups
[WRN] POOL_TOO_MANY_PGS: 1 pools have too many placement groups
Pool storagepool has 128 placement groups, should have 32
root@zstack137:~#
root@zstack137:~# ceph -s
cluster:
id: 9e085d6a-77f3-41f1-8f6d-71fadc9c011b
health: HEALTH_WARN
1 pools have too many placement groups
services:
mon: 3 daemons, quorum zstack136,zstack135,zstack137 (age 3h)
mgr: zstack136(active, since 3h), standbys: zstack135
mds: 1/1 daemons up, 1 standby
osd: 9 osds: 9 up (since 3h), 9 in (since 3d)
data:
volumes: 1/1 healthy
pools: 4 pools, 193 pgs
objects: 716 objects, 2.7 GiB
usage: 8.3 GiB used, 442 GiB / 450 GiB avail
pgs: 193 active+clean
root@zstack137:~#
ceph pg dump | awk '
BEGIN { IGNORECASE = 1 }
/^PG_STAT/ { col=1; while($col!="UP") {col++}; col++ }
/^[0-9a-f]+\.[0-9a-f]+/ { match($0,/^[0-9a-f]+/); pool=substr($0, RSTART, RLENGTH); poollist[pool]=0;
up=$col; i=0; RSTART=0; RLENGTH=0; delete osds; while(match(up,/[0-9]+/)>0) { osds[++i]=substr(up,RSTART,RLENGTH); up = substr(up, RSTART+RLENGTH) }
for(i in osds) {array[osds[i],pool]++; osdlist[osds[i]];}
}
END {
printf("\n");
printf("pool :\t"); for (i in poollist) printf("%s\t",i); printf("| SUM \n");
for (i in poollist) printf("--------"); printf("----------------\n");
for (i in osdlist) { printf("osd.%i\t", i); sum=0;
for (j in poollist) { printf("%i\t", array[i,j]); sum+=array[i,j]; sumpool[j]+=array[i,j] }; printf("| %i\n",sum) }
for (i in poollist) printf("--------"); printf("----------------\n");
printf("SUM :\t"); for (i in poollist) printf("%s\t",sumpool[i]); printf("|\n");
}'
The script, which tallies the number of PGs each OSD holds for each pool, ran without problems.
root@zstack137:~# ceph pg dump | awk '
BEGIN { IGNORECASE = 1 }
/^PG_STAT/ { col=1; while($col!="UP") {col++}; col++ }
/^[0-9a-f]+\.[0-9a-f]+/ { match($0,/^[0-9a-f]+/); pool=substr($0, RSTART, RLENGTH); poollist[pool]=0;
up=$col; i=0; RSTART=0; RLENGTH=0; delete osds; while(match(up,/[0-9]+/)>0) { osds[++i]=substr(up,RSTART,RLENGTH); up = substr(up, RSTART+RLENGTH) }
for(i in osds) {array[osds[i],pool]++; osdlist[osds[i]];}
}
END {
printf("\n");
printf("pool :\t"); for (i in poollist) printf("%s\t",i); printf("| SUM \n");
for (i in poollist) printf("--------"); printf("----------------\n");
for (i in osdlist) { printf("osd.%i\t", i); sum=0;
for (j in poollist) { printf("%i\t", array[i,j]); sum+=array[i,j]; sumpool[j]+=array[i,j] }; printf("| %i\n",sum) }
for (i in poollist) printf("--------"); printf("----------------\n");
printf("SUM :\t"); for (i in poollist) printf("%s\t",sumpool[i]); printf("|\n");
}'
dumped all
pool : 3 2 1 4 | SUM
------------------------------------------------
osd.3 4 5 1 13 | 23
osd.8 4 6 0 12 | 22
osd.6 2 4 0 15 | 21
osd.5 6 4 0 16 | 26
osd.2 3 3 0 15 | 21
osd.1 4 3 0 10 | 17
osd.4 1 1 0 16 | 18
osd.0 5 2 0 10 | 17
osd.7 3 4 0 21 | 28
------------------------------------------------
SUM : 32 32 1 128 |
root@zstack137:~#
Is there too much variation from pool to pool?
There is a Chinese page called "ceph使用问题积累" (a collection of Ceph usage issues) that covers how to deal with "HEALTH_WARN: pools have too many placement groups" and "HEALTH_WARN: mons are allowing insecure global_id reclaim".
For the latter, the fix given there was "ceph config set mon auth_allow_insecure_global_id_reclaim false".
Before changing any module settings, check the current state with "ceph mgr module ls".
root@zstack137:~# ceph mgr module ls
MODULE
balancer on (always on)
crash on (always on)
devicehealth on (always on)
orchestrator on (always on)
pg_autoscaler on (always on)
progress on (always on)
rbd_support on (always on)
status on (always on)
telemetry on (always on)
volumes on (always on)
iostat on
nfs on
restful on
alerts -
influx -
insights -
localpool -
mirroring -
osd_perf_query -
osd_support -
prometheus -
selftest -
snap_schedule -
stats -
telegraf -
test_orchestrator -
zabbix -
root@zstack137:~#
The SUSE Enterprise Storage 7 documentation's Administration and Operations Guide, chapter "12 Determine the cluster state", lists a number of commands for checking cluster status.
root@zstack137:~# ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 450 GiB 442 GiB 8.3 GiB 8.3 GiB 1.85
TOTAL 450 GiB 442 GiB 8.3 GiB 8.3 GiB 1.85
--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
.mgr 1 1 449 KiB 2 1.3 MiB 0 140 GiB
cephfs_data 2 32 0 B 0 0 B 0 140 GiB
cephfs_metadata 3 32 35 KiB 22 194 KiB 0 140 GiB
storagepool 4 128 2.6 GiB 692 7.9 GiB 1.84 140 GiB
root@zstack137:~# ceph df detail
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 450 GiB 442 GiB 8.3 GiB 8.3 GiB 1.85
TOTAL 450 GiB 442 GiB 8.3 GiB 8.3 GiB 1.85
--- POOLS ---
POOL ID PGS STORED (DATA) (OMAP) OBJECTS USED (DATA) (OMAP) %USED MAX AVAIL QUOTA OBJECTS QUOTA BYTES DIRTY USED COMPR UNDER COMPR
.mgr 1 1 449 KiB 449 KiB 0 B 2 1.3 MiB 1.3 MiB 0 B 0 140 GiB N/A N/A N/A 0 B 0 B
cephfs_data 2 32 0 B 0 B 0 B 0 0 B 0 B 0 B 0 140 GiB N/A N/A N/A 0 B 0 B
cephfs_metadata 3 32 35 KiB 18 KiB 17 KiB 22 194 KiB 144 KiB 50 KiB 0 140 GiB N/A N/A N/A 0 B 0 B
storagepool 4 128 2.6 GiB 2.6 GiB 3.0 KiB 692 7.9 GiB 7.9 GiB 9.1 KiB 1.84 140 GiB N/A N/A N/A 0 B 0 B
root@zstack137:~#
For TOO_MANY_PGS, the documentation says the following:
TOO_MANY_PGS The number of PGs in use is above the configurable threshold of mon_pg_warn_max_per_osd PGs per OSD. This can lead to higher memory usage for OSD daemons, slower peering after cluster state changes (for example OSD restarts, additions, or removals), and higher load on the Ceph Managers and Ceph Monitors.
While the pg_num value for existing pools cannot be reduced, the pgp_num value can. This effectively co-locates some PGs on the same sets of OSDs, mitigating some of the negative impacts described above. The pgp_num value can be adjusted with:
So let's run "ceph osd pool set storagepool pgp_num 32" to change pgp_num from 128 to 32.
root@zstack137:~# ceph osd pool stats
pool .mgr id 1
nothing is going on
pool cephfs_data id 2
nothing is going on
pool cephfs_metadata id 3
nothing is going on
pool storagepool id 4
nothing is going on
root@zstack137:~# ceph osd pool get storagepool pgp_num
pgp_num: 128
root@zstack137:~# ceph osd pool set storagepool pgp_num 32
set pool 4 pgp_num to 32
root@zstack137:~# ceph osd pool get storagepool pgp_num
pgp_num: 125
root@zstack137:~# ceph osd pool get storagepool pgp_num
pgp_num: 119
root@zstack137:~#
It appears to change gradually.
root@zstack137:~# ceph -s
cluster:
id: 9e085d6a-77f3-41f1-8f6d-71fadc9c011b
health: HEALTH_WARN
Reduced data availability: 1 pg peering
1 pools have too many placement groups
1 pools have pg_num > pgp_num
services:
mon: 3 daemons, quorum zstack136,zstack135,zstack137 (age 5h)
mgr: zstack136(active, since 5h), standbys: zstack135
mds: 1/1 daemons up, 1 standby
osd: 9 osds: 9 up (since 5h), 9 in (since 3d); 2 remapped pgs
data:
volumes: 1/1 healthy
pools: 4 pools, 193 pgs
objects: 716 objects, 2.7 GiB
usage: 8.4 GiB used, 442 GiB / 450 GiB avail
pgs: 0.518% pgs not active
16/2148 objects misplaced (0.745%)
190 active+clean
2 active+recovering
1 remapped+peering
io:
recovery: 2.0 MiB/s, 0 objects/s
root@zstack137:~# ceph health
HEALTH_WARN Reduced data availability: 1 pg peering; 1 pools have too many placement groups; 1 pools have pg_num > pgp_num
root@zstack137:~# ceph health detail
HEALTH_WARN 1 pools have too many placement groups; 1 pools have pg_num > pgp_num
[WRN] POOL_TOO_MANY_PGS: 1 pools have too many placement groups
Pool storagepool has 128 placement groups, should have 32
[WRN] SMALLER_PGP_NUM: 1 pools have pg_num > pgp_num
pool storagepool pg_num 128 > pgp_num 32
root@zstack137:~#
After some time had passed:
root@zstack137:~# ceph health detail
HEALTH_WARN 1 pools have too many placement groups; 1 pools have pg_num > pgp_num
[WRN] POOL_TOO_MANY_PGS: 1 pools have too many placement groups
Pool storagepool has 128 placement groups, should have 32
[WRN] SMALLER_PGP_NUM: 1 pools have pg_num > pgp_num
pool storagepool pg_num 128 > pgp_num 32
root@zstack137:~# ceph pg dump | awk '
BEGIN { IGNORECASE = 1 }
/^PG_STAT/ { col=1; while($col!="UP") {col++}; col++ }
/^[0-9a-f]+\.[0-9a-f]+/ { match($0,/^[0-9a-f]+/); pool=substr($0, RSTART, RLENGTH); poollist[pool]=0;
up=$col; i=0; RSTART=0; RLENGTH=0; delete osds; while(match(up,/[0-9]+/)>0) { osds[++i]=substr(up,RSTART,RLENGTH); up = substr(up, RSTART+RLENGTH) }
for(i in osds) {array[osds[i],pool]++; osdlist[osds[i]];}
}
END {
printf("\n");
printf("pool :\t"); for (i in poollist) printf("%s\t",i); printf("| SUM \n");
for (i in poollist) printf("--------"); printf("----------------\n");
for (i in osdlist) { printf("osd.%i\t", i); sum=0;
for (j in poollist) { printf("%i\t", array[i,j]); sum+=array[i,j]; sumpool[j]+=array[i,j] }; printf("| %i\n",sum) }
for (i in poollist) printf("--------"); printf("----------------\n");
printf("SUM :\t"); for (i in poollist) printf("%s\t",sumpool[i]); printf("|\n");
}'
dumped all
pool : 3 2 1 4 | SUM
------------------------------------------------
osd.3 4 5 1 15 | 25
osd.8 4 6 0 16 | 26
osd.6 2 4 0 16 | 22
osd.5 6 4 0 4 | 14
osd.2 3 3 0 11 | 17
osd.1 4 3 0 13 | 20
osd.4 1 1 0 17 | 19
osd.0 5 2 0 20 | 27
osd.7 3 4 0 16 | 23
------------------------------------------------
SUM : 32 32 1 128 |
root@zstack137:~# ceph osd pool autoscale-status
POOL SIZE TARGET SIZE RATE RAW CAPACITY RATIO TARGET RATIO EFFECTIVE RATIO BIAS PG_NUM NEW PG_NUM AUTOSCALE BULK
.mgr 452.0k 3.0 449.9G 0.0000 1.0 1 on False
cephfs_data 0 3.0 449.9G 0.0000 1.0 32 on False
cephfs_metadata 66203 3.0 449.9G 0.0000 4.0 32 on False
storagepool 2681M 3.0 449.9G 0.0175 1.0 128 warn False
root@zstack137:~# pveceph pool ls
│ Name │ Size │ Min Size │ PG Num │ min. PG Num │ Optimal PG Num │ PG Autoscale Mode │ PG Autoscale Target Size │ PG Autoscale Target Ratio │ Crush Rule Name │ %-Used │ Used │
│ .mgr │ 3 │ 2 │ 1 │ 1 │ 1 │ on │ │ │ replicated_rule │ 3.09735719383752e-06 │ 1388544 │
│ cephfs_data │ 3 │ 2 │ 32 │ │ 32 │ on │ │ │ replicated_rule │ 0 │ 0 │
│ cephfs_metadata │ 3 │ 2 │ 32 │ 16 │ 16 │ on │ │ │ replicated_rule │ 4.43030785390874e-07 │ 198610 │
│ storagepool │ 3 │ 2 │ 128 │ │ 32 │ warn │ │ │ replicated_rule │ 0.018471721559763 │ 8436679796 │
root@zstack137:~#
Can pg_num be reduced as well?
root@zstack137:~# ceph osd pool get storagepool pg_num
pg_num: 128
root@zstack137:~# ceph osd pool set storagepool pg_num 32
set pool 4 pg_num to 32
root@zstack137:~# ceph osd pool get storagepool pg_num
pg_num: 128
root@zstack137:~# ceph osd pool get storagepool pg_num
pg_num: 124
root@zstack137:~#
It is decreasing gradually.
The status changed to HEALTH_OK.
root@zstack137:~# ceph osd pool get storagepool pg_num
pg_num: 119
root@zstack137:~# ceph health detail
HEALTH_OK
root@zstack137:~# pveceph pool ls
│ Name │ Size │ Min Size │ PG Num │ min. PG Num │ Optimal PG Num │ PG Autoscale Mode │ PG Autoscale Target Size │ PG Autoscale Target Ratio │ Crush Rule Name │ %-Used │ Used │
│ .mgr │ 3 │ 2 │ 1 │ 1 │ 1 │ on │ │ │ replicated_rule │ 3.10063592223742e-06 │ 1388544 │
│ cephfs_data │ 3 │ 2 │ 32 │ │ 32 │ on │ │ │ replicated_rule │ 0 │ 0 │
│ cephfs_metadata │ 3 │ 2 │ 32 │ 16 │ 16 │ on │ │ │ replicated_rule │ 4.43499772018185e-07 │ 198610 │
│ storagepool │ 3 │ 2 │ 117 │ │ 32 │ warn │ │ │ replicated_rule │ 0.0184909123927355 │ 8436679796 │
root@zstack137:~#
The PG_NUM shown by "ceph osd pool autoscale-status", on the other hand, is updated immediately.
root@zstack137:~# ceph osd pool autoscale-status
POOL SIZE TARGET SIZE RATE RAW CAPACITY RATIO TARGET RATIO EFFECTIVE RATIO BIAS PG_NUM NEW PG_NUM AUTOSCALE BULK
.mgr 452.0k 3.0 449.9G 0.0000 1.0 1 on False
cephfs_data 0 3.0 449.9G 0.0000 1.0 32 on False
cephfs_metadata 66203 3.0 449.9G 0.0000 4.0 32 on False
storagepool 2705M 3.0 449.9G 0.0176 1.0 32 warn False
root@zstack137:~#
root@zstack137:~# ceph health detail
HEALTH_WARN Reduced data availability: 2 pgs inactive, 2 pgs peering
[WRN] PG_AVAILABILITY: Reduced data availability: 2 pgs inactive, 2 pgs peering
pg 4.22 is stuck peering for 2d, current state peering, last acting [6,5,2]
pg 4.62 is stuck peering for 6h, current state peering, last acting [6,5,2]
root@zstack137:~#
After waiting a while for the change to finish, I checked the state again.
root@zstack137:~# ceph health detail
HEALTH_OK
root@zstack137:~# pveceph pool ls
│ Name │ Size │ Min Size │ PG Num │ min. PG Num │ Optimal PG Num │ PG Autoscale Mode │ PG Autoscale Target Size │ PG Autoscale Target Ratio │ Crush Rule Name │ %-Used │ Used │
│ .mgr │ 3 │ 2 │ 1 │ 1 │ 1 │ on │ │ │ replicated_rule │ 3.13595910483855e-06 │ 1388544 │
│ cephfs_data │ 3 │ 2 │ 32 │ │ 32 │ on │ │ │ replicated_rule │ 0 │ 0 │
│ cephfs_metadata │ 3 │ 2 │ 32 │ 16 │ 16 │ on │ │ │ replicated_rule │ 4.4855224246021e-07 │ 198610 │
│ storagepool │ 3 │ 2 │ 32 │ │ 32 │ warn │ │ │ replicated_rule │ 0.0186976287513971 │ 8436679796 │
root@zstack137:~# ceph -s
cluster:
id: 9e085d6a-77f3-41f1-8f6d-71fadc9c011b
health: HEALTH_OK
services:
mon: 3 daemons, quorum zstack136,zstack135,zstack137 (age 6h)
mgr: zstack136(active, since 6h), standbys: zstack135
mds: 1/1 daemons up, 1 standby
osd: 9 osds: 9 up (since 6h), 9 in (since 3d)
data:
volumes: 1/1 healthy
pools: 4 pools, 97 pgs
objects: 716 objects, 2.7 GiB
usage: 8.6 GiB used, 441 GiB / 450 GiB avail
pgs: 97 active+clean
root@zstack137:~# ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 450 GiB 441 GiB 8.7 GiB 8.7 GiB 1.94
TOTAL 450 GiB 441 GiB 8.7 GiB 8.7 GiB 1.94
--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
.mgr 1 1 449 KiB 2 1.3 MiB 0 137 GiB
cephfs_data 2 32 0 B 0 0 B 0 137 GiB
cephfs_metadata 3 32 35 KiB 22 194 KiB 0 137 GiB
storagepool 4 32 2.7 GiB 692 8.0 GiB 1.89 137 GiB
root@zstack137:~#
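storagepool is still in autoscale mode "warn", which is why the earlier pg_num mismatch surfaced as HEALTH_WARN instead of being corrected automatically. If you would rather let the autoscaler manage pg_num itself, the mode can be switched per pool; a minimal sketch of that alternative approach (not what was done above):
root@zstack137:~# ceph osd pool set storagepool pg_autoscale_mode on
root@zstack137:~# ceph osd pool autoscale-status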
[global]
<snip>
allow nt4 crypto = yes
reject md5 clients = no
server reject md5 schannel = no
server schannel = yes
server schannel require seal = no
<snip>
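After editing smb.conf on the Samba AD DC, the syntax can be checked and the service restarted; a minimal sketch, assuming the DC runs as the samba-ad-dc service (the unit name may differ by distribution):
$ testparm
$ sudo systemctl restart samba-ad-dc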
ontap91::> vserver cifs create -cifs-server svm91 -domain ADOSAKANA.LOCAL
In order to create an Active Directory machine account for the CIFS server, you
must supply the name and password of a Windows account with sufficient
privileges to add computers to the "CN=Computers" container within the
"ADOSAKANA.LOCAL" domain.
Enter the user name: administrator
Enter the password:
Warning: An account by this name already exists in Active Directory at
CN=SVM91,CN=Computers,DC=adosakana,DC=local.
If there is an existing DNS entry for the name SVM91, it must be
removed. Data ONTAP cannot remove such an entry.
Use an external tool to remove it after this command completes.
Ok to reuse this account? {y|n}: y
Error: command failed: Failed to create CIFS server SVM91. Reason:
create_with_lug: RPC: Unable to receive; errno = Connection reset by
peer; netid=tcp fd=17 TO=600.0s TT=0.119s O=224b I=0b CN=113/3 VSID=-3
127.0.0.1:766.
ontap91::>
ONTAP 9.1P22 simulator
ontap91::> vserver cifs create -cifs-server svm91 -domain ADOSAKANA.LOCAL
In order to create an Active Directory machine account for the CIFS server, you
must supply the name and password of a Windows account with sufficient
privileges to add computers to the "CN=Computers" container within the
"ADOSAKANA.LOCAL" domain.
Enter the user name: administrator
Enter the password:
Error: Machine account creation procedure failed
[ 56] Loaded the preliminary configuration.
[ 92] Successfully connected to ip 172.17.44.49, port 88 using
TCP
[ 107] Successfully connected to ip 172.17.44.49, port 389 using
TCP
[ 110] Unable to start TLS: Connect error
[ 110] Additional info:
[ 110] Unable to connect to LDAP (Active Directory) service on
sambaad.ADOSAKANA.LOCAL
**[ 110] FAILURE: Unable to make a connection (LDAP (Active
** Directory):ADOSAKANA.LOCAL), result: 7652
Error: command failed: Failed to create the Active Directory machine account
"SVM91". Reason: LDAP Error: Cannot establish a connection to the
server.
ontap91::>
ontap91::> vserver cifs create -cifs-server svm91 -domain ADOSAKANA.LOCAL
In order to create an Active Directory machine account for the CIFS server, you
must supply the name and password of a Windows account with sufficient
privileges to add computers to the "CN=Computers" container within the
"ADOSAKANA.LOCAL" domain.
Enter the user name: administrator
Enter the password:
Error: Machine account creation procedure failed
[ 61] Loaded the preliminary configuration.
[ 99] Successfully connected to ip 172.17.44.49, port 88 using
TCP
[ 168] Successfully connected to ip 172.17.44.49, port 389 using
TCP
[ 168] Entry for host-address: 172.17.44.49 not found in the
current source: FILES. Ignoring and trying next available
source
[ 172] Source: DNS unavailable. Entry for
host-address:172.17.44.49 not found in any of the
available sources
**[ 181] FAILURE: Unable to SASL bind to LDAP server using GSSAPI:
** Local error
[ 181] Additional info: SASL(-1): generic failure: GSSAPI Error:
Unspecified GSS failure. Minor code may provide more
information (Cannot determine realm for numeric host
address)
[ 181] Unable to connect to LDAP (Active Directory) service on
sambaad.ADOSAKANA.LOCAL (Error: Local error)
[ 181] Unable to make a connection (LDAP (Active
Directory):ADOSAKANA.LOCAL), result: 7643
Error: command failed: Failed to create the Active Directory machine account
"SVM91". Reason: LDAP Error: Local error occurred.
ontap91::>
ontap91::> vserver cifs create -cifs-server svm91 -domain ADOSAKANA.LOCAL
In order to create an Active Directory machine account for the CIFS server, you
must supply the name and password of a Windows account with sufficient
privileges to add computers to the "CN=Computers" container within the
"ADOSAKANA.LOCAL" domain.
Enter the user name: administrator
Enter the password:
Warning: An account by this name already exists in Active Directory at
CN=SVM91,CN=Computers,DC=adosakana,DC=local.
If there is an existing DNS entry for the name SVM91, it must be
removed. Data ONTAP cannot remove such an entry.
Use an external tool to remove it after this command completes.
Ok to reuse this account? {y|n}: y
Error: Machine account creation procedure failed
[ 13] Loaded the preliminary configuration.
[ 92] Created a machine account in the domain
[ 93] SID to name translations of Domain Users and Admins
completed successfully
[ 100] Modified account 'cn=SVM91,CN=Computers,dc=VM2,dc=ADOSAKANA
dc=LOCAL'
[ 101] Successfully connected to ip 172.17.44.49, port 88 using
TCP
[ 113] Successfully connected to ip 172.17.44.49, port 464 using
TCP
[ 242] Kerberos password set for 'SVM91$@ADOSAKANA.LOCAL' succeeded
[ 242] Set initial account password
[ 277] Successfully connected to ip 172.17.44.49, port 445 using
TCP
[ 312] Successfully connected to ip 172.17.44.49, port 88 using
TCP
[ 346] Successfully authenticated with DC
sambaad.ADOSAKANA.LOCAL
[ 366] Unable to connect to NetLogon service on
sambaad.ADOSAKANA.LOCAL (Error:
RESULT_ERROR_GENERAL_FAILURE)
**[ 366] FAILURE: Unable to make a connection
** (NetLogon:ADOSAKANA.LOCAL), result: 3
[ 366] Unable to make a NetLogon connection to
sambaad.ADOSAKANA.LOCAL using the new machine account
Error: command failed: Failed to create the Active Directory machine account
"SVM91". Reason: general failure.
ontap91::>