GPFS commands are generally located in the directory /usr/lpp/mmfs/bin:

1. View GPFS cluster status

[root@db2ps02 ~]# /usr/lpp/mmfs/bin/mmlscluster

GPFS cluster information
========================
  GPFS cluster name:         db2cluster_20181115111725.wachid.web.id
  GPFS cluster id:           15203668446759678806
  GPFS UID domain:           db2cluster_20181115111725.wachid.web.id
  Remote shell command:      /var/db2/db2ssh/db2locssh
  Remote file copy command:  /var/db2/db2ssh/db2scp
  Repository type:           server-based

GPFS cluster configuration servers:
-----------------------------------
  Primary server:    db2ps01.wachid.web.id
  Secondary server:  db2ps02.wachid.web.id

 Node  Daemon node name      IP address      Admin node name       Designation
-------------------------------------------------------------------------------
   1   db2ps01.wachid.web.id  192.168.10.192  db2ps02.wachid.web.id  quorum-manager
   2   db2ps02.wachid.web.id  192.168.10.193  db2ps01.wachid.web.id  quorum-manager

[root@db2ps02 ~]#

2. Delete the GPFS domain

purescale141:/usr/lpp/mmfs/bin # /db2/bin/db2cluster -cfs -delete -domain db2cluster_20120710105156.site
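If the exact domain name is not known, it can be listed first. A hedged example, using the same db2cluster path as above (the path depends on where your DB2 copy is installed):

/db2/bin/db2cluster -cfs -list -domain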

3. Start GPFS cluster

./db2cluster -cfs -start
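The same command can also be issued with a full path and the -all option to start GPFS on every host at once. A sketch, assuming the DB2 copy is installed under /opt/IBM/db2/V9.8fp5 as in the later examples:

/opt/IBM/db2/V9.8fp5/bin/db2cluster -cfs -start -all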

4. View the cluster nodes

/usr/lpp/mmfs/bin:

mmlsnode

5. View the status of each node

[root@db2ps02 ~]# /usr/lpp/mmfs/bin/mmgetstate -a

 Node number  Node name        GPFS state
------------------------------------------
       1      db2ps02          active
       2      db2ps01          active
[root@db2ps02 ~]# /usr/lpp/mmfs/bin/mmstartup -a
Fri Jul 14 17:08:54 WIB 2023: mmstartup: Starting GPFS ...
db2ps01.wachid.web.id:  The GPFS subsystem is already active.
db2ps02.wachid.web.id:  The GPFS subsystem is already active.
[root@db2ps02 ~]#

6. Add nodes

First add suse2 (whose shared disk sdb will be used) as another node of the GPFS cluster. The command is as follows:

suse1:/usr/lpp/mmfs/bin # mmaddnode -N suse2:quorum-manager

/usr/lpp/mmfs/bin/mmaddnode -N purescale140:quorum-manager
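After adding the node, it is a good idea to confirm that it appears in the cluster and reaches the active state. A quick check with the commands already shown above:

/usr/lpp/mmfs/bin/mmlscluster
/usr/lpp/mmfs/bin/mmgetstate -a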

7. Modify the cluster settings

Then register suse2 as the secondary configuration server. The command is as follows:

mmchcluster -s suse2.site

/usr/lpp/mmfs/bin/mmchcluster -s purescale140.site

// -s sets the secondary configuration server

Usage: mmchcluster {[-p PrimaryServer] [-s SecondaryServer]}

Or

mmchcluster -p LATEST

Or

mmchcluster {[-r RemoteShellCommand] [-R RemoteFileCopyCommand]}

Or

mmchcluster -C ClusterName
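As an illustration of the -r/-R options, the cluster in step 1 uses the DB2 ssh wrappers as its remote shell and remote file copy commands. A hedged example that sets them (paths taken from the mmlscluster output above; adjust to your environment):

/usr/lpp/mmfs/bin/mmchcluster -r /var/db2/db2ssh/db2locssh -R /var/db2/db2ssh/db2scp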

8. Resolve the GPFS license problem

/db2/bin/db2cluster -cfs -add -license

9. Shut down all GPFS nodes

mmshutdown -a

10. Start all GPFS nodes

mmstartup -a
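A typical full restart of the file system layer, assuming the default GPFS binary path (in a pureScale environment the DB2 instance should be stopped first), would look like:

/usr/lpp/mmfs/bin/mmshutdown -a
/usr/lpp/mmfs/bin/mmstartup -a
/usr/lpp/mmfs/bin/mmgetstate -a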

11. Delete the secondary node

mmchcluster -s ""

mmchnode --nonquorum -N suse2.site

mmdelnode -N suse2.site

12. Forcibly delete the primary node

mmdelnode -f

13. Commands to view the GPFS nodes and their status

suse1:/usr/lpp/mmfs/bin/mmlscluster

mmlsnode

mmgetstate -a

14. Check the installed GPFS version and the version shipped in the installation media

root@coralpib269:/devinst/db2_v98fp5/aix64/s120605/ese_dsf/db2/aix/gpfs> db2ckgpfs -v install

3.3.0.14

root@coralpib269:/devinst/db2_v98fp5/aix64/s120605/ese_dsf/db2/aix/gpfs> db2ckgpfs -v media

3.4.0.13
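A small shell sketch that compares the two levels to decide whether a GPFS update is needed (run from the gpfs directory of the installation media, as in the example above):

installed=$(db2ckgpfs -v install)
media=$(db2ckgpfs -v media)
if [ "$installed" != "$media" ]; then
    echo "GPFS update available: $installed -> $media"
fi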

15. Put the GPFS resources into maintenance mode

Run the following command on any host:


DB2DIR/bin/db2cluster -cfs -enter -maintenance -all

For example:

root@coralpib269:/opt/IBM/db2/V9.8_SB28978/bin> db2cluster -cfs -enter -maintenance -all

The shared file system has successfully entered maintenance mode.

If the GPFS resources were put into maintenance mode, they must be returned to normal mode after the installation is complete. Before running the following command, make sure the SA MP resources have already been returned to normal mode. On any host in the cluster, run the following command:

DB2DIR/bin/db2cluster -cfs -exit -maintenance -all

root@coralpib269:/opt/IBM/db2/V9.8fp5/bin> db2cluster -cfs -exit -maintenance -all

The shared file system successfully exited from maintenance mode.
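The cluster manager (SA MP) side has a matching pair of maintenance options on the db2cluster command. A hedged example of taking it back out of maintenance before the file system, assuming the same DB2DIR:

DB2DIR/bin/db2cluster -cm -exit -maintenance -all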

16. Commit changes to GPFS

The DB2 cluster administrator must commit the changes to the DB2 cluster so that they take effect. On any host, run the following command:

DB2DIR/bin/db2cluster -cfs -commit

cd /opt/IBM/db2/V9.8fp5/bin

root@coralpib269:/opt/IBM/db2/V9.8fp5/bin> db2cluster -cfs -commit

The shared file system cluster has been successfully updated to version ‘3.4.0.13’.
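The committed level can be cross-checked against the installed GPFS code with the check command from step 14; it should now report the new level (3.4.0.13 in this example):

db2ckgpfs -v install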

17. Uninstall GPFS normally

Manually cleaning a DB2 managed clustered file system

This topic guides you through the steps required to manually clean a DB2 managed clustered file system.

About this task:

Using the db2idrop -g command to remove the DB2 pureScale Feature from your environment removes the GPFS cluster on all hosts except the host on which the db2idrop command was run.

Use this procedure to remove the GPFS file system and cluster on the remaining host.

All data on the GPFS file system will be lost.

After the db2idrop command has completed, the GPFS cluster is left only on the installation-initiating host (IIH).

Manual cleanup is only required on the host acting as the IIH.

Procedure

17.1 List the existing GPFS file systems using the following command:

DB2DIR/bin/db2cluster -cfs -list -filesystem

Where DB2DIR represents the installation location of your DB2 copy.

The output of this command should be similar to the following:

FILE SYSTEM NAME    MOUNT_POINT
---------------------------------------------
db2fs1              /db2sd_20091027220651

17.2 Stop the entire GPFS cluster:

db2cluster -cfs -stop -all

17.3 Set the GPFS quorum type from tiebreaker to majority:

db2cluster -cfs -set -tiebreaker -majority

17.4 Start the GPFS cluster:

db2cluster -cfs -start -all

17.5 To ensure there is no data on the file system before deleting it, mount the file system:

mount /db2sd_20091027220651

17.6 Delete the GPFS file system:

db2cluster -cfs -delete -filesystem db2fs1

The output of this command should be similar to the following: 

The file system ‘db2fs1’ has been successfully deleted.

All cluster configurations have been completed successfully.

17.7 List the GPFS domain name:

db2cluster -cfs -list -domain

The output of this command should be similar to the following:

Domain Name: db2cluster_20091027220622.ca.ibm.com

17.8 Stop the GPFS cluster:

db2cluster -cfs -stop -all

17.9 Delete the GPFS cluster:

db2cluster -cfs -delete -domain db2cluster_20091027220622.ca.ibm.com

The output of this command should be similar to the following:

Deleting the domain db2cluster_20091027220622.ca.ibm.com from the cluster was successful.

17.10 After removing the GPFS cluster and file systems, delete the GPFS_CLUSTER and DEFAULT_INSTPROF variable records from the Global Registry:

db2greg -delvarrec service=GPFS_CLUSTER,variable=NAME,installpath=-

db2greg -delvarrec service=DEFAULT_INSTPROF,variable=DEFAULT,installpath=-
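To double-check that the records are gone, the Global Registry can be dumped and filtered; a small sketch (no output means both records were removed):

db2greg -dump | grep -E 'GPFS_CLUSTER|DEFAULT_INSTPROF'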

18. Uninstall GPFS forcefully

18.1 Set the quorum type to majority:

db2cluster -cfs -set -tiebreaker -majority

18.2 Unmount the file system.

18.3 Delete the file system:

db2cluster -cfs -delete -filesystem db2fs1

// This command checks whether there are still files in the file system, so the file system needs to be mounted beforehand.

18.4 Delete the domain:

db2cluster -cfs -delete -domain db2cluster_20091027220622.ca.ibm.com
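Putting the forced cleanup together, a hedged end-to-end sketch; the file system name, mount point, and domain name are the ones from the example above and will differ on your system:

db2cluster -cfs -set -tiebreaker -majority
umount /db2sd_20091027220651
db2cluster -cfs -delete -filesystem db2fs1
db2cluster -cfs -delete -domain db2cluster_20091027220622.ca.ibm.com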

Reference:

http://db2luwacademy.blogspot.com/2021/05/tutorial-part-2-start-stop-pure-scale.html
