If all servers go offline at the same time (for example, during a power outage), the cluster must be re-created by following a crash-recovery procedure.
1. Check each node's grastate.dat for the “safe_to_bootstrap” value. If any node has a value of 1, that is the server you start first; if they all show 0, continue with the steps below.
cat /var/lib/mysql/grastate.dat
# GALERA saved state
version: 2.1
uuid: d89c79f4-b308-11e9-9931-8f96b3cc57b2
seqno: -1
safe_to_bootstrap: 0
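If the nodes are reachable over SSH, you can collect the flag from all of them in one pass (a quick sketch; the srv-db1 through srv-db3 hostnames are the example nodes used below and are assumed to resolve from wherever you run this):

for host in srv-db1 srv-db2 srv-db3; do
  echo "== $host =="
  ssh "$host" "grep safe_to_bootstrap /var/lib/mysql/grastate.dat"
done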
2. If all nodes show an invalid seqno of -1 and none has “safe_to_bootstrap” set to 1, temporarily start the database service on each server with the recovery option so you can read the number reported as the “Recovered position”:
sudo -u mysql mysqld --wsrep-recover
srv-db1 shows position 31:
2019-08-05 18:53:55 0 [Note] WSREP: Recovered position: d89c79f4-b308-11e9-9931-8f96b3cc57b2:31
srv-db2 shows position 29:
2019-08-05 18:49:16 0 [Note] WSREP: Recovered position: d89c79f4-b308-11e9-9931-8f96b3cc57b2:29
srv-db3 shows position 30:
2019-08-05 18:49:23 0 [Note] WSREP: Recovered position: d89c79f4-b308-11e9-9931-8f96b3cc57b2:30
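If the recovery output is too noisy to read, you can filter for just the relevant line (a sketch; this assumes mysqld prints the message to the console as in the output above, rather than only to the error log):

sudo -u mysql mysqld --wsrep-recover 2>&1 | grep "Recovered position"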
3. In this example, srv-db1 has the highest transaction number (31) and should be the first node started in the cluster. Edit its grastate.dat and change “safe_to_bootstrap” to 1:
sed -i 's/safe_to_bootstrap: 0/safe_to_bootstrap: 1/' /var/lib/mysql/grastate.dat
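Before bootstrapping, it is worth confirming the edit actually took effect:

grep safe_to_bootstrap /var/lib/mysql/grastate.dat
safe_to_bootstrap: 1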
4. Now follow the normal procedure for starting a new Galera cluster, using this server as the bootstrap node.
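The exact bootstrap command varies by distribution, so check which applies to your install. For example, recent MariaDB packages ship a galera_new_cluster wrapper, while the generic approach is to pass --wsrep-new-cluster to the daemon:

galera_new_cluster
# or, on installs without the wrapper script:
sudo -u mysql mysqld --wsrep-new-cluster

Once the first node is running, start the remaining nodes with the normal service command (for example, systemctl start mariadb); they should rejoin the cluster and sync their state from the bootstrap node.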