Restore a Sharded Cluster from a Snapshot¶
When you restore a cluster from a snapshot, Cloud Manager provides you with restore files for the selected restore point.
To learn about the restore process, see Restore Overview.
- FCV of 4.0 or earlier
- FCV of 4.2 or later
Considerations¶
Review change to BinData BSON sub-type¶
The BSON specification changed the default subtype for the BSON binary datatype (BinData) from 2 to 0. Some binary data stored in a snapshot may be BinData subtype 2. The backup process automatically detects and converts snapshot data in BinData subtype 2 to BinData subtype 0. If your application code expects BinData subtype 2, you must update your application code to work with BinData subtype 0.
See also
The notes on the BSON specification explain the specifics of this change.
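As a post-restore sanity check, you can inspect the subtype your application now reads. A minimal mongosh sketch, assuming a hypothetical collection myColl with a binary field payload:

```javascript
// Find one document with a binary value and inspect its subtype.
// After conversion, expect subtype 0 (generic) rather than legacy subtype 2.
const doc = db.myColl.findOne({ payload: { $type: "binData" } })
print(doc.payload.sub_type)
```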
Restore using settings given in restoreInfo.txt¶
The backup restore file includes a metadata file named restoreInfo.txt. This file captures the options the database used when the snapshot was taken. The database must be run with the listed options after it has been restored. This file contains:
- Group name
- Replica Set name
- Cluster ID (if applicable)
- Snapshot timestamp (as a BSON Timestamp at UTC)
- Restore timestamp (as a BSON Timestamp at UTC)
- Last Oplog applied (as a BSON Timestamp at UTC)
- MongoDB version
- Storage engine type
- mongod startup options used on the database when the snapshot was taken
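For illustration, a sketch of the shape of a restoreInfo.txt file; every value below is a placeholder rather than real output:

```
Restore Information
Group Name: <projectName>
Replica Set: <replicaSetName>
Cluster Id: <clusterId>
Snapshot timestamp: Timestamp(<seconds>, <increment>)
Restore timestamp: Timestamp(<seconds>, <increment>)
Last Oplog Applied: Timestamp(<seconds>, <increment>)
MongoDB Version: <version>
Storage Engine: <storageEngine>
mongod options: <startup options>
```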
Snapshots when Agent Cannot Stop Balancer¶
Cloud Manager displays a warning next to cluster snapshots taken while the balancer is enabled. If you restore from such a snapshot, you run the risk of lost or orphaned data. For more information, see Snapshots when Agent Cannot Stop Balancer.
Backup Considerations¶
Databases running any FCV must fulfill the backup considerations appropriate to that FCV.
Prerequisites¶
Disable Client Requests to MongoDB during Restore¶
You must ensure that the MongoDB deployment does not receive client requests during restoration. You must either:
- Restore to new systems with new hostnames and reconfigure your application code once the new deployment is running, or
- Ensure that the MongoDB deployment will not receive client requests while you restore data.
Restore a Snapshot¶
- Automatic Restore
- Manual Restore
To have Cloud Manager automatically restore the snapshot:
Click Backup, then the Overview tab.¶
Click the deployment, then click Restore or Download.¶
Select the restore point.¶
Choose the point from which you want to restore your backup.
| Restore Type | Description | Action |
|---|---|---|
| Snapshot | Allows you to choose one stored snapshot. | Select an existing snapshot to restore. |
| Point In Time | Allows you to choose a date and time as your restore time objective for your snapshot. By default, the Oplog Store stores 24 hours of data. | Select a Date and Time. |

Example

If you select 12:00, the last operation in the restore is 11:59:59 or earlier.

Important

If you are restoring a sharded cluster that runs an FCV of 4.0 or earlier, you must enable cluster checkpoints to perform a PIT restore on a sharded cluster. If no checkpoints that include your date and time are available, Cloud Manager asks you to choose another point in time. You cannot perform a PIT restore that covers any time prior to the latest backup resync. For the conditions that cause a resync, see Resync a Backup.

Click Next.
If you are restoring a sharded cluster that runs an FCV of 4.0 or earlier and you chose Point In Time:

- A list of Checkpoints closest to the time you selected appears.
- To start your point-in-time restore, you may:
  - Choose one of the listed checkpoints, or
  - Click Choose another point in time to remove the list of checkpoints and select another date and time from the menus.
Choose to restore the files to another cluster.¶
Click Choose Cluster to Restore to.
Complete the following fields:
Field Action Project Select a project to which you want to restore the snapshot. Cluster to Restore to Select a cluster to which you want to restore the snapshot.
Cloud Manager must manage the target sharded cluster.
Warning
Automation removes all existing data from the cluster. All backup data and snapshots for the existing cluster are preserved.
Click Restore.
Cloud Manager notes how much storage space the restore requires in its console.
Click Restore.¶
Click Backup, then the Overview tab.¶
Click the deployment, then click Restore or Download.¶
Select the restore point.¶
Choose the point from which you want to restore your backup.
| Restore Type | Description | Action |
|---|---|---|
| Snapshot | Allows you to choose one stored snapshot. | Select an existing snapshot to restore. |
| Point In Time | Creates a custom snapshot that includes all operations up to but not including the selected time. By default, the Oplog Store stores 24 hours of data. | Select a Date and Time. |
| Oplog Timestamp | Creates a custom snapshot that includes all operations up to and including the entered Oplog timestamp. The Oplog Timestamp contains two fields: Timestamp (the number of seconds that have elapsed since the UNIX epoch) and Increment (the order of the operation applied in that second, as a 32-bit ordinal). | Type an Oplog Timestamp and Increment. Run a query against local.oplog.rs on your replica set to find the desired timestamp. |

Example

If you select 12:00, the last operation in the restore is 11:59:59 or earlier.

Important

In FCV 4.0, you cannot perform a PIT restore that covers any time prior to the latest backup resync. For the conditions that cause a resync, see Resync a Backup. This note does not apply to FCV 4.2 or later.

Click Next.
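To locate a suitable timestamp, you can inspect the oplog directly. A minimal mongosh sketch; the target time below is a placeholder:

```javascript
// Find the most recent oplog entries at or before the target time.
db.getSiblingDB("local").oplog.rs.find(
  { ts: { $lte: Timestamp(1609459199, 0) } },  // placeholder target time
  { ts: 1, op: 1, ns: 1 }
).sort({ ts: -1 }).limit(5)
```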
If you are restoring a sharded cluster that runs an FCV of 4.0 or earlier and you chose Point In Time:

- A list of Checkpoints closest to the time you selected appears.
- To start your point-in-time restore, you may:
  - Choose one of the listed checkpoints, or
  - Click Choose another point in time to remove the list of checkpoints and select another date and time from the menus.
- Once you have selected a checkpoint, apply the oplog to this snapshot to bring your snapshot to the date and time you selected. The oplog is applied for all operations up to but not including the selected time.
Click Download to restore the files manually.¶
Configure the snapshot download.¶
Configure the following download options:
| Field | Action |
|---|---|
| Pull Restore Usage Limit | Select how many times the link can be used. If you select No Limit, the link is re-usable until it expires. |
| Restore Link Expiration (in hours) | Select the number of hours until the link expires. The default value is 1. The maximum value is the number of hours until the selected snapshot expires. |

Click Finalize Request.
If you use 2FA, Cloud Manager prompts you for your 2FA code. Enter your 2FA code, then click Finalize Request.
Retrieve the snapshots.¶
Cloud Manager creates links to the snapshot. By default, these links are available for an hour and can be used just once.
To download the snapshots:
- If you closed the restore panel, click Backup, then Restore History.
- When the restore job completes, click the (get link) that appears for each shard and for one of the config servers.
- Click:
- The copy button to the right of the link to copy the link to use it later, or
- Download to download the snapshot immediately.
Extra step for point-in-time restores

For point-in-time and oplog timestamp restores, additional instructions are shown. The final step shows the full command you must run using the mongodb-backup-restore-util. It includes all of the necessary options to ensure a full restore.

Select and copy the mongodb-backup-restore-util command provided under Run Binary with PIT Options.
Restore the snapshot data files to the destination host.¶
Extract the snapshot archive for the config server and for each shard to a temporary location.
Example
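A sketch with hypothetical archive names; substitute the file names of the snapshots you downloaded:

```bash
# Extract each downloaded snapshot archive to its own temporary directory.
tar -xvzf myCluster-configRS-<snapshotTimestamp>.tar.gz -C /tmp/restore/configRS
tar -xvzf myCluster-shard_0-<snapshotTimestamp>.tar.gz -C /tmp/restore/shard_0
```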
Run the MongoDB Backup Restore Utility (Point-in-Time Restore Only).¶
Download the MongoDB Backup Restore Utility to your host.
Note
If you closed the restore panel, click Backup, then More and then Download MongoDB Backup Restore Utility.
Start a mongod instance without authentication enabled, using the extracted snapshot directory as the data directory. An example follows the warning below.
Warning
The MongoDB Backup Restore Utility doesn’t support authentication, so you can’t start this temporary database with authentication.
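A minimal sketch, assuming the snapshot was extracted to /tmp/restore/shard_0 and that port 27017 is free on this host:

```bash
mongod --port 27017 --dbpath /tmp/restore/shard_0
```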
Run the MongoDB Backup Restore Utility on your destination host. Run it once for the config server and each shard.
Pre-configured mongodb-backup-restore-util command

Cloud Manager provides the mongodb-backup-restore-util with the appropriate options for your restore on the restore panel under Run Binary with PIT Options. You should copy the mongodb-backup-restore-util command provided in Cloud Manager.

The mongodb-backup-restore-util command uses the following options. A ✓ indicates that if you copied the mongodb-backup-restore-util command provided in Cloud Manager, the option is pre-configured:

| | Option | Necessity | Description |
|---|---|---|---|
| ✓ | --host | Required | Provide the hostname, FQDN, IPv4 address, or IPv6 address for the host that serves the mongod to which the oplog should be applied. |
| ✓ | --port | Required | Provide the port for the host that serves the mongod to which the oplog should be applied. |
| ✓ | --opStart | Required | Provide the BSON timestamp for the first oplog entry you want to include in the restore. This value must be less than or equal to the --opEnd value. |
| ✓ | --opEnd | Required | Provide the BSON timestamp for the last oplog entry you want to include in the restore. This value cannot be greater than the end of the oplog. |
| ✓ | --logFile | Optional | Provide a path, including file name, where the MBRU log is written. |
| | --oplogSourceAddr | Required | Provide the URL to the Cloud Manager resource endpoint. |
| ✓ | --apiKey | Required | Provide your Cloud Manager Agent API Key. |
| ✓ | --groupId | Required | Provide the group ID. |
| ✓ | --rsId | Required | Provide the replica set ID. |
| ✓ | --whitelist | Optional | Provide a list of databases and/or collections to which you want to limit the restore. |
| | --blacklist | Optional | Provide a list of databases and/or collections that you want to exclude from the restore. |
| | --seedReplSetMember | Optional | Use if you need a replica set member to re-create the oplog collection and seed it with the correct timestamp. Requires --oplogSizeMB and --seedTargetPort. |
| | --oplogSizeMB | Conditional | Provide the oplog size in MB. Required if --seedReplSetMember is set. |
| | --seedTargetPort | Conditional | Provide the port for the replica set's primary. This may be different from the ephemeral port used. Required if --seedReplSetMember is set. |
| | --ssl | Conditional | Use if you need TLS/SSL to apply the oplog to the mongod. Requires --sslCAFile and --sslPEMKeyFile. |
| | --sslCAFile | Conditional | Provide the path to the Certificate Authority file. Required if --ssl is set. |
| | --sslPEMKeyFile | Conditional | Provide the path to the PEM certificate file. Required if --ssl is set. |
| | --sslPEMKeyFilePwd | Conditional | Provide the password for the PEM certificate file specified in --sslPEMKeyFile. Required if --ssl is set and that PEM key file is encrypted. |
| | --sslClientCertificateSubject | Optional | Provide the Client Certificate Subject or Distinguished Name (DN) for the target MongoDB process. |
| | --sslRequireValidServerCertificates | Optional | Set a flag indicating if the tool should validate certificates that the target MongoDB process presents. |
| | --sslServerClientCertificate | Optional | Provide the absolute path to the Client Certificate file to use for connecting to the Cloud Manager host. |
| | --sslServerClientCertificatePassword | Conditional | Provide the password for the Client Certificate file used to connect to the Cloud Manager host. Required if --sslServerClientCertificate is set and that certificate is encrypted. |
| | --sslRequireValidMMSBackupServerCertificate | Optional | Set a flag indicating if valid certificates are required when contacting the Cloud Manager host. Default value is true. |
| | --sslTrustedMMSBackupServerCertificate | Optional | Provide the absolute path to the trusted Certificate Authority certificates in PEM format for the Cloud Manager host. If this flag is not provided, the system Certificate Authority is used. |
| | --httpProxy | Optional | Provide the URL of an HTTP proxy server the tool can use. |
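For orientation only, a sketch of what a completed invocation can look like; every value is a placeholder, and the command Cloud Manager generates under Run Binary with PIT Options is the one to actually run:

```bash
./mongodb-backup-restore-util --host <targetHost> --port <targetPort> \
  --opStart <opStartTimestamp> --opEnd <opEndTimestamp> \
  --oplogSourceAddr <cloudManagerEndpointURL> \
  --apiKey <agentApiKey> --groupId <groupId> --rsId <replicaSetId>
```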
Copy the completed snapshots to restore to other hosts.¶
- For the config server, copy the restored config server database to the working database path of each replica set member.
- For each shard, copy the restored shard database to the working database path of each replica set member.
Unmanage the Sharded Cluster.¶
Before attempting to restore the data manually, remove the sharded cluster from Automation.
Restore the Sharded Cluster Manually.¶
Follow the tutorial from the MongoDB Manual to restore the sharded cluster.
Reimport the Sharded Cluster.¶
To manage the sharded cluster with automation again, import the sharded cluster back into Cloud Manager.
Start the Sharded Cluster Balancer.¶
Once a restore completes, the sharded cluster balancer is turned off. To start the balancer:
- Click Deployment.
- Click the ellipsis (...) icon on the card for your desired sharded cluster.
- Click Manage Balancer.
- Toggle to Yes.
- Click the pencil icon to the right of Set the Balancer State.
- Toggle to Yes.
- Click Save.
- Click Review & Deploy to save the changes.
- Automatic Restore
- Manual Restore
To have Cloud Manager automatically restore the snapshot:
Click Backup, then the Overview tab.¶
Click the deployment, then click Restore or Download.¶
Select the restore point.¶
Choose the point from which you want to restore your backup.
| Restore Type | Description | Action |
|---|---|---|
| Snapshot | Allows you to choose one stored snapshot. | Select an existing snapshot to restore. |
| Point In Time | Creates a custom snapshot that includes all operations up to but not including the selected time. By default, the Oplog Store stores 24 hours of data. | Select a Date and Time. |
| Oplog Timestamp | Creates a custom snapshot that includes all operations up to and including the entered Oplog timestamp. The Oplog Timestamp contains two fields: Timestamp (the number of seconds that have elapsed since the UNIX epoch) and Increment (the order of the operation applied in that second, as a 32-bit ordinal). | Type an Oplog Timestamp and Increment. Run a query against local.oplog.rs on your replica set to find the desired timestamp. |

Example

If you select 12:00, the last operation in the restore is 11:59:59 or earlier.

Important

In FCV 4.0, you cannot perform a PIT restore that covers any time prior to the latest backup resync. For the conditions that cause a resync, see Resync a Backup. This note does not apply to FCV 4.2 or later.

Click Next.
Choose to restore the files to another cluster.¶
Click Choose Cluster to Restore to.
Complete the following fields:
Field Action Project Select a project to which you want to restore the snapshot. Cluster to Restore to Select a cluster to which you want to restore the snapshot.
Cloud Manager must manage the target sharded cluster.
Warning
Automation removes all existing data from the cluster. All backup data and snapshots for the existing cluster are preserved.
Click Restore.
Cloud Manager notes how much storage space the restore requires in its UI.
Click Restore.¶
Consider Automatic Restore
This procedure involves a large number of steps, some of which have severe security implications. Unless you need to restore to a deployment that Cloud Manager doesn't manage, consider an automated restore.
To restore a snapshot yourself:
Click Backup, then the Overview tab.¶
Click the deployment, then click Restore or Download.¶
Select the Restore Point.¶
Choose the point from which you want to restore your backup.
| Restore Type | Description | Action |
|---|---|---|
| Snapshot | Allows you to choose one stored snapshot. | Select an existing snapshot to restore. |
| Point In Time | Creates a custom snapshot that includes all operations up to but not including the selected time. By default, the Oplog Store stores 24 hours of data. | Select a Date and Time. |
| Oplog Timestamp | Creates a custom snapshot that includes all operations up to and including the entered Oplog timestamp. The Oplog Timestamp contains two fields: Timestamp (the number of seconds that have elapsed since the UNIX epoch) and Increment (the order of the operation applied in that second, as a 32-bit ordinal). | Type an Oplog Timestamp and Increment. Run a query against local.oplog.rs on your replica set to find the desired timestamp. |

Example

If you select 12:00, the last operation in the restore is 11:59:59 or earlier.

Important

In FCV 4.0, you cannot perform a PIT restore that covers any time prior to the latest backup resync. For the conditions that cause a resync, see Resync a Backup. This note does not apply to FCV 4.2 or later.

Click Next.
Click Download to Restore the Files Manually.¶
Configure the snapshot download.¶
Configure the following download options:
| Field | Action |
|---|---|
| Pull Restore Usage Limit | Select how many times the link can be used. If you select No Limit, the link is re-usable until it expires. |
| Restore Link Expiration (in hours) | Select the number of hours until the link expires. The default value is 1. The maximum value is the number of hours until the selected snapshot expires. |

Click Finalize Request.
If you use 2FA, Cloud Manager prompts you for your 2FA code. Enter your 2FA code, then click Finalize Request.
Retrieve the Snapshots.¶
Cloud Manager creates links to the snapshot. By default, these links are available for an hour and can be used just once.
To download the snapshots:
- If you closed the restore panel, click Backup, then Restore History.
- When the restore job completes, click the (get link) that appears for each shard and for one of the config servers.
- Click:
- The copy button to the right of the link to copy the link to use it later, or
- Download to download the snapshot immediately.
Move the Snapshot Data Files to the Target Host.¶
Before moving the snapshot’s data files to the target host, check whether the target host contains any existing files and delete them.
Extract the snapshot archive for the config server and for each shard to a temporary location.
The following commands use </path/to/snapshot/> as a temporary path.
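A sketch under assumed paths; adjust the data directory and archive names for your deployment:

```bash
# Remove any existing data files from the target host's data directory.
rm -rf /data/db/*

# Extract the snapshot archive for the config server (and likewise each shard)
# into the temporary path.
tar -xvzf myCluster-configRS-<snapshotTimestamp>.tar.gz -C </path/to/snapshot/>
```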
Unmanage the Sharded Cluster.¶
Before attempting to restore the data manually, remove the sharded cluster from Automation.
Note
Steps 8 to 16 use the CSRS files downloaded in Step 7.
Stop the Running MongoDB Processes.¶
If restoring to an existing cluster, shut down the mongod or mongos process on the target host. Using mongosh, connect to each host running a mongos or a mongod and shut the process down.
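A minimal mongosh sketch; shutdownServer is the standard command for stopping either a mongos or a mongod:

```javascript
// Shut down the process you are connected to.
db.getSiblingDB("admin").shutdownServer()
```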
Copy the Completed Snapshots to Restore to Other Hosts.¶
- For the config server, copy the restored config server database to the working database path of each replica set member.
- For each shard, copy the restored shard database to the working database path of each replica set member.
Drop the Minimum Valid Timestamp.¶
Issue the following command:
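A mongosh sketch, assuming the standard replset.minvalid collection in the local database:

```javascript
// Drop the collection that stores the minimum valid timestamp.
db.getSiblingDB("local").replset.minvalid.drop()
```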
Verify Hardware and Software Requirements.¶
| Requirement | Description |
|---|---|
| Storage Capacity | The target host hardware needs enough free storage space for the restored data. If you want to keep any existing sharded cluster data on this host, make sure the host has enough free space for both data sets. |
| MongoDB Version | The target host and source host must run the same MongoDB Server version. To check the MongoDB version, run mongod --version from a terminal or shell. |
To learn more about installation, see /installation.
Create Configuration File.¶
Create a mongod configuration file in your database directory using your preferred text editor.

Note

If you have access to the original configuration file for the mongod, you can copy it to your database directory on the target host instead.

Grant the user that runs the mongod read and write permissions on your configuration file. Modify your configuration as you require for your deployment.
| Setting | Required Value |
|---|---|
| storage.dbPath | Path to your data directory |
| systemLog.path | Path to your log directory |
| net.bindIp | IP address of the host machine |
| replication.replSetName | Same value across each member in any given replica set |
| sharding.clusterRole | Same value across each member in any given replica set |
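A minimal mongod.conf sketch for a restored CSRS member; every path, address, and name below is an assumption to adapt:

```yaml
storage:
  dbPath: /data/configdb
systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log
net:
  bindIp: 192.0.2.10
  port: 27019
replication:
  replSetName: <replaceMeWithTheCSRSName>
sharding:
  clusterRole: configsvr
```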
Restore the CSRS Primary mongod Data Files.¶

- Copy the mongod data files from the backup data location to the data directory you created. The -a option recursively copies the contents of the source path to the destination path while preserving folder and file permissions.
- Open your replica set configuration file in your preferred text editor.
- Comment out or omit the configuration file settings shown in the sketch after this list.
- Start the mongod, specifying the --config option with the full path to the configuration file, and the disableLogicalSessionCacheRefresh server parameter. Depending on your path, you may need to specify the path to the mongod binary.
- If you have mongod configured to run as a system service, start it using the recommended process for your platform's service manager.
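A sketch of the sequence under assumed paths; the commented-out settings are the replica set and cluster role options, an assumption to verify against your original configuration:

```bash
# Copy the restored data files into the data directory, preserving permissions.
cp -a </path/to/snapshot/>* /data/configdb/

# In the configuration file, comment out the replica set and cluster role settings:
#   #replication:
#   #  replSetName: <replaceMeWithTheCSRSName>
#   #sharding:
#   #  clusterRole: configsvr

# Start the mongod against the edited configuration file.
mongod --config /etc/mongod.conf \
  --setParameter disableLogicalSessionCacheRefresh=true
```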
Add a New Replica Set Configuration.¶

Insert the following document into the system.replset collection in the local database. Change <replaceMeWithTheCSRSName> to the name of your replica set and <port> to the port of your replica set.
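A mongosh sketch of the insert; the document shape follows the standard replica set configuration format with a single member on this host, so verify the fields against your deployment:

```javascript
db.getSiblingDB("local").system.replset.insertOne({
  "_id": "<replaceMeWithTheCSRSName>",
  "version": NumberInt(1),
  "configsvr": true,                 // config server replica set
  "protocolVersion": NumberLong(1),
  "members": [
    { "_id": NumberInt(0), "host": "localhost:<port>" }
  ],
  "settings": {}
})
```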
A successful response acknowledges the insert.
Insert the Minimum Valid Timestamp.¶
Issue the following command:
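A mongosh sketch, assuming the conventional minimum-valid document shape; verify before use:

```javascript
db.getSiblingDB("local").replset.minvalid.insertOne({
  "_id": ObjectId(),
  "t": NumberLong(-1),     // term
  "ts": Timestamp(0, 1)    // minimum valid timestamp
})
```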
Set the Restore Point to the Restore Timestamp values from the restoreInfo file.¶

Set the oplogTruncateAfterPoint document to the restoreTS.getTime() and restoreTS.getInc() values provided in the Restore Timestamp field of the restoreInfo.txt file.
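A mongosh sketch, assuming a Restore Timestamp of Timestamp(<seconds>, <increment>) taken from restoreInfo.txt:

```javascript
// Build the timestamp from the Restore Timestamp field in restoreInfo.txt.
const restoreTS = Timestamp(<seconds>, <increment>)

db.getSiblingDB("local").replset.oplogTruncateAfterPoint.insertOne({
  "_id": "oplogTruncateAfterPoint",
  "oplogTruncateAfterPoint": Timestamp(restoreTS.getTime(), restoreTS.getInc())
})
```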
A successful response acknowledges the insert.
Note

Each member has its own restoreInfo.txt file, but the Restore Timestamp values should be the same in each file.
Restart as a Single-Node Replica Set to Recover the Oplog.¶

Start the mongod. Depending on your path, you may need to specify the path to the mongod binary. The mongod replays the oplog up to the Restore timestamp.

Important

In the following command, you must use <ephemeralPort>. This port must differ from the <port> you set in Step 14.
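One plausible form of the command, offered as an assumption to adapt; it starts the member alone on the ephemeral port under the replica set name inserted above:

```bash
mongod --dbpath /data/configdb --port <ephemeralPort> \
  --replSet <replaceMeWithTheCSRSName>
```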
Stop the Temporary Single-Node Config Server Replica Set.¶
Connect mongosh to the member and shut it down, as in the earlier shutdown step.
Run the MongoDB Backup Restore Utility (Point-in-Time Restore Only).¶
Download the MongoDB Backup Restore Utility to your host.
If you closed the restore panel, click Backup, then More and then Download MongoDB Backup Restore Utility.
Start a mongod instance without authentication enabled, using the extracted snapshot directory as the data directory. Depending on your path, you may need to specify the path to the mongod binary.

Warning
The MongoDB Backup Restore Utility doesn’t support authentication, so you can’t start this temporary database with authentication.
Run the MongoDB Backup Restore Utility on your target host. Run it once for the replica set.
Pre-configured mongodb-backup-restore-util command

Cloud Manager provides the mongodb-backup-restore-util with the appropriate options for your restore on the restore panel under Run Binary with PIT Options. You should copy the mongodb-backup-restore-util command provided in Cloud Manager.

The mongodb-backup-restore-util command uses the following options. A ✓ indicates that if you copied the mongodb-backup-restore-util command provided in Cloud Manager, the option is pre-configured:

| | Option | Necessity | Description |
|---|---|---|---|
| ✓ | --host | Required | Provide the hostname, FQDN, IPv4 address, or IPv6 address for the host that serves the mongod to which the oplog should be applied. |
| ✓ | --port | Required | Provide the port for the host that serves the mongod to which the oplog should be applied. |
| ✓ | --opStart | Required | Provide the BSON timestamp for the first oplog entry you want to include in the restore. This value must be less than or equal to the --opEnd value. |
| ✓ | --opEnd | Required | Provide the BSON timestamp for the last oplog entry you want to include in the restore. This value cannot be greater than the end of the oplog. |
| ✓ | --logFile | Optional | Provide a path, including file name, where the MBRU log is written. |
| | --oplogSourceAddr | Required | Provide the URL to the Cloud Manager resource endpoint. |
| ✓ | --apiKey | Required | Provide your Cloud Manager Agent API Key. |
| ✓ | --groupId | Required | Provide the group ID. |
| ✓ | --rsId | Required | Provide the replica set ID. |
| ✓ | --whitelist | Optional | Provide a list of databases and/or collections to which you want to limit the restore. |
| | --blacklist | Optional | Provide a list of databases and/or collections that you want to exclude from the restore. |
| | --seedReplSetMember | Optional | Use if you need a replica set member to re-create the oplog collection and seed it with the correct timestamp. Requires --oplogSizeMB and --seedTargetPort. |
| | --oplogSizeMB | Conditional | Provide the oplog size in MB. Required if --seedReplSetMember is set. |
| | --seedTargetPort | Conditional | Provide the port for the replica set's primary. This may be different from the ephemeral port used. Required if --seedReplSetMember is set. |
| | --ssl | Conditional | Use if you need TLS/SSL to apply the oplog to the mongod. Requires --sslCAFile and --sslPEMKeyFile. |
| | --sslCAFile | Conditional | Provide the path to the Certificate Authority file. Required if --ssl is set. |
| | --sslPEMKeyFile | Conditional | Provide the path to the PEM certificate file. Required if --ssl is set. |
| | --sslPEMKeyFilePwd | Conditional | Provide the password for the PEM certificate file specified in --sslPEMKeyFile. Required if --ssl is set and that PEM key file is encrypted. |
| | --sslClientCertificateSubject | Optional | Provide the Client Certificate Subject or Distinguished Name (DN) for the target MongoDB process. |
| | --sslRequireValidServerCertificates | Optional | Set a flag indicating if the tool should validate certificates that the target MongoDB process presents. |
| | --sslServerClientCertificate | Optional | Provide the absolute path to the Client Certificate file to use for connecting to the Cloud Manager host. |
| | --sslServerClientCertificatePassword | Conditional | Provide the password for the Client Certificate file used to connect to the Cloud Manager host. Required if --sslServerClientCertificate is set and that certificate is encrypted. |
| | --sslRequireValidMMSBackupServerCertificate | Optional | Set a flag indicating if valid certificates are required when contacting the Cloud Manager host. Default value is true. |
| | --sslTrustedMMSBackupServerCertificate | Optional | Provide the absolute path to the trusted Certificate Authority certificates in PEM format for the Cloud Manager host. If this flag is not provided, the system Certificate Authority is used. |
| | --httpProxy | Optional | Provide the URL of an HTTP proxy server the tool can use. |

Issue the following command. Depending on your path, you may need to specify the path to the mongod binary.
Restart as a Standalone to Recover the Oplog.¶

Start the mongod with the following setParameter options set to true:

- recoverFromOplogAsStandalone
- takeUnstableCheckpointOnShutdown

Depending on your path, you may need to specify the path to the mongod binary. The mongod replays the oplog up to the Restore timestamp.
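A sketch under an assumed data path; both server parameters are the ones named above:

```bash
mongod --dbpath /data/db --port <ephemeralPort> \
  --setParameter recoverFromOplogAsStandalone=true \
  --setParameter takeUnstableCheckpointOnShutdown=true
```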
Clean Documents in Config Database that Reference the Previous Sharded Cluster Configuration.¶

Note

This example covers a three-shard cluster. Replace the following values with those in your configuration:

- SOURCE_SHARD_<X>_NAME
- DEST_SHARD_<X>_NAME
- DEST_SHARD_<X>_HOSTNAME
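A mongosh sketch for one shard entry; this assumes the cleanup amounts to pointing each config.shards document at the new shard name and hostname, so verify the full set of updates your cluster needs:

```javascript
// Repeat for each of the three shards, substituting that shard's values.
db.getSiblingDB("config").shards.updateOne(
  { "_id": "SOURCE_SHARD_1_NAME" },
  { $set: { "host": "DEST_SHARD_1_NAME/DEST_SHARD_1_HOSTNAME:<port>" } }
)
```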
Restart the mongod as a New Single-node Replica Set.¶

- Open the configuration file in your preferred text editor.
- Uncomment or add the configuration file options shown in the sketch after this list.
- To change the replica set name, update the replication.replSetName field with the new name before proceeding.
- Start the mongod with the updated configuration file. Depending on your path, you may need to specify the path to the mongod binary.
- If you have mongod configured to run as a system service, start it using the recommended process for your platform's service manager.
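A sketch of the options this step re-enables, assuming the settings commented out earlier:

```yaml
replication:
  replSetName: <replaceMeWithTheCSRSName>
sharding:
  clusterRole: configsvr
```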
Initiate the New Replica Set.¶

Initiate the replica set using rs.initiate() with the default settings. Once the operation completes, use rs.status() to check that the member has become the primary.
Add Additional Replica Set Members.¶

- For each replica set member in the CSRS, start the mongod on its host.
- Once you have started up all remaining members of the cluster successfully, connect mongosh to the primary replica set member.
- From the primary, use the rs.add() method to add each member of the replica set, specifying the hostname and port of the member's mongod process, as in the sketch after this list. If you want to add the member with specific replica member configuration settings, you can pass a document to rs.add() that defines the member hostname and any members[n] settings your deployment requires.
- Each new member performs an initial sync to catch up to the primary. Depending on data volume, network, and host performance factors, initial sync might take a while to complete.
- The replica set might elect a new primary while you add additional members. You can only run rs.add() from the primary. To identify which member is the current primary, use rs.status().
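A minimal rs.add() sketch with placeholder hostnames; run once per additional member from the primary:

```javascript
rs.add("mongodb1.example.net:<port>")
rs.add("mongodb2.example.net:<port>")
```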
Configure Any Additional Required Replication Settings.¶

The rs.reconfig() method updates the replica set configuration based on a configuration document passed in as a parameter.

- Run rs.reconfig() against the primary member of the replica set.
- Reference the original configuration file output of the replica set and apply settings as needed.
Remove Replica Set-Related Collections from the local Database.¶

Note

These steps appear repetitive, but cover shards instead of the CSRS. To perform manual restores, you must have the Backup Admin role in Cloud Manager.

Run the following commands to remove the previous replica set configuration and other non-oplog, replication-related collections.
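A mongosh sketch, assuming the standard replication metadata collections in the local database; verify the list for your server version:

```javascript
db.getSiblingDB("local").replset.election.drop()
db.getSiblingDB("local").replset.minvalid.drop()
db.getSiblingDB("local").replset.oplogTruncateAfterPoint.drop()
db.getSiblingDB("local").system.replset.deleteMany({})
```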
A successful response confirms that the collections were removed.
Insert the Minimum Valid Timestamp.¶
Issue the same minimum valid timestamp insert shown in the CSRS steps above.
Add a New Replica Set Configuration.¶

Insert the following document into the system.replset collection in the local database, as in the CSRS sketch above (for a shard, omit the configsvr field). Change <replaceMeWithTheShardName> to the name of your replica set and <port> to the port of your replica set.
A successful response acknowledges the insert.
Set the Restore Point to the Restore Timestamp value from the restoreInfo file.¶

Set the oplogTruncateAfterPoint document to the values in the Restore Timestamp field given in the restoreInfo.txt file, as in the CSRS sketch above.
A successful response acknowledges the insert.
Restart as a Single-Node Replica Set to Recover the Oplog.¶

Start the mongod. Depending on your path, you may need to specify the path to the mongod binary. The mongod replays the oplog up to the Restore timestamp.

Important

This command uses <ephemeralPort>. This port must differ from the <port> you set in Step 14.
Stop the Temporary Single-Node Shard Replica Set.¶
Connect mongosh to the member and shut it down, as in the earlier shutdown step.
Run the MongoDB Backup Restore Utility (Point-in-Time Restore Only).¶
Download the MongoDB Backup Restore Utility to your host.
If you closed the restore panel, click Backup, then More and then Download MongoDB Backup Restore Utility.
Start a mongod instance without authentication enabled, using the extracted snapshot directory as the data directory. Depending on your path, you may need to specify the path to the mongod binary.

Warning
The MongoDB Backup Restore Utility doesn’t support authentication, so you can’t start this temporary database with authentication.
Run the MongoDB Backup Restore Utility on your target host. Run it once for the replica set.
Pre-configured mongodb-backup-restore-util command

Cloud Manager provides the mongodb-backup-restore-util with the appropriate options for your restore on the restore panel under Run Binary with PIT Options. You should copy the mongodb-backup-restore-util command provided in Cloud Manager.

The mongodb-backup-restore-util command uses the following options. A ✓ indicates that if you copied the mongodb-backup-restore-util command provided in Cloud Manager, the option is pre-configured:

| | Option | Necessity | Description |
|---|---|---|---|
| ✓ | --host | Required | Provide the hostname, FQDN, IPv4 address, or IPv6 address for the host that serves the mongod to which the oplog should be applied. |
| ✓ | --port | Required | Provide the port for the host that serves the mongod to which the oplog should be applied. |
| ✓ | --opStart | Required | Provide the BSON timestamp for the first oplog entry you want to include in the restore. This value must be less than or equal to the --opEnd value. |
| ✓ | --opEnd | Required | Provide the BSON timestamp for the last oplog entry you want to include in the restore. This value cannot be greater than the end of the oplog. |
| ✓ | --logFile | Optional | Provide a path, including file name, where the MBRU log is written. |
| | --oplogSourceAddr | Required | Provide the URL to the Cloud Manager resource endpoint. |
| ✓ | --apiKey | Required | Provide your Cloud Manager Agent API Key. |
| ✓ | --groupId | Required | Provide the group ID. |
| ✓ | --rsId | Required | Provide the replica set ID. |
| ✓ | --whitelist | Optional | Provide a list of databases and/or collections to which you want to limit the restore. |
| | --blacklist | Optional | Provide a list of databases and/or collections that you want to exclude from the restore. |
| | --seedReplSetMember | Optional | Use if you need a replica set member to re-create the oplog collection and seed it with the correct timestamp. Requires --oplogSizeMB and --seedTargetPort. |
| | --oplogSizeMB | Conditional | Provide the oplog size in MB. Required if --seedReplSetMember is set. |
| | --seedTargetPort | Conditional | Provide the port for the replica set's primary. This may be different from the ephemeral port used. Required if --seedReplSetMember is set. |
| | --ssl | Conditional | Use if you need TLS/SSL to apply the oplog to the mongod. Requires --sslCAFile and --sslPEMKeyFile. |
| | --sslCAFile | Conditional | Provide the path to the Certificate Authority file. Required if --ssl is set. |
| | --sslPEMKeyFile | Conditional | Provide the path to the PEM certificate file. Required if --ssl is set. |
| | --sslPEMKeyFilePwd | Conditional | Provide the password for the PEM certificate file specified in --sslPEMKeyFile. Required if --ssl is set and that PEM key file is encrypted. |
| | --sslClientCertificateSubject | Optional | Provide the Client Certificate Subject or Distinguished Name (DN) for the target MongoDB process. |
| | --sslRequireValidServerCertificates | Optional | Set a flag indicating if the tool should validate certificates that the target MongoDB process presents. |
| | --sslServerClientCertificate | Optional | Provide the absolute path to the Client Certificate file to use for connecting to the Cloud Manager host. |
| | --sslServerClientCertificatePassword | Conditional | Provide the password for the Client Certificate file used to connect to the Cloud Manager host. Required if --sslServerClientCertificate is set and that certificate is encrypted. |
| | --sslRequireValidMMSBackupServerCertificate | Optional | Set a flag indicating if valid certificates are required when contacting the Cloud Manager host. Default value is true. |
| | --sslTrustedMMSBackupServerCertificate | Optional | Provide the absolute path to the trusted Certificate Authority certificates in PEM format for the Cloud Manager host. If this flag is not provided, the system Certificate Authority is used. |
| | --httpProxy | Optional | Provide the URL of an HTTP proxy server the tool can use. |
Clean Documents in the Admin and Config Databases.¶

Note

Replace the following values with those in your configuration:

- <ShardName>
- <clusterId>
- NEW_CONFIG_NAME
- NEW_CONFIG_HOSTNAME
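A mongosh sketch of the shard identity update; this assumes the cleanup centers on the shardIdentity document in admin.system.version, so confirm any additional updates your version requires:

```javascript
db.getSiblingDB("admin").system.version.updateOne(
  { "_id": "shardIdentity" },
  { $set: {
      "shardName": "<ShardName>",
      "clusterId": ObjectId("<clusterId>"),
      "configsvrConnectionString": "NEW_CONFIG_NAME/NEW_CONFIG_HOSTNAME:<port>"
  } }
)
```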
Initiate the New Replica Set.¶

Run rs.initiate() on the replica set. MongoDB initiates a set that consists of the current member and that uses the default replica set configuration.
Shut Down the New Replica Set.¶
Connect mongosh to the member and shut it down, as in the earlier shutdown step.
Reimport the Sharded Cluster.¶
To manage the sharded cluster with Automation again, import the sharded cluster back into Cloud Manager.
Restore the Shard Primary mongod Data Files.¶

- Copy the mongod data files from the backup data location to the data directory you created. The -a option recursively copies the contents of the source path to the destination path while preserving folder and file permissions.
- Open your replica set configuration file in your preferred text editor.
- Comment out or omit the same configuration file settings as in the CSRS sketch above.
- Start the mongod, specifying the --config option with the full path to the configuration file, and the disableLogicalSessionCacheRefresh server parameter. Depending on your path, you may need to specify the path to the mongod binary.
- If you have mongod configured to run as a system service, start it using the recommended process for your platform's service manager.
Create a Temporary User with the __system Role.¶

Important

Skip this step if the cluster does not enforce authentication.

Clusters that enforce authentication restrict changes to the admin.system.version collection to users with the __system role.

Warning

The __system role allows a user to take any action against any object in the database. Do not keep this user active beyond the scope of this procedure. This procedure includes instructions for removing the user created in this step.

Consider creating this user with the clientSource authentication restriction configured such that only the specified hosts can authenticate as the privileged user.
- Authenticate as a user with either the userAdmin role on the admin database or the userAdminAnyDatabase role.
- Create a user with the __system role. Make these passwords random, long, and complex to keep the system secure and prevent or delay malicious access.
- Authenticate as the privileged user.
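A mongosh sketch of these three commands; the user names are hypothetical placeholders, and passwordPrompt() keeps passwords off the command line:

```javascript
// 1. Authenticate as a user administrator.
db.getSiblingDB("admin").auth("myUserAdmin", passwordPrompt())

// 2. Create a temporary user with the __system role.
db.getSiblingDB("admin").createUser({
  user: "mySystemUser",
  pwd: passwordPrompt(),   // use a random, long, complex password
  roles: [ "__system" ]
})

// 3. Authenticate as the privileged user.
db.getSiblingDB("admin").auth("mySystemUser", passwordPrompt())
```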
Restart the mongod as a New Single-node Replica Set.¶

- Open the configuration file in your preferred text editor.
- Uncomment or add the configuration file options shown in the CSRS sketch above.
- To change the replica set name, update the replication.replSetName field with the new name before proceeding.
- Start the mongod with the updated configuration file. Depending on your path, you may need to specify the path to the mongod binary.
- If you have mongod configured to run as a system service, start it using the recommended process for your platform's service manager.
Initiate the New Replica Set.¶

Initiate the replica set using rs.initiate() with the default settings. Once the operation completes, use rs.status() to check that the member has become the primary.
Add Additional Replica Set Members.¶

- For each replica set member in the shard replica set, start the mongod on its host.
- Once you have started up all remaining members of the cluster successfully, connect mongosh to the primary replica set member.
- From the primary, use the rs.add() method to add each member of the replica set, specifying the hostname and port of the member's mongod process, as in the earlier rs.add() sketch. If you want to add the member with specific replica member configuration settings, you can pass a document to rs.add() that defines the member hostname and any members[n] settings your deployment requires.
- Each new member performs an initial sync to catch up to the primary. Depending on data volume, network, and host performance factors, initial sync might take a while to complete.
- The replica set might elect a new primary while you add additional members. You can only run rs.add() from the primary. To identify which member is the current primary, use rs.status().
Configure Any Additional Required Replication Settings.¶

The rs.reconfig() method updates the replica set configuration based on a configuration document passed in as a parameter.

- Run rs.reconfig() against the primary member of the replica set.
- Reference the original configuration file output of the replica set and apply settings as needed.
Remove the Temporary Privileged User.¶

For clusters enforcing authentication, remove the privileged user created earlier in this procedure:

- Authenticate as a user with the userAdmin role on the admin database or the userAdminAnyDatabase role.
- Delete the privileged user.
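A mongosh sketch, continuing the hypothetical user names from the earlier step:

```javascript
// Authenticate as a user administrator, then drop the temporary user.
db.getSiblingDB("admin").auth("myUserAdmin", passwordPrompt())
db.getSiblingDB("admin").dropUser("mySystemUser")
```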
Restart Each mongos.¶

Restart each mongos in the cluster. Include all other command line options as required by your deployment. If the CSRS replica set name or any member hostname changed, update the mongos configuration file setting sharding.configDB with the updated configuration server connection string:
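A sketch of the setting with placeholder names:

```yaml
sharding:
  configDB: <replaceMeWithTheCSRSName>/<configHost1>:<port>,<configHost2>:<port>,<configHost3>:<port>
```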
Verify that You Can Access the Cluster.¶

- Connect mongosh to one of the mongos processes for the cluster.
- Use sh.status() to check the overall cluster status.
- If sh.status() indicates that the balancer is not running, use sh.startBalancer() to restart the balancer.
- To confirm that you can access all shards and that they are communicating, insert test data into a temporary sharded collection. Confirm that data is being split and migrated between each shard in your cluster.
- You can connect mongosh to each shard primary and use db.collection.find() to validate that the data was sharded as expected.

A sketch of this check follows.
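A mongosh sketch with hypothetical database and collection names:

```javascript
// Check cluster status and start the balancer if it is not running.
sh.status()
sh.startBalancer()

// Create a temporary sharded collection and insert test data.
sh.enableSharding("test")
sh.shardCollection("test.restoreCheck", { _id: "hashed" })
db.getSiblingDB("test").restoreCheck.insertMany(
  Array.from({ length: 1000 }, (_, i) => ({ _id: i, payload: "x" }))
)

// On each shard primary, confirm that a portion of the documents landed there.
db.getSiblingDB("test").restoreCheck.countDocuments({})
```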
Rotate Master Key after Restoring Snapshots Encrypted with AES256-GCM¶
If you restore an encrypted snapshot that Cloud Manager encrypted with AES256-GCM, rotate your master key after completing the restore.