restore data question #313

Open
yangzan66 opened this issue May 29, 2019 · 4 comments

Comments

@yangzan66

yangzan66 commented May 29, 2019

When I restore the configRS, why does it report this error?
mongorestore --host 172.17.11.218 --port 27017 --oplogReplay --gzip --dir /mongobackup/172.17.11.218/default/latest/configRS/dump/
2019-05-29T15:04:05.582+0800 preparing collections to restore from
2019-05-29T15:04:05.583+0800 Failed: cannot do a full restore on a sharded system - remove the 'config' directory from the dump directory first

How should I restore the data to an existing target mongo cluster? Do I need to run the 4 commands below one by one? If the target cluster already has other data, does this affect it?
Thank you!
mongorestore --host 172.17.11.218 --port 27017 --oplogReplay --gzip --drop --dir /mongobackup/172.17.11.218/default/latest/configRS/dump/
mongorestore --host 172.17.11.218 --port 27017 --oplogReplay --gzip --drop --dir /mongobackup/172.17.11.218/default/latest/shard1/dump/ >> ${logFile} 2>&1
mongorestore --host 172.17.11.218 --port 27017 --oplogReplay --gzip --drop --dir /mongobackup/172.17.11.218/default/latest/shard2/dump/ >> ${logFile} 2>&1
mongorestore --host 172.17.11.218 --port 27017 --oplogReplay --gzip --drop --dir /mongobackup/172.17.11.218/default/latest/shard3/dump/ >> ${logFile} 2>&1

@timvaillancourt
Contributor

timvaillancourt commented May 29, 2019

That's an interesting problem @yangzan66. Could you provide the full output of mongorestore --version?

It may be possible to work around this by doing a single-database restore, passing --db=config to mongorestore (instead of an all-databases restore). Config servers should only have that single database anyway.
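
For reference, a rough sketch of that single-database form, using the dump layout from the commands above (the trailing config/ subdirectory and the omission of --oplogReplay are assumptions here, not a tested recipe):

mongorestore --host 172.17.11.218 --port 27017 --gzip --db=config --dir /mongobackup/172.17.11.218/default/latest/configRS/dump/config/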

@yangzan66
Author

Hello,
Thanks for your reply. The source mongo cluster and the target mongo cluster are the same one. Is that the reason? Must I restore the data to a new cluster?
I really only want to restore one database, but it always reports a conflict with --oplogReplay.

[root@dbsrvm-mongobackup latest]# mongorestore --host 172.17.11.218 --port 27017 --oplogReplay --db=config --gzip --dir /mongobackup/172.17.11.218/default/latest/configRS/dump/
2019-05-29T18:30:32.698+0800 the --db and --collection args should only be used when restoring from a BSON file. Other uses are deprecated and will not exist in the future; use --nsInclude instead
2019-05-29T18:30:32.698+0800 Failed: cannot use --oplogReplay with includes specified
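
For reference, the --nsInclude form that the warning points to would look roughly like the line below; since the second error says includes cannot be combined with --oplogReplay, the replay flag is left out (a sketch only, not a verified fix):

mongorestore --host 172.17.11.218 --port 27017 --gzip --nsInclude='config.*' --dir /mongobackup/172.17.11.218/default/latest/configRS/dump/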

[root@dbsrvm-mongobackup latest]# mongorestore --version
mongorestore version: r3.4.19
git version: a2d97db8fe449d15eb8e275bbf318491781472bf
Go version: go1.10.7
os: linux
arch: amd64
compiler: gc
OpenSSL version: OpenSSL 1.0.1e 11 Feb 2013
[root@dbsrvm-mongobackup latest]#

@yangzan66
Author

@timvaillancourt
Besides the above, my mongo version is 3.4.19:

[root@dbsrvm-mongotest03 pkg]# mongo --version
MongoDB shell version v3.4.19
git version: a2d97db8fe449d15eb8e275bbf318491781472bf
OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
allocator: tcmalloc
modules: none
build environment:
distmod: rhel70
distarch: x86_64
target_arch: x86_64
[root@dbsrvm-mongotest03 pkg]#

@yangzan66
Author

yangzan66 commented May 30, 2019

### I created a new mongo cluster (172.17.1.88) as the target cluster, but the problem is the same.

[root@dbsrvm-mongobackup mongobackup]# mongorestore --host 172.17.1.88 --port 27017 --oplogReplay --gzip --dir /mongobackup/172.17.11.218/default/latest/configRS/dump/
2019-05-30T09:28:50.379+0800 preparing collections to restore from
2019-05-30T09:28:50.379+0800 Failed: cannot do a full restore on a sharded system - remove the 'config' directory from the dump directory first
[root@dbsrvm-mongobackup mongobackup]#
### If I add the --db option:
[root@dbsrvm-mongobackup mongobackup]# mongorestore --host 172.17.1.88 --port 27017 --oplogReplay --db=config --gzip --dir /mongobackup/172.17.11.218/default/latest/configRS/dump/
2019-05-30T09:31:15.841+0800 the --db and --collection args should only be used when restoring from a BSON file. Other uses are deprecated and will not exist in the future; use --nsInclude instead
2019-05-30T09:31:15.841+0800 Failed: cannot use --oplogReplay with includes specified

### When I restore only shard1, it reports the error "applyOps not allowed through mongos", as follows:
mongorestore --host 172.17.1.88 --port 27017 --oplogReplay --gzip --dir /mongobackup/172.17.11.218/default/latest/shard1/dump/
2019-05-30T18:18:27.924+0800 [########################] test1.person2 454MB/454MB (100.0%)
2019-05-30T18:18:27.924+0800 no indexes to restore
2019-05-30T18:18:27.924+0800 finished restoring test1.person2 (50256414 documents)
2019-05-30T18:18:27.924+0800 replaying oplog
2019-05-30T18:18:27.957+0800 Failed: restore error: error applying oplog: applyOps: applyOps not allowed through mongos
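
applyOps is only accepted by a mongod (a replica-set member), not by the mongos router, so an oplog replay has to be run directly against the shard's own replica set. A rough sketch, where the shard1 primary host and port are placeholders rather than values taken from this cluster:

mongorestore --host shard1-primary.example --port 27018 --oplogReplay --gzip --dir /mongobackup/172.17.11.218/default/latest/shard1/dump/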

### Is there something I did wrong? The cluster doesn't use authentication. My backup command and config file are as follows:

mongodb-consistent-backup -H 172.17.11.218 --config /etc/mongodb-consistent-backup.conf

[root@dbsrvm-mongobackup mongobackup]# grep -v "#" /etc/mongodb-consistent-backup.conf
production:
  port: 27017
  log_dir: /var/log/mongodb-consistent-backup
  backup:
    method: mongodump
    name: default
    location: /mongobackup/172.17.11.218/
  archive:
    method: tar
  notify:
    method: none
  upload:
    method: none
[root@dbsrvm-mongobackup mongobackup]#

Could you give some advice? Thank you very much.
