Performance of autocas #7
Hi Leonardo, our DMRG code is currently only OpenMP-parallelized, so it cannot utilize multiple nodes (we are working on an MPI version, but this will take a few months). Best, Max
Thank you very much for your answer! I tried playing with some input parameters here and there, but in most cases all of the orbitals are dropped from the active space as weakly correlated.
I found a bit of improvement by increasing the maximum number of orbitals in the "large active space protocol": I am now able to recover correct CAS sub-active spaces, e.g. (4,4) or (6,6), relative to the (8,8) I would use to include all the pi-pi* pairs. I am using a very small basis (STO-3G) to keep the computational time reasonable. Is there anything I could be looking into? Best regards,
The input parameters look good, but the size of the sub-CAS is probably still too small. Best,
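To put these active-space sizes in perspective, here is a minimal Python sketch (standard library only, not taken from autocas itself) that counts the Slater determinants spanned by a CAS(n,n) space for a singlet using binomial coefficients; the (4,4), (6,6), and (8,8) spaces are the ones discussed above.

```python
from math import comb

def cas_determinants(n_electrons: int, n_orbitals: int, multiplicity: int = 1) -> int:
    """Number of Slater determinants in a CAS(n_electrons, n_orbitals) space
    at the M_S value implied by the spin multiplicity (2S + 1)."""
    n_unpaired = multiplicity - 1             # 2S
    n_alpha = (n_electrons + n_unpaired) // 2
    n_beta = n_electrons - n_alpha
    return comb(n_orbitals, n_alpha) * comb(n_orbitals, n_beta)

for n in (4, 6, 8):
    print(f"CAS({n},{n}): {cas_determinants(n, n)} determinants")
# CAS(4,4): 36, CAS(6,6): 400, CAS(8,8): 4900
```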
Dear developers,
I am attempting an autocas calculation on a small molecule, attached.
After compiling a working installation of OpenMolcas-QCMaquis (+NEVPT2) and scine-autocas, I tried to perform an active space determination for the first time.
I did so by executing the following command:
python3 -m scine_autocas --xyz_file mol2.xyz --basis_set 6-31G* --plot --interface Molcas
The preliminary &SCF calculation finished rather quickly, while the follow-up DMRGSCF calculation has already been running for 12 hours. Since this is my first time trying this, I wanted to ask whether that sounds reasonable, or whether there is something I should look into to improve performance.
I am running with 40 OMP threads and using all the memory available on my computing nodes. By the way, looking at the active processes, I noticed that at this stage the calculation is using only a single CPU.
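In case it is useful for reproducing these settings, the following is a minimal sketch of driving the same CLI call from Python with the OpenMP environment set explicitly. It assumes the DMRG solver honours the standard OMP_NUM_THREADS variable and that OpenMolcas reads its memory limit from MOLCAS_MEM (in MB); both numeric values are placeholders to adjust to the node.

```python
import os
import subprocess

env = os.environ.copy()
env["OMP_NUM_THREADS"] = "40"   # OpenMP threads for the DMRG solver (assumes the variable is honoured)
env["MOLCAS_MEM"] = "16000"     # OpenMolcas memory per process in MB (placeholder)

# Same autocas command as above, launched with the explicit environment.
cmd = [
    "python3", "-m", "scine_autocas",
    "--xyz_file", "mol2.xyz",
    "--basis_set", "6-31G*",
    "--plot",
    "--interface", "Molcas",
]
subprocess.run(cmd, env=env, check=True)
```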
I also attach the output of the completed &SCF calculation and of the ongoing &DMRGSCF calculation, together with the QCMaquis log file.
Finally, I know it is recommended to provide an input .yml file, but I ran into the same issues reported in #5.
Any help is deeply appreciated.
Kindest regards,
Leonardo
My geometry:
27
C -1.08745 0.31737 -0.18798
N 0.00901 1.08097 0.03399
C 1.09740 0.30502 0.25267
C 0.72854 -1.05070 0.17543
C -0.73273 -1.04244 -0.11662
C -1.15761 -2.36303 -0.20081
C -0.01317 -3.16903 0.02491
C 1.13964 -2.37602 0.25388
C 3.59051 0.05087 0.21939
C -3.58313 0.08905 -0.15666
H -2.15229 -2.72979 -0.39441
H -0.01879 -4.25011 0.02261
H 2.13045 -2.75393 0.44587
H 3.55558 -0.23466 -0.83223
H 3.57515 -0.84742 0.83714
H 4.52016 0.58563 0.40700
H -4.50709 0.63435 -0.34205
H -3.55163 -0.20187 0.89360
H -3.57680 -0.80637 -0.77872
C -2.40098 0.97716 -0.48761
H -2.42102 1.20454 -1.56440
C 2.41780 0.94963 0.55513
H 2.44051 1.17141 1.63304
O -2.53116 2.17856 0.24389
H -1.65986 2.59023 0.28515
O 2.56028 2.15326 -0.17039
H 1.69305 2.57357 -0.21034
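As a side note, here is a short, self-contained Python sketch for sanity-checking an XYZ file such as the mol2.xyz passed to the command above. It only assumes the standard XYZ layout (atom count, a comment/title line, then one "symbol x y z" line per atom); the title line seems to have been dropped when the geometry was pasted here.

```python
from pathlib import Path

def check_xyz(path: str) -> None:
    """Verify that an XYZ file contains the number of atoms it declares
    and that every coordinate parses as a float."""
    lines = Path(path).read_text().strip().splitlines()
    declared = int(lines[0].split()[0])
    atom_lines = [ln for ln in lines[2:] if ln.strip()]  # line 2 is the title/comment line
    if len(atom_lines) != declared:
        raise ValueError(f"{path}: declared {declared} atoms, found {len(atom_lines)}")
    for ln in atom_lines:
        symbol, x, y, z = ln.split()[:4]
        float(x), float(y), float(z)  # raises ValueError on malformed coordinates
    print(f"{path}: {declared} atoms, format looks consistent")

check_xyz("mol2.xyz")
```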
dmrg.log
scf.log
autocas_project.QCMaquis.log