add pp_tttt to codegen and fix its builds (two P1 subdirectories which need different nwf values) #560
Conversation
PS The commands to create the .sa and .mad, respectively, are
Found bug source: the parameter nwf (number of wavefunctions) is set in the shared header file src/mgOnGpuConfig.h. Since this header is identical for all subprocesses, the array w_fp gets an incorrect allocation whenever nwf should differ between subprocesses. The issue does not appear when generating "u u~ > t t~ t t~" and "g g > t t~ t t~" independently. @valassi expects the issue to be in the cudacpp plugin.
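As an illustration only (this is not the generated cudacpp code), a minimal C++ sketch of that failure mode; the names w_fp and nwf come from this thread, while the numeric values and the rest of the scaffolding are hypothetical:

```cpp
#include <cstdio>

// Shared constant, as in src/mgOnGpuConfig.h: one value for every P1 subprocess.
// The value 13 is illustrative (it matches the array size in the -Warray-bounds
// warning quoted later in this thread).
constexpr int nwf = 13;

// Hypothetical per-subprocess requirements, for illustration only.
constexpr int nwfNeeded_uux_ttxttx = 13; // fits in the shared buffer
constexpr int nwfNeeded_gg_ttxttx = 14;  // needs w_fp[13], one past the end

int main()
{
  double* w_fp[nwf] = {}; // wavefunction buffer sized from the shared constant
  // The code generated for g g > t t~ t t~ effectively ends up doing the
  // equivalent of "FFV1_1<...>( ..., w_fp[13] )", i.e. writing one slot past
  // the end of w_fp: gcc flags it with -Warray-bounds and check.exe segfaults.
  std::printf( "shared nwf=%d, but gg needs %d and uux needs %d slots\n",
               nwf, nwfNeeded_gg_ttxttx, nwfNeeded_uux_ttxttx );
  return 0;
}
```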
Copying @roiser for info. (Look at the modifications in the generate script: to add a susy process, for instance, add a directory name and then the appropriate "generate" line, which will probably include some "import susy" model of some sort.)
I am having a look at this. Using the latest mg5amcnlo upstream, I get no build errors from nb_page anymore (there is no nb_page anymore, only vecsize_used or vecsize_memmax; maybe this helped). However, I still get the build warnings and the runtime segfaults. Note, this is one part of issue #534 about adding a process with many P1 subdirectories.
Force-pushed from 00465b1 to d1bdd96
I have rebased this on the latest upstream/master. I confirm that the pptt code builds ok, while pptttt does not. Both have several P1 subdirectories, but only pptttt shows the issue described by @zeniheisser, namely that nwf should be different in the various P1 directories and must be moved from common code to each P1.
Note the connected issues
I will also add comments in #272, which I am about to close.
Marking this as related to #644, which describes this issue more generally. I just rebased to the latest master.
(No need to modify CODEGEN/checkFormatting.sh after rebasing on upstream/master on 26 Apr 2023.)
Code is generated ok for both sa and mad. They include two P1 subdirectories each.
In sa, P1_Sigma_sm_gg_ttxttx builds ok but issues many warnings like
  CPPProcess.cc: In function ‘void mg5amcCpu::calculate_wavefunctions(int, const fptype*, const fptype*, mgOnGpu::fptype*, int)’:
  CPPProcess.cc:334:34: warning: array subscript 13 is above array bounds of ‘mgOnGpu::fptype* [13]’ {aka ‘double* [13]’} [-Warray-bounds]
    334 |   FFV1_1<W_ACCESS, CD_ACCESS>( w_fp[2], w_fp[6], COUPs[1], cIPD[0], cIPD[1], w_fp[13] );
        |   ~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Then check.exe segfaults at runtime (not surprisingly). This should be fixed in the cudacpp plugin.
In sa, P1_Sigma_sm_uux_ttxttx builds ok and check.exe succeeds at runtime.
In mad, P1_gg_ttxttx builds ok with the same warnings above, and (not surprisingly) segfaults at runtime.
In mad, P1_uux_ttxttx builds ok and check.exe succeeds at runtime.
(Note, in Dec 2022 there were some errors related to nb_page; these are now gone.)
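A reading of the warning: w_fp here is an array of 13 wavefunction pointers, i.e. it was sized from the shared nwf in src/mgOnGpuConfig.h, while the generated code for g g > t t~ t t~ writes to w_fp[13] and therefore needs at least 14 slots; u u~ > t t~ t t~ evidently fits within 13, which is why only the gg subdirectories warn and segfault.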
…P1*/CPPProcess.h (fix madgraph5#644)
…from src/mgOnGpuConfig.h to SubProcesses/P1*/CPPProcess.h (NB note the difference between nwavefun and nwavefuncs - trailing s - in python!?)
… "nwavefunc" when creating CPPProcess.h (only nwavefuncs was defined - with leading s and with wrong value!)
…of nwf=5 ...!? (madgraph5#644 is not yet fixed, will revert)
Revert "[pp4t] regenerate ggtt.sa: P1 directory gets the wrong nwf=7 instead of nwf=5 ...!? (madgraph5#644 is not yet fixed, will revert)" This reverts commit eae8037.
…f nwf (madgraph5#644), will revert
…ue of nwf (madgraph5#644)" Revert "[pp4t] in codegen, next (ugly) attempt to fix the P1-specific value of nwf (madgraph5#644), will revert" This reverts commit 146cd26
…m CPPProcess.h to CPPProcess.cc
…culate_wavefunctions function! (madgraph5#644)
…wf (madgraph5#644), clean up and disable debug printouts for nwf (NB: I checked that pp_tttt now generates and builds correctly in both P1 subdirectories)
I think that this MR is now almost ready to merge, including a fix for #644: I moved nwf from mgOnGpuConfig.h into the calculate_wavefunctions function, hardcoded deep inside CPPProcess.cc. I was not able to do it in CPPProcess.h, because the correct result for nwf is only available after a call to … I am running tests on all processes and then I will merge. (NB I am not including pp_tttt in the repository - yet?)
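A sketch of the fix direction (the real generated code comes from the cudacpp plugin and differs in names and detail; the values and the namespace below are illustrative only): nwf becomes a constant hardcoded per subprocess inside calculate_wavefunctions in each SubProcesses/P1*/CPPProcess.cc, instead of a single value in src/mgOnGpuConfig.h.

```cpp
#include <cstdio>

// Before: one shared value in src/mgOnGpuConfig.h, e.g.
//   namespace mgOnGpu { constexpr int nwf = 13; } // too small for gg > t t~ t t~
//
// After (sketched): each P1*/CPPProcess.cc hardcodes its own value inside
// calculate_wavefunctions, where the code generator knows the correct number.
namespace sketch // hypothetical namespace, only to keep this example self-contained
{
  void calculate_wavefunctions()
  {
    constexpr int nwf = 14;  // illustrative value for a gg > t t~ t t~ subprocess
    double* w_fp[nwf] = {};  // wavefunction buffer now sized per subprocess
    std::printf( "this subprocess allocates %zu wavefunction slots\n",
                 sizeof( w_fp ) / sizeof( w_fp[0] ) );
  }
}

int main()
{
  sketch::calculate_wavefunctions();
  return 0;
}
```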
STARTED AT Mon May 22 12:32:26 CEST 2023
./tput/teeThroughputX.sh -mix -hrd -makej -eemumu -ggtt -ggttg -ggttgg -gqttq -ggttggg -makeclean
ENDED(1) AT Mon May 22 15:51:54 CEST 2023 [Status=0]
./tput/teeThroughputX.sh -flt -hrd -makej -eemumu -ggtt -ggttgg -inlonly -makeclean
ENDED(2) AT Mon May 22 16:16:24 CEST 2023 [Status=0]
./tput/teeThroughputX.sh -makej -eemumu -ggtt -ggttg -gqttq -ggttgg -ggttggg -flt -bridge -makeclean
ENDED(3) AT Mon May 22 16:25:32 CEST 2023 [Status=0]
./tput/teeThroughputX.sh -eemumu -ggtt -ggttgg -flt -rmbhst
ENDED(4) AT Mon May 22 16:28:33 CEST 2023 [Status=0]
./tput/teeThroughputX.sh -eemumu -ggtt -ggttgg -flt -curhst
ENDED(5) AT Mon May 22 16:31:31 CEST 2023 [Status=0]
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_m_inl0_hrd0.txt
0 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_gqttq_mad/log_gqttq_mad_d_inl0_hrd0.txt
All tests have passed, I am self-merging. This closes #644.
NB: I also checked that pp_tttt can be generated correctly, and both P1 directories build fine (here nwf was giving issues in MR madgraph5#560 for madgraph5#644)
Hi @zeniheisser, this is a WIP MR with the changes we discussed. I am including the commit log below.
Maybe you can have a look at the build warnings in gg to tttt?
Thanks!
Andrea
--
Add pp_tttt to CODEGEN/generateAndCompare.sh and CODEGEN/checkFormatting.sh
Code is generated ok for both sa and mad. They include two P1 subdirectories each.
In sa, P1_Sigma_sm_gg_ttxttx builds ok but issues many -Warray-bounds warnings (like the example quoted earlier in this thread).
Then check.exe segfaults at runtime (not surprisingly)
This should be fixed in the cudacpp plugin.
In sa, P1_Sigma_sm_uux_ttxttx builds ok and check.exe succeeds at runtime.
In mad, P1_gg_ttxttx builds ok with the same warnings above, and (not surprisingly) segfaults at runtime.
In mad, P1_uux_ttxttx does not build; there are some errors.
This should be fixed in the fortran code generation and/or patching.