
Different helicity numbering in fortran and cudacpp? #569

Closed
valassi opened this issue Dec 14, 2022 · 5 comments · Fixed by #570
@valassi
Member

valassi commented Dec 14, 2022

I am making progress in the random choice of helicity #403.

However, the chosen helicities have different indices. Example in ggtt:

   1  0.64067963E-01  0.64067963E-01          1.00000000000 12 10
   2  0.58379673E-01  0.58379673E-01          1.00000000000 12 10
   3  0.70810768E-01  0.70810768E-01          1.00000000000 15 15
   4  0.67192668E-01  0.67192668E-01          1.00000000000  2  0
   5  0.71590585E-01  0.71590585E-01          1.00000000000 15 15
   6  0.72862110E-01  0.72862110E-01          1.00000000000 15 15
   7  0.14271254E-01  0.14271254E-01          1.00000000000  2  0
   8  0.63986754E-01  0.63986754E-01          1.00000000000 15 15
   9  0.46316382E-01  0.46316382E-01          1.00000000000 12 10
  10  0.35372741E-01  0.35372741E-01          1.00000000000 15 15
  11  0.73958407E-01  0.73958407E-01          1.00000000000 15 15
  12  0.70691203E-01  0.70691203E-01          1.00000000000  3  3
  13  0.70805000E-01  0.70805000E-01          1.00000000000 15 15
  14  0.30801404E-01  0.30801404E-01          1.00000000000  5  5
  15  0.64111868E-01  0.64111868E-01          1.00000000000 15 15
  16  0.74312047E-01  0.74312047E-01          1.00000000000  2  0
  17  0.60961835E-01  0.60961835E-01          1.00000000000  2  0
  18  0.67698020E-01  0.67698020E-01          1.00000000000  2  0
  19  0.49748773E-01  0.49748773E-01          1.00000000000 15 15
  20  0.71951996E-01  0.71951996E-01          1.00000000000  5  5
  21  0.52116331E-01  0.52116331E-01          1.00000000000 12 10
  22  0.69245648E-01  0.69245648E-01          1.00000000000  2  0
  23  0.64808141E-01  0.64808141E-01          1.00000000000  2  0
  24  0.66861231E-01  0.66861231E-01          1.00000000000 14 12
  25  0.70041112E-01  0.70041112E-01          1.00000000000 15 15
  26  0.61135249E-01  0.61135249E-01          1.00000000000 15 15
  27  0.66574932E-01  0.66574932E-01          1.00000000000  2  0
  28  0.67312068E-01  0.67312068E-01          1.00000000000 14 12
  29  0.47056643E-01  0.47056643E-01          1.00000000000 12 11
  30  0.70509435E-01  0.70509435E-01          1.00000000000  2  0
  31  0.23138767E-01  0.23138767E-01          1.00000000000 15 15
  32  0.76096234E-01  0.76096234E-01          1.00000000000  2  0

This is the "BothDebug" printout: event number, fortran ME, cudacpp ME, ratio, fortran helicity, cudacpp helicity.

It seems that 15=15, 2=0, 14=12, 3=3, 12=11 etc.

I will check the hardcoded code.

@valassi valassi self-assigned this Dec 14, 2022
@valassi
Member Author

valassi commented Dec 14, 2022

This is the ggtt cudacpp code:

    // Helicities for the process [NB do keep 'static' for this constexpr array, see issue #283]
    static constexpr short tHel[ncomb][mgOnGpu::npar] = {
      { -1, -1, -1, -1 },
      { -1, -1, -1, 1 },
      { -1, -1, 1, -1 },
      { -1, -1, 1, 1 },
      { -1, 1, -1, -1 },
      { -1, 1, -1, 1 },
      { -1, 1, 1, -1 },
      { -1, 1, 1, 1 },
      { 1, -1, -1, -1 },
      { 1, -1, -1, 1 },
      { 1, -1, 1, -1 },
      { 1, -1, 1, 1 },
      { 1, 1, -1, -1 },
      { 1, 1, -1, 1 },
      { 1, 1, 1, -1 },
      { 1, 1, 1, 1 } };

This is the Fortran equivalent, I guess:

      INTEGER NHEL(NEXTERNAL,0:NCOMB)
      DATA (NHEL(I,0),I=1,4) / 2, 2, 2, 2/
      DATA (NHEL(I,   1),I=1,4) /-1,-1,-1, 1/
      DATA (NHEL(I,   2),I=1,4) /-1,-1,-1,-1/
      DATA (NHEL(I,   3),I=1,4) /-1,-1, 1, 1/
      DATA (NHEL(I,   4),I=1,4) /-1,-1, 1,-1/
      DATA (NHEL(I,   5),I=1,4) /-1, 1,-1, 1/
      DATA (NHEL(I,   6),I=1,4) /-1, 1,-1,-1/
      DATA (NHEL(I,   7),I=1,4) /-1, 1, 1, 1/
      DATA (NHEL(I,   8),I=1,4) /-1, 1, 1,-1/
      DATA (NHEL(I,   9),I=1,4) / 1,-1,-1, 1/
      DATA (NHEL(I,  10),I=1,4) / 1,-1,-1,-1/
      DATA (NHEL(I,  11),I=1,4) / 1,-1, 1, 1/
      DATA (NHEL(I,  12),I=1,4) / 1,-1, 1,-1/
      DATA (NHEL(I,  13),I=1,4) / 1, 1,-1, 1/
      DATA (NHEL(I,  14),I=1,4) / 1, 1,-1,-1/
      DATA (NHEL(I,  15),I=1,4) / 1, 1, 1, 1/
      DATA (NHEL(I,  16),I=1,4) / 1, 1, 1,-1/

@valassi
Member Author

valassi commented Dec 14, 2022

Yuck, yes they differ:

  • first, there is a difference in how the individual helicities are ordered
  • and then, of course, cudacpp indices go from 0 to 15, while Fortran indices go from 1 to 16

Example: 15=15 is actually (1,1,1,1), which is 15 (in 1-16) of Fortran and 15 (in 0-15) of cudacpp.
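
For the record, here is a quick standalone Python check (not part of the repo; both tables are just copied by hand from the snippets in the previous comment) that prints the Fortran-to-cudacpp index map:

    # Standalone check: map the 1-based Fortran NHEL indices (from the DATA
    # statements above) to the 0-based cudacpp tHel indices (from the constexpr
    # array above). Both tables are copied by hand from the snippets above.
    import itertools

    # cudacpp tHel: plain binary counting over the four helicities, -1 before +1
    cudacpp_thel = list( itertools.product( [ -1, 1 ], repeat=4 ) )

    # Fortran NHEL(:,1..16), copied from the DATA statements
    fortran_nhel = [
        ( -1, -1, -1,  1 ), ( -1, -1, -1, -1 ), ( -1, -1,  1,  1 ), ( -1, -1,  1, -1 ),
        ( -1,  1, -1,  1 ), ( -1,  1, -1, -1 ), ( -1,  1,  1,  1 ), ( -1,  1,  1, -1 ),
        (  1, -1, -1,  1 ), (  1, -1, -1, -1 ), (  1, -1,  1,  1 ), (  1, -1,  1, -1 ),
        (  1,  1, -1,  1 ), (  1,  1, -1, -1 ), (  1,  1,  1,  1 ), (  1,  1,  1, -1 ),
    ]

    for ifor, hel in enumerate( fortran_nhel, start=1 ):
        print( 'Fortran %2d %s -> cudacpp %2d' % ( ifor, hel, cudacpp_thel.index( hel ) ) )

This prints, for instance, 'Fortran 15 (1, 1, 1, 1) -> cudacpp 15' and 'Fortran 2 (-1, -1, -1, -1) -> cudacpp 0', consistent with the pairs seen in the printout above.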

@valassi
Member Author

valassi commented Dec 14, 2022

I manually changed the cudacpp values to mimic Fortran - much better!

   1  0.64067963E-01  0.64067963E-01          1.00000000000 12 11
   2  0.58379673E-01  0.58379673E-01          1.00000000000 12 11
   3  0.70810768E-01  0.70810768E-01          1.00000000000 15 14
   4  0.67192668E-01  0.67192668E-01          1.00000000000  2  1
   5  0.71590585E-01  0.71590585E-01          1.00000000000 15 14
   6  0.72862110E-01  0.72862110E-01          1.00000000000 15 14
   7  0.14271254E-01  0.14271254E-01          1.00000000000  2  1
   8  0.63986754E-01  0.63986754E-01          1.00000000000 15 14
   9  0.46316382E-01  0.46316382E-01          1.00000000000 12 11
  10  0.35372741E-01  0.35372741E-01          1.00000000000 15 14
  11  0.73958407E-01  0.73958407E-01          1.00000000000 15 14
  12  0.70691203E-01  0.70691203E-01          1.00000000000  3  2
  13  0.70805000E-01  0.70805000E-01          1.00000000000 15 14
  14  0.30801404E-01  0.30801404E-01          1.00000000000  5  4
  15  0.64111868E-01  0.64111868E-01          1.00000000000 15 14
  16  0.74312047E-01  0.74312047E-01          1.00000000000  2  1
  17  0.60961835E-01  0.60961835E-01          1.00000000000  2  1
  18  0.67698020E-01  0.67698020E-01          1.00000000000  2  1
  19  0.49748773E-01  0.49748773E-01          1.00000000000 15 14
  20  0.71951996E-01  0.71951996E-01          1.00000000000  5  4
  21  0.52116331E-01  0.52116331E-01          1.00000000000 12 11
  22  0.69245648E-01  0.69245648E-01          1.00000000000  2  1
  23  0.64808141E-01  0.64808141E-01          1.00000000000  2  1
  24  0.66861231E-01  0.66861231E-01          1.00000000000 14 13
  25  0.70041112E-01  0.70041112E-01          1.00000000000 15 14
  26  0.61135249E-01  0.61135249E-01          1.00000000000 15 14
  27  0.66574932E-01  0.66574932E-01          1.00000000000  2  1
  28  0.67312068E-01  0.67312068E-01          1.00000000000 14 13
  29  0.47056643E-01  0.47056643E-01          1.00000000000 12 11
  30  0.70509435E-01  0.70509435E-01          1.00000000000  2  1
  31  0.23138767E-01  0.23138767E-01          1.00000000000 15 14

Now I just need to take into account that the Fortran arrays start at 1 while the cudacpp arrays start at 0.
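
(Just as a reminder for the comparison script: once the ordering is the same, the remaining difference is only the 1-based vs 0-based offset, i.e. something like the following hypothetical helper.)

    # Hypothetical helper for the printout comparison (the name is mine, it does
    # not exist in the code): with identical ordering, the two indices differ
    # only by an offset of 1.
    def fortran_to_cudacpp_ihel( ihelf ):
        """Convert a 1-based Fortran helicity index (1..16) to a 0-based cudacpp index (0..15)."""
        return ihelf - 1

    assert fortran_to_cudacpp_ihel( 15 ) == 14 # cf. the '15 14' lines in the printout above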

@valassi
Member Author

valassi commented Dec 14, 2022

OK, fixed in codegen: I just changed one False to True...

    # AV - replace the export_cpp.OneProcessExporterCPP method (fix helicity order and improve formatting)
    def get_helicity_matrix(self, matrix_element):
        """Return the Helicity matrix definition lines for this matrix element"""
        helicity_line = '    static constexpr short helicities[ncomb][mgOnGpu::npar] = {\n      '; # AV (this is tHel)
        helicity_line_list = []
        for helicities in matrix_element.get_helicity_matrix(allow_reverse=True): # AV was False: different order in Fortran and cudacpp! #569
            helicity_line_list.append( '{ ' + ', '.join(['%d'] * len(helicities)) % tuple(helicities) + ' }' ) # AV
        return helicity_line + ',\n      '.join(helicity_line_list) + ' };' # AV
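
As far as I can see in this ggtt example, the only effect of the reversed order is that the helicity of the last particle is enumerated as +1 before -1, while the first three particles keep the -1 before +1 order; whether this is exactly what allow_reverse=True does in general is an assumption I have not checked in the MG5aMC internals. A small standalone sketch of that observation:

    # Standalone sketch (not part of the codegen), specific to this ggtt example:
    # the Fortran ordering (and now the regenerated cudacpp tHel) is reproduced
    # from plain binary counting by enumerating only the last particle's helicity
    # in the opposite order (+1 before -1).
    import itertools
    flipped = list( itertools.product( [ -1, 1 ], [ -1, 1 ], [ -1, 1 ], [ 1, -1 ] ) )
    assert flipped[0] == ( -1, -1, -1, 1 ) # NHEL(:,1) in the Fortran DATA above
    assert flipped[11] == ( 1, -1, 1, -1 ) # NHEL(:,12)
    assert flipped[14] == ( 1, 1, 1, 1 )   # NHEL(:,15)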

valassi added a commit to valassi/madgraph4gpu that referenced this issue Dec 14, 2022

… gg_tt.mad to CODEGEN

The Fortran codegen was using allow_reverse=True and the cpp codegen allow_reverse=False.
Now moved to allow_reverse=True also in cudacpp.
@valassi valassi linked a pull request Dec 14, 2022 that will close this issue
@valassi
Member Author

valassi commented Dec 14, 2022

This is fixed in #570, which I will soon merge. Closing.

@valassi valassi closed this as completed Dec 14, 2022
valassi added a commit to mg5amcnlo/mg5amcnlo_cudacpp that referenced this issue Aug 16, 2023
…ort order of helicity from gg_tt.mad to CODEGEN

The Fortran codegen was using allow_reverse=True and the cpp codegen allow_reverse=False.
Now moved to allow_reverse=True also in cudacpp.