16-bit floating-point support for C/C++ #65
Attached. Any comments and/or suggestions are welcome.
Is the MPI_CXX_* type aimed at providing C++ bindings? That support has been removed from MPI and should not now be updated or extended. MPI2_HALF looks like a typo for MPI_HALF2? If we have MPI_C_HALF_COMPLEX (for C) do we need MPI_F_HALF_COMPLEX (for Fortran) and MPI_*_HALF_COMPLEX2 (for reductions, in both languages)? How should a user program determine if an MPI library supports these types if they are optional? Should there be a mandatory compile-time constant like MPI_HAS_HALF_TYPES?
@dholmes-epcc-ed-ac-uk My understanding is that the C++ bindings were removed, but using the C bindings from C++ programs is still supported, and the C++ datatypes are required for that case.
@dholmes-epcc-ed-ac-uk I assume MPI2_HALF was indeed a typo for the reduction type MPI_2HALF. So far, optional types have simply been absent from the headers; the standard assumed it was the developer's responsibility to detect the lack of such types by whatever means they want or need. @ahori What is the expected link between the newly added half-precision type and MPI_REAL2 and MPI_COMPLEX4? Are modern compilers translating the Fortran REAL*2 type into half precision?
We deprecated the C++ functions. We still need C++ types, for example, because C99
The same way you determine whether the 14 optional types we already have in Table 13.2 are supported.
@ahori We should use the ISO C candidate name _Float16.
Correct.
I just added the FP16 types almost automatically, without any particular intention. I have noticed that there is no explanation of the MPI_C_* and MPI_CXX_* types in the MPI-IO external32 section. I checked MPI 2.2 and found these were added in 2.2. Can anybody explain what they are?
No idea. But the same thing happens with the other optional types. (configure can detect them :-)
I have just remembered that Rolf suggested using "short float" instead of "half." He also suggested having MPI_FLOAT*_T types, as below:
Named Predefined Datatypes | C types
The traditional type names such as MPI_DOUBLE cannot be deprecated, for backward-compatibility reasons. Anyway, I will update the text in a few days, before flying to Denver.
I think/suppose/hope so.
Oh, I got the answer to this. There are two ways of expressing complex numbers in C++,
Oh, why do we need C++ types? What happens when a C++ object appears as an argument of MPI_Send or MPI_Recv? I believe this is not allowed. Are there any situations where a C++ object can be the argument of an MPI function?
Accessing the C binding from C++ is allowed. From MPI-3.1 p.5:
From MPI-3.1 p.36:
And as you mentioned, C++ has its own complex type. I don't mean sending/receiving C++ class objects.
According to http://en.cppreference.com/w/cpp/numeric/complex (in the box titled "Non-static data members"), the layouts of complex numbers in C++ and C are (guaranteed to be?) the same.
Further, I think the same thing happens with bool types. Am I missing something?
@ahori Oh, sorry, I didn't know that the layouts of complex numbers in C++ and C are guaranteed to be the same. If so, my opinion is meaningless.
@kawashima-fj I had to dig through old email to remember the details, but I found them. @jsquyres captured the background in https://blogs.cisco.com/performance/the-mpi-c-bindings-are-gone-what-does-it-mean-to-you, which includes the following:
This change was implemented in ticket 340.
@ahori We need C++ complex types in MPI because C's complex types and C++'s complex types are distinct types, even though their layouts match.
@jeffhammond Thanks! I understand.
Here is the updated (half -> short) version.
@ahori My comments on your PDF:
In p.179,
In p.182, we should update the following sentence ("nine").
In p.544,
In p.182, should we add
@ahori I am concerned about the pair types. My preference would be to recognize that Fortran now supports the equivalent of structs, and to use those, at the expense of not supporting legacy Fortran usage. I've already proposed a more general representation of pair types (#18 (comment)), but it needs a ticket. Update: I created a ticket for this: #70.
I agree with you, and I will remove them.
Here is the second version reflecting your comments. |
I'm sorry for not replying sooner. I cannot attend the meeting this week, but you will discuss this topic. Why do we make
(a)
I believe the reason is only (a), because the current proposal in the C and C++ WGs, ISO/IEC JTC 1/SC 22/WG 14 N2016 and ISO/IEC JTC 1/SC 22/WG 21 P0192R1 ("Adding Fundamental Type for Short Float"), has the following sentences.
Another document in the C++ WG, ISO/IEC JTC 1/SC 22/WG 21 P0303R0 ("Extensions to C++ for Short Float Type"), has similar wording regarding precision.
This situation is the same as
In the sense of (a), I think all
And in the future,
Other editorial comments against the current draft:
Problem
There is interest in supporting 16-bit floating point (henceforth FP16) in MPI.
See https://lists.mpi-forum.org/pipermail/mpiwg-p2p/2017-June/thread.html
Proposal
Add a type associated with FP16 that does not depend on the Fortran definition (MPI_REAL2). See references. Various non-standard names for FP16 exist, including __fp16 and short float. The candidate ISO name is _Float16. It may be prudent for MPI to add a type (along the lines of MPI_Count and MPI_Aint), since ISO C and C++ have not standardized names yet and they may not be identical; the typedef would be MPI_Float16, which may be deprecated as soon as there is an ISO C/C++ name.
Changes to the Text
TODO
Impact on Implementations
The implementation of FP16 is straightforward, following whatever code exists for MPI_REAL2 today, or by copying code for FP32 with s/32/16/g.
A high-quality implementation may need to use special care when implementing reduction operators that can lose precision.
Impact on Users
FP16 support will be available independent of anything related to Fortran.
Users working on machine learning do not use Fortran anywhere (except perhaps indirectly in BLAS) and are not likely to be satisfied with MPI_REAL2, particularly since an implementation can omit support for it if a Fortran compiler is not present.
References
(_Float16)
_Float16 support for C/C++ commit