/*! Creates a MPI_Access that is based on the processors included in \a ProcessorGroup.
-This routine may be called for easier use of MPI API.
+This class may be used to simplify the use of the MPI API.
\param ProcessorGroup MPIProcessorGroup object giving access to group management
\param BaseTag and MaxTag define the range of tags to be used.
}
/*
-MPI_Access et "RequestIds" :
+MPI_Access and "RequestIds" :
============================
-. ATTENTION : Dans le document de specification, la distinction
- n'est pas faite clairement entre les "MPITags" (voir ci-dessus)
- qui sont un argument des appels a MPI et les "RequestIds" qui
- ne concernent pas les appels MPI. Ces "RequestIds" figurent
- en effet sous le nom de tag comme argument d'entree/sortie dans l'API
- de MPI_Access decrite dans le document de specification. Mais
- dans l'implementation on a bien le nom RequestId (ou bien
+. WARNING : In the specification document, the distinction
+  between the "MPITags" (see above) and the "RequestIds" is not
+  made clearly. The "MPITags" are arguments of the calls to MPI,
+  whereas the "RequestIds" do not concern the MPI calls. In the
+  MPI_Access API described in the specification document, these
+  "RequestIds" appear as in/out arguments named "tag". But in the
+  implementation we use the proper name RequestId (or
RecvRequestId/SendRequestId).
-. Lors de la soumission d'une requete d'ecriture ou de lecture MPI
- via MPI_Access, on obtient un identifieur "RequestId".
- Cet identifieur "RequestId" correspond a une structure RequestStruct
- de MPI_Access a laquelle on accede avec la map
+. When an MPI write or read request is submitted via MPI_Access,
+  we get a "RequestId" identifier.
+  That identifier corresponds to a RequestStruct structure of
+  MPI_Access, which is accessed through the map
"_MapOfRequestStruct".
- Cette structure RequestStruct permet de gerer MPI_Request et
- MPI_Status * de MPI et permet d'obtenir des informations sur
- la requete : target, send/recv, tag, [a]synchrone, type, outcount.
-
-. C'est cet identifieur qui peut etre utilise pour controler une
- requete asynchrone via MPI_Access : Wait, Test, Probe, etc...
-
-. En pratique "RequestId" est simplement un entier de l'intervalle
- [0 , 2**32-1]. Il y a uniquement un compteur cyclique global
- aussi bien pour les [I]Send que pour les [I]Recv.
-
-. Ces "RequestIds" et leur structures associees facilitent les
- communications asynchrones.
- Par exemple on a mpi_access->Wait( int RequestId )
- au lieu de MPI_Wait(MPI_Request *request, MPI_Status *status)
- avec gestion de status.
-
-. L'API de MPI_Access peut fournir les "SendRequestIds" d'un "target",
- les "RecvRequestIds" d'un "source" ou bien les "SendRequestIds" de
- tous les "targets" ou les "RecvRequestIds" de tous les "sources".
- Cela permet d'eviter leur gestion au niveau de Presentation-ParaMEDMEM.
+  That RequestStruct structure manages the MPI structures
+  MPI_Request and MPI_Status *. It also gives access to
+  information about the request : target, send/recv, tag,
+  [a]synchronous, type, outcount.
+
+. That identifier is used to control an asynchronous request
+ via MPI_Access : Wait, Test, Probe, etc...
+
+. In practice "RequestId" is simply an integer in the interval
+  [0 , 2**32-1]. There is a single global cyclic counter, shared
+  by the [I]Sends and the [I]Recvs.
+
+. That "RequestIds" and their associated structures give an easy
+ way to manage asynchronous communications.
+ For example we have mpi_access->Wait( int RequestId ) instead of
+ MPI_Wait(MPI_Request *request, MPI_Status *status).
+
+. The MPI_Access API can return the "SendRequestIds" of a "target",
+  the "RecvRequestIds" of a "source", the "SendRequestIds" of
+  all "targets" or the "RecvRequestIds" of all "sources".
+  That avoids having to manage them at the Presentation-ParaMEDMEM
+  level.
*/
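+/* Illustrative sketch (not part of the implementation) : an asynchronous
+   exchange driven by RequestIds instead of raw MPI_Request/MPI_Status.
+   "mpi_access" is assumed to be a MPI_Access* built on the MPIProcessorGroup,
+   and "target"/"source" are ranks of that group :
+
+     int SendReqId , RecvReqId ;
+     double sendbuf[10] , recvbuf[10] ;
+     mpi_access->ISend( sendbuf , 10 , MPI_DOUBLE , target , SendReqId ) ;
+     mpi_access->IRecv( recvbuf , 10 , MPI_DOUBLE , source , RecvReqId ) ;
+     // ... computation overlapping the communication ...
+     mpi_access->Wait( SendReqId ) ;   // no MPI_Request/MPI_Status handling here
+     mpi_access->Wait( RecvReqId ) ;
+     mpi_access->DeleteRequest( SendReqId ) ;   // free the associated RequestStruct
+     mpi_access->DeleteRequest( RecvReqId ) ;
+*/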
int MPI_Access::NewRequest( MPI_Datatype datatype, int tag , int destsourcerank ,
}
/*
-MPI_Access et "tags" (ou "MPITags") :
+MPI_Access and "tags" (or "MPITags") :
=====================================
-. Le constructeur permet optionnellement de fixer une plage de tags
- a utiliser : [BaseTag , MaxTag].
- Par defaut c'est [ 0 , MPI_TAG_UB], MPI_TAG_UB etant la valeur
- maximum d'une implementation de MPI (valeur minimum 32767
- soit 2**15-1). Sur awa avec l'implementation lam MPI_TAG_UB
- vaut 7353944. La norme MPI specifie que cette valeur doit
- etre la meme dans les process demarres avec mpirun.
- Dans le cas de l'usage simultane du meme IntraCommunicator
- dans un meme process (ou de plusieurs IntraCommunicator
- d'intersection non nulle) cela peut eviter toute ambiguite
- et aider au debug.
-
-. Dans MPI_Access les tags sont constitues de deux parties
- (#define ModuloTag 10) :
- + Le dernier digit decimal correspond au MPI_DataType ( 1 pour
- les messages "temps", 2 pour MPI_INT et 3 pour MPI_DOUBLE)
- + La valeur des autres digits correspond a une numerotation
- circulaire des messages.
- + Un message "temps" et le message de donnees associe ont le
- meme numero de message (mais des types et donc des tags
- differents).
-
-. Pour un envoi de message d'un process "source" vers un process
- "target", on dispose de _SendMPITag[target] dans le process
- source (il contient le dernier "tag" utilise pour l'envoi de
- messages vers le process target).
- Et dans le process "target" qui recoit ce message, on dispose
- de _RecvMPITag[source] (il contient le dernier "tag" utilise
- pour la reception de messages du process source).
- Naturellement d'apres la norme MPI les valeurs de ces tags sont
- les memes.
+. The constructor optionally lets the caller choose the interval
+  of tags to use : [BaseTag , MaxTag].
+  The default is [ 0 , MPI_TAG_UB], MPI_TAG_UB being the maximum
+  value allowed by the MPI implementation (at least 32767 = 2**15-1).
+  On awa with the lam implementation the value of MPI_TAG_UB is
+  7353944. The MPI standard specifies that this value must be the
+  same in all the processes started by mpirun.
+  When the same IntraCommunicator is used simultaneously in the
+  same process (or when several IntraCommunicators have a non-empty
+  intersection), this can avoid any ambiguity and help debugging.
+
+. In MPI_Access the tags are made of two parts (#define ModuloTag 10) :
+  + The last decimal digit corresponds to the MPI_DataType ( 1 for
+    TimeMessages, 2 for MPI_INT and 3 for MPI_DOUBLE)
+  + The value of the other digits is a circular message number.
+  + A TimeMessage and the associated DataMessage have the same
+    message number (but different types and therefore different tags).
+
+. For the Send of a message from a process "source" to a process
+  "target", the source process keeps _SendMPITag[target]
+  (it contains the last "tag" used to send messages to the
+  target process).
+  And the "target" process which receives that message keeps
+  _RecvMPITag[source] (it contains the last "tag" used to receive
+  messages from the source process).
+  Naturally, according to the MPI standard, these two tag values
+  are the same.
*/
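+/* Illustrative example of the tag layout above (ModuloTag = 10) :
+   a tag value of 1233 decomposes into
+     method         = 1233 % ModuloTag = 3    (MPI_DOUBLE message)
+     message number = 1233 / ModuloTag = 123
+   The TimeMessage associated with that data message carries the same
+   message number 123 but method 1, i.e. the tag 123*ModuloTag + 1 = 1231.
+*/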
int MPI_Access::NewSendTag( MPI_Datatype datatype, int destrank , int method ,
bool asynchronous, int &RequestId ) {
return tag ;
}
+// Returns the number of all SendRequestIds that may be used to allocate
+// ArrayOfSendRequests for the call to SendRequestIds
int MPI_Access::SendRequestIdsSize() {
int size = 0 ;
int i ;
return size ;
}
+// Returns in ArrayOfSendRequests with the dimension "size" all the
+// SendRequestIds
int MPI_Access::SendRequestIds(int size, int *ArrayOfSendRequests) {
int destrank ;
int i = 0 ;
return i ;
}
+// Returns the number of all RecvRequestIds that may be used to allocate
+// ArrayOfRecvRequests for the call to RecvRequestIds
int MPI_Access::RecvRequestIdsSize() {
int size = 0 ;
int i ;
return size ;
}
+// Returns in ArrayOfRecvRequests with the dimension "size" all the
+// RecvRequestIds
int MPI_Access::RecvRequestIds(int size, int *ArrayOfRecvRequests) {
int sourcerank ;
int i = 0 ;
return i ;
}
+// Returns in ArrayOfSendRequests with the dimension "size" all the
+// SendRequestIds to a destination rank
int MPI_Access::SendRequestIds(int destrank, int size, int *ArrayOfSendRequests) {
if (size < _SendRequests[destrank].size() ) throw MEDMEM::MEDEXCEPTION("wrong call to MPI_Access::SendRequestIds");
int i = 0 ;
return _SendRequests[destrank].size() ;
}
+// Returns in ArrayOfRecvRequests with the dimension "size" all the
+// RecvRequestIds from a sourcerank
int MPI_Access::RecvRequestIds(int sourcerank, int size, int *ArrayOfRecvRequests) {
if (size < _RecvRequests[sourcerank].size() ) throw MEDMEM::MEDEXCEPTION("wrong call to MPI_Access::RecvRequestIds");
int i = 0 ;
return _RecvRequests[sourcerank].size() ;
}
+// Send in synchronous mode count values of type datatype from buffer to target
+// (returns RequestId identifier even if the corresponding structure is deleted :
+// it is only in order to have the same signature as the asynchronous mode)
int MPI_Access::Send(void* buffer, int count, MPI_Datatype datatype, int target,
int &RequestId) {
int sts = MPI_SUCCESS ;
return sts ;
}
+// Receive (read) in synchronous mode count values of type datatype in buffer from source
+// (returns RequestId identifier even if the corresponding structure is deleted :
+// it is only in order to have the same signature as the asynchronous mode)
+// The output argument OutCount is optional : *OutCount <= count
int MPI_Access::Recv(void* buffer, int count, MPI_Datatype datatype, int source,
int &RequestId, int *OutCount) {
int sts = MPI_SUCCESS ;
return sts ;
}
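+/* Illustrative usage sketch of the synchronous Send/Recv (assuming mpi_access
+   is a MPI_Access* and target/source are ranks of the group) :
+
+     int SendReqId , RecvReqId , outcount ;
+     double values[5] = { 0. , 1. , 2. , 3. , 4. } ;
+     mpi_access->Send( values , 5 , MPI_DOUBLE , target , SendReqId ) ;
+     // ... and on the target side :
+     double recvvalues[5] ;
+     mpi_access->Recv( recvvalues , 5 , MPI_DOUBLE , source , RecvReqId , &outcount ) ;
+*/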
+// Send in asynchronous mode count values of type datatype from buffer to target
+// Returns RequestId identifier.
int MPI_Access::ISend(void* buffer, int count, MPI_Datatype datatype, int target,
int &RequestId) {
int sts = MPI_SUCCESS ;
return sts ;
}
+// Receive (read) in asynchronous mode count values of type datatype in buffer from source
+// Returns RequestId identifier.
int MPI_Access::IRecv(void* buffer, int count, MPI_Datatype datatype, int source,
int &RequestId) {
int sts = MPI_SUCCESS ;
return sts ;
}
+// Perform a Send and a Recv in synchronous mode
int MPI_Access::SendRecv(void* sendbuf, int sendcount, MPI_Datatype sendtype,
int dest, int &SendRequestId,
void* recvbuf, int recvcount, MPI_Datatype recvtype,
return sts ;
}
+// Perform a Send and a Recv in asynchronous mode
int MPI_Access::ISendRecv(void* sendbuf, int sendcount, MPI_Datatype sendtype,
int dest, int &SendRequestId,
void* recvbuf, int recvcount, MPI_Datatype recvtype,
return sts ;
}
+// Perform a wait of a Send or Recv asynchronous Request
+// Do nothing for a synchronous Request
+// Manage MPI_Request * and MPI_Status * structure
int MPI_Access::Wait( int RequestId ) {
int status = MPI_SUCCESS ;
if ( !MPICompleted( RequestId ) ) {
return status ;
}
+// Perform a "test" of a Send or Recv asynchronous Request
+// If the request is done, returns true in the flag argument
+// If the request is not finished, returns false in the flag argument
+// Do nothing for a synchronous Request
+// Manage MPI_Request * and MPI_Status * structure
int MPI_Access::Test(int RequestId, int &flag) {
int status = MPI_SUCCESS ;
flag = MPICompleted( RequestId ) ;
return status ;
}
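+/* Illustrative polling sketch (RecvReqId is assumed to come from IRecv) :
+
+     int flag = 0 ;
+     while ( !flag ) {
+       mpi_access->Test( RecvReqId , flag ) ;
+       // ... do some other work while the request is not completed ...
+     }
+*/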
+// Perform a wait of each Send or Recv asynchronous Request of the array
+// array_of_RequestIds of size "count".
+// That array may be filled with a call to SendRequestIds or RecvRequestIds
+// Do nothing for a synchronous Request
+// Manage MPI_Request * and MPI_Status * structure
int MPI_Access::WaitAll(int count, int *array_of_RequestIds) {
if ( _Trace )
cout << "WaitAll" << _MyRank << " : count " << count << endl ;
return retstatus ;
}
+// Perform a "test" of each Send or Recv asynchronous Request of the array
+// array_of_RequestIds of size "count".
+// That array may be filled with a call to SendRequestIds or RecvRequestIds
+// If all the requests are finished, returns true in the flag argument
+// If at least one request is not finished, returns false in the flag argument
+// Do nothing for a synchronous Request
+// Manage MPI_Request * and MPI_Status * structure
int MPI_Access::TestAll(int count, int *array_of_RequestIds, int &flag) {
if ( _Trace )
cout << "TestAll" << _MyRank << " : count " << count << endl ;
return status ;
}
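+/* Illustrative sketch : waiting for all the pending Sends to all targets
+   (mpi_access is assumed to be a MPI_Access*) :
+
+     int size  = mpi_access->SendRequestIdsSize() ;
+     int *ReqIds = new int[ size ] ;
+     int count = mpi_access->SendRequestIds( size , ReqIds ) ;
+     mpi_access->WaitAll( count , ReqIds ) ;
+     mpi_access->DeleteRequests( count , ReqIds ) ;
+     delete [] ReqIds ;
+*/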
+// Probe checks if a message is available for read from the FromSource rank.
+// Returns the corresponding source, MPITag, datatype and outcount.
+// Probe is a blocking call which waits until a message is available.
int MPI_Access::Probe(int FromSource, int &source, int &MPITag,
MPI_Datatype &datatype, int &outcount) {
MPI_Status aMPIStatus ;
return sts ;
}
+// IProbe checks if a message is available for read from FromSource rank.
+// If there is a message available, returns the corresponding source,
+// MPITag, datatype and outcount with flag = true
+// If not, returns flag = false
int MPI_Access::IProbe(int FromSource, int &source, int &MPITag,
MPI_Datatype &datatype, int &outcount, int &flag) {
MPI_Status aMPIStatus ;
return sts ;
}
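+/* Illustrative sketch : non-blocking check for an incoming message
+   ("fromrank" is an assumed rank of the group) :
+
+     int source , MPITag , outcount , flag ;
+     MPI_Datatype datatype ;
+     mpi_access->IProbe( fromrank , source , MPITag , datatype , outcount , flag ) ;
+     if ( flag && datatype == MPI_DOUBLE ) {
+       int RecvReqId ;
+       double *buffer = new double[ outcount ] ;
+       mpi_access->Recv( buffer , outcount , datatype , source , RecvReqId ) ;
+       // ... use buffer, then delete [] buffer ...
+     }
+*/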
+// Cancel concerns a "posted" asynchronous IRecv
+// Returns flag = true if the receiving request was successfully canceled
+// Returns flag = false if the receiving request was finished but not canceled
+// Uses cancel, wait and test_cancelled of the MPI API
int MPI_Access::Cancel( int RecvRequestId, int &flag ) {
flag = 0 ;
int sts = _CommInterface.cancel( MPIRequest( RecvRequestId ) ) ;
return sts ;
}
+// Cancel concerns a "pending" receiving message (without IRecv "posted")
+// Returns flag = true if the message was successfully canceled
+// Returns flag = false if the receiving request was finished but not canceled
+// Uses Irecv, cancel, wait and test_cancelled of the MPI API
int MPI_Access::Cancel( int source, int theMPITag, MPI_Datatype datatype,
int outcount, int &flag ) {
int sts ;
return sts ;
}
+
+// CancelAll concerns all "pending" receiving messages (without IRecv "posted")
+// CancelAll uses IProbe and Cancel (see above)
int MPI_Access::CancelAll() {
int sts = MPI_SUCCESS ;
int target ;
return sts ;
}
+// Same as barrier of MPI API
int MPI_Access::Barrier() {
int status = _CommInterface.barrier( *_IntraCommunicator ) ;
return status ;
}
+// Same as Error_String of MPI API
int MPI_Access::Error_String(int errorcode, char *string, int *resultlen) const {
return _CommInterface.error_string( errorcode, string, resultlen) ;
}
+// Returns source, tag, error and outcount corresponding to receiving RequestId
+// By default the corresponding structure of RequestId is deleted
int MPI_Access::Status(int RequestId, int &source, int &tag, int &error,
int &outcount, bool keepRequestStruct) {
MPI_Status *status = MPIStatus( RequestId ) ;
return _CommInterface.request_free( request ) ;
}
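+/* Illustrative sketch (RecvReqId is assumed to come from a completed IRecv) :
+
+     int source , tag , error , outcount ;
+     mpi_access->Status( RecvReqId , source , tag , error , outcount , false ) ;
+     // keepRequestStruct = false ==> the RequestStruct of RecvReqId is deleted
+*/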
+// Prints all information about all known requests, for debugging purposes
void MPI_Access::Check() const {
int i = 0 ;
map< int , RequestStruct * >::const_iterator MapOfRequestStructiterator ;
cout << "EndCheck" << _MyRank << endl ;
}
+// Outputs fields of a TimeMessage structure
ostream & operator<< (ostream & f ,const TimeMessage & aTimeMsg ) {
f << " time " << aTimeMsg.time << " deltatime " << aTimeMsg.deltatime
<< " tag " << aTimeMsg.tag ;
return f;
}
+// Outputs the DataType coded in a Tag
ostream & operator<< (ostream & f ,const _MessageIdent & methodtype ) {
switch (methodtype) {
case _MessageTime :
_Trace = trace ;
}
+// Deletes the Request structure corresponding to the RequestId identifier,
+// after the deletion of the MPI_Request * and MPI_Status * structures, and
+// removes it from _MapOfRequestStruct (erase)
inline void MPI_Access::DeleteRequest( int RequestId ) {
struct RequestStruct *aRequestStruct = _MapOfRequestStruct[ RequestId ] ;
if ( aRequestStruct ) {
<< " ) Request not found" << endl ;
}
}
+// Delete all requests of the array ArrayOfSendRequests
inline void MPI_Access::DeleteRequests(int size , int *ArrayOfSendRequests ) {
int i ;
for ( i = 0 ; i < size ; i++ ) {
}
}
+// Returns the last MPITag of the destination rank destrank
inline int MPI_Access::SendMPITag(int destrank) {
return _SendMPITag[destrank] ;
}
+// Returns the last MPITag of the source rank sourcerank
inline int MPI_Access::RecvMPITag(int sourcerank) {
return _RecvMPITag[sourcerank] ;
}
+// Returns the number of all SendRequestIds matching a destination rank. It may be
+// used to allocate ArrayOfSendRequests for the call to SendRequestIds
inline int MPI_Access::SendRequestIdsSize(int destrank) {
return _SendRequests[destrank].size() ;
}
+// Returns the number of all RecvRequestIds matching a source rank. It may be
+// used to allocate ArrayOfRecvRequests for the call to RecvRequestIds
inline int MPI_Access::RecvRequestIdsSize(int sourcerank) {
return _RecvRequests[sourcerank].size() ;
}
+// Returns the MPI_Datatype (registered in MPI in the constructor with
+// MPI_Type_struct and MPI_Type_commit) for TimeMessages
inline MPI_Datatype MPI_Access::TimeType() const {
return _MPI_TIME ;
}
+// Returns true if the tag MPITag corresponds to a TimeMessage
inline bool MPI_Access::IsTimeMessage( int MPITag ) const {
return ((MPITag%ModuloTag) == _MessageTime) ; } ;
+// Returns the MPI size of a TimeMessage
inline MPI_Aint MPI_Access::TimeExtent() const {
MPI_Aint extent ;
MPI_Type_extent( _MPI_TIME , &extent ) ;
return extent ;
}
+// Returns the MPI size of a MPI_INT
inline MPI_Aint MPI_Access::IntExtent() const {
MPI_Aint extent ;
MPI_Type_extent( MPI_INT , &extent ) ;
return extent ;
}
+// Returns the MPI size of a MPI_DOUBLE
inline MPI_Aint MPI_Access::DoubleExtent() const {
MPI_Aint extent ;
MPI_Type_extent( MPI_DOUBLE , &extent ) ;
return extent ;
}
+// Returns the MPI size of the MPI_Datatype datatype
inline MPI_Aint MPI_Access::Extent( MPI_Datatype datatype ) const {
if ( datatype == _MPI_TIME )
return TimeExtent() ;
return 0 ;
}
+// Returns the MPITag of the request corresponding to RequestId identifier
inline int MPI_Access::MPITag( int RequestId ) {
struct RequestStruct *aRequestStruct = _MapOfRequestStruct[ RequestId ] ;
if ( aRequestStruct ) {
}
return -1 ;
}
+// Returns the MPITarget of the request corresponding to RequestId identifier
inline int MPI_Access::MPITarget( int RequestId ) {
struct RequestStruct *aRequestStruct = _MapOfRequestStruct[ RequestId ] ;
if ( aRequestStruct ) {
}
return -1 ;
}
+// Returns true if the request corresponding to RequestId identifier was [I]Recv
inline bool MPI_Access::MPIIsRecv( int RequestId ) {
struct RequestStruct *aRequestStruct = _MapOfRequestStruct[ RequestId ] ;
if ( aRequestStruct ) {
}
return false ;
}
+// Returns true if the request corresponding to RequestId identifier was asynchronous
inline bool MPI_Access::MPIAsynchronous( int RequestId ) {
struct RequestStruct *aRequestStruct = _MapOfRequestStruct[ RequestId ] ;
if ( aRequestStruct ) {
}
return false ;
}
+// Returns true if the request corresponding to RequestId identifier was completed
inline bool MPI_Access::MPICompleted( int RequestId ) {
struct RequestStruct *aRequestStruct = _MapOfRequestStruct[ RequestId ] ;
if ( aRequestStruct ) {
}
return true ;
}
+// Returns the MPI_Datatype of the request corresponding to RequestId identifier
inline MPI_Datatype MPI_Access::MPIDatatype( int RequestId ) {
struct RequestStruct *aRequestStruct = _MapOfRequestStruct[ RequestId ] ;
if ( aRequestStruct ) {
}
return (MPI_Datatype ) NULL ;
}
+// Returns the size of the receiving message of the request corresponding to
+// RequestId identifier
inline int MPI_Access::MPIOutCount( int RequestId ) {
struct RequestStruct *aRequestStruct = _MapOfRequestStruct[ RequestId ] ;
if ( aRequestStruct ) {
return 0 ;
}
+// Increments the previous tag value (cyclically)
+// Look at MPI_Access::NewSendTag/NewRecvTag in MPI_Access.cxx
inline int MPI_Access::IncrTag( int prevtag ) {
int tag ;
if ( (prevtag % ModuloTag) == _MessageTime ) {
tag = _BaseMPITag ;
return tag ;
}
+// Returns the MPITag with the method-type field
+// Look at MPI_Access::NewSendTag/NewRecvTag in MPI_Access.cxx
inline int MPI_Access::ValTag( int tag, int method ) {
return ((tag/ModuloTag)*ModuloTag) + method ;
}
+// Remove a Request identifier from the list _RecvRequests/_SendRequests for
+// the corresponding target.
inline void MPI_Access::DeleteSendRecvRequest( int RequestId ) {
if ( _Trace )
cout << "MPI_Access::DeleteSendRecvRequest" << _MyRank
}
}
+// Delete the MPI structure MPI_Status * of a RequestId
inline void MPI_Access::DeleteStatus( int RequestId ) {
if ( _MapOfRequestStruct[RequestId]->MPIStatus != NULL ) {
delete _MapOfRequestStruct[RequestId]->MPIStatus ;
}
}
+// Returns the MPI structure MPI_Request * of a RequestId
inline MPI_Request * MPI_Access::MPIRequest( int RequestId ) {
struct RequestStruct *aRequestStruct = _MapOfRequestStruct[ RequestId ] ;
//cout << "MPIRequest" << _MyRank << "(" << RequestId
}
return &mpirequestnull ;
}
+// Returns the MPI structure MPI_Status * of a RequestId
inline MPI_Status * MPI_Access::MPIStatus( int RequestId ) {
struct RequestStruct *aRequestStruct = _MapOfRequestStruct[ RequestId ] ;
//cout << "MPIStatus" << _MyRank << "(" << RequestId
}
return NULL ;
}
+// Set the MPICompleted field of the structure Request corresponding to RequestId
+// identifier with the value completed
inline void MPI_Access::SetMPICompleted( int RequestId , bool completed ) {
struct RequestStruct *aRequestStruct = _MapOfRequestStruct[ RequestId ] ;
if ( aRequestStruct ) {
aRequestStruct->MPICompleted = completed ;
}
}
+// Set the MPIOutCount field of the structure Request corresponding to RequestId
+// identifier with the value outcount
inline void MPI_Access::SetMPIOutCount( int RequestId , int outcount ) {
struct RequestStruct *aRequestStruct = _MapOfRequestStruct[ RequestId ] ;
if ( aRequestStruct ) {
aRequestStruct->MPIOutCount = outcount ;
}
}
+// Nullify the MPIStatus field of the structure Request corresponding to RequestId
+// identifier
inline void MPI_Access::ClearMPIStatus( int RequestId ) {
struct RequestStruct *aRequestStruct = _MapOfRequestStruct[ RequestId ] ;
if ( aRequestStruct ) {
}
}
+// Returns the _MessageIdent enum value corresponding to the MPI_Datatype datatype
+// Look at MPI_Access::NewSendTag/NewRecvTag in MPI_Access.cxx
inline _MessageIdent MPI_Access::MethodId( MPI_Datatype datatype ) const {
_MessageIdent aMethodIdent ;
if ( datatype == _MPI_TIME ) {
}
return aMethodIdent ;
}
+// Returns the MPI_Datatype corresponding to the _MessageIdent enum aMethodIdent
inline MPI_Datatype MPI_Access::Datatype( _MessageIdent aMethodIdent ) const {
MPI_Datatype aDataType ;
switch( aMethodIdent ) {
}
/*
-MPI_AccessDEC et la gestion des SendBuffers :
-=============================================
+MPI_AccessDEC and the management of SendBuffers :
+=================================================
-. Comme dans les communications collectives on n'envoie que des
- parties du meme buffer à chaque process "target", il faut s'assurer
- en asynchrone que toutes ces parties sont disponibles pour
- pouvoir liberer le buffer.
+. As in the collective communications we send only parts of the
+  same buffer to each "target", in asynchronous mode we have to
+  make sure that all those parts are no longer in use before
+  freeing the buffer.
-. On suppose que ces buffers ont ete alloues avec un new double[]
+. We assume that those buffers have been allocated with new double[],
+  so they are released with delete [].
-. La structure SendBuffStruct permet de conserver l'adresse du buffer
- et de gerer un compteur de references de ce buffer. Elle comporte
- aussi MPI_Datatype pour pouvoir faire un delete [] (double *) ...
- lorsque le compteur est null.
+. The SendBuffStruct structure keeps the address of the buffer and
+  manages a reference counter for it. It also contains the
+  MPI_Datatype, so that the right delete [] (double *) ... can be
+  done when the counter reaches zero.
-. La map _MapOfSendBuffers etablit la correspondance entre chaque
- RequestId obtenu de MPI_Access->ISend(...) et un SendBuffStruct
- pour chaque "target" d'une partie du buffer.
+. The map _MapOfSendBuffers establishes the correspondence between each
+  RequestId returned by MPI_Access->ISend(...) and the SendBuffStruct
+  of the part of the buffer sent to that "target".
-. Tout cela ne concerne que les envois asynchrones. En synchrone,
- on detruit senbuf juste apres l'avoir transmis.
+. All that concerns only asynchronous Sends. In synchronous mode,
+  we delete sendbuf just after the Send.
*/
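+/* Minimal sketch of the reference counting described above. The field names
+   are illustrative assumptions ; the real SendBuffStruct is defined in
+   MPI_AccessDEC :
+
+     struct SendBuffStruct {
+       void         *SendBuffer ;   // address of the buffer shared by all targets
+       int           Counter ;      // number of pending ISend still using it
+       MPI_Datatype  DataType ;     // needed to perform the right delete []
+     } ;
+     // when the ISend to one target completes :
+     aSendBuffStruct->Counter -= 1 ;
+     if ( aSendBuffStruct->Counter == 0 ) {
+       if ( aSendBuffStruct->DataType == MPI_DOUBLE )
+         delete [] (double *) aSendBuffStruct->SendBuffer ;
+       delete aSendBuffStruct ;
+     }
+*/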
/*
-MPI_AccessDEC et la gestion des RecvBuffers :
-=============================================
+MPI_AccessDEC and the management of RecvBuffers :
+=================================================
-S'il n'y a pas d'interpolation, rien de particulier n'est fait.
+If there is no interpolation, no special action is done.
-Avec interpolation pour chaque target :
----------------------------------------
-. On a _TimeMessages[target] qui est un vecteur de TimesMessages.
- On en a 2 dans notre cas avec une interpolation lineaire qui
- contiennent le time(t0)/deltatime precedent et le dernier
+With interpolation for each target :
+------------------------------------
+. We have _TimeMessages[target], which is a vector of TimeMessages.
+  With a linear interpolation there are 2 of them in our case.
+  They contain the previous time(t0)/deltatime and the last
time(t1)/deltatime.
-. On a _DataMessages[target] qui est un vecteur de DatasMessages
- On en a 2 dans notre cas avec une interpolation lineaire qui
- contiennent les donnees obtenues par Recv au time(t0)/deltatime
- precedent et au dernier time(t1)/deltatime.
+. We have _DataMessages[target], which is a vector of DataMessages.
+  With a linear interpolation there are 2 of them in our case.
+  They contain the data received at the previous time(t0)/deltatime
+  and at the last time(t1)/deltatime.
-. Au temps _t(t*) du processus courrant on effectue l'interpolation
- entre les valeurs des 2 DatasMessages que l'on rend dans la
- partie de recvbuf correspondant au target pourvu que t0 < t* <= t1.
+. At the time _t(t*) of the current process we interpolate between
+  the values of the 2 DataMessages and store the result in the part
+  of recvbuf corresponding to the target, provided that t0 < t* <= t1.
-. Par suite de la difference des "deltatimes" entre process, on
- peut avoir t0 < t1 < t* auquel cas on aura une extrapolation.
+. Because the "deltatimes" differ between processes, we may have
+  t0 < t1 < t*, in which case an extrapolation is done.
-. Les vecteurs _OutOfTime, _DataMessagesRecvCount et _DataMessagesType
- contiennent pour chaque target true si t* > dernier t1, recvcount et
- MPI_Datatype pour finaliser la gestion des messages a la fin.
+. The vectors _OutOfTime, _DataMessagesRecvCount and _DataMessagesType
+  contain, for each target, true if t* > the last t1, the recvcount and
+  the MPI_Datatype used to finalize the messages at the end.
*/
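+/* Sketch of the linear (inter/extra)polation described above, for one value
+   of the part of recvbuf belonging to a target. t0/data0 and t1/data1 come
+   from the two TimeMessages/DataMessages, t is the current time _t(t*) of
+   the process :
+
+     double alpha = ( t - t0 ) / ( t1 - t0 ) ;   // alpha > 1 ==> extrapolation
+     recvbuf[i] = ( 1. - alpha ) * data0[i] + alpha * data1[i] ;
+*/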
/*!
/*
. DoSend :
- + On cree un TimeMessage (voir cette structure dans MPI_Access).
- + Si l'on est en asynchrone on cree deux structures SendBuffStruct
- aSendTimeStruct et aSendDataStruct que l'on remplit.
- + On remplit la structure aSendTimeMessage avec time/deltatime du
- process courant. "deltatime" doit etre nul s'il s'agit du dernier
- pas de temps.
- + Puis pour chaque "target", on envoie le TimeMessage et la partie
- de sendbuf concernee par ce target.
- + Si l'on est en asynchrone, on incremente le compteur et on ajoute
- a _MapOfSendBuffers aSendTimeStruct et aSendDataStruct avec les
- identifieurs SendTimeRequestId et SendDataRequestId recus de
+ + We create a TimeMessage (look at that structure in MPI_Access).
+ + If we are in asynchronous mode, we create two structures SendBuffStruct
+ aSendTimeStruct and aSendDataStruct that we fill.
+  + We fill the structure aSendTimeMessage with the time/deltatime of
+    the current process. "deltatime" must be zero if it is the last
+    time step.
+ + After that for each "target", we Send the TimeMessage and the part
+ of sendbuf corresponding to that target.
+ + If we are in asynchronous mode, we increment the counter and we add
+ aSendTimeStruct and aSendDataStruct to _MapOfSendBuffers with the
+ identifiers SendTimeRequestId and SendDataRequestId returned by
MPI_Access->Send(...).
- + Et enfin si l'on est en synchrone, on detruit les SendMessages.
+ + And if we are in synchronous mode we delete the SendMessages.
*/
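+/* Simplified sketch of the asynchronous part of DoSend described above.
+   The member/variable names (_GroupSize, mpi_access, Counter, the sendbuf
+   offsets...) are illustrative assumptions, not the actual implementation :
+
+     for ( int target = 0 ; target < _GroupSize ; target++ ) {
+       int SendTimeRequestId , SendDataRequestId ;
+       mpi_access->ISend( &aSendTimeMessage , 1 , mpi_access->TimeType() ,
+                          target , SendTimeRequestId ) ;
+       mpi_access->ISend( (double *) sendbuf + target*sendcount , sendcount ,
+                          sendtype , target , SendDataRequestId ) ;
+       aSendTimeStruct->Counter += 1 ;                  // reference counting
+       aSendDataStruct->Counter += 1 ;
+       _MapOfSendBuffers[ SendTimeRequestId ] = aSendTimeStruct ;
+       _MapOfSendBuffers[ SendDataRequestId ] = aSendDataStruct ;
+     }
+*/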
//DoSend : Time + SendBuff
SendBuffStruct * aSendTimeStruct = NULL ;
/*
. CheckTime + DoRecv + DoInterp
- + Pour chaque target on appelle CheckTime
- + Si on a un TimeInterpolator et si le message temps du target
- n'est pas le premier, on appelle l'interpolateur qui stocke
- ses resultats dans la partie du buffer de reception qui
- correspond au "target".
- + Sinon, on recopie les donnees recues pour ce premier pas de
- temps dans la partie du buffer de reception qui correspond au
- "target".
+  + For each target we call CheckTime.
+  + If there is a TimeInterpolator and if the TimeMessage of the target
+    is not the first one, we call the interpolator, which stores its
+    results in the part of the recv buffer corresponding to the "target".
+  + Otherwise, the data received for that first time step is copied
+    into the part of the recv buffer corresponding to the "target".
*/
//CheckTime + DoRecv + DoInterp
if ( recvbuf ) {
/*
. CheckTime(recvcount , recvtype , target , UntilEnd)
- + Au depart, on lit le premier "Message-temps" dans
- &(*_TimeMessages)[target][1] et le premier message de donnees
- dans le buffer alloue (*_DataMessages)[target][1].
- + Par convention deltatime des messages temps est nul si c'est le
- dernier.
- + Boucle while : _t(t*) est le temps courant du processus.
- "tant que _t(t*) est superieur au temps du "target"
- (*_TimeMessages)[target][1].time et que
- (*_TimeMessages)[target][1].deltatime n'est pas nul",
- ainsi en fin de boucle on aura :
- _t(t*) <= (*_TimeMessages)[target][1].time avec
+ + At the beginning, we read the first TimeMessage in
+ &(*_TimeMessages)[target][1] and the first DataMessage
+ in the allocated buffer (*_DataMessages)[target][1].
+  + By convention, the deltatime of a TimeMessage is zero if it is
+    the last one.
+  + While loop : _t(t*) is the current time of the process.
+    "while _t(t*) is greater than the time of the "target"
+    (*_TimeMessages)[target][1].time and
+    (*_TimeMessages)[target][1].deltatime is not zero",
+    so at the end of the while loop we have :
+    _t(t*) <= (*_TimeMessages)[target][1].time with
_t(t*) > (*_TimeMessages)[target][0].time
- ou bien on aura le dernier message temps du "target".
- + S'il s'agit de la finalisation des receptions des messages
- temps et donnees (UntilEnd vaut true), on effectue la
- boucle jusqu'a ce que l'on trouve
- (*_TimeMessages)[target][1].deltatime nul.
- + Dans la boucle :
- On recopie le dernier message temps dans le message temps
- precedent et on lit le message temps suivant.
- On detruit le buffer de donnees du temps precedent.
- On recopie le pointeur du dernier buffer de donnees dans
- le precedent.
- On alloue un nouveau dernier buffer de donnees
- (*_DataMessages)[target][1] et on lit les donnees
- correspondantes dans ce buffer.
- + Si le temps courant du process est plus grand que le dernier
- temps (*_TimeMessages)[target][1].time du target, on donne
- la valeur true a (*_OutOfTime)[target].
- (*_TimeMessages)[target][1].deltatime est alors nul.
+ or we have the last TimeMessage of the "target".
+  + If this is the finalization of the reception of the TimeMessages
+    and DataMessages (UntilEnd is true), we run the while loop
+    until (*_TimeMessages)[target][1].deltatime is zero.
+  + In the while loop :
+    We copy the last TimeMessage into the previous TimeMessage and
+    we read a new TimeMessage.
+    We delete the previous DataMessage buffer.
+    We copy the last DataMessage pointer into the previous one.
+    We allocate a new last DataMessage buffer
+    (*_DataMessages)[target][1] and we read the corresponding
+    data into that buffer.
+  + If the current time of the current process is greater than the
+    last time (*_TimeMessages)[target][1].time of the target, we set
+    (*_OutOfTime)[target] to true.
+    (*_TimeMessages)[target][1].deltatime is then zero.
*/
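+/* Simplified sketch of the while loop described above (illustrative only,
+   recvtype is assumed to be MPI_DOUBLE and mpi_access to be the underlying
+   MPI_Access* ; t is the current time of the process) :
+
+     while ( t > (*_TimeMessages)[target][1].time &&
+             (*_TimeMessages)[target][1].deltatime != 0. ) {
+       (*_TimeMessages)[target][0] = (*_TimeMessages)[target][1] ;   // shift time
+       mpi_access->Recv( &(*_TimeMessages)[target][1] , 1 ,
+                         mpi_access->TimeType() , target , RequestId ) ;
+       delete [] (double *) (*_DataMessages)[target][0] ;            // old data
+       (*_DataMessages)[target][0] = (*_DataMessages)[target][1] ;   // shift data
+       (*_DataMessages)[target][1] = new double[ recvcount ] ;
+       mpi_access->Recv( (*_DataMessages)[target][1] , recvcount , recvtype ,
+                         target , RequestId ) ;
+     }
+*/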
int MPI_AccessDEC::CheckTime( int recvcount , MPI_Datatype recvtype , int target ,
bool UntilEnd ) {
/*
. CheckSent() :
- + appelle SendRequestIds de MPI_Access afin d'obtenir tous les
- RequestIds d'envoi de messages a tous les "targets".
- + Pour chaque RequestId, appelle Test de MPI_Access pour savoir
- si le buffer est libre (flag = true). Lorsqu'il s'agit du
- FinalCheckSent, on appelle Wait au lieu de Test.
- + Si le buffer est libre, on decremente le compteur de la
- structure SendBuffStruct obtenue avec _MapOfSendBuffers.
- (voir MPI_AccessDEC et la gestion des SendBuffers ci-dessus)
- + Si le compteur est nul on detruit le TimeMessage ou le
- SendBuffer en fonction du DataType.
- + Puis on detruit la structure SendBuffStruct avant de supprimer
- (erase) cet item de _MapOfSendBuffers
+  + Calls SendRequestIds of MPI_Access in order to get all the
+    RequestIds of the SendMessages to all "targets".
+  + For each RequestId, CheckSent calls "Test" of MPI_Access in order
+    to know if the buffer is "free" (flag = true). If it is the
+    FinalCheckSent (WithWait = true), we call Wait instead of Test.
+  + If the buffer is "free", the counter of the SendBuffStruct structure
+    (from _MapOfSendBuffers) is decremented.
+  + If that counter reaches zero, we delete the TimeMessage or the
+    SendBuffer according to the DataType.
+  + And we delete the SendBuffStruct structure before removing
+    (erase) that item from _MapOfSendBuffers.
*/
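+/* Simplified sketch of CheckSent described above (illustrative only,
+   mpi_access is assumed to be the underlying MPI_Access*) :
+
+     int size = mpi_access->SendRequestIdsSize() ;
+     int *ReqIds = new int[ size ] ;
+     size = mpi_access->SendRequestIds( size , ReqIds ) ;
+     for ( int i = 0 ; i < size ; i++ ) {
+       int flag = 1 ;
+       if ( WithWait )
+         mpi_access->Wait( ReqIds[i] ) ;
+       else
+         mpi_access->Test( ReqIds[i] , flag ) ;
+       if ( flag ) {
+         // that part of the buffer is free : decrement the reference counter
+         // of the corresponding SendBuffStruct (see the SendBuffers section)
+         // and delete the TimeMessage or the SendBuffer when it reaches zero.
+       }
+     }
+     delete [] ReqIds ;
+*/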
int MPI_AccessDEC::CheckSent(bool WithWait) {
int sts = MPI_SUCCESS ;