From owner-mpi-collcomm@CS.UTK.EDU  Tue Nov 24 23:07:28 1992
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA15625; Tue, 24 Nov 92 23:07:28 -0500
Received:  by CS.UTK.EDU (5.61++/2.8s-UTK)
	id AA26099; Tue, 24 Nov 92 22:57:48 -0500
Received: from gstws.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA26095; Tue, 24 Nov 92 22:57:45 -0500
Received: by gstws.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03)
          id AA13365; Tue, 24 Nov 1992 22:57:44 -0500
Date: Tue, 24 Nov 1992 22:57:44 -0500
From: geist@gstws.epm.ornl.gov (Al Geist)
Message-Id: <9211250357.AA13365@gstws.epm.ornl.gov>
To: mpi-collcomm@cs.utk.edu
Subject: MPI collective communication...


Collective communication subcommittee.

Welcome. We have our work cut out for us - first because collective
communication was not included in the first iteration of the MPI draft
and second because "groups" caused the most resistance in the last meeting.

In the next 6 weeks we need to come up with and agree on the definition 
of a set of routines that fall under the jurisdiction of collective
communication. As I see it these routines fall into two categories.

- routines that require the cooperation of a group of processes.
  This includes collective communication like multicast 
  and cooperative routines like synchronization.

- routines that create groups of processes and potentially modify these groups.
  This also needs to include group information routines
  that we feel are required, like "who am I in the group".

Two items need to be coordinated with the pt2pt subcommittee:
Heterogeneity - it's not in the present MPI draft. If we want to be able
                to execute across heterogeneous networks, then we have to 
                think about how a process is identified in MPI and
                also how a message buffer can get encoded/decoded.
                For the latter we will need to know the type of the
                data in pack/unpack routines. 
                (or specified directly in the send/recv)

Inter-group communication - point to point communication between two members
                of a group.

As a first step I would like to get everyone's ideas out on the table
so we can see what kind of consensus we have, and so we don't miss any
good ideas. So what basic routines (functions) do you think are required?
I would like to have your input on this first step by December 5.
----------------------
Since I got the short straw, I'll go first.
My basic philosophy about MPI and our standards effort is to
KEEP THINGS SIMPLE. It is easier to add a function later if
we see lots of users combining the basic routines in standard ways.
It is a waste to support a bunch of routines only 1% of the users ever call.

General:
I would like to see all the routines be functions that return error code(s)
as opposed to subroutines.

=======================================================================
Groups:
=======================================================================
Groups could be implemented separately from the collective communication routines.
The collective routines could take an integer array list of task IDs
and there could be a group routine that returned such a list.
There are efficiency factors here since the list of members of a group
would not have to be looked up every time a collective routine was called.
FUNCTIONS: groupsize()
           groupmembers()

GID: groups could be user-named and addressed by name,
or they could be addressed by a system supplied (unique) integer group ID.

Question - should groups be allowed to overlap?
Question - should we let groups be dynamic or restrict them to be static?

Group member IDs: There should be a notion of the members of a group
being addressable, either directly or indirectly, by [0 -- num_of_members-1].
There needs to be a routine to return mygroupINDEX (at least) and maybe
a more general routine that can return any process's group index.
FUNCTIONS: gettaskID( given GID and group index )
           getindex( given GID and taskID )

Creating groups: Here are three alternative methods. 
Method 1 (dynamic)
The most general case is to allow any task to join or leave
any group at any time without the consent of the other group members.
While this creates a simple and flexible user interface, it can be 
difficult to implement because of the potential race conditions.
FUNCTIONS: joingroup()
           leavegroup()

Method 2 (static)
A group could be defined by any single task listing the task IDs;
alternatively, all the future members of a group could be required to
define the same group simultaneously.
FUNCTION: makegroup()

Method 3 (dynamic)
Another method which met with some resistance when presented at the 
last MPI meeting was the notion of creating groups by partitioning
an existing group. The negative comments were the large number of routines
involved and the lack of usefulness of a tree of groups.
I am not keen on this method, but I include it for completeness.
FUNCTIONS: from MPI draft
           partition()
           root()
           children()
           parent()
           siblings()
           pushg()
           popg()

==============================================================================
Collective Routines:
==============================================================================
One problem we can get into is defining many different 
collective communication routines: gmax, gsum, gadd, etc.
I propose that we have only a handful of routines based 
on the underlying communication logic.
All participating tasks call the same function.

FUNCTIONS:

broadcast()  broadcast a message from one task to all tasks in a group.

reduce()     inverse of broadcast. Data from all tasks in a group
			 is reduced using a predefined function or a user function
			 and the result is placed in a specified task.
			 Function name is specified in the argument list.
			 Pre-defined functions should include: max, min, add, mult,
			 and optionally AND, OR, XOR. (others?)

scatter()    a single task contains different messages for each task.
			 Scatter these messages to all tasks in a group.

gather()     inverse of scatter. gather distinct messages from each task
			 in a group and collect them in a specified task.

synchronize()  barrier synchronization of a group of tasks.

shift()      assume group members form a (logical) ring.
			 shift the message in each task to its right (or left) neighbor.
			 (useful in matrix multiply shift and roll algorithm)

exchange()   equivalent to every task in a group calling scatter.
             (routine used for matrix transpose)

all2all()    equivalent to every task in a group calling broadcast.
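The semantics of the handful of routines proposed above can be sketched compactly. The following is an illustrative Python simulation only (the actual bindings under discussion would be Fortran/C); a group is stood in for by a list holding one value per task, and the function names simply mirror the list above, not a proposed interface.

```python
from functools import reduce as fold

def broadcast(root_value, group_size):
    """One task's message is delivered to every task in the group."""
    return [root_value] * group_size

def reduce_group(values, fn):
    """Inverse of broadcast: combine one value per task using fn (max, add, ...)."""
    return fold(fn, values)

def scatter(messages):
    """A single task holds distinct messages; task i receives messages[i]."""
    return list(messages)

def gather(per_task_values):
    """Inverse of scatter: collect one distinct message from each task."""
    return list(per_task_values)

def shift(values, direction=1):
    """Group members form a logical ring; each message moves one neighbor over."""
    return values[-direction:] + values[:-direction]

# A group of 4 tasks holding one value each:
vals = [3, 1, 4, 1]
assert broadcast(9, 4) == [9, 9, 9, 9]
assert reduce_group(vals, max) == 4
assert shift(vals) == [1, 3, 1, 4]   # roll toward the "right" neighbor
```

Note that exchange and all2all are not sketched: as the follow-up discussion shows, their parallel-composition semantics are exactly the contentious part.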

                          -----------------------------
   __o        /\          Al Geist
 _`\<,_    /\/  \         Oak Ridge National Laboratory
(_)/ (_)  /      \        (615) 574-3153   gst@ornl.gov
* * * * * * * * * *       -----------------------------
From owner-mpi-collcomm@CS.UTK.EDU  Wed Nov 25 13:41:20 1992
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA21743; Wed, 25 Nov 92 13:41:20 -0500
Received:  by CS.UTK.EDU (5.61++/2.8s-UTK)
	id AA09805; Wed, 25 Nov 92 13:14:39 -0500
Received: from relay2.UU.NET by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA09790; Wed, 25 Nov 92 13:14:31 -0500
Received: from uunet.uu.net (via LOCALHOST.UU.NET) by relay2.UU.NET with SMTP 
	(5.61/UUNET-internet-primary) id AA22179; Wed, 25 Nov 92 13:14:32 -0500
Received: from kailand.UUCP by uunet.uu.net with UUCP/RMAIL
	(queueing-rmail) id 131339.4005; Wed, 25 Nov 1992 13:13:39 EST
Received: from brisk.kai.com (brisk) by kailand.kai.com via SMTP
  (5.65d-92031301) id AA12688; Wed, 25 Nov 1992 12:06:04 -0600
Received: by brisk.kai.com
  (920330.SGI-92101201) id AA08958; Wed, 25 Nov 92 12:06:02 -0600
Date: Wed, 25 Nov 92 12:06:02 -0600
Message-Id: <9211251806.AA08958@brisk.kai.com>
To: mpi-pt2pt@cs.utk.edu, mpi-collcomm@cs.utk.edu, mpi-formal@cs.utk.edu,
        mpi-ptop@cs.utk.edu
In-Reply-To: William Gropp's message of Wed, 25 Nov 92 09:28:43 CST
        <9211251528.AA12985@godzilla.mcs.anl.gov>
Subject: Nonblocking functions and handlers.
From: Steven Ericsson Zenith <zenith@kai.com>
Sender: zenith@kai.com
Organization: 	Kuck and Associates, Inc.
		1906 Fox Drive, Champaign IL USA 61820-7334,
		voice 217-356-2288, fax 217-356-5199


Bill Gropp writes:

    (Warning: radical position that I'm not sure even I hold follows:)
    An interesting issue is whether we should defer all nonblocking communications
    to a thread-based execution model.

I'm not so sure this is a radical position, Bill, since even
nonsynchronized communication will need to be defined formally this way.
Nonsynchronized communication is in effect creating a parallel process
that has the job of passing the communication on. Al Geist earlier asked
the question whether buffers used by nonsynchronized communication
should be accessible after the communication has started - the answer
should be - no, unless by some explicit mechanism that formally amounts
to a communication with the process mentioned above.  Any nonexplicit
interaction (e.g. a write to the buffer) would have to be specified as
formally equivalent to an explicit interaction.

Also, there is quite a range of terminology in use.  One common error:
"Asynchronous" and "synchronous" have quite particular meanings in EE,
and when CS people use the terms in relation to message passing they
usually mean NONSYNCHRONIZED and SYNCHRONIZED. Also, BLOCKING =
SYNCHRONIZED. Let us begin a glossary that defines the terms we use - if
no-one else volunteers I'll take this to be the responsibility of the
Formal Specification Subcommittee. So I'm looking for volunteers from
that subcommittee.

Steven

From owner-mpi-collcomm@CS.UTK.EDU  Wed Nov 25 15:37:50 1992
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA25014; Wed, 25 Nov 92 15:37:50 -0500
Received:  by CS.UTK.EDU (5.61++/2.8s-UTK)
	id AA12301; Wed, 25 Nov 92 15:15:34 -0500
Received: from relay2.UU.NET by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA12297; Wed, 25 Nov 92 15:15:32 -0500
Received: from uunet.uu.net (via LOCALHOST.UU.NET) by relay2.UU.NET with SMTP 
	(5.61/UUNET-internet-primary) id AA26923; Wed, 25 Nov 92 15:15:35 -0500
Received: from kailand.UUCP by uunet.uu.net with UUCP/RMAIL
	(queueing-rmail) id 151429.20685; Wed, 25 Nov 1992 15:14:29 EST
Received: from brisk.kai.com (brisk) by kailand.kai.com via SMTP
  (5.65d-92031301 for <mpi-collcomm@cs.utk.edu>) id AA15317; Wed, 25 Nov 1992 13:22:32 -0600
Received: by brisk.kai.com
  (920330.SGI-92101201) id AA09015; Wed, 25 Nov 92 13:22:31 -0600
Date: Wed, 25 Nov 92 13:22:31 -0600
Message-Id: <9211251922.AA09015@brisk.kai.com>
To: geist@gstws.epm.ornl.gov
Cc: mpi-collcomm@cs.utk.edu
In-Reply-To: Al Geist's message of Tue, 24 Nov 1992 22:57:44 -0500 <9211250357.AA13365@gstws.epm.ornl.gov>
Subject: MPI collective communication...
From: Steven Ericsson Zenith <zenith@kai.com>
Sender: zenith@kai.com
Organization: 	Kuck and Associates, Inc.
		1906 Fox Drive, Champaign IL USA 61820-7334,
		voice 217-356-2288, fax 217-356-5199


   Date: Tue, 24 Nov 1992 22:57:44 -0500
   From: geist@gstws.epm.ornl.gov (Al Geist)

	<discussion on groups>

Is this discussion not the domain of the process and topology subcommittee?

   ==============================================================================
   Collective Routines:
   ==============================================================================
   One problem we can get into is defining many different 
   collective communication routines gmax, gsum, gadd, etc.
   I propose that we have only a handful of routines based 
   on the underlying communication logic.
   All participating tasks call the same function.

   FUNCTIONS:

   broadcast()  broadcast a message from one task to all tasks in a group.

Agreed. So, using my earlier suggestion where communications are
"channels" (logically shared objects .. whatever) with logical names, a
broadcast channel called G

	/* example of a declaration */
	communication broadcast_type(N) <datatype> G

"(N)" identifies the number of participants in the broadcast -
would be broadcast to by

	broadcast(G, expression)

(actually this would be formally equivalent to "send(G, expression)"
since G carries the broadcast semantics)

and each process (task) would receive the message by

	receive(G, variable)

You must clearly identify the meaning of parallel broadcasts to the same
group. I would choose the construction

	(... broadcast(G, x) ...) ||
	(... broadcast(G, y) ...) ||
	(... receive(G, v1) -> receive(G, v2) ...) ||
	...
to mean
	v1 = x | y
	v2 = x iff v1 = y
	v2 = y iff v1 = y

   reduce()     inverse of broadcast. Data from all tasks in a group
			    is reduced using a predefined function or a user function
			    and the result is placed in a specified task.
			    Function name is specified in the argument list.
			    Pre-defined functions should include: max, min, add, mult,
			    and optionally AND, OR, XOR. (others?)

I'm not sure I like the introduction of the function name. The inverse
of broadcast, though, is in effect a many-to-one. So

	/* example of a declaration */
	communication reduce_type(N) <datatype> R

would be written to by 

	send(R, e)

in N processes, and

	reduce(R, v, f)

is equivalent to

	receive( R, result )
	receive( R, v)
	v = f( result, v )
	receive(R, result)
	v = f(result, v)
	... until N receive times
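The unrolled expansion above is just a fold over N received values. A sketch in Python (illustrative only; the surrounding discussion uses its own channel notation, and the N blocking receive(R, ...) calls are stood in here by a list of already-received values):

```python
def reduce_channel(received, f):
    """Zenith's reduce expansion written as a loop: fold f over the
    N values received on the reduce-typed channel R."""
    result = received[0]          # first receive(R, result)
    for v in received[1:]:        # remaining receive(R, v) calls
        result = f(result, v)     # v = f(result, v), accumulated
    return result

# N = 3 senders each did send(R, e); the receiving side folds them:
assert reduce_channel([5, 2, 7], max) == 7
assert reduce_channel([1, 2, 3], lambda a, b: a + b) == 6
```

Note the fold is order-dependent for non-associative or non-commutative f, which is exactly why counting and ordering the distinct communication instances matters.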

In both broadcast and reduce cases we have left it to the implementation
to count the distinct communication instances.

Again we must concern ourselves with the meaning of parallel reduce constructions

	(... reduce(R, v1, f) ...) ||
	(... reduce(R, v2, f) ...) ||
	(... send(R, x) -> send(R, y) ...)

It would be simplest to restrict this case and say reduce can only
appear in one process for each reduce type, but what about 

	(... reduce(R, v2, f) ...) ||
	(... send(R, x) -> send(R, y) ...)

does each send in sequence apply to one reduce or to subsequent reduces? To
be the inverse of broadcast it would have to be the former.

   scatter()    a single task contains different messages for each task.
			    Scatter these messages to all tasks in a group.

Isn't this an abbreviation for a sequence of sends on an array of
one-to-one channels? So an array of channels S

	/* example of a declaration */
	communication one-to-one (N) S

where 

	scatter(S, A)

such that A is an array of size N, and the scatter is equivalent to

	parallel do i
		send(S[i], A[i])
	end parallel do

and the corresponding receive looks like

	receive(S[i], v)

   gather()     inverse of scatter. gather distinct messages from each task
			    in a group and collect them in a specified task.

Similarly, this is an abbreviation for a sequence of receives on an array of
one-to-one channels. So an array of channels G

	/* example of a declaration */
	communication one-to-one (N) G

where 

	gather(G, A)

such that A is an array of size N, and the gather is equivalent to

	parallel do i
		receive(G[i], A[i])
	end parallel do

and the corresponding send looks like

	send(G[i], e)

   synchronize()  barrier synchronization of a group of tasks.

This is also a many-to-one where the one is a synchronization process
created by the declaration (yes, I know this sounds odd).

		/* example of a declaration */
	communication sync SYNC
and
	synchronize(SYNC)

is equivalent to the output

	send(SYNC)

i.e. send with no output value.

   shift()      assume group members form a (logical) ring.
			    shift the message in each task to its right (or left) neighbor.
			    (useful in matrix multiply shift and roll algorithm)

This can be constructed from the above.

   exchange()   equivalent to every task in a group calling scatter.
		(routine used for matrix transpose)

This is tricky, and isn't as simple as is implied. I have no trouble
with it if we can specify a deadlock free implementation, but frankly I
think it is out of place here.

   all2all()    equivalent to every task in a group calling broadcast.

Why doesn't this cause deadlock in the group? Nah! It does cause deadlock.

Steven


From owner-mpi-collcomm@CS.UTK.EDU  Wed Nov 25 18:12:56 1992
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA26783; Wed, 25 Nov 92 18:12:56 -0500
Received:  by CS.UTK.EDU (5.61++/2.8s-UTK)
	id AA15993; Wed, 25 Nov 92 18:07:19 -0500
Received: from relay1.UU.NET by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA15989; Wed, 25 Nov 92 18:07:17 -0500
Received: from uunet.uu.net (via LOCALHOST.UU.NET) by relay1.UU.NET with SMTP 
	(5.61/UUNET-internet-primary) id AA23256; Wed, 25 Nov 92 18:07:14 -0500
Received: from kailand.UUCP by uunet.uu.net with UUCP/RMAIL
	(queueing-rmail) id 180640.18440; Wed, 25 Nov 1992 18:06:40 EST
Received: from brisk.kai.com (brisk) by kailand.kai.com via SMTP
  (5.65d-92031301 for <mpi-collcomm@cs.utk.edu>) id AA24413; Wed, 25 Nov 1992 16:21:00 -0600
Received: by brisk.kai.com
  (920330.SGI-92101201) id AA09165; Wed, 25 Nov 92 16:20:58 -0600
Date: Wed, 25 Nov 92 16:20:58 -0600
Message-Id: <9211252220.AA09165@brisk.kai.com>
To: zenith@kai.com
Cc: geist@gstws.epm.ornl.gov, mpi-collcomm@cs.utk.edu
In-Reply-To: Steven Ericsson Zenith's message of Wed, 25 Nov 92 13:22:31 -0600 <9211251922.AA09015@brisk.kai.com>
Subject: MPI collective communication...
From: Steven Ericsson Zenith <zenith@kai.com>
Sender: zenith@kai.com
Organization: 	Kuck and Associates, Inc.
		1906 Fox Drive, Champaign IL USA 61820-7334,
		voice 217-356-2288, fax 217-356-5199


A typo crept into my last message.

	v1 = x | y
	v2 = x iff v1 = y
	v2 = y iff v1 = y

should, of course, be

 	v1 = x | y
	v2 = x iff v1 = y
	v2 = y iff v1 = x

And in the examples all sends are synchronized (blocking).

Steven


From owner-mpi-collcomm@CS.UTK.EDU  Wed Nov 25 19:37:42 1992
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA28059; Wed, 25 Nov 92 19:37:42 -0500
Received:  by CS.UTK.EDU (5.61++/2.8s-UTK)
	id AA16782; Wed, 25 Nov 92 19:14:32 -0500
Received: from relay2.UU.NET by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA16778; Wed, 25 Nov 92 19:14:29 -0500
Received: from uunet.uu.net (via LOCALHOST.UU.NET) by relay2.UU.NET with SMTP 
	(5.61/UUNET-internet-primary) id AA25540; Wed, 25 Nov 92 19:14:34 -0500
Received: from kailand.UUCP by uunet.uu.net with UUCP/RMAIL
	(queueing-rmail) id 191312.8393; Wed, 25 Nov 1992 19:13:12 EST
Received: from brisk.kai.com (brisk) by kailand.kai.com via SMTP
  (5.65d-92031301 for <mpi-collcomm@cs.utk.edu>) id AA26829; Wed, 25 Nov 1992 17:33:21 -0600
Received: by brisk.kai.com
  (920330.SGI-92101201) id AA09251; Wed, 25 Nov 92 17:33:20 -0600
Date: Wed, 25 Nov 92 17:33:20 -0600
Message-Id: <9211252333.AA09251@brisk.kai.com>
To: geist@gstws.epm.ornl.gov, mpi-collcomm@cs.utk.edu
In-Reply-To: Steven Ericsson Zenith's message of Wed, 25 Nov 92 13:22:31 -0600 <9211251922.AA09015@brisk.kai.com>
Subject: MPI collective communication...
From: Steven Ericsson Zenith <zenith@kai.com>
Sender: zenith@kai.com
Organization: 	Kuck and Associates, Inc.
		1906 Fox Drive, Champaign IL USA 61820-7334,
		voice 217-356-2288, fax 217-356-5199


Observation on the following point:

	    synchronize()  barrier synchronization of a group of tasks.

	 This is also a many-to-one where the one is a synchronization process
	 created by the declaration (yes, I know this sounds odd).

			 /* example of a declaration */
		 communication sync SYNC
	 and
		 synchronize(SYNC)

	 is equivalent to the output

		 send(SYNC)

	 i.e. send with no output value.

I should clarify this. Given

	(P||Q);R

This reads P and Q in parallel followed by R; i.e., there is a barrier
at the semicolon. To implement this barrier using Al's primitive the
compiler in effect places a send(SYNC) at the end of P and Q and the
corresponding receive(SYNC);receive(SYNC) at the start of R. Using
something perhaps more familiar:

	begin parallel
	   section
		P
	   end section
	   section
		Q
	   end section
	end parallel
	R

translated using MPI might become the following three programs executed
on three nodes of a distributed memory machine:

	program Node0
		P
		synchronize(SYNC)
	end program

	program Node1
		Q
		synchronize(SYNC)
	end program

	program Node2
		receive(SYNC)
		receive(SYNC)
		R
	end program

But now I'm less convinced we need a separate synchronize primitive and
should just permit "empty" messages in send and receive for their
synchronization characteristics. (An implementation may, of course,
choose to send a dummy value to gain the same effect).
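The three-program translation above can be simulated concretely. This is an illustrative Python sketch (not a proposed binding; the names SYNC, node_p, etc. are mine), using a queue to stand in for the sync channel and threads for the nodes, with empty messages used purely for their synchronization effect:

```python
import queue
import threading

SYNC = queue.Queue()   # stands in for the declared sync channel
N = 2                  # number of tasks that must check in (P and Q)

def synchronize():
    """send(SYNC) with no payload: an empty message, pure synchronization."""
    SYNC.put(None)

def node_p():          # program Node0: P; synchronize(SYNC)
    synchronize()

def node_q():          # program Node1: Q; synchronize(SYNC)
    synchronize()

def node_r(results):   # program Node2: receive(SYNC) twice, then R
    for _ in range(N):
        SYNC.get()     # blocks until each participant has checked in
    results.append("R ran after barrier")

results = []
workers = [threading.Thread(target=t) for t in (node_p, node_q)]
for t in workers:
    t.start()
for t in workers:
    t.join()
node_r(results)
assert results == ["R ran after barrier"]
```

This also illustrates the closing point: nothing here needed a separate synchronize primitive beyond send/receive of empty messages.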

Steven
	



From owner-mpi-collcomm@CS.UTK.EDU  Fri Nov 27 12:08:43 1992
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA25724; Fri, 27 Nov 92 12:08:43 -0500
Received:  by CS.UTK.EDU (5.61++/2.8s-UTK)
	id AA08742; Fri, 27 Nov 92 12:06:12 -0500
Received: from relay1.UU.NET by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA08738; Fri, 27 Nov 92 12:06:10 -0500
Received: from uunet.uu.net (via LOCALHOST.UU.NET) by relay1.UU.NET with SMTP 
	(5.61/UUNET-internet-primary) id AA02767; Fri, 27 Nov 92 12:06:08 -0500
Received: from kailand.UUCP by uunet.uu.net with UUCP/RMAIL
	(queueing-rmail) id 120540.25488; Fri, 27 Nov 1992 12:05:40 EST
Received: from brisk.kai.com (brisk) by kailand.kai.com via SMTP
  (5.65d-92031301) id AA12937; Fri, 27 Nov 1992 10:25:06 -0600
Received: by brisk.kai.com
  (920330.SGI-92101201) id AA11158; Fri, 27 Nov 92 10:25:05 -0600
Date: Fri, 27 Nov 92 10:25:05 -0600
Message-Id: <9211271625.AA11158@brisk.kai.com>
To: mpi-collcomm@cs.utk.edu
Cc: mpi-formal@cs.utk.edu
In-Reply-To: Steven Ericsson Zenith's message of Wed, 25 Nov 92 13:22:31 -0600 <9211251922.AA09015@brisk.kai.com>
Subject: MPI collective communication...
From: Steven Ericsson Zenith <zenith@kai.com>
Sender: zenith@kai.com
Organization: 	Kuck and Associates, Inc.
		1906 Fox Drive, Champaign IL USA 61820-7334,
		voice 217-356-2288, fax 217-356-5199


Observation on the following:

	   all2all()    equivalent to every task in a group calling broadcast.

	Why doesn't this cause deadlock in the group? Nah! It does cause deadlock.

I was thinking about this yesterday over my stuffed Tofu :-). Even if we
permit the broadcast to be nonsynchronized we have the problem I
described earlier with defining the behavior of parallel broadcasts. If
all2all is nonsynchronized then the order of received values must be
nondeterministic.

(|| i for N: broadcast(C, e[i])) || (|| k for N:|| j for N: receive(C, v[k, j]))

i.e., the order of values from e in v is nondeterministic. Now maybe I'm
missing something that has to do with the TMC perspective - in any case,
I have never seen the use of such a construction in an application. If
we do specify a deadlock-free behavior for all2all, is it desirable given
this nondeterminism? I know its implementation will be tricky to get
right. Can we have some vendor comments, please?

I have assumed here that the values broadcast are the same type.

Steven

Footnote: The syntax 

(|| i for N: broadcast(C, e[i])) || (|| k for N:|| j for N: receive(C, v[k, j]))

illustrates N broadcasts implementing the all2all, where N is the number
of participants, in parallel with N parallel groups of N (parallel) receives.

From owner-mpi-collcomm@CS.UTK.EDU  Fri Nov 27 12:37:48 1992
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA25951; Fri, 27 Nov 92 12:37:48 -0500
Received:  by CS.UTK.EDU (5.61++/2.8s-UTK)
	id AA08855; Fri, 27 Nov 92 12:17:02 -0500
Received: from sampson.ccsf.caltech.edu by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA08851; Fri, 27 Nov 92 12:16:57 -0500
Received: from elephant by sampson.ccsf.caltech.edu with SMTP id AA24714
  (5.65c/IDA-1.4.4 for mpi-collcomm@cs.utk.edu); Fri, 27 Nov 1992 09:16:50 -0800
Received: by elephant (4.1/SMI-4.1)
	id AA13810; Fri, 27 Nov 92 08:13:55 PST
Date: Fri, 27 Nov 92 08:13:55 PST
From: jwf@parasoft.com (Jon Flower)
Message-Id: <9211271613.AA13810@elephant>
To: mpi-collcomm@cs.utk.edu

Re: A few ideas

In response to Al Geist's request here are a few more or less
random ideas about collective communication based on my own
experience......

Some comments about groups:

    I think that we should not describe the "group" concept in
    collective communication in terms of lists of task ID's. It
    might be implemented that way but I think the underlying
    concept should be related to the user application topology.

    I think the key to really optimizing collective
    communication routines is to match the system's geometric
    knowledge of the hardware topology with the geometric
    behavior of the "logical topology" of the user code. 
    So, for example, you can do a lot better on a column-restricted
    broadcast if you know that the user's logical topology actually
    matches the hardware of the DELTA (for example).

    Similarly the exchange primitive doesn't make too much
    sense when defined only in terms of a list of nodes since
    at best the user is left with the responsibility of forming
    the list in the right order.

    I would like to see groups described in conjunction with
    the topological info and, in general, I thought Rolf's idea
    was pretty good except that I didn't see how to deal with
    a very common case; a broadcast from a "host" program
    to all of its "nodes" or a reduction from all the "nodes"
    into the "host". These can both be represented as the
    combination of a "node" only operation and a point-to-point
    operation but it would be nice to encapsulate them somehow
    since they come up all the time.

    We could leave open a loophole for "expert" users to make their
    own groups from lists of task ID's but I don't know how we'd
    optimize their behavior.

Some comments about the individual functions:

broadcast:
---------
    I'm not sure who to address this issue to - it probably
    falls outside our domain of comment, but how do you
    deal with the case of a "master" program broadcasting
    to all of its slaves? (Actually read "host" for "master"
    and "node" for "slaves".) Does MPI1 even support this
    concept? It comes up all the time in our applications.
    I suppose this is an application of the group concept
    but it's one that I would like to see very streamlined
    because of its generality.

reduce:
------
    The comments for "reduce" and "gather" indicate that only
    one task in the group can get at the result? I would hope
    that there was some way for all the tasks in a group to 
    get the answer too, without following the reduce/gather
    with a broadcast operation, since this loses a lot of
    efficiency.

    I like the idea of a function pointer for reduce.
    Is this done by having a general facility for the user
    with a function pointer argument and then providing a list
    of pre-defined "external" functions that do the common tasks?
    This would be my preference since the heterogeneity is then
    taken care of by the system. However, how do you express
    a reduce on a standard data type using a user-specified 
    function? Is there an argument to reduce that gives the data
    type, so that the system can still byte-swap, or are we
    going to restrict reduce (and possibly all collective ops)
    to the "byte-stream" data type and force the user to
    deal with it themselves? The latter is horrible because
    putting the byte swapping in the right places for a reduction
    operation is hard.

    I would like to add "average" to the list of predefined 
    functions even though it's a triviality.

gather:
------
    As for reduce - I would hope that all tasks can get at the result
    too.

synchronize:
-----------
   This one seems to be a real thorn. I would like to have a 
   non-blocking synchronize - you call the function to say that
   you're interested in synchronizing a particular group of
   tasks, and then later check to see whether they have all done
   so or not. This is very valuable in certain types of event-driven 
   simulation, for example, where you might start each time 
   step by invoking the sync. function and then go off and
   respond to incoming events. Periodically you then check to
   see if everyone in your group has checked in and if so, 
   increase global virtual time for the next step.

   A non-blocking sync. also allows a single (master) task to wait for
   the completion of either/or subtasks in two disjoint slave 
   groups. Obviously this can be done in another way but is very
   elegant and simple to code with non-blocking syncs.

   I would propose both a blocking and a non-blocking "wait for
   sync to complete" function in the same way that the point-to-
   point style has both.
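The non-blocking synchronize described above can be sketched as a check-in counter. This is an illustrative Python sketch with hypothetical names (`isync`, `sync_done` are mine, not a proposed MPI binding), shown single-threaded for clarity:

```python
class NonBlockingSync:
    """Hypothetical non-blocking barrier: a task checks in now and
    tests for completion later, doing other work in between."""

    def __init__(self, group_size):
        self.group_size = group_size
        self.arrived = 0

    def isync(self):
        """Register this task's arrival at the barrier; returns immediately."""
        self.arrived += 1

    def sync_done(self):
        """Non-blocking test: have all tasks in the group checked in?"""
        return self.arrived >= self.group_size

barrier = NonBlockingSync(group_size=3)
barrier.isync()
assert not barrier.sync_done()   # only 1 of 3 tasks has arrived;
                                 # go respond to incoming events...
barrier.isync()
barrier.isync()
assert barrier.sync_done()       # everyone checked in: advance global
                                 # virtual time for the next step
```

The blocking variant would simply spin (or wait) on `sync_done`, matching the proposal of having both a blocking and a non-blocking "wait for sync to complete" function.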

shift:
-----
   How do you specify the (non-)periodicity of the edge elements?
   In fact what does left and right actually mean - is there an implied
   ordering in the entries of a group?

exchange, all2all:
-----------------
    These are life savers in my opinion since they encapsulate
    the biggest problem that I've seen in user codes. Writing
    these with point-to-point message passing primitives almost
    guarantees that the code doesn't scale and that it runs out
    of memory as you go to more nodes or even bigger problems.

    On the downside I agree with Steve Zenith that implementations
    of these functions are hard. I would also say that users often
    use these functions because they don't want to think
    about a better decomposition method, and so it's possible
    that by supporting these functions we are contributing to less
    than optimal coding at the user level. I would still vote
    them in, however, on the grounds that I would get fewer
    phone calls from customers!

Generalities:
============
I think the set of functions listed is rich enough 
for most applications. It would be interesting to see how many
arguments these things end up with when you try to write down
functional specs. I wonder if it might be worth having two
functions in each category: one with very few arguments that
does what most users will probably want, and another that
has all the arguments and flexibility. This might reduce the
number of "simple" mistakes that can be made. 

For example, I often forget the "EXTERNAL MAX" that you need 
to pass MAX as a function argument in FORTRAN programs. Perhaps 
the simple form of the reduce operation could have a variable 
indicating the operation type instead?

Do the collective routines have message types like the 
point-to-point routines? In general I don't think they need to
since everyone is participating at once. On the other hand if
you make a mistake in this regard having a different message
type for each one sometimes facilitates looking them up in
a debugger. The one area where a message type might be 
interesting is in regard to the "synchronize" primitive
as discussed in the comments above.

	Jon Flower, jwf@parasoft.com
	ParaSoft Corp.
	818-792-9941
From owner-mpi-collcomm@CS.UTK.EDU  Sat Dec  5 22:08:57 1992
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA22904; Sat, 5 Dec 92 22:08:57 -0500
Received:  by CS.UTK.EDU (5.61++/2.8s-UTK)
	id AA19180; Sat, 5 Dec 92 21:55:26 -0500
Received: from msr.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA19176; Sat, 5 Dec 92 21:55:23 -0500
Received: by msr.EPM.ORNL.GOV (5.61/1.34)
	id AA02074; Sat, 5 Dec 92 21:55:20 -0500
Date: Sat, 5 Dec 92 21:55:20 -0500
From: geist@msr.EPM.ORNL.GOV (Al Geist)
Message-Id: <9212060255.AA02074@msr.EPM.ORNL.GOV>
To: mpi-collcomm@cs.utk.edu
Subject: A proposal for collective communication interface. Opinions?

Collective Communication Proposal.

After reading Marc Snir's point-to-point outline, I think our 
work in the collective communication subcommittee is clearer.
A few of the Goals from the outline that I felt were particularly relevant:

1. Design an application programming interface.

2. Design an interface that is not too different from current practice.

3. Define an interface that can be quickly implemented on many vendor platforms.

4. Focus on a proposal that can be agreed upon in 6 months.

5. Provide a reliable communication interface.

===============================================================================

Primary Requirement.
--------------------------------------------------------------------
The collective communication interface should be an extension of the
point-to-point interface. 
--------------------------------------------------------------------
As Marc points out on page 2 
 "SEND and RECV are a particular case of broadcast in a group of size 2;
 this observation can be used to check if the definition of collective
 communication semantics are consistent with the definition of 
 point-to-point communication."

This leads to the following points:
a. Collective routines like broadcast should provide the same 
   message data format as point-to-point routines.
   Be that [from page 7] scalar, contiguous, buffer with stride, typed
   or a union of these.

b. Collective communication should follow the same message context paradigms
   and recognize the same context control functions.

c. By using a structured name space (described on pages 7-8),
   where all processes are identified by a (group, rank) pair
   for both the point-to-point and collective routines,
   users get a consistent naming scheme
   across all the MPI communication routines.
   Those desiring a flat name space can have it by 
   using the default group "ALL".

d. Syntax of collective routines should follow the point-to-point scheme,
   whatever that turns out to be.

Collective communication is a matter of convenience for the user
and a matter of efficiency for the implementer. We must not
lose track of the fact that ANY collective communication function
can be implemented using only the MPI point-to-point routines.
I bring this up because, in the spirit of simplicity and robustness,
the following proposal contains only the most commonly used
and currently available functions.

=========================================================================

I propose the following minimum set of collective routines
be presented at the next committee meeting.

1. info = MPI_BCAST( buf, bytes, type, gid, root )

   Function:
   Called by all members of the group "gid" 
   using the same argument for "bytes", "type", "gid", and "root".
   On return the contents of "buf" on "root" are contained in "buf"
   on all group members.
   On return "info" contains the error code.

2. info = MPI_GATHER( buf, bytes, type, gid, root )

   Function:
   Called by all members of the group "gid" 
   using the same argument for "bytes", "type", "gid", and "root".
   On return all the individual "buf" are concatenated into the "root" buf,
   which must be of size at least gsize*bytes.
   The data is laid out in the "root" buf in rank order, that is
   | gid,0 data | gid,1 data | ...| gid,root data | ...| gid,gsize-1 data |
   Other member's "buf" are unchanged on return.
   On return "info" contains the error code.

3. info = MPI_GLOBAL_OP( inbuf, bytes, type, gid, op, outbuf )

   Function:
   Called by all members of the group "gid"
   using the same argument for "bytes", "type", "gid", and "op".
   On return the "outbuf" of all group members contains the 
   result of the global operation "op" applied pointwise to
   the collective "inbuf". For example, if the op is max and
   inbuf contains two floating point numbers then 
	 outbuf(1) = global max( inbuf(1) ) and 
	 outbuf(2) = global max( inbuf(2) ) 
   A set of standard operations is supplied with MPI including:
     global max - for each data type
     global min - for each data type
     global sum - for each data type
     global mult- for each data type
     global AND - for integer and logical type
     global OR  - for integer and logical type
     global XOR - for integer and logical type
   Optionally the users may define their own global functions for this routine.
   On return "info" contains the error code.

4. info = MPI_SYNCH( gid )

   Function:
   Called by all members of the group "gid"
   Returns only when all members have called this function.
   On return "info" contains the error code.

5. gid = MPI_MKGROUP( list_of_processes )

   Function:
   Called by all processes in the list.
   Forms a logical group containing the listed processes
   and assigns each process a unique rank in the group.
   The ranks are consecutively numbered from 0 to gsize-1.
   On return "gid" is an MPI assigned group ID (or error code if < 0)

6. gsize = MPI_GROUPSIZE( gid )

   Function:
   Can be called by any process.
   On return "gsize" is the number of members in the group "gid"
   (or error code if < 0).

7. rank = MPI_MYRANK( gid )

   Function:
   Can be called only by members of group "gid".
   On return "rank" is the rank of the calling process in group "gid"
   (an integer between 0 and gsize-1) or error code if < 0.

===========================================================================
Comments?
From owner-mpi-collcomm@CS.UTK.EDU  Mon Dec 14 15:48:54 1992
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA24894; Mon, 14 Dec 92 15:48:54 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA17266; Mon, 14 Dec 92 15:48:41 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Mon, 14 Dec 1992 20:48:40 GMT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from THUD.CS.UTK.EDU by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA17227; Mon, 14 Dec 92 15:48:19 -0500
From: Jack Dongarra <dongarra@cs.utk.edu>
Received:  by thud.cs.utk.edu (5.61++/2.7c-UTK)
	id AA03749; Mon, 14 Dec 92 15:48:17 -0500
Date: Mon, 14 Dec 92 15:48:17 -0500
Message-Id: <9212142048.AA03749@thud.cs.utk.edu>
To: mpi-collcomm@cs.utk.edu, mpi-pt2pt@cs.utk.edu
Subject: Re: Message Passing Interface Forum
Forwarding: Mail from '"Dr. C.D. Wright" <CDW10@LIVERPOOL.AC.UK>'
      dated: Mon, 14 Dec 92 12:16:10 GMT

---------- Begin Forwarded Message ----------
>From @ibm.liv.ac.uk:CDW10@LIVERPOOL.AC.UK Mon Dec 14 07:20:05 1992
Return-Path: <@ibm.liv.ac.uk:CDW10@LIVERPOOL.AC.UK>
Received: from mail.liv.ac.uk by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA22696; Mon, 14 Dec 92 07:19:55 -0500
Received: from ibm.liverpool.ac.uk by mailhub.liverpool.ac.uk via JANET 
          with NIFTP (PP) id <21042-0@mailhub.liverpool.ac.uk>;
          Mon, 14 Dec 1992 12:19:28 +0000
Received: from UK.AC.LIVERPOOL by MAILER(4.4.t); 14 Dec 1992 12:20:02 GMT
Date: Mon, 14 Dec 92 12:16:10 GMT
From: "Dr. C.D. Wright" <CDW10@LIVERPOOL.AC.UK>
Subject: Re: Message Passing Interface Forum
To: dongarra@edu.utk.cs
Message-Id: <"mailhub.li.044:14.11.92.12.19.28"@liverpool.ac.uk>
Status: RO

Hi.

Since I am in the UK it is clear that I can't actively participate
in the MPI Forum.  I do, however, have one particular problem with
every comms library I have used so far that I would like to see
addressed in any new "standard", and I hope you can pass this on to
whoever is the appropriate person to deal with it.

In many packages such as PVM, PARMACS, p4, etc, it is possible to
probe for and/or receive messages selectively, the selection being
based on the message type (usually in integer) and/or the sender.
This is overly restrictive.  It would be far more useful if the
message's format were sufficiently well defined for the user to be
able to provide their own selection function to be passed in and
used as the basis for reception and/or probing.

That's it.  Hope you can do something with this gripe/suggestion.

Colin.
----------- End Forwarded Message -----------

From owner-mpi-collcomm@CS.UTK.EDU  Tue Dec 15 19:28:40 1992
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA21809; Tue, 15 Dec 92 19:28:40 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA15597; Tue, 15 Dec 92 19:28:32 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Wed, 16 Dec 1992 00:28:32 GMT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from helios.llnl.gov by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA15573; Tue, 15 Dec 92 19:28:06 -0500
Received: by helios.llnl.gov (4.1/LLNL-1.18)
	id AA11599; Tue, 15 Dec 92 16:30:03 PST
Date: Tue, 15 Dec 92 16:30:03 PST
From: tony@helios.llnl.gov (Anthony Skjellum)
Message-Id: <9212160030.AA11599@helios.llnl.gov>
To: dongarra@cs.utk.edu, mpi-collcomm@cs.utk.edu, mpi-pt2pt@cs.utk.edu
Subject: Re: Message Passing Interface Forum

That is what we have been talking about in Zipcode for a long time.
- Tony
From owner-mpi-collcomm@CS.UTK.EDU  Thu Dec 31 22:14:12 1992
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA14668; Thu, 31 Dec 92 22:14:12 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA12447; Thu, 31 Dec 92 22:14:01 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Fri, 01 Jan 1993 03:14:00 GMT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA12429; Thu, 31 Dec 92 22:13:45 -0500
Received: from carbon.pnl.gov (130.20.65.121) by pnlg.pnl.gov; Thu, 31 Dec 92
 19:09 PST
Received: from fermi.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA21172; Thu,
 31 Dec 92 19:08:30 PST
Received: by fermi.pnl.gov (4.1/SMI-4.1) id AA11537; Thu, 31 Dec 92 19:08:29 PST
Date: Thu, 31 Dec 92 19:08:29 PST
From: d3g681@fermi.pnl.gov
To: littlefield@fermi.pnl.gov, mpi-collcomm@cs.utk.edu, mpi-ptop@cs.utk.edu
Message-Id: <9301010308.AA11537@fermi.pnl.gov>
X-Envelope-To: mpi-ptop@cs.utk.edu, mpi-collcomm@cs.utk.edu

Posted to mpi-collcomm and mpi-ptop.

I have just taken the archived discussion from netlib@ornl and
found nothing more recent than December 15 (collcomm) and 21 (ptop).
Since I asked for my name to be on the mailing lists and have seen
nothing I assume that things have been quiet since then.

Al Geist's proposal (Dec. 5) for collective communication and the
reasoning behind it seems to provide a reasonable starting point for
the discussion of interface and functionality.  I have only a few
minor comments in this regard, but given that the efficiency of
collective communications is critically sensitive to hardware topology
it *must* be essential to more closely integrate the definition of
process groups with topology.  I restrict my comments here to
this subject.

For example, on the Touchstone Delta, efficient sub-group global-ops
would suggest that process groups map as well as possible to square
sub-meshes, on the iPSC to sub-cubes, and on the KSR to sub-rings.
Currently, if one's interest is in performing efficient collective
communication in subgroups, there is no way of performing this mapping
in a portable way.  In this instance one might want something that
functions along these lines

  Create NG process groups with P(0), P(1), ..., P(NG-1) processes in each
  group and assign each process to one of these groups so that collective
  communication within each (and perhaps also between all) subgroup is
  optimized.

Such a mapping might also be readily accommodated as a sub-partitioning
of an existing process group, with the default being ALL.  I could
envisage writing, for instance, a fast-multipole integration using this
functionality.

Comments?

Robert J. Harrison

Mail Stop K1-90                             tel: 509-375-2037
Battelle Pacific Northwest Laboratory       fax: 509-375-6631
P.O. Box 999, Richland WA 99352          E-mail: rj_harrison@pnl.gov





From owner-mpi-collcomm@CS.UTK.EDU  Fri Jan  1 11:54:06 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA16862; Fri, 1 Jan 93 11:54:06 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA13835; Fri, 1 Jan 93 11:53:57 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Fri, 01 Jan 1993 16:53:56 GMT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from msr.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA13817; Fri, 1 Jan 93 11:53:46 -0500
Received: by msr.EPM.ORNL.GOV (5.61/1.34)
	id AA04566; Fri, 1 Jan 93 11:53:35 -0500
Date: Fri, 1 Jan 93 11:53:35 -0500
From: geist@msr.EPM.ORNL.GOV (Al Geist)
Message-Id: <9301011653.AA04566@msr.EPM.ORNL.GOV>
To: d3g681@fermi.pnl.gov, littlefield@fermi.pnl.gov, mpi-collcomm@cs.utk.edu,
        mpi-ptop@cs.utk.edu
Subject: Re: groups and topology.


>I have only a few
>minor comments in this regard, but given that the efficiency of
>collective communications is critically sensitive to hardware topology
>it *must* be essential to more closely integrate the definition of
>process groups with topology.

>Currently, if one's interest is in performing efficient collective
>communication in subgroups, there is no way of performing this mapping
>in a portable way.

It is critical that MPI be portable even if efficiency suffers.
Portability is the primary reason for having a standard.

Efficiency is important and tightly coupled to the implementation
on a given vendor's machine. My feeling is that our MPI work
should specify the functionality at the user level
and not dictate how MPI is implemented underneath.

Mapping is the key word in integrating topology and groups,
and mapping is not defined (so far) in MPI. It is related to
the spawning and placement of tasks. I can envision some implementations
allowing tasks to migrate to improve load balance and fault tolerance.
This greatly compounds the mapping problem, but I don't think MPI
should exclude such implementations.
The hope would be that vendors would supply MPI implementations
that map process number to node number in a way that their
collective routines would be efficient with default ALL group
AND that the vendor's mapping would be documented so that
a user could specify subgroups that could exploit this same efficiency.

Al Geist
From owner-mpi-collcomm@CS.UTK.EDU  Sat Jan 16 06:36:12 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA18118; Sat, 16 Jan 93 06:36:12 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA01632; Sat, 16 Jan 93 06:35:42 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Sat, 16 Jan 1993 06:35:41 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from sol.cs.wmich.edu by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA01612; Sat, 16 Jan 93 06:35:37 -0500
Received: from id.wmich.edu (id.cs.wmich.edu) by cs.wmich.edu (4.1/SMI-4.1)
	id AA06149; Sat, 16 Jan 93 06:30:31 EST
Date: Sat, 16 Jan 93 06:30:31 EST
From: john@cs.wmich.edu (John Kapenga)
Message-Id: <9301161130.AA06149@cs.wmich.edu>
To: mpi-collcomm@cs.utk.edu
Subject: A Collection of Primitives


\section{Introduction}
This is a description of some topology independent combined
communication primitives. These primitives are more commonly
referred to relative to "nodes" (e.g. Single Node Broadcast),
rather than the politically correct "process" (e.g. Single Process
Broadcast). The names below might look better with the word
Process deleted. These primitives often go under other names
as well; I've put the names used in previous posts next to the
names below. The MPI_* names appear in Al Geist's post with calls, 
and the names further to the right appear in Al Geist's earlier post and
Jon Flower's post.

Some things below were developed in discussion at the last MPI
meeting (i.e. don't give me credit, but you can give me blame). 
This is a list of primitives for discussion.
%
\section{Topology Independent Collective Communication Primitives}
Assume there is a group of $N$ processes. The following collective 
communication primitives can be defined.

Barrier:                  BARRIER       MPI_SYNCH        synchronize
    Every process blocks at the barrier until all processes reach it.
    (unless we have a non-blocking version too, then I would prefer the
     name synchronize)
Collective Operator:      COP           MPI_GLOBAL_OP
    START: every process $i$ has a value $m_i$.
    STOP: a single designated process has the combine of all values
          $m_i: 0 <= i < N$.
    The operations supported in COP are fixed, they include
        add, multiply, min, max, and, or, xor. 
        Types supported include: int, float and double.
Global Operator:          GOP           
    START: every process $i$ has a value $m_i$.
    STOP: every process has the combine of all values $m_i: 0 <= i < N$.
    The operations supported in GOP are fixed, they include
        add, multiply, min, max, and, or, xor. 
        Types supported include: int, float and double.
Single Process Broadcast-    SPB        MPI_BCAST
    START: a single designated process $i$ has a message $m$.
    STOP: every process has message $m$.
Multiple Process Broadcast-  MPB	
    START: every process $i$ has a message $m_i$.
    STOP: every process has all messages $m_i: 0 <= i < N$.
Single Process Accumulate-   SPA                          reduce
    START: every process $i$ has a message $m_i$.
    STOP: a single designated process $i$ contains the combine of all the $m_i$.
          Any process can combine two messages into a single new message. (This
          makes the most sense when the combine is associative and commutative.)
Multiple Process Accumulate- MPA
    START every process $i$ has $N$ messages $m_{i,j}: 0 <= j < N$.
    STOP: every process $j$ has the combine of the $N$ messages
          $m_{i,j}: 0 <= i < N$
Single Process Scatter-      SPS
    START: a single designated process $i$ has $N$ messages $m_j: 0 <= j < N$.
    STOP: every process $j$ has message $m_j$.
Single Process Gather-       SPG        MPI_GATHER        gather
    START: every process $i$ has 1 message $m_i$.
    STOP: a single designated process $i$ has all messages $m_i: 0 <= i < N$.
Total Process Exchange-      TPE                          all2all
    START: every process $i$ has $N$ messages $m_{i,j}:  0 <= j < N$.
    STOP: every process $j$ has $N$ messages $m_{i,j}: 0 <= i < N$.

Note a Multiple Process Gather would be the same as a Multiple Process Scatter;
this is called a Total Process Exchange or all2all.
%
\section{Some Background}
For some background, the following simple relationships are known.

Theorem 1:
Assume no computation time and unit communication time per hop for all
messages.  For any network the following diagram holds. A directed arrow
from A to B indicates an algorithm for solving A also solves B and the
optimal time for solving B is not more than the optimal time for solving A.
Horizontal double arrows indicate the relationship holds in both directions.

                             Total Process Exchange
                                      |
                                      V
Multiple Process Broadcast    <----------------->   Multiple Process Accumulate
        |                                                   |
        V                                                   V
Single Process Gather         <----------------->   Single Process Scatter
        |                                                   |
        V                                                   V
Single Process Accumulate     <----------------->   Single Process Broadcast


Theorem 2:
The following optimal complexities can be proven (the log is base 2).
The tree is a balanced binary tree, and the times for a linear array are
the same as for the ring. p is the number of processors. (W means to within a constant factor)

Problem                     ring      tree          mesh            hypercube
-------------------------------------------------------------------------
single process broadcast    W(p)      W(log p)      W(p ** (1/d))   W(log p)
single process scatter      W(p)      W(p)          W(p)            W(p/log p)
multiple process broadcast  W(p)      W(p)          W(p)            W(p/log p)
total process exchange      W(p**2)   W(p**2)       W(p**((d+1)/d)) W(p)     

Theorem 3:
Additionally, assuming a process can only send one message at a time
(even if it has many links), some optimal complexities for the above
communication primitives can again be determined.

Problem                    ring      tree          mesh             hypercube
------------------------------------------------------------------------
single process broadcast   W(p)      W(log p)      W(p ** (1/d))    W(log p)
single process scatter     W(p)      W(p)          W(p)             W(p)     
multiprocess broadcast     W(p)      W(p)          W(p)             W(p)     
total process exchange     W(p**2)   W(p**2)       W(p**((d+1)/d))  W(p log p)     
Some results in this direction are also known for wormhole routing.
Cluster architecture machines can be included as well.
%
\section{Remarks}
SPB and SPA
These require a spanning tree of the group.  One difference between
the COP and an SPA followed by an SPB is that the COP uses fixed operations,
while the combine functions should be user supplied. A user supplied function
must be run as a user process on data in user memory on a computation
processor. The COP, on the other hand, is safe in a system process and may
be able to be run on the communication processor directly.

I tend to use the GOP more often than the COP.

For Steve Ericsson Zenith's question on the non-deterministic order of receives.
My implementations of such primitives have been very deterministic. They 
loosely synchronize to protect the message system. A receiving node on an
all2all (TPE) knows who sent each message, so even if it could be implemented
by N parallel scatters (SPS) the receiver would know where to put each of the 
N incoming messages.

For the primitives above SPB and SPA it becomes important to be very careful
not to overload most current message systems.

We talked about the combine function. Should it be strictly binary or
expect to combine a list of size n? I'll claim binary is enough because
fan-in in any reasonable implementation is likely to be low at any node.

Two of Jon Flower's requests are for the GOP (note gop() was such a function
even in an iPSC-1 library) and the MPB, which is the same as an SPG followed
by an SPB.

I prefer the form of global communication primitives shown by Al Geist,
where all processes make the same call.

The BARRIER and the other primitives could share many of the "512 variations"
currently proposed for the send. In particular a non-blocking BARRIER does
make sense (as requested by Jon Flower). 

There are many questions about the details of any colcom primitives; most
of those questions should become clearer as the pt2pt specification matures.
We can then discuss the colcom primitives we would propose. 

I would expect BARRIER, COP, GOP, SPA, SPB, SPG and SPS.

I have used 2 of the 3 others (and know where the other might be used).
But if I'm the only one who uses them ... :-)

We could provide (ALL) these primitives based on MPI pt2pt primitives for
groups with actual topology: Hypercube, Mesh, 2-level Cluster and Generic.
These could be ready a few weeks after the pt2pt specification is stable.
Note these would be much slower than kernel based primitives, but better
than many user codes.

john

From owner-mpi-collcomm@CS.UTK.EDU  Sat Jan 16 06:45:00 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA19956; Sat, 16 Jan 93 06:45:00 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA02116; Sat, 16 Jan 93 06:44:37 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Sat, 16 Jan 1993 06:44:36 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from sol.cs.wmich.edu by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA02108; Sat, 16 Jan 93 06:44:35 -0500
Received: from id.wmich.edu (id.cs.wmich.edu) by cs.wmich.edu (4.1/SMI-4.1)
	id AA06158; Sat, 16 Jan 93 06:39:29 EST
Date: Sat, 16 Jan 93 06:39:29 EST
From: john@cs.wmich.edu (John Kapenga)
Message-Id: <9301161139.AA06158@cs.wmich.edu>
To: mpi-collcomm@cs.utk.edu
Subject: groups and architectures


I strongly agree with Jon Flower that topology is important in the
group definition. It seems that any effort to define a group on a large
machine, say 64K nodes, would be futile without using a very regular structure.

I would hope that there are group defining functions which require topology 
and carry that information with them. Most applications on large machines
that I know of treat the machine as a unit of a given topology for each stage
of the computation. Whatever else MPI does, it must support that mode of
operation efficiently.

For example, a program might do an inquiry to find out what topology
the machine really has, and then request a 2d-mesh group of a given
size, knowing it will be well laid out on the machine. I know this is against
the architecture independent spirit. If that type of facility is not to be
allowed then it must be shown that on current machines the same effect can
still be achieved.

I would suggest we need the ability to map standard structures onto current
large machines. If we have some primitives that can be safely ignored on later
machines there is no harm.

john
From owner-mpi-collcomm@CS.UTK.EDU  Mon Jan 25 15:20:44 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA25035; Mon, 25 Jan 93 15:20:44 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA18241; Mon, 25 Jan 93 15:20:12 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Mon, 25 Jan 1993 15:20:11 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from beagle.cps.msu.edu by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA18223; Mon, 25 Jan 93 15:20:06 -0500
Received: from uranium.cps.msu.edu by beagle.cps.msu.edu (4.1/rpj-5.0); id AA05995; Mon, 25 Jan 93 15:19:58 EST
Received: by uranium.cps.msu.edu (4.1/4.1)
	id AA12809; Mon, 25 Jan 93 15:19:58 EST
Date: Mon, 25 Jan 93 15:19:58 EST
From: huangch@cps.msu.edu
Message-Id: <9301252019.AA12809@uranium.cps.msu.edu>
To: mpi-intro@cs.utk.edu
Subject: Subscription 
Cc: mpi-collcomm@cs.utk.edu


Please add my name into your mailing list.

Thanks,

--Chengchang Huang
From owner-mpi-collcomm@CS.UTK.EDU  Mon Feb 15 06:51:52 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA26138; Mon, 15 Feb 93 06:51:52 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA12752; Mon, 15 Feb 93 06:51:16 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Mon, 15 Feb 1993 06:51:15 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA12744; Mon, 15 Feb 93 06:51:11 -0500
Date: Mon, 15 Feb 93 11:51:03 GMT
Message-Id: <21574.9302151151@subnode.epcc.ed.ac.uk>
From: L J Clarke <lyndon@epcc.ed.ac.uk>
Subject: Re: A Collection of Primitives
To: mpi-collcomm@cs.utk.edu
Reply-To: lyndon@epcc.ed.ac.uk

Dear MPI Colleagues

I found the "Collection of Primitives" most useful. We have a similar
suite of communication routines which we also find valuable.

a) When considering finding maxima/minima of distributed data items, it
is often useful to also be able to locate the maxima/minima either in
terms of the process holding that data value, or its position within a
distributed data structure.  The approach we have taken to this is to
introduce a set of procedures which choose a value from a set, rather
than combining a set of values.  The programmer provides an integer
identifier associated with each data value; this may be simply a process
number or a position within a distributed data set, such as a matrix row
number, and the routine provides the maxima/minima and their
identifiers.  (Ties are resolved by choosing the lowest identifier value,
and all identifiers must be unique.) I propose that we should add a
routine, or routines, of this nature.

b) After some discussion with other interested persons locally, I have come
to the conclusion that we should take time at the meeting to consider
what the collcomm operations involving a mixture of communication plus
calculation, such as combination, mean in a heterogeneous environment -
both in terms of mixed language applications and mixed processor types. 

c) John poses the question of which operations to retain.  I have never
seen an application which uses a large number of these kinds of
functions, but on the other hand I have seen applications which between
them use all of the functions we have implemented.  I therefore suggest
that we retain all of them. 

Best Wishes
Lyndon

         /--------------------------------------------------------\
    e||) | Lyndon J Clarke    Edinburgh Parallel Computing Centre | e||) 
    c||c | Tel: 031 650 5021  Email: lyndon@epcc.edinburgh.ac.uk  | c||c 
         \--------------------------------------------------------/


From owner-mpi-collcomm@CS.UTK.EDU  Sat Feb 20 10:11:10 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA29389; Sat, 20 Feb 93 10:11:10 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA22263; Sat, 20 Feb 93 10:10:14 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Sat, 20 Feb 1993 10:10:13 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from vnet.ibm.com by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA22249; Sat, 20 Feb 93 10:10:10 -0500
Message-Id: <9302201510.AA22249@CS.UTK.EDU>
Received: from KGNVMA by vnet.ibm.com (IBM VM SMTP V2R2) with BSMTP id 7138;
   Sat, 20 Feb 93 10:07:58 EST
Date: Sat, 20 Feb 93 09:43:04 EST
From: "Daniel D. Frye" <DANIELF@KGNVMA.VNET.IBM.COM>
To: mpi-collcomm@cs.utk.edu

I would recommend we add the following collective communication
routines

  mpi-index - Each process sends a distinct message to all the other
              processes in the group, aka all-to-all personalized
              communication. Each process in the calling group partitions
              its local buffer into N blocks of equal size, where N is
              the number of processes in the group.  The ith process
              sends the jth block of its out buffer to the jth process,
              where it is stored as the ith block of the in buffer.
              Therefore the ith block of the out buffer will be copied
              locally to the ith block of the in buffer.  The only
              arguments necessary are out buffer, in buffer, length of
              the block, gid, and tag/context/whatever.

  mpi-shift - Perform a shift or rotation within a group.  Send a
              block of data any specified number of steps along the
              group either up or down.  The difference between shift
              and rotation is whether or not there is "wrap-around".
              The arguments necessary are out buffer, in buffer, length
              of the block, gid, # of steps, and (perhaps) a flag to
              decide shift or rotation (possibly we want 2 routines?),
              and tag/context/whatever.

  mpi-prefix - Apply a parallel prefix (aka scan) with respect to an
               associative reduction operation on data distributed across
               a group, and place the corresponding result in each process
               in the group (necessary, I believe, for the generalized
               combine operation we invented in Dallas).  The operation
               can be any of the functions used in the mpi-reduce operation.


Has anyone taken a shot at a list of reduce operations?


Furthermore, before I forget, given non-blocking collective communication
operations (head-shaking here), we need to define order.  It's more
complicated than ptp message-passing but probably still possible.  I'm
sure we can guarantee order for (e.g.) two successive broadcasts in the
same group with the same root, but not if they have different roots.
Similarly for the cases with a particular destination.   More tricky are
the cases where every process gets a different result.  Can order be
defined for mpi-combine and still preserve some performance?

Thanks.
Dan Frye

From owner-mpi-collcomm@CS.UTK.EDU  Sun Feb 21 11:09:09 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA09315; Sun, 21 Feb 93 11:09:09 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA17311; Sun, 21 Feb 93 11:08:33 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Sun, 21 Feb 1993 11:08:32 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA17303; Sun, 21 Feb 93 11:08:31 -0500
Received:  by Aurora.CS.MsState.Edu (4.1/6.0s-FWP);
	   id AA18416; Sun, 21 Feb 93 10:07:17 CST
Date: Sun, 21 Feb 93 10:07:17 CST
From: Tony Skjellum <tony@Aurora.CS.MsState.Edu>
Message-Id: <9302211607.AA18416@Aurora.CS.MsState.Edu>
To: DANIELF@KGNVMA.VNET.IBM.COM
Subject: hi
Cc: mpi-collcomm@cs.utk.edu

Dan,

It is not obvious to me that we can require the same order for two
successive broadcasts from the same root.  I say this because hardware
implementations (which would be fast) might not support this form of
determinism.  Second, performance characteristics might be better on
average if a different apparent permutation of the participants were
used (for the same root) each time.  I would furthermore add that an
algorithm might like to control that question.

In broadcasts, I see that there are four reasonable cases, modulo
the permutations just discussed: an algorithm with the root node
sending ceil(log N) messages; an algorithm with each node sending at
most two messages; and the same two algorithms with the root node
off-loading its data to another node (hot-spot reduction) and then
sending no other messages.

- Tony

From owner-mpi-collcomm@CS.UTK.EDU Sat Feb 20 09:13:40 1993
Received: from Walt.CS.MsState.Edu by Aurora.CS.MsState.Edu (4.1/6.0s-FWP);
	   id AA17871; Sat, 20 Feb 93 09:13:40 CST
Received: from CS.UTK.EDU by Walt.CS.MsState.Edu (4.1/6.0s-FWP);
	   id AA13806; Sat, 20 Feb 93 09:14:36 CST
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA22263; Sat, 20 Feb 93 10:10:14 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Sat, 20 Feb 1993 10:10:13 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from vnet.ibm.com by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA22249; Sat, 20 Feb 93 10:10:10 -0500
Message-Id: <9302201510.AA22249@CS.UTK.EDU>
Received: from KGNVMA by vnet.ibm.com (IBM VM SMTP V2R2) with BSMTP id 7138;
   Sat, 20 Feb 93 10:07:58 EST
Date: Sat, 20 Feb 93 09:43:04 EST
From: "Daniel D. Frye" <DANIELF@KGNVMA.VNET.IBM.COM>
To: mpi-collcomm@cs.utk.edu
Status: RO
Content-Length: 2441
X-Lines: 47

I would recommend we add the following collective communication
routines

  mpi-index - Each process sends a distinct message to all the other
              processes in the group, aka - all-to-all personalized
              communication. Each process in the calling group partitions
              its local buffer into N blocks of equal size, where N is
              the number of processes in the group.  The ith process in
              the group sends its jth block in the out buffer to the jth
              process, and this block is stored at the ith block in its
              in buffer.
              Therefore the ith block of the out buffer will be copied
              locally to the ith block of the in buffer.  The only
              arguments necessary are out buffer, in buffer, length of
              the block, gid, and tag/context/whatever.

  mpi-shift - Perform a shift or rotation within a group.  Send a
              block of data any specified number of steps along the
              group either up or down.  The difference between shift
              and rotation is whether or not there is "wrap-around".
              The arguments necessary are out buffer, in buffer, length
              of the block, gid, # of steps, and (perhaps) a flag to
              decide shift or rotation (possibly we want 2 routines?),
              and tag/context/whatever.

  mpi-prefix - Apply parallel prefix (aka scan) with respect to an
               associative reduction operation on data distributed across
               a group, and place the corresponding result in each process
               in the group (necessary, I believe, for the generalized
               combine operation we invented in Dallas.)  The operation
               can be any of the functions used in the mpi-reduce operation.


Has anyone taken a shot at a list of reduce operations?


Furthermore, before I forget, given non-blocking collective communication
operations (head-shaking here), we need to define order.  It's more
complicated than ptp message-passing but probably still possible.  I'm
sure we can guarantee order for (e.g.) two successive broadcasts in the
same group with the same root, but not if they have different roots.
Similarly for the cases with a particular destination.   More tricky are
the cases where every process gets a different result.  Can order be
defined for mpi-combine and still preserve some performance?

Thanks.
Dan Frye


From owner-mpi-collcomm@CS.UTK.EDU  Thu Mar  4 10:37:50 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA29481; Thu, 4 Mar 93 10:37:50 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA00367; Thu, 4 Mar 93 10:37:05 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Thu, 4 Mar 1993 10:37:03 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from marge.meiko.com by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA00357; Thu, 4 Mar 93 10:37:00 -0500
Received: from hub.meiko.co.uk by marge.meiko.com with SMTP id AA19768
  (5.65c/IDA-1.4.4 for <mpi-collcomm@cs.utk.edu>); Thu, 4 Mar 1993 10:36:56 -0500
Received: from float.co.uk (float.meiko.co.uk) by hub.meiko.co.uk (4.1/SMI-4.1)
	id AA21448; Thu, 4 Mar 93 15:36:53 GMT
Date: Thu, 4 Mar 93 15:36:53 GMT
From: jim@meiko.co.uk (James Cownie)
Message-Id: <9303041536.AA21448@hub.meiko.co.uk>
Received: by float.co.uk (5.0/SMI-SVR4)
	id AA01959; Thu, 4 Mar 93 15:34:12 GMT
To: mpi-collcomm@cs.utk.edu
Cc: jim@meiko.co.uk
Subject: Synchronisation semantics
Content-Length: 1419

Sorry if you get this twice, I sent something similar yesterday, but
didn't get it back myself, so I guess it's disappeared into the great
bit-bucket in the sky.

As I understand the current collective communication proposal, the
synchronisation semantics of the global operations are only weakly
specified. Either 
1) each process can continue as soon as its contribution to the global
   operation is complete 
or 
2) they can be implemented as if there were a group synchronisation.

The first case allows code like this to execute

	Process 1	Process 2	Process 3

	broadcast(rx)   receive from 1	broadcast(tx)
	send to 2	broadcast(rx)	

the second would cause it to deadlock.

I don't believe we should leave this an open issue, since in the
absence of a specification, the user MUST assume that a group
synchronisation occurs. (And if they assume it does, they'll get
bitten when it doesn't.)

I believe that we should assert that the synchronisation happens.

Those users who explicitly do NOT want it can then make use of the
non-blocking forms of the collective operations (whichever we allow
in) to relax the synchronisation point.

-- Jim
James Cownie 
Meiko Limited			Meiko Inc.
650 Aztec West			Reservoir Place
Bristol BS12 4SD		1601 Trapelo Road
England				Waltham
				MA 02154

Phone : +44 454 616171		+1 617 890 7676
FAX   : +44 454 618188		+1 617 890 5042
E-Mail: jim@meiko.co.uk   or    jim@meiko.com


From owner-mpi-collcomm@CS.UTK.EDU  Thu Mar  4 12:12:41 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA02263; Thu, 4 Mar 93 12:12:41 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA05340; Thu, 4 Mar 93 12:10:21 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Thu, 4 Mar 1993 12:10:19 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from gstws.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA05321; Thu, 4 Mar 93 12:10:18 -0500
Received: by gstws.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03)
          id AA14943; Thu, 4 Mar 1993 12:10:17 -0500
Date: Thu, 4 Mar 1993 12:10:17 -0500
From: geist@gstws.epm.ornl.gov (Al Geist)
Message-Id: <9303041710.AA14943@gstws.epm.ornl.gov>
To: mpi-collcomm@cs.utk.edu
Subject: Re: Synchronisation semantics



>I believe that we should assert that the synchronisation happens.

I, on the other hand, would like to declare the example you give
as an erroneous program and put it in the (growing) class
of erroneous programs that can now be written in pt2pt.
And I would prefer that users' applications not be forced to wait
on synchronization to occur. It is a mixed bag in existing interfaces:
some use method 1, some use method 2. Method 1 is faster,
and I don't hear users complaining about their codes breaking
when using the existing method 1 interfaces.
So I am inclined to specify:
1) each process can continue as soon as its contribution to the global
   operation is complete 

Do other people in this subcommittee have an opinion?

Al Geist
From owner-mpi-collcomm@CS.UTK.EDU  Fri Mar  5 03:32:04 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA23652; Fri, 5 Mar 93 03:32:04 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA22953; Fri, 5 Mar 93 03:31:40 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Fri, 5 Mar 1993 03:31:39 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA22945; Fri, 5 Mar 93 03:31:36 -0500
Received: from fermi.pnl.gov (130.20.182.50) by pnlg.pnl.gov; Thu, 4 Mar 93
 12:10 PST
Received: by fermi.pnl.gov (4.1/SMI-4.1) id AA02720; Thu, 4 Mar 93 12:09:19 PST
Date: Thu, 04 Mar 93 12:09:17 -0800
From: Robert J Harrison <d3g681@fermi.pnl.gov>
Subject: Re: Synchronisation semantics
To: mpi-collcomm@cs.utk.edu
Message-Id: <9303042009.AA02720@fermi.pnl.gov>
In-Reply-To: Your message of "Thu, 04 Mar 93 12:10:17 EST."
 <9303041710.AA14943@gstws.epm.ornl.gov>
X-Envelope-To: mpi-collcomm@cs.utk.edu

In message <9303041710.AA14943@gstws.epm.ornl.gov> you write:
> 
> 
> >I believe that we should assert that the synchronisation happens.
> 
> I, on the other hand, would like to declare the example you give
> as an erroneous program and put it in the (growing) class
> of erroneous programs that can now be written in pt2pt.
> And I would prefer that users' applications not be forced to wait
> on synchronization to occur. It is a mixed bag in existing interfaces:
> some use method 1, some use method 2. Method 1 is faster,
> and I don't hear users complaining about their codes breaking
> when using the existing method 1 interfaces.
> So I am inclined to specify:
> 1) each process can continue as soon as its contribution to the global
>    operation is complete 
> 
> Do other people in this subcommittee have an opinion?
> 
> Al Geist


I do not think that one can define what this

> 1) each process can continue as soon as its contribution to the global
>    operation is complete 

means without reference to an implementation.  Also, some implementations
may require synchronization (e.g. for efficiency, or due to h/w or s/w 
limitations).  Other implementations may not.  With proper use
of tagging etc. no synchronization is required for correct execution
no matter what order messages arrive in, apart from the usual
concerns about available buffer space.

Thus, from consideration of orthogonality of function and efficiency,
I would suggest that

1) The synchronization properties of global operations be left
   undefined where this is not required for their termination
   with correct numerical results (e.g. a global summation).
   Any constraints on tags, etc., for correct execution should
   also be defined, though I think we should work very hard to
   remove any such constraints.

2) A separate primitive that acts as a barrier or synchronization
   be provided (I think this is the case already).

Primitive 2 might be provided as a special form of primitive 1, so
that unnecessary communication is avoided.  However, this seems
to me a minor optimization.

Robert.

Robert J. Harrison

Mail Stop K1-90                             tel: 509-375-2037
Battelle Pacific Northwest Laboratory       fax: 509-375-6631
P.O. Box 999, Richland WA 99352          E-mail: rj_harrison@pnl.gov





From owner-mpi-collcomm@CS.UTK.EDU  Mon Mar  8 07:07:24 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA01919; Mon, 8 Mar 93 07:07:24 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA17798; Mon, 8 Mar 93 07:06:48 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Mon, 8 Mar 1993 07:06:47 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from marge.meiko.com by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA17790; Mon, 8 Mar 93 07:06:43 -0500
Received: from hub.meiko.co.uk by marge.meiko.com with SMTP id AA03142
  (5.65c/IDA-1.4.4 for <mpi-collcomm@cs.utk.edu>); Mon, 8 Mar 1993 07:06:39 -0500
Received: from float.co.uk (float.meiko.co.uk) by hub.meiko.co.uk (4.1/SMI-4.1)
	id AA10106; Mon, 8 Mar 93 12:06:35 GMT
Date: Mon, 8 Mar 93 12:06:35 GMT
From: jim@meiko.co.uk (James Cownie)
Message-Id: <9303081206.AA10106@hub.meiko.co.uk>
Received: by float.co.uk (5.0/SMI-SVR4)
	id AA02376; Mon, 8 Mar 93 12:03:45 GMT
To: geist@gstws.epm.ornl.gov
Cc: mpi-collcomm@cs.utk.edu
In-Reply-To: Al Geist's message of Thu, 4 Mar 1993 12:10:17 -0500 <9303041710.AA14943@gstws.epm.ornl.gov>
Subject: Synchronisation semantics
Content-Length: 1001

Jim> I believe that we should assert that the synchronisation happens.
OK, so maybe I was a bit stronger than I meant to be.

I actually don't mind too much one way or the other, as long as we
understand what it is that we're doing. 

Therefore are we specifying
> 1) each process CAN continue as soon as its contribution to the global
>    operation is complete 

or

1) each process MUST continue as soon as its contribution to the global
   operation is complete 

(In other words, is an implementation free to treat all global operations
as a global synchronisation or not?) I'm happy with the first of
these statements, but not the second. (However, it should be re-worded
to make the possibility clearer in a draft.)

-- Jim
James Cownie 
Meiko Limited			Meiko Inc.
650 Aztec West			Reservoir Place
Bristol BS12 4SD		1601 Trapelo Road
England				Waltham
				MA 02154

Phone : +44 454 616171		+1 617 890 7676
FAX   : +44 454 618188		+1 617 890 5042
E-Mail: jim@meiko.co.uk   or    jim@meiko.com


From owner-mpi-collcomm@CS.UTK.EDU  Mon Mar  8 09:14:45 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA05445; Mon, 8 Mar 93 09:14:45 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA22382; Mon, 8 Mar 93 09:14:09 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Mon, 8 Mar 1993 09:14:07 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from msr.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA22373; Mon, 8 Mar 93 09:14:06 -0500
Received: by msr.EPM.ORNL.GOV (5.67/1.34)
	id AA13188; Mon, 8 Mar 93 09:13:48 -0500
Date: Mon, 8 Mar 93 09:13:48 -0500
From: geist@msr.EPM.ORNL.GOV (Al Geist)
Message-Id: <9303081413.AA13188@msr.EPM.ORNL.GOV>
To: jim@meiko.co.uk
Subject: Re:  Synchronisation semantics
Cc: mpi-collcomm@cs.utk.edu

The draft will read:
1) each process CAN continue as soon as its contribution to the global
   operation is complete.

Cheers,
 Al
From owner-mpi-collcomm@CS.UTK.EDU  Wed Mar 10 08:13:16 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA28509; Wed, 10 Mar 93 08:13:16 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA11051; Wed, 10 Mar 93 08:11:02 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Wed, 10 Mar 1993 08:11:01 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from super.super.org by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA11039; Wed, 10 Mar 93 08:10:59 -0500
Received: from b125.super.org by super.super.org (4.1/SMI-4.1)
	id AA20486; Wed, 10 Mar 93 08:10:57 EST
Received: by b125.super.org (4.1/SMI-4.1)
	id AA01741; Wed, 10 Mar 93 08:10:56 EST
Date: Wed, 10 Mar 93 08:10:56 EST
From: lederman@b125.super.org (Steve Huss-Lederman)
Message-Id: <9303101310.AA01741@b125.super.org>
To: mpi-collcomm@cs.utk.edu
Subject: non-blocking routines

I casually raised the issue at the last meeting of whether we were
going to make the collective communications completely compatible with
the point-to-point standard.  Specifically, I raised the issue of
whether there would be a non-blocking broadcast.  This started a chain
of events that ultimately led to a non-blocking wait.  I can only hope
that in the official minutes the originator's name will be lost.  I
already see the laughing occurring when one first hears this suggestion
out of context :-)

But seriously, since I started this thing I would like to now see it
resolved.  The next points involve the global picture and then some
details of non-blocking collective communications follow.  I have not
filled in a lot of details until I think the global picture is
resolved.

The way this whole thing started was for symmetry with
point-to-point.  I think people agree that you could take advantage of
a non-blocking collective communication in the same way you can a
non-blocking send.  It was even pointed out that collective
communications are generally more expensive in terms of latency and
time so there might be an even bigger justification.  So the group
voted for a non-blocking broadcast by a fairly large majority (if my
recollection is correct).  Now the slippery slope argument sets in.  If
you have a non-blocking broadcast, you need a non-blocking gather,
scan, etc.  This finally led to the non-blocking wait, more
appropriately called a non-blocking barrier.  As the votes progressed,
there were fewer total votes and fewer yes votes for the non-blocking
version.  I interpret this as people starting to understand the
consequences of the first vote and starting to have second thoughts.
Do people agree with this interpretation?  Is my memory/notes correct?

So the big picture question is whether we should have non-blocking
collective communications calls at all.  Here are my current feelings.
I think that they have merit and can be useful.  However, they add a
lot of complexity to routines that are already difficult to do
correctly and efficiently.  I think it is unlikely that we can specify
these routines and get done in 3 more meetings.  (I am also posting
this idea in a more general context to the whole committee.)  It also
falls outside current practice.  If we decide to pursue a more complex
standard and extend the deadline, then we should include this too.
However, I would think a more manageable first standard that can get
done quickly would be better.

Given that, I raise a few of the issues involved in non-blocking
collective communications.  I only list some to show what is involved.
If we decide to continue down this path, then I will be more explicit
and get involved more in details.

If we have non-blocking calls, then we need all the routines like
point-to-point has.  For example, we need a wait, probe and either two
calls or an option to choose between blocking and not.  Another issue
is dealing with two non-blocking calls in a row.  For example, suppose
you do two non-blocking broadcasts in a row but use a different root.
It seems to me that an intermediate node could get two different
messages from another intermediate node and have trouble telling which
broadcast it is supposed to be for.  Are we going to allow this?  If
so, the coding of the broadcast may be much harder on some systems.
If not, you restrict the user in a way that is unnatural.

Steve

P.S. - The moral is: never make a casual suggestion at an MPI
meeting.  You'll probably live to regret it :-).
From owner-mpi-collcomm@CS.UTK.EDU  Thu Mar 11 12:49:49 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA02571; Thu, 11 Mar 93 12:49:49 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA06487; Thu, 11 Mar 93 12:48:54 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Thu, 11 Mar 1993 12:48:53 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from canidae.cps.msu.edu by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA06461; Thu, 11 Mar 93 12:48:46 -0500
Received: from pit-bull.cps.msu.edu by canidae.cps.msu.edu (4.1/rpj-5.0); id AA01661; Thu, 11 Mar 93 12:48:43 EST
Received: by pit-bull.cps.msu.edu (4.1/4.1)
	id AA04285; Thu, 11 Mar 93 12:48:42 EST
Date: Thu, 11 Mar 93 12:48:42 EST
From: kalns@cps.msu.edu
Message-Id: <9303111748.AA04285@pit-bull.cps.msu.edu>
To: mpi-collcomm@cs.utk.edu
Subject: reduction and gather

Dear Collective Communications Subcommittee:

I have not participated in this forum in the past;
however, I have been an active MPI reader for the past two
months.  I would like to comment on the following:

1. Reduction
   a. each participating process gets the result
   b. additional ops

2. Gather
   a. concatenation in rank order
----------

Al Geist proposed the following interface for reduction:

>  info = MPI_GLOBAL_OP( inbuf, bytes, type, gid, op, outbuf )
>
>  Function:
>  Called by all members of the group "gid"
>  using the same argument for "bytes", "type", "gid", and "op".
>  On return the "outbuf" of all group members contains the
>  result of the global operation "op" applied pointwise to
>  the collective "inbuf". For example, if the op is max and
>  inbuf contains two float point numbers then
>        outbuf(1) = global max( inbuf(1)) and
>        outbuf(2) = global max( inbuf(2))
>  A set of standard operations are supplied with MPI including:
>    global max - for each data type
>    global min - for each data type
>    global sum - for each data type
>    global mult- for each data type
>    global AND - for integer and logical type
>    global OR  - for integer and logical type
>    global XOR - for integer and logical type

Every process receives the result of the reduction operation.

John Kapenga proposed two different reductions in
"Collection of Primitives" where in one
case all processes receive the result, the other only a
single process receives the result.

I concur with John's more flexible approach since for some
applications, only a single process needs the result.
Consider Gaussian Elimination with columns of the coefficient
matrix distributed to processors.  The following code illustrates.
This code must be translated into message-passing (SPMD) code
for each processor. (Assuming one process/processor)

s1:  DO I=1,N
s2:    LOC = MAXLOC(A[I,I:N])              /* max location in row */
s3:    EXCHANGE(A[1:N,I],A[1:N,LOC])       /* exchange columns */
s4:    A[I,I:N] = A[I,I:N] / A[I,I]
s5:    DO J=I+1,N
s6:       DO K=I+1,N
s7:          A[J,K] = A[J,K] - A[J,I] * A[I,K]
s8:       END DO
s9:    END DO
s10: END DO

The only processes that need to know the max location are
the process which owns column I and the process which
owns column LOC, in order to exchange columns.

The above code also illustrates where MAXLOC (and MINLOC) would be
useful as additional reduction operations.

Al Geist proposed the following interface for gather:
>  info = MPI_GATHER( buf, bytes, type, gid, root )
>
>  Function:
>  Called by all members of the group "gid"
>  using the same argument for "bytes", "type", "gid", and "root".
>  On return all the individual "buf" are concatenated into the "root" buf,
>  which must be of size at least gsize*bytes.
>  The data is laid in the "root" buf in rank order that is
>  | gid,0 data | gid,1 data | ...| gid, root data | ...| gid, gsize-1 data |
>  Other member's "buf" are unchanged on return.
>  On return "info" contains the error code.

Why must the data be laid out in "rank order"? This may not
always be necessary.  There is certainly additional overhead
in arranging it this way instead of just concatenating messages (with
GCPID) as they arrive. Perhaps there could be an option to obtain
the data in rank order only when necessary.

Regards,
Edgar

======================================================================
| Edgar T. Kalns                     | Internet: kalns@cps.msu.edu   |
| Advanced Computing Systems Lab     | Tel: (517) 353-8666           |   
| Department of Computer Science     |                               |
| Michigan State University          |                               |
| East Lansing, MI 48824, USA        |                               |
======================================================================

From owner-mpi-collcomm@CS.UTK.EDU  Thu Mar 11 13:25:26 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA04034; Thu, 11 Mar 93 13:25:26 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA09574; Thu, 11 Mar 93 13:24:28 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Thu, 11 Mar 1993 13:24:27 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from deepthought.cs.utexas.edu by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA09566; Thu, 11 Mar 93 13:24:25 -0500
From: rvdg@cs.utexas.edu (Robert van de Geijn)
Received: from grit.cs.utexas.edu by deepthought.cs.utexas.edu (5.64/1.2/relay) with SMTP
	id AA23445; Thu, 11 Mar 93 12:24:26 -0600
Received: by grit.cs.utexas.edu (5.64/Client-v1.3)
	id AA13089; Thu, 11 Mar 93 12:24:16 -0600
Date: Thu, 11 Mar 93 12:24:16 -0600
Message-Id: <9303111824.AA13089@grit.cs.utexas.edu>
To: kalns@cps.msu.edu
Cc: mpi-collcomm@cs.utk.edu
In-Reply-To: kalns@cps.msu.edu's message of Thu, 11 Mar 93 12:48:42 EST <9303111748.AA04285@pit-bull.cps.msu.edu>
Subject: reduction and gather

   Dear Collective Communications Subcommittee:

   Al Geist proposed the following interface for reduction:
   >  info = MPI_GLOBAL_OP( inbuf, bytes, type, gid, op, outbuf )
   >
 
   Every process receives the result of the reduction operation.

   John Kapenga proposed two different reductions in
   "Collection of Primitives" where in one
   case all processes receive the result, the other only a
   single process receives the result.

   I concur with John's more flexible approach since for some
   applications, only a single process needs the result.
   Consider Gaussian Elimination with columns of the coefficient
   matrix distributed to processors.  The following code illustrates.
   This code must be translated into message-passing (SPMD) code
   for each processor. (Assuming one process/processor)

There are a number of reasons to have two versions: indeed, the
"fan-in" version is often used, and on most systems it can be
implemented in half the time of the GSUM-to-all (for large vectors).
I would propose a third version as well: a combine leaving the result
in pieces distributed among the nodes.  (This would be the inverse of
the GCOLX routine, with a combine added, in Intel lingo.)  An integer
array would indicate the size of the piece to be left at each node.
Again, there are performance issues behind the need for this last
operation, since the GSUM-to-all performs this operation, and more.

Robert




=====================================================================
  Robert A. van de Geijn                     rvdg@cs.utexas.edu  
  Assistant Professor
  Department of Computer Sciences            (Work)  (512) 471-9720
  The University of Texas                    (Home)  (512) 251-8301 
  Austin, TX 78712                           (FAX)   (512) 471-8885 
=====================================================================
From owner-mpi-collcomm@CS.UTK.EDU  Thu Mar 11 13:44:15 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA04337; Thu, 11 Mar 93 13:44:15 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA10540; Thu, 11 Mar 93 13:43:30 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Thu, 11 Mar 1993 13:43:29 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from msr.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA10532; Thu, 11 Mar 93 13:43:16 -0500
Received: by msr.EPM.ORNL.GOV (5.67/1.34)
	id AA01838; Thu, 11 Mar 93 13:43:03 -0500
Date: Thu, 11 Mar 93 13:43:03 -0500
From: geist@msr.EPM.ORNL.GOV (Al Geist)
Message-Id: <9303111843.AA01838@msr.EPM.ORNL.GOV>
To: mpi-collcomm@cs.utk.edu
Subject: Re: Edgar's questions
Cc: kalns@cps.msu.edu


Hi Edgar,

>John Kapenga proposed two different reductions in
>all processes receive the result
>single process receives the result
>I concur with John's more flexible approach

I also agree that we can have both functions,
and the collective communication draft I am madly writing
contains both (and some others submitted by Frye).

>Why must the data be laid out in "rank order"? This may not
>always be necessary.

It is a convenience to the user, so that he may quickly
find data from a particular task. Since bytes is constant,
root can place each message in the correct location in buf
with no extra overhead, so there is no incentive to have
a random order.

Al Geist
From owner-mpi-collcomm@CS.UTK.EDU  Fri Mar 12 11:23:01 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA23453; Fri, 12 Mar 93 11:23:01 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA08903; Fri, 12 Mar 93 11:22:14 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Fri, 12 Mar 1993 11:22:12 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from [128.219.8.54] by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA08885; Fri, 12 Mar 93 11:22:09 -0500
Received: by gstws.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03)
          id AA15629; Fri, 12 Mar 1993 11:21:55 -0500
Date: Fri, 12 Mar 1993 11:21:55 -0500
From: geist@gstws.epm.ornl.gov (Al Geist)
Message-Id: <9303121621.AA15629@gstws.epm.ornl.gov>
To: mpi-collcomm@cs.utk.edu
Subject: First draft of Collective Communication section of MPI.


\documentstyle[12pt]{article}
\begin{document}

\section{Collective Communication}

[I have placed comments and questions in square braces.]

\subsection{Introduction}

This section is a draft of the current proposal for collective communication.
Collective communication is defined to be communication that involves
a group of tasks. Examples are broadcast and global sum.
Because of the need to deal with groups of tasks, this section will also
present a proposal for the formation, partitioning, and managing
of basic groups. 
A basic group has two properties:
it has a group identifier that is associated with a set of tasks,
and each task in the group has a unique rank in the range $0$ to $p-1$.
There is an initial default group {\bf ALL} that contains
all the tasks.
Giving or forming groups with topological features is presented in section 4.
[by the Topology subcommittee]

The collective communication routines are built above the point-to-point
routines. While vendors may optimize certain collective routines for
their architectures, a complete library of the collective communication
routines written entirely in point-to-point will be available.
The following communication functions are proposed.
\begin{itemize}
\item
Broadcast from one member to all members of a group.
\item
Barrier across all group members
\item
Gather data from all group members to one member.
\item
Scatter data from one member to all members of a group.
\item
Global operations such as sum, max, min, etc., where the result
is known by all group members, and a variation where the result is
known by only one member; also the ability to have user-defined
global operations.
\item
Simultaneous shift of data around the group, the simplest example
being all members sending their data to (rank+1) with wrap around.
For portability, the topology section provides routines for
determining which member is a given member's neighbor in a given
direction and number of hops away.
\item
Scan across all members of a group (also called parallel prefix).
\item
Broadcast from all members to all members of a group.
\item
Scatter data from all members to all members of a group
(also called complete exchange or index).
\end{itemize}

To simplify the collective communication interface it is
designed with two layers. The low level routines have all the
generality of, and make use of, the buffer descriptor routines
of the point-to-point section, which allow arbitrarily complex
messages to be constructed. The second level routines are
similar to the upper level point-to-point routines in that they send
only a contiguous buffer.

\section{Group Functions}

Before defining a collective operation between a group of tasks,
it is necessary to create and manage a group.
A group is identified by a group name that is supplied by the user.
[With static groups it is sufficient to have only an opaque group ID,
which is returned to the user during group formation.
But if we allow dynamic groups in some (future) version of MPI,
then there is no way for a new task to join a group since the 
user doesn't have the opportunity to label the group.
To allow for future extensibility of the group concept
the present draft specifies that groups be named. 
The underlying implementation can map this label to any
type of group ID that is convenient or fast. This could be
an elaborate structure or a simple integer.]
Each member of a group has a unique rank in the group. 
The rank values are the integers 0 to number-of-members minus 1.
Each group has a topology associated with it. 
The collective communication routines are implemented in terms
of the topology associated with a given group.
Although the function would be the same, a broadcast in a group
with a ring topology could be implemented differently from a broadcast in
a group with hypercube topology.

The default topology for a group is fully connected. 
Existing groups including the {\bf ALL} group
can switch their associated topology using the functions described in
section 4. This allows the user to match the group topology to the
algorithm executed by the group or the underlying hardware.

The debate rages on about whether groups should be dynamic or static.
A static group is defined to be a group where once it is formed
its membership never changes.
Static groups are just a subset of dynamic groups. The added generality
of dynamic groups is perceived as a useful property to have in MPI
at some future time. One of the most important capabilities dynamic
groups allow is the development of fault-tolerant applications.
Given the time constraints for MPI-1,
the following proposal is written so that dynamic groups 
are possible, but MPI-1 only specifies the restricted case
where the groups are static. 

my\_rank = MPI\_PARTGROUP(group, newgroup)

     Returns only after all members of group have called it.
     The newgroup argument is used as a key. All members with
     the same newgroup argument are placed in the same group
     and their rank in this new group is returned.
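The key-matching rule can be sketched outside MPI. The following C
fragment is an illustration of the semantics only; it assumes (the
draft leaves this unspecified) that ranks in a new group are assigned
in old-group rank order:

```c
#include <stddef.h>

/* Sketch (not MPI): given the "newgroup" key each of the p members of the
 * old group passed to MPI_PARTGROUP, compute the rank that the member with
 * old rank `me` would receive in its new group.  Ranks are assumed to be
 * assigned in old-rank order, which the draft does not actually pin down. */
int partgroup_rank(const int *keys, size_t p, size_t me)
{
    (void)p;                     /* p not needed for this member's rank */
    int rank = 0;
    for (size_t i = 0; i < me; i++)
        if (keys[i] == keys[me])
            rank++;              /* earlier members with my key precede me */
    return rank;
}
```

For example, five members calling with keys 7, 3, 7, 3, 7 split into a
three-member group and a two-member group.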

old\_group = MPI\_LVGROUP(group)

     In the restricted case of MPI-1, returns only after all
     members of group have called it. 
     Frees all memory and system resources used by group. 
     Returns the name of the old group
     from which they were last partitioned (and of which
     they are still a member). It is an error to call MPI\_LVGROUP(ALL).

size    = MPI\_GSIZE(group)

     Returns the (instantaneous) size of group. [can be called by any task?]

rank    = MPI\_GETRANK(group,pid)

     Given that pid is the unique (possibly opaque) task identifier,
     returns the rank of pid in group.

pid     = MPI\_GETPID(group,rank)

     Returns the unique (possibly opaque) task identifier, pid,
     of the task identified by (group, rank).

pid = MPI\_MYPID()

This is included here for completeness; combined with MPI\_GETRANK,
it shows how a task could get its rank in a group.

my\_rank = MPI\_JOINGROUP(group)

     Dynamic group function available in MPI-2. 
     Can be called by an individual task with any argument for group.
     If group doesn't exist, then it is created and this task
     becomes its first member.
     If the group exists, then this task is placed in the group
     and given the lowest available rank. For example, if there is
     a gap in the ranks due to a process failure, then this task
     would fill the gap.
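The ``lowest available rank'' rule can be illustrated with a small C
sketch (not MPI code); here `used` is a hypothetical occupancy array
marking which ranks 0..p-1 are currently held:

```c
#include <stddef.h>

/* Sketch (not MPI): MPI_JOINGROUP gives a joining task the lowest rank not
 * currently in use, so gaps left by failed members are refilled before the
 * group is extended.  Returns the rank the joining task would receive. */
int lowest_free_rank(const int *used, size_t p)
{
    for (size_t r = 0; r < p; r++)
        if (!used[r])
            return (int)r;   /* fill the first gap */
    return (int)p;           /* no gap: extend the group */
}
```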

\section{Communication Functions}

The proposed communication functions are divided into two layers.
The lowest level uses the same buffer descriptor routines 
available in point-to-point to create noncontiguous, multiple data type
messages. The second level handles only contiguous single data type
messages. Like the point-to-point high level interface, the second
level of collective communication routines handles heterogeneity.

There has been discussion about the synchronization properties
of the collective communication routines. In this proposal
routines can (but are not required to) return as soon as their 
participation in the collective communication is complete.

Each of the following functions returns an error code 
in the info argument.

\subsection{Level 2 routines}

info = MPI\_BCAST( buf, nitems, type, tag, group, from\_rank )

MPI\_BCAST broadcasts a message to all members of a group.
It is called by all members of group using the same arguments for
nitems, type, tag, group, and from\_rank.
On return, the contents of buf on the member with rank from\_rank
are contained in buf on all group members.
type is the data type to be sent, nitems is the number of 
these items, tag is a user supplied message tag.

info = MPI\_BARRIER( group, tag )

MPI\_BARRIER blocks the calling task until all group members have
called it with the same tag; it returns only when every member has
done so.

info = MPI\_GATHER( inbuf, outbuf, nitems, type, tag, group, to\_rank~)

MPI\_GATHER gathers the nitems in each group member's inbuf
and places these items in rank order in the to\_rank member's outbuf.
It is called by all members of group using the same arguments for
nitems, type, tag, group, and to\_rank.
The receiving member must declare outbuf to be at least
(nitems * sizeof(type)) * gsize(group) bytes.
outbuf is unchanged on all the other group members.
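The rank-order layout can be sketched in C; gather\_place is a
hypothetical helper (not an MPI routine) showing where the
contribution of the member with rank r lands in outbuf:

```c
#include <string.h>
#include <stddef.h>

/* Sketch (not MPI): the outbuf layout MPI_GATHER prescribes on the to_rank
 * member.  The contribution of the member with rank `rank` starts at byte
 * offset rank * nitems * itemsize, so outbuf must hold at least
 * nitems * itemsize * gsize(group) bytes. */
void gather_place(char *outbuf, const char *contrib,
                  size_t nitems, size_t itemsize, size_t rank)
{
    memcpy(outbuf + rank * nitems * itemsize, contrib, nitems * itemsize);
}
```

Note that the placement is independent of arrival order; each
contribution has a fixed slot determined only by its rank.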


info = MPI\_SCATTER( inbuf, outbuf, nitems, type, tag, group, from\_rank~)

MPI\_SCATTER sends different pieces of the from\_rank member's inbuf
to each of the other group members.
The routine is called by all members of the group using the same arguments for
nitems, type, tag, group, and from\_rank.
The data is laid out in the from\_rank member's inbuf in rank order.
The inbuf of every other member is unchanged by the routine.
On return each member's outbuf contains its nitems-item piece of the
originator's inbuf.

info = MPI\_GLOBAL\_OP( inbuf, outbuf, nitems, type, tag, group, op~)

MPI\_GLOBAL\_OP performs a global operation on the inbuf and
returns the result in outbuf.
The routine is called by all group members using the same arguments
for nitems, type, tag, group, and op.
On return the outbuf of each member contains the result of 
the global operation op applied pointwise to the collective inbufs.
For example, if the op is max and inbuf contains two floating point numbers,
then outbuf(1) $=$ global max(inbuf(1)) and outbuf(2) $=$ global max(inbuf(2)).
A set of standard operations is supplied with MPI, including:
\begin{itemize}
\item global max for each data type
\item global min for each data type
\item global sum for each data type
\item global mult for each data type
\item global AND for integer and logical
\item global OR for integer and logical
\item global XOR for integer and logical
\item global scalar max and who has it
\item global scalar min and who has it
\end{itemize}
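The pointwise semantics can be illustrated with a C sketch for op =
max; global\_max is an illustration of the result each member would
hold, not an MPI routine:

```c
#include <stddef.h>

/* Sketch (not MPI): pointwise semantics of MPI_GLOBAL_OP with op = max.
 * inbufs[m] is member m's inbuf of nitems doubles; out[i] receives the
 * maximum of element i across all members, matching the text's example
 * that outbuf(i) = global max(inbuf(i)). */
void global_max(const double *const *inbufs, size_t members,
                size_t nitems, double *out)
{
    for (size_t i = 0; i < nitems; i++) {
        double best = inbufs[0][i];
        for (size_t m = 1; m < members; m++)
            if (inbufs[m][i] > best)
                best = inbufs[m][i];
        out[i] = best;          /* element i reduced across the group */
    }
}
```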

info = MPI\_USER\_OP( inbuf, outbuf, nitems, type, tag, group, func~)

Same as the global operation function above except the user
supplies the function that is performed on each member rather
than using the standard operations.

info = MPI\_REDUCE(inbuf, outbuf, nitems, type, tag, group, to\_rank, op~)

Same as the global operation function above except only the 
to\_rank member receives the result in its outbuf. The outbuf
of all other members is unchanged.

info = MPI\_SHIFT( inbuf, outbuf, nitems, type, tag, group, steps~)

Simultaneous shift of data a given number of steps around the group, 
the simplest example
being all members sending their data to (rank+1) with wrap around.
For portability, the topology section provides routines for
determining a member's neighbor in a given direction and number of hops away.
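The wraparound arithmetic can be made precise with a small C sketch
(not MPI code): after the shift, the member with rank r holds the
inbuf of rank (r - steps) mod p, so steps = 1 is the ``send to
rank+1'' example above:

```c
/* Sketch (not MPI): which rank's data the member with rank r receives
 * after an MPI_SHIFT of `steps` positions around a p-member group.
 * Handles negative steps (shifts in the other direction); C's % operator
 * can yield a negative remainder, so we fold it back into 0..p-1. */
int shift_source(int p, int steps, int r)
{
    int s = (r - steps) % p;
    return s < 0 ? s + p : s;
}
```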

info = MPI\_SCAN( inbuf, outbuf, nitems, type, tag, group, op )

MPI\_SCAN is used to perform a parallel prefix with respect to
an associative reduction operation on data distributed across the group. 
The same standard operations as found in MPI\_GLOBAL\_OP are supplied
with MPI.
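The scan result can be sketched in C for op = sum. This assumes an
inclusive prefix (the member with rank r holds the reduction over
ranks 0..r); the draft does not say whether the scan is inclusive or
exclusive:

```c
#include <stddef.h>

/* Sketch (not MPI): the value each rank holds after MPI_SCAN with op = sum,
 * assuming an inclusive parallel prefix.  in[r] is rank r's contribution;
 * out[r] is the sum of contributions from ranks 0..r. */
void scan_sum(const int *in, size_t p, int *out)
{
    int acc = 0;
    for (size_t r = 0; r < p; r++) {
        acc += in[r];        /* op applied over ranks 0..r */
        out[r] = acc;
    }
}
```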

info = MPI\_ALLCAST( inbuf, outbuf, nitems, type, tag, group )

Broadcast from all members to all members of a group.

info = MPI\_ALLSCATTER( inbuf, outbuf, nitems, type, tag, group~)

Each process sends a distinct message to every other
process in the group (also known as all-to-all personalized
communication). Each process in the calling group partitions
its out buffer into N blocks of equal size, where N is
the number of processes in the group. The ith process
sends the jth block of its out buffer to the jth process,
which stores it as the ith block of its in buffer.
Consequently the ith block of the ith process's out buffer
is copied locally to the ith block of its in buffer.
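Viewed globally, the exchange is a transpose of the N x N block
matrix. The following C sketch (not MPI code) computes all members'
in buffers from all members' out buffers at once, with one int per
block for clarity:

```c
#include <stddef.h>

/* Sketch (not MPI): the global data movement of MPI_ALLSCATTER.  out and in
 * are flattened n x n block matrices: out[i*n + j] is block j of member i's
 * out buffer.  Block j of sender i becomes block i of receiver j's in
 * buffer, so in[j*n + i] = out[i*n + j] -- a block transpose.  The diagonal
 * blocks are the local copies the text describes. */
void allscatter(const int out[], size_t n, int in[])
{
    for (size_t i = 0; i < n; i++)        /* sender i */
        for (size_t j = 0; j < n; j++)    /* block j goes to receiver j */
            in[j * n + i] = out[i * n + j];
}
```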

\subsection{Level 1 routines}

[I suggest that the level 1 routines be deferred to MPI-2,
along with the buffer descriptor versions of point-to-point.
But if point-to-point includes bd versions then it will be
easy to include comparable versions of collective communication routines.
I like the bd versions of point-to-point and collective, but I feel
they deviate too far from common practice for MPI-1.]

Level 1 routines allow the user to communicate noncontiguous messages
containing multiple data types. The present proposal is for the 
collective routines to use the same routines that are in the
point-to-point interface to create these arbitrary messages.
Not all collective operations make sense in this context.
The following functions are provided in level 1:

\begin{tabular}{l}
info = MPI\_BCASTBD( bd, tag, group, from\_rank )            \\
info = MPI\_GATHERBD( inbd, outbd, tag, group, to\_rank )    \\
info = MPI\_SCATTERBD( inbd, outbd, tag, group, from\_rank ) \\
info = MPI\_USER\_OPBD( inbd, outbd, tag, group, func )     \\
info = MPI\_SHIFTBD( inbd, outbd, tag, group, steps )       \\
info = MPI\_ALLCASTBD( inbd, outbd, tag, group )            \\
info = MPI\_ALLSCATTERBD( inbd, outbd, tag, group )         \\
\end{tabular}

The descriptions of the functions are the same as in level 2,
except that instead of a contiguous block of data of a single
data type, each input and output buffer is described by a
buffer descriptor.

\subsection{Nonblocking Communication}

[There was discussion at the last meeting about having nonblocking
variants of the collective communication routines.
They are not presented here because a formal proposal was never 
submitted to the collective communication subcommittee for discussion.
The proposal must explain how the routines work, how they are
used in an application preferably with an example, and if 
possible how the routines could be implemented with discussion
about message order guarantees, robustness, and cancellation.
I feel that the nonblocking routines are far too complex for MPI-1,
and should not be discussed in the present proposal.]

\end{document}
From owner-mpi-collcomm@CS.UTK.EDU  Sun Mar 14 13:59:12 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA05206; Sun, 14 Mar 93 13:59:12 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA16184; Sun, 14 Mar 93 13:58:44 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Sun, 14 Mar 1993 13:58:42 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA16158; Sun, 14 Mar 93 13:58:04 -0500
Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Sun, 14 Mar 93
 10:57 PST
Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA29374; Sun,
 14 Mar 93 10:55:26 PST
Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA05149; Sun, 14 Mar 93 10:55:22
 PST
Date: Sun, 14 Mar 93 10:55:22 PST
From: rj_littlefield@pnlg.pnl.gov
Subject: proposal to mpi-collcomm
To: d39135@sodium.pnl.gov, geist@gstws.epm.ornl.gov, gropp@mcs.anl.gov,
        jim@meiko.co.uk, lusk@mcs.anl.gov, lyndon@epcc.ed.ac.uk,
        mpi-collcomm@cs.utk.edu, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu,
        tony@Aurora.CS.MsState.Edu
Message-Id: <9303141855.AA05149@sodium.pnl.gov>
X-Envelope-To: mpi-context@cs.utk.edu, mpi-collcomm@cs.utk.edu

Al & Tony, et al.:

I am about to send to mpi-collcomm two notes regarding changes I
propose to the collective communication specification.  (One note
summarizes the changes; the other discusses the reasons for them.)

I am also sending these notes to mpi-context and friends because
they relate to other discussions going on there.

Thought you'd like to know...
--Rik

----------------------------------------------------------------------
rj_littlefield@pnl.gov               Rik Littlefield
Tel: 509-375-3927                    Pacific Northwest Lab, MS K1-87
                                     P.O.Box 999, Richland, WA  99352
From owner-mpi-collcomm@CS.UTK.EDU  Sun Mar 14 15:04:40 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA06334; Sun, 14 Mar 93 15:04:40 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA18143; Sun, 14 Mar 93 15:04:09 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Sun, 14 Mar 1993 15:04:08 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA18125; Sun, 14 Mar 93 15:03:46 -0500
Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Sun, 14 Mar 93
 12:01 PST
Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA29382; Sun,
 14 Mar 93 11:59:17 PST
Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA05208; Sun, 14 Mar 93 11:59:13
 PST
Date: Sun, 14 Mar 93 11:59:13 PST
From: rj_littlefield@pnlg.pnl.gov
Subject: collcomm changes, summary
To: geist@gstws.epm.ornl.gov, gropp@mcs.anl.gov, jim@meiko.co.uk,
        lusk@mcs.anl.gov, lyndon@epcc.ed.ac.uk, mpi-collcomm@cs.utk.edu,
        mpi-context@cs.utk.edu, ranka@top.cis.syr.edu,
        tony@Aurora.CS.MsState.Edu
Cc: d39135@sodium.pnl.gov
Message-Id: <9303141959.AA05208@sodium.pnl.gov>
X-Envelope-To: mpi-context@cs.utk.edu, mpi-collcomm@cs.utk.edu

SUMMARY OF SUGGESTED CHANGES TO COLLECTIVE COMMUNICATION PROPOSAL

The draft proposal that Al Geist distributed several days ago
contains some features that would prevent it from being
implemented as a layer on top of MPI point-to-point facilities.

The purpose of this note is to propose changes to the group
control routines in order to permit layering, and to propose
other changes for better and more predictable performance.

A discussion of the rationale for these proposed changes 
will be distributed separately because of its length.

The main changes introduced in this note are:

. The concept of group identification is firmed up.  Most
  operations use a "group handle" that is local to the process.
  (Think of the group handle as being just the address of a
  potentially large and complex "group descriptor".)  There is
  still a "group ID" that is globally unique, but it has only a
  secondary role and can be ignored by most applications.  The
  "group name" is entirely removed from MPI-1.  (Group names are
  still anticipated in MPI-2, but upward-compatibility is
  maintained in a different way from the draft proposal.)

. A semantic restriction is introduced, that a process can access
  information about a group only if the process holds a group
  handle for it.  Group handles can be obtained in two ways: 1)
  they are produced by group formation routines, and 2) a process
  can explicitly distribute copies of its group handles to other
  processes, using new routines introduced specifically for that
  purpose.

. A cacheing mechanism is introduced, that allows modules to
  attach arbitrary information to a group descriptor in such a
  way that it can be quickly retrieved.  Cacheing facilitates the
  construction of collective communication routines that are
  "fast after the first execution in a group", no matter how the
  other group operations are implemented.

. A new group formation routine is introduced, that is less
  synchronous and more general than MPI_PARTGROUP.

Specifically, the following routines are proposed to be added or
modified:

1. Arbitrary group formation:

    newgrp_handle = MPI_FORMGROUP (grouptag,groupsize,knownmembers)

    where
     grouptag     is a user-provided integer tag, sufficiently unique
                  to disambiguate overlapping groups that might be
                  formed simultaneously (say by multiple threads).

     groupsize    is the number of members that will compose the group.

     knownmembers is a set of pid's of some or all members of the group.
                  Each member of the group must provide the same
                  set of knownmembers.

     newgrp_handle  is a group handle for the newly formed group

    This new routine must be called synchronously, but only by those
    processes forming the group.

2. Group partitioning:

    newgrp_handle = MPI_PARTGROUP (oldgrp_handle,grouptag)

    where the semantics are the same as the draft proposal except that
    the return value is now a new group handle instead of a rank.
    (The rank can be determined by a separate call to
    MPI_GETRANK(group_handle,pid) .)

3. Group disbanding:

    MPI_LVGROUP (group_handle)

    where the semantics are the same as the draft proposal except that
    MPI_LVGROUP now does not return any result.  (Since groups can now
    be formed arbitrarily, not just by partitioning, it is not obvious
    what MPI_LVGROUP could return in general.)  This routine can be
    called only by members of the group.

4. Distribution of group handles and disposition of distributed handles:

    MPI_SendGroupHandle (pid,context,tag,old_group_handle)

    new_group_handle = MPI_RecvGroupHandle (pid,context,tag)

    MPI_FreeGroupHandle (group_handle)

    (The latter routine is similar to MPI_LVGROUP except that
    it can be called only for distributed group handles.  This is
    solely for semantic clarity; a single interface routine would do.)

5. Cacheing group-specific process-local information:

    The following routines get and free keys for use with group
    cacheing.

      key = MPI_GetAttributeKey ()
      MPI_FreeAttributeKey ()

    The following routines cache and retrieve information.

      MPI_SetGroupAttribute  (grouphandle,key,value,destructor_routine)
      status = MPI_TestGroupAttribute (grouphandle,key,&value)

    where
      key         must be unique within the group
      value       is anything the size of a pointer
      destructor_routine   is an application-provided routine that
                           is called by MPI_LVGROUP, with arguments
                           being the group handle, cached key and value.

    Cached information is stripped from the new group handle
    returned by MPI_SendGroupHandle.

    In a conforming implementation, MPI_TestGroupAttribute must
    be no slower than a point-to-point communication call.
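    As a sketch of what a layered implementation might keep in the
    group descriptor (the struct and function names here are invented
    for illustration and are not part of the proposal):

```c
#include <stddef.h>

/* Sketch only: a per-group attribute cache of the kind that
 * MPI_SetGroupAttribute / MPI_TestGroupAttribute describe.  Each group
 * descriptor carries a small key -> value table; the test routine returns
 * 1 with the cached value on a hit and 0 on a miss, so a collective
 * routine can compute its plan once and cache it for later calls. */
#define MAX_ATTRS 16

struct group_desc {
    int    keys[MAX_ATTRS];
    void  *vals[MAX_ATTRS];
    size_t nattrs;
};

void set_attr(struct group_desc *g, int key, void *val)
{
    for (size_t i = 0; i < g->nattrs; i++)
        if (g->keys[i] == key) { g->vals[i] = val; return; }
    g->keys[g->nattrs] = key;      /* new entry */
    g->vals[g->nattrs] = val;
    g->nattrs++;
}

int test_attr(const struct group_desc *g, int key, void **val)
{
    for (size_t i = 0; i < g->nattrs; i++)
        if (g->keys[i] == key) { *val = g->vals[i]; return 1; }
    return 0;                      /* miss: caller computes and caches */
}
```

    A linear scan over a handful of entries is comfortably faster
    than a point-to-point call, which is all the conformance
    requirement above demands.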

6. Retrieving global group ID:

    global_id = MPI_GetGlobalGroupID (grouphandle)

7. Other collective communications:

   Consistently substitute "grouphandle" in place of "group".

----------------------------------------------------------------------
rj_littlefield@pnl.gov               Rik Littlefield
Tel: 509-375-3927                    Pacific Northwest Lab, MS K1-87
                                     P.O.Box 999, Richland, WA  99352
From owner-mpi-collcomm@CS.UTK.EDU  Sun Mar 14 15:50:50 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA06925; Sun, 14 Mar 93 15:50:50 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA19504; Sun, 14 Mar 93 15:50:24 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Sun, 14 Mar 1993 15:50:22 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA19411; Sun, 14 Mar 93 15:49:35 -0500
Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Sun, 14 Mar 93
 12:48 PST
Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA29389; Sun,
 14 Mar 93 12:46:59 PST
Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA05301; Sun, 14 Mar 93 12:46:57
 PST
Date: Sun, 14 Mar 93 12:46:57 PST
From: rj_littlefield@pnlg.pnl.gov
Subject: collcomm changes, rationale
To: geist@gstws.epm.ornl.gov, gropp@mcs.anl.gov, jim@meiko.co.uk,
        lusk@mcs.anl.gov, lyndon@epcc.ed.ac.uk, mpi-collcomm@cs.utk.edu,
        mpi-context@cs.utk.edu, ranka@top.cis.syr.edu,
        tony@Aurora.CS.MsState.Edu
Cc: d39135@sodium.pnl.gov
Message-Id: <9303142046.AA05301@sodium.pnl.gov>
X-Envelope-To: mpi-context@cs.utk.edu, mpi-collcomm@cs.utk.edu

RATIONALE FOR SUGGESTED CHANGES TO COLLECTIVE COMMUNICATION PROPOSAL

In a related summary, I outlined a set of suggested changes to
the concepts and routines in the collective communication proposal.

The purpose of this note is to present the rationale for those
suggestions and to discuss possible alternatives.

The discussion is organized into 5 areas, flagged with "----- Topic #".

Entries flagged with > are from my summary of suggested changes.
Entries flagged with >>> are from the draft proposal sent out by Al Geist.

----- Topic #1: Group Identification -----

> . The concept of group identification has been firmed up.  Most
>   operations use a "group handle" that is local to the process.
>   (Think of the group handle as being just the address of a
>   potentially large and complex "group descriptor".)
>        ...
> . A semantic restriction is introduced, that a process can access
>   information about a group only if the process holds a group
>   handle for it.  Group handles can be obtained in two ways: 1)
>   they are produced by group formation routines, and 2) a process
>   can explicitly distribute copies of its group handles to other
>   processes, using new routines introduced specifically for that
>   purpose.

There are two issues here: one of being able to layer collective
communications on top of point-to-point at all, and a secondary
one of efficiency.

The more fundamental issue is layering.  Given only MPI point-
to-point functionality, how can a group identifier (whatever it
is) be transmitted between processes so as to be useful to the
receiver?

Presumably we want to allow group identifiers to be passed around so
that any process holding the group identifier can use it for purposes
like translating between (group,rank) and pid.  We also want to allow
this translation to be done asynchronously, i.e., without requiring
the explicit cooperation of any other MPI process at the time of
translation.  Since MPI pt-pt does not support asynchronous servers or
an interrupt receive capability, this implies that the group
identifier must come complete with enough information to resolve all
translations without communication.

This prompts the concept that the group identifier must be associated
with a "group descriptor" that is large and complex enough to
fully describe the group.

How is the association done?  This is a question of efficiency.  If
the identifier is allowed to be process-local, the group descriptor
can be located very quickly -- just make the identifier be a pointer
to the group descriptor.  Requiring the identifier to have global
scope would not be so good.  In that case, either the identifier has
to be carefully constructed or the association has to be done with
some sort of table search.  These issues also arise with global pid's.
However, groups can be formed much more often and in greater numbers
than processes.  I doubt that careful construction tricks could be
assured to be adequate, and if not, then a table search would be
required on each collective communication call.

The conclusion is that, for most purposes, a process-local
identifier generated by the system is preferred.  Such things are
typically called "handles", hence the term "group handle".

> 4. Distribution of group handles and disposition of distributed handles:
> 
>     MPI_SendGroupHandle (pid,context,tag,old_group_handle)
> 
>     new_group_handle = MPI_RecvGroupHandle (pid,context,tag)
> 
>     MPI_FreeGroupHandle  (group_handle)

The next question is how group handles should be distributed.

Implicit distribution is out because MPI pt-pt doesn't support
a server capability, and presumably we aren't willing to synchronize
all of the processes whenever somebody creates a group handle.

So, explicit distribution is required.  How do we handle it?

Two ideas that I do not like are the following.  MPI might provide
routines to translate to and from some machine- and process-
independent format, so that the translated information could be sent
using normal point-to-point primitives.  This strategy requires that the
user program manage the storage of indefinite-length objects, which
makes for an ugly Fortran interface.  Or, group descriptors (and their
translation routines) might be built into point-to-point MPI as another
data type.  This violates the spirit of layering collective
communication on point-to-point, and has the same storage management
problem.

The three routines proposed above were the cleanest interface
I could think of.

----- Topic #2: Global Group ID -----

>   There is
>   still a "group ID" that is globally unique, but it has only a
>   secondary role and can be ignored by most applications.  
>         ...
>     global_id = MPI_GetGlobalGroupID (grouphandle)

Given that we are now able (and required) to pass around copies of
group handles, it is not clear to me that MPI really needs special
support for the concept of a global group ID.  On the other hand,
it's easy to provide, since we have to construct one or more
globally unique context values for each group anyway.  So just
use the first such context value as the global ID.  This gives
something unique that all processes can agree on.  

But note that knowing just the global group ID does not let you
get other information about the group -- you have to hold a group
handle for that.

(We could add a routine that would accept the global group ID and
return a handle for that group, presuming that the process held
one.  This would be cheap to do, since group handles are managed
by MPI anyway, and I can vaguely imagine that it might help some
applications.  On the other hand, there are no similar "handle
lookup" facilities provided elsewhere in MPI, and I'm reluctant
to set that kind of precedent without clear need.)

----- Topic #3: Group Formation -----

> . A new group formation routine is introduced, that is less
>   synchronous and more general than MPI_PARTGROUP.
>        ...
> 1. Arbitrary group formation:
> 
>     newgrp_handle = MPI_FORMGROUP (grouptag,groupsize,knownmembers)
> 
>     where
>      grouptag     is a user-provided integer tag, sufficiently unique
>                   to disambiguate overlapping groups that might be
>                   formed simultaneously, say by multiple threads.
> 
>      groupsize    is the number of members that will compose the group.
> 
>      knownmembers is a set of pid's of some or all members of the group.
>                   Each member of the group must provide the same
>                   set of knownmembers.
> 
>      newgrp_handle     is a group handle for the newly formed group
> 
>     This new routine must be called synchronously, but only by those
>     processes forming the group.  

The draft proposal distributed by Al Geist says that

>>> A group is identified by a group name that is supplied by the user.

A group name by itself is not enough to allow implementing groups
as a layer on top of point-to-point, unless we impose
restrictions that I think would not be acceptable.

The problem is: how does a group-forming routine know whom it
should send messages to, in order to form the group?

MPI_PARTGROUP does not have a problem with this, because it has
to be called synchronously by all members of the group.  Since
each current member of the group holds a handle (descriptor) for
that group, it is easy for each member to figure out who talks to
whom.

Unfortunately, there are some important application designs that
I do not see how to implement with just MPI_PARTGROUP.

For example, I am now doing an application that uses a
master-slaves strategy to asynchronously parcel out chunks of
work, with each chunk being done by several processes working
collaboratively.  Collective communication between those
processes is required, so it seems natural to organize them into
MPI groups.  Using a synchronous group partitioning routine
would introduce a risk of load imbalance, because the varying
chunk size implies that groups can finish their work at
different times, and synchronous partitioning would delay their
reassignment.

Applications like this could benefit from a group formation
routine that is called synchronously, but only by those
processes forming the group -- hence MPI_FORMGROUP.

This type of routine does have the problem of identifying its
collaborators, and the only solution I can think of is to
tell it.  That's what the knownmembers argument is for.

I have specified knownmembers in terms of pid's because I assume
that point-to-point communication based on pid's is always fast
and unrestricted.  If knownmembers were based on (group,rank)
pairs, then per the discussion above, all processes making this
call would have to hold handles (descriptors) for the referenced
groups.  This seems to me to be more trouble than it's worth, but
others may disagree.

Another comment about efficiency...  The size of the knownmembers
set affects the efficiency of group formation.  At one extreme,
only one member is required to be known.  This is scalable in a
memory sense, but not in a time sense, because it implies O(P)
group formation time for a group of P processes.  At the other
extreme, all members can be specified.  This is not scalable in a
memory sense, but allows guaranteed O(log P) formation time.
Other tradeoffs are possible, such as O(sqrt P) knownmembers and
O(sqrt P) formation time.  The interface as specified allows
each application to choose the type of scalability it wants.

----- Topic #4: Group Names -----

>   ...  The
>   "group name" is entirely removed from MPI-1.  (Group names are
>   still anticipated in MPI-2, but upward-compatibility is
>   maintained in a different way from the draft proposal.)

The draft distributed by Al Geist states:

>>> To allow for future extensibility of the group concept
>>> the present draft specifies that groups be named. 

Requiring names has the drawback that 1) it burdens the user with
at least the appearance of having to create unique names, in
order to be upward-compatible with dynamic groups, even though 2)
in a layered MPI-1, there is no way in general to check global
uniqueness, and thus programs can work fine with non-unique names.

This combination strikes me as actually impeding upward-
compatibility.  The tendency will be for programmers to use
non-unique names because it works and it's easy.  But such programs
would break when MPI-2 came along and started actually using
the names for something.  I don't like encouraging people to
write programs that are going to break.

I do support upward compatibility.  However, rather than requiring
names in MPI-1, I propose that they be deferred entirely to
MPI-2, at which point they can be supported either just through
MPI_JOINGROUP (as an alternative to MPI_FORMGROUP) or via
additional routines to attach globally unique names to groups
that have already been formed via MPI_JOINGROUP.

----- Topic #5: Cacheing -----

> 5. Cacheing group-specific process-local information:
> 
>     The following routines get and free keys for use with group
>     cacheing.
> 
>       key = MPI_GetAttributeKey ()
>       MPI_FreeAttributeKey ()
> 
>     The following routines cache and retrieve information.
> 
>       MPI_SetGroupAttribute  (grouphandle,key,value,destructor_routine)
>       MPI_TestGroupAttribute (grouphandle,key,&value)
> 
>     where
>       key         must be unique within the group
>       value       is anything the size of a pointer
>       destructor_routine   is an application-provided routine that
>                            is called by MPI_LVGROUP, with arguments
>                            being the group handle, cached key and value.
> 
>     Cached information is stripped from the new group handle
>     returned by MPI_SendGroupHandle.
> 
>     In a conforming implementation, MPI_TestGroupAttribute must
>     be no slower than a point-to-point communication call.

This feature is purely for efficiency, but I think it's so valuable,
cheap, and clean that something like it has to go in.

One feature of collective communication is that the fastest
algorithm for any particular job usually depends on the machine
topology, which processes belong to the group, and the amount of
data being manipulated.  For example, global combine of L data
elements across P = RC processes on a 2-D RxC mesh can be done in
O(L log(P)) time using a fanin/fanout algorithm, or in O(L + sqrt(P))
time using a nested rings algorithm.  The former is better for
small L, the latter for big L, and using the wrong one can easily
cost a factor of 3 in execution time.

So, there is strong motivation to write collective communication
routines that are adaptive in the sense of figuring out which
algorithm is best.  The problem is that it can take quite a lot
of time to make the decision, starting from a scratch position
of not even knowing which processes belong to the group.  It's
going to take lots of calls to the inquiry routines to get that
information, and then some more cycles to make the proper decisions.

Obviously it would be profitable to cache the information and/or
decisions.  The question is, where?  

It is tempting to say that the collective communication routine
could or should keep its own cache, indexed by group handle
and/or global group ID.  The problem is, groups are dynamic in
the sense of being formed and disbanded, so that unless group IDs
can get very large, eventually they will have to be reused.  Now,
it wouldn't do to have a collective communication routine use
stale cached information, so if the collective communication
routine is keeping its own cache, then it needs to be notified of
the reuse so that it can release the cached stuff.
Alternatively, perhaps the cached information could be
automatically released.  (Either strategy guarantees immediate
release of cached info when the group handle/descriptor is
released.  I presume we want to do that, to avoid getting into
the morass of garbage collection.)

The method proposed here can be thought of as implementing both
strategies.  The idea is that the routines that free group
handles (and the associated descriptors) loop through the cached
information, calling an application-provided destructor routine
for each piece of cached information.  Typically, the cached
information will be a pointer to a hunk of memory managed by the
collective communication, which the destructor will free in
whatever way it has to.  Upon return from the destructor, the
group-freeing routine will release the little piece of memory
holding the pointer, and everything will be cleaned up.

If that group handle/descriptor is ever reused, it will be
reinitialized to indicate no cached information, and
MPI_TestGroupAttribute will return "not found".

An efficient-after-first-call group-global operation using 
cacheing might look like this:

   static int gop_key_assigned = 0;    /* 0 only on first entry */
   static MPI_key_type gop_key;        /* key for this module's stuff */

   efficient_global_op (grphandle, ...)
   struct group_descriptor_type *grphandle;
   {
     struct gop_stuff_type *gop_stuff;   /* whatever we need */

     if (!gop_key_assigned)     /* get a key on first call ever */
     { gop_key_assigned = 1;
       if ( ! (gop_key = MPI_GetAttributeKey()) ) {
         MPI_abort ("Insufficient keys available");
       }
     }

     if (MPI_TestGroupAttribute (grphandle,gop_key,&gop_stuff))
     { /* This module has executed in this group before.
          We will use the cached information */
     }
     else
     { /* This is a group that we have not yet cached anything in.
          We will now do so.
        */

       gop_stuff = /* malloc a gop_stuff_type */
  
       /* ... fill in *gop_stuff with whatever we want ... */

       MPI_SetGroupAttribute (grphandle, gop_key, gop_stuff, 
                              gop_stuff_destructor);
     }

     /* ... use contents of *gop_stuff to do the global op ... */
   }

   gop_stuff_destructor (gop_stuff)   /* called by MPI on group close */
   struct gop_stuff_type *gop_stuff;
   {
     /* ... free storage pointed to by gop_stuff ... */
   }
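To make the destructor-on-free semantics concrete, here is a small single-process mock of the proposed interface. The mock_* names and the fixed-size table are my own; this illustrates the intended behavior only, not an MPI implementation.

```c
#include <assert.h>
#include <stdlib.h>

#define MAX_ATTRS 16

typedef void (*destructor_fn)(void *group, int key, void *value);

/* A group descriptor carrying its cached (key, value, destructor) triples. */
struct mock_group {
    int   keys[MAX_ATTRS];
    void *values[MAX_ATTRS];
    destructor_fn dtors[MAX_ATTRS];
    int   nattrs;
};

static int next_key = 1;
static int mock_GetAttributeKey(void) { return next_key++; }

static void mock_SetGroupAttribute(struct mock_group *g, int key,
                                   void *value, destructor_fn d) {
    g->keys[g->nattrs]   = key;
    g->values[g->nattrs] = value;
    g->dtors[g->nattrs]  = d;
    g->nattrs++;
}

/* Returns 1 and fills *value if the key was cached, else 0 ("not found"). */
static int mock_TestGroupAttribute(struct mock_group *g, int key, void **value) {
    for (int i = 0; i < g->nattrs; i++)
        if (g->keys[i] == key) { *value = g->values[i]; return 1; }
    return 0;
}

/* Freeing the group calls each destructor, then clears the cache, so a
   reused descriptor starts with no attributes. */
static void mock_FreeGroup(struct mock_group *g) {
    for (int i = 0; i < g->nattrs; i++)
        g->dtors[i](g, g->keys[i], g->values[i]);
    g->nattrs = 0;
}

static int demo_freed = 0;
static void demo_dtor(void *g, int key, void *value) {
    (void)g; (void)key;
    free(value);
    demo_freed++;
}

/* Walks through cache miss -> cache -> hit -> free -> miss; returns the
   number of destructor calls (expected: 1). */
static int run_demo(void) {
    struct mock_group g = {0};
    int key = mock_GetAttributeKey();
    void *v;
    if (mock_TestGroupAttribute(&g, key, &v)) return -1;  /* nothing cached */
    mock_SetGroupAttribute(&g, key, malloc(8), demo_dtor);
    if (!mock_TestGroupAttribute(&g, key, &v)) return -2; /* now cached */
    mock_FreeGroup(&g);                      /* destructor runs here */
    if (mock_TestGroupAttribute(&g, key, &v)) return -3;  /* reuse: not found */
    return demo_freed;
}
```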


----------------------------------------------------------------------
rj_littlefield@pnl.gov               Rik Littlefield
Tel: 509-375-3927                    Pacific Northwest Lab, MS K1-87
                                     P.O.Box 999, Richland, WA  99352
From owner-mpi-collcomm@CS.UTK.EDU  Tue Mar 16 04:54:47 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA14266; Tue, 16 Mar 93 04:54:47 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA15014; Tue, 16 Mar 93 04:54:12 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Tue, 16 Mar 1993 04:54:10 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from gmdzi.gmd.de by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA15006; Tue, 16 Mar 93 04:54:01 -0500
Received: from f1neuman.gmd.de (f1neuman) by gmdzi.gmd.de with SMTP id AA13913
  (5.65c/IDA-1.4.4); Tue, 16 Mar 1993 10:52:14 +0100
Received: by f1neuman.gmd.de id AA15815; Tue, 16 Mar 1993 10:53:37 GMT
Date: Tue, 16 Mar 1993 10:53:37 GMT
From: Rolf.Hempel@gmd.de
Message-Id: <9303161053.AA15815@f1neuman.gmd.de>
To: mpi-collcomm@cs.utk.edu, mpi-ptop@cs.utk.edu
Subject: Al's COLLCOMM proposal
Cc: gmap10@f1neuman.gmd.de


I would like to comment on Al's Collective Communications draft which
he sent out a few days ago. First of all, I agree with Al in most
points, especially that we should not attempt to include everything
into MPI-1. Given the limited time available, it seems to me a good
idea to leave the level-1 routines and all dynamic stuff for MPI-2.

In section 2 Al says that "each group has a topology associated with
it". As far as I know this is still an open issue. Do we agree that
there is always a default topology (like a ring, to make the shift
operation meaningful in all cases, or fully connected)? Otherwise a
topology is an optional attribute which a group may or may not have.

Another question then is how this assignment is done. At the last
Dallas meeting we discussed two basic options:
1. A topology is defined after the creation of the group. The topology
   thus is an attribute which is assigned to the group, and which can
   be overwritten without creating a new group.
2. A topology definition always creates a new group (or even two of
   them, the second one being the collection of processes which are not
   used by the topology). The advantage of this choice is that the
   rank of a process within a group never changes. When a group with
   topology is created, the processes can be arranged in the optimal
   way from the very beginning.

Personally, I prefer the second option. One additional advantage is the
following: assume that the original group has 10 processes, and then
a (3,3) grid topology is defined. Does a global operation on this group
include all 10 processes? If the (3,3) grid formation creates a new
subgroup of 9 processes, the answer is clear.

The draft is not consistent in the relationship of groups and
topologies. In the Introduction it says "Giving or forming groups with
topological features is presented in section 4" which suggests option
2. above. On the other hand, under Group Functions it states that
"Existing groups including the ALL group can switch their associated
topology", which sounds like option 1. Do we all agree on choosing
option 2?

On page 2 the draft states that "The collective communication routines
are implemented in terms of the topology associated with a given group.
Although the function would be the same, a broadcast in a group with
a ring topology could be implemented differently from a broadcast in
a group with hypercube topology". I see a confusion of application
and machine topologies here. The optimal implementation of a broadcast
is guided by the machine topology, which could be a hypercube. Even
if the logical group topology is a mesh, the global operation would
follow the hypercube structure. However, this implementation detail
is completely invisible to the user and should not be part of the
standard. The only thing the user sees is the mesh topology and the
result of the broadcast.

The proposed MPI_SHIFT function could be made much more useful by
adding another argument. Here's my proposal:

 Info = MPI_SHIFT(inbuf,outbuf,nitems,type,tag,group,direction,steps)

The additional integer argument "direction" selects the coordinate
direction in the group topology, and "steps" is the number of steps
in that direction. In the case of cartesian structures the meaning is
immediately clear. One could apply the function also in the case of a
general graph. In this case "direction" would specify the neighbor
number. "steps" could either be ignored, or we could define a
transitive scheme of the kind "neighbor of neighbor of neighbor ...",
with the indirection depth being specified by "steps".
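For the cartesian case, the proposed addressing rule could look like the sketch below. Row-major rank numbering and wrap-around at the grid boundary are my own assumptions, not part of the proposal.

```c
#include <assert.h>

/* Rank reached from `rank` by moving `steps` along coordinate
   `direction` in an ndims-dimensional grid `dims[]` (ndims <= 8),
   with wrap-around.  Row-major rank encoding is assumed. */
static int shift_target(int rank, const int *dims, int ndims,
                        int direction, int steps) {
    int coords[8];
    /* decode row-major rank into coordinates */
    for (int d = ndims - 1; d >= 0; d--) {
        coords[d] = rank % dims[d];
        rank /= dims[d];
    }
    /* move along the chosen direction; double-mod keeps result in range
       even for negative steps */
    coords[direction] = ((coords[direction] + steps) % dims[direction]
                         + dims[direction]) % dims[direction];
    /* re-encode coordinates into a rank */
    int target = 0;
    for (int d = 0; d < ndims; d++)
        target = target * dims[d] + coords[d];
    return target;
}
```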

A hot topic for further discussions will be the "group names" proposed
by Al. I see his point, but I don't see how the user-supplied group
name solves the problem which arises if a new process wants to join
a group. Even if the user tells MPI the global name of the group,
global knowledge of all groups in the system is required to find the
other group members to talk to. I agree with most points of Rik
Littlefield's comments. The only thing which does not convince me yet
is the explicit caching mechanism. If the information caching is
handled consistently between the group management and collective
communication routines (in order to avoid use of stale group
information), I still hope that it can be done without showing up
in the user interface.

As a last point, I would like to forward the following note by
Tom Henderson:

> Rolf,
> 
> Would it be a good idea to merge the mpi-collcomm and mpi-ptop
> mailing lists? It seems like lots of stuff on that mailing list
> now is closely related to process topology. I suppose the
> mpi-collcomm stuff could just be forwarded to the mpi-ptop list.  
> 
> Tom

I agree. What do others think?

Rolf
From owner-mpi-collcomm@CS.UTK.EDU  Tue Mar 16 08:15:58 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA15709; Tue, 16 Mar 93 08:15:58 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA25944; Tue, 16 Mar 93 08:15:02 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Tue, 16 Mar 1993 08:15:00 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from gstws.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA25934; Tue, 16 Mar 93 08:14:57 -0500
Received: by gstws.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03)
          id AA12817; Tue, 16 Mar 1993 08:14:54 -0500
Date: Tue, 16 Mar 1993 08:14:54 -0500
From: geist@gstws.epm.ornl.gov (Al Geist)
Message-Id: <9303161314.AA12817@gstws.epm.ornl.gov>
To: mpi-collcomm@cs.utk.edu
Subject: Revised Collective Draft - consistent with p2p draft.



Hi Folks,

I had written the first collective communication draft before
seeing the latest point-to-point draft from Marc. The two need
to be consistent in MPI. Marc has revised my first draft to be
consistent with the point-to-point section and has clarified
the section considerably. Many thanks, Marc.

The new collective communication draft is attached below,
this should be the focus of our discussion in this subcommittee.
The major changes:
1. The context/group management routines are now a part of the p2p section
   rather than the collcomm section. The routines are repeated
   in the following draft for completeness.

2. The buffer descriptor versions of the collective routines are 
   described in much more detail in the new draft and form the core
   of the collective communication routines.

Rik has also sent both collective and p2p committees an alternate
proposal for managing context/group. Other comments are welcome.

Al Geist

--------------------------- Draft follows ---------------------------


\documentstyle[12pt]{article}


\newcommand{\discuss}[1]{
\ \\ \ \\ {\small {\bf Discussion:} #1} \ \\ \ \\
}

\newcommand{\missing}[1]{
\ \\ \ \\ {\small {\bf Missing:} #1} \\ \ \\
}

\begin{document}

\title{ Collective Communication}


\author{Al Geist \\ Marc Snir}
\maketitle

\section{Collective Communication}
\subsection{Introduction}

This section is a draft of the current proposal for collective communication.
Collective communication is defined to be communication that involves
a group of processes.  Examples are broadcast and global sum.
A collective operation is executed by having all processes in the group call the
communication routine, with matching parameters.
Routines can (but are not required to) return as soon as their
participation in the collective communication is complete.  The completion
of a call indicates that the caller is now free to access the locations in the
communication buffer, or any other location that can be referenced by the
collective operation.  It does not indicate that other processes in
the group have started the operation (unless otherwise indicated in the
description of the operation).   Note, however, that the successful completion
of a collective communication call may depend on the execution of a matching
call at all processes in the group.

The syntax and semantics of the collective operations are
defined so as to be consistent with the syntax and semantics of the
point-to-point operations.

The reader is referred to the point-to-point communication section of the current
MPI draft for information concerning groups (aka contexts) and group formation
operations, and for general information on types of objects used by the MPI
library.

The collective communication routines are built above the point-to-point
routines.  While vendors may optimize certain collective routines for
their architectures, a complete library of the collective communication
routines can be written entirely using point-to-point communication
functions.  We are using naive implementations of the collective calls in terms
of point to point operations in order to provide an operational definition of
their semantics.

The following communication functions are proposed.
\begin{itemize}
\item
Broadcast from one member to all members of a group.
\item
Barrier across all group members
\item
Gather data from all group members to one member.
\item
Scatter data from one member to all members of a group.
\item
Global operations such as sum, max, min, etc., where the result
is known by all group members, and a variation where the result is
known by only one member; also the ability to have user-defined
global operations.
\item
Simultaneous shift of data around the group, the simplest example
being all members sending their data to (rank+1) with wrap around.
\item
Scan across all members of a group (also called parallel prefix).
\item
Broadcast from all members to all members of a group.
\item
Scatter data from all members to all members of a group
(also called complete exchange or index).
\end{itemize}

To simplify the collective communication interface, it is
designed with two layers. The low-level routines have all the
generality of, and make use of, the buffer descriptor routines
of the point-to-point section, which allow arbitrarily complex
messages to be constructed. The second-level routines are
similar to the upper-level point-to-point routines in that they send
only a contiguous buffer.

\missing {

The current draft does not include the nonblocking collective communication
calls that were discussed at the last meeting.
}


\subsection{Group Functions}

The point-to-point document discusses the use of groups (aka contexts), and
describes the operations available for the creation and manipulation of
groups and group objects. For the sake of completeness, we list
them again here.


{\bf \ \\ MPI\_CREATE(handle, type, persistence)} \\
Create new opaque object
\begin{description}
\item[OUT handle] handle to object
\item[IN type] state value that identifies the type of object to be created
\item[IN persistence] state value; either {\tt MPI\_PERSISTENT} or {\tt
MPI\_EPHEMERAL}.
\end{description}

{\bf \ \\ MPI\_FREE(handle)} \\
Destroy object associated with handle.
\begin{description}
\item[IN handle] handle to object
\end{description}


{\bf \ \\ MPI\_ASSOCIATED(handle, type)}  \\
Returns the type of the object the handle is currently associated with, if
such exists.  Returns the special type {\tt MPI\_NULL} if the handle is
not currently associated with any object.
\begin{description}
\item[IN handle] handle to object
\item[OUT type] state
\end{description}


{\bf \ \\ MPI\_COPY\_CONTEXT(newcontext, context)}  \\

Create a new context that includes all processes in the old context.
The rank of the processes in the previous context is preserved.  The call must
be executed by all processes in the old context.  It is a blocking call:  No
call returns until all processes have called the function.
\begin{description}
\item[OUT newcontext]  handle to newly created context.  The handle should not
be associated with an object before the call.
\item[IN context] handle to old context
\end{description}

{\bf \ \\ MPI\_NEW\_CONTEXT(newcontext, context, key, index)} \\
A new context is created for
each distinct value of {\tt key}; this context is shared by all processes that
made the call with this key value.  Within each new context the processes are
ranked according to the order of the {\tt index} values they provided; in case
of ties, processes are ranked according to their rank in the old context.
This call is blocking:  No call returns until all processes in the old context
have executed the call.
\begin{description}
\item[OUT newcontext] handle to newly created context at calling process.   This
handle should not be associated with an object before the call.
\item[IN context] handle to old context
\item[IN key] integer
\item[IN index] integer
\end{description}
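The ranking rule of MPI\_NEW\_CONTEXT (within each distinct key value, order by the index argument, ties broken by rank in the old context) can be stated operationally. The helper below is an illustration of the rule only, not proposed interface; processes are identified by their old ranks.

```c
#include <assert.h>

/* New rank of process `p` (old rank) after MPI_NEW_CONTEXT, given each
   process's (key, index) arguments indexed by old rank: count the
   processes with the same key that precede p by (index, old rank). */
static int new_rank(int p, const int *key, const int *index, int nprocs) {
    int r = 0;
    for (int q = 0; q < nprocs; q++) {
        if (q == p || key[q] != key[p]) continue;
        if (index[q] < index[p] || (index[q] == index[p] && q < p))
            r++;  /* q comes before p in the new context */
    }
    return r;
}
```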

{\bf \ \\ MPI\_RANK(rank, context)} \\
Return the rank of the calling process within the specified context.
\begin{description}
\item[OUT rank] integer
\item[IN context] context handle
\end{description}


{\bf \ \\ MPI\_SIZE(size, context)} \\
Return the number of processes that belong to the specified context.
\begin{description}
\item[OUT size] integer
\item[IN context] context handle
\end{description}

\paragraph*{Extensions}
Possible extensions for dynamic process spawning (MPI2):

{\bf \ \\ MPI\_PROCESS(process, context, rank)} \\
Returns a handle to
the process identified by the {\tt rank} and {\tt context} parameters.
\begin{description}
\item[OUT process] handle to process object
\item[IN context] handle to context object
\item[IN rank] integer
\end{description}

{\bf \ \\ MPI\_CREATE\_CONTEXT(newcontext, list\_of\_process\_handles)} \\
creates a new context out of an explicit list of members
and ranks them in their order of occurrence in the list.
\begin{description}
\item[OUT newcontext] handle to newly created context.  Handle should not
be associated with an object before the call.
\item[IN list\_of\_process\_handles]
List of handles to processes to be included in new group.
\end{description}

This, coupled with a mechanism for adding newly spawned processes to the
computation, will allow the creation of a new all-inclusive context that
includes the additional processes.


\subsection{Communication Functions}

The proposed communication functions are divided into two layers.
The lowest level uses the same buffer descriptor objects
available in point-to-point to create noncontiguous, multiple data type
messages. The second level is similar to the block send/receive
point-to-point operations in that it supports only contiguous buffers of
arithmetic storage units.   For each communication operation, we list these two
levels of calls together.


\subsubsection{Synchronization}

\paragraph*{Barrier synchronization}

{\bf \ \\ MPI\_BARRIER( group, tag )} \\

MPI\_BARRIER blocks the calling process until all group members have called
it; the call returns at any process only after all group members have
entered the call.
\begin{description}
\item[IN group] group handle
\item[IN tag] communication tag (integer)
\end{description}

{\tt \ \\ MPI\_BARRIER( group, tag )}  \\ is
\begin{verbatim}
MPI_CREATE(buffer_handle, MPI_BUFFER, MPI_PERSISTENT);
MPI_SIZE( &size, group);
MPI_RANK( &rank, group);
if (rank==0)
{
   for (i=1; i < size; i++)
      MPI_RECV(buffer_handle, i, tag, group);
   for (i=1; i < size; i++)
      MPI_SEND(buffer_handle, i, tag, group);
}
else
{
   MPI_SEND(buffer_handle, 0, tag, group);
   MPI_RECV(buffer_handle, 0, tag, group);
}
MPI_FREE(buffer_handle);
\end{verbatim}

\subsubsection{Data move functions}

\paragraph*{Circular shift}

{\bf \ \\ MPI\_CSHIFT( inbuf, outbuf, tag, group, shift)} \\

Process with rank {\tt i} sends the data in its input buffer to the
process with rank $\tt (i+ shift) \bmod  group\_size$, which receives the
data in its output buffer. All processes make the call with the same values for
{\tt tag, group}, and {\tt shift}.  The {\tt shift} value can be positive, zero,
or negative.

\begin{description}
\item[IN inbuf] handle to input buffer descriptor
\item[OUT outbuf] handle to output buffer descriptor
\item[IN tag] operation tag (integer)
\item[IN group] handle to group
\item[IN shift] integer
\end{description}


{\bf \ \\ MPI\_CSHIFTB( inbuf, outbuf, len, tag, group, shift)} \\

Behaves like {\tt MPI\_CSHIFT}, with buffers restricted to be blocks of
numeric units.
All processes make the call with the same values for
{\tt len, tag, group}, and {\tt shift}.
\begin{description}
\item[IN inbuf] initial location of input buffer
\item[OUT outbuf] initial location of output buffer
\item[IN len] number of entries in input (and output) buffers
\item[IN tag] operation tag (integer)
\item[IN group] handle to group
\item[IN shift] integer
\end{description}


{\tt \ \\ MPI\_CSHIFT( inbuf, outbuf, tag, group, shift)} \\ is
\begin{verbatim}
MPI_SIZE( &size, group);
MPI_RANK( &rank, group);
MPI_ISEND( handle, inbuf, mod(rank+shift, size), tag, group);
MPI_RECV( outbuf, mod(rank-shift,size), tag, group)
MPI_WAIT(handle);
\end{verbatim}
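One portability note on the operational definition above: it relies on a true modulus, while C's \% operator truncates toward zero for negative operands, so a negative shift needs a small helper such as:

```c
#include <assert.h>

/* True modulus: always returns a value in [0, size), even when the
   shifted rank is negative. */
static int wrap(int a, int size) {
    int r = a % size;
    return r < 0 ? r + size : r;
}
```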

\discuss{
Do we want to support the case {\tt inbuf = outbuf} somehow?
}

\paragraph*{End-off shift}

{\bf \ \\ MPI\_EOSHIFT( inbuf, outbuf, tag, group, shift)} \\

Process with rank {\tt i}, $\tt \max( 0, -shift) \le i < \min( size, size -
shift)$, sends the data
in its input buffer to the process with rank {\tt i+ shift}, which receives
the data in its output buffer.   The output buffers of processes which do not
receive data are left unchanged.   All processes
make the call with the same values for {\tt tag, group}, and {\tt shift}.

\begin{description}
\item[IN inbuf] handle to input buffer descriptor
\item[OUT outbuf] handle to output buffer descriptor
\item[IN tag] operation tag (integer)
\item[IN group] handle to group
\item[IN shift] integer
\end{description}


{\bf \ \\ MPI\_EOSHIFTB( inbuf, outbuf, len, tag, group, shift)} \\

Behaves like {\tt MPI\_EOSHIFT}, with buffers restricted to be blocks of
numeric units.
All processes make the call with the same values for
{\tt len, tag, group}, and {\tt shift}.
\begin{description}
\item[IN inbuf] initial location of input buffer
\item[OUT outbuf] initial location of output buffer
\item[IN len] number of entries in input (and output) buffers
\item[IN tag] operation tag (integer)
\item[IN group] handle to group
\item[IN shift] integer
\end{description}
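The send/receive rule above can be sketched as two predicates over ranks (an illustration only; {\tt min\_i} and {\tt max\_i} are local helpers):

```c
#include <assert.h>

static int min_i(int a, int b) { return a < b ? a : b; }
static int max_i(int a, int b) { return a > b ? a : b; }

/* Rank i sends iff max(0, -shift) <= i < min(size, size - shift). */
static int eoshift_sends(int i, int size, int shift) {
    return max_i(0, -shift) <= i && i < min_i(size, size - shift);
}

/* Rank i receives iff the rank i - shift is itself a sender. */
static int eoshift_receives(int i, int size, int shift) {
    return eoshift_sends(i - shift, size, shift);
}
```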

\discuss{

Two other possible definitions for end-off shift: (i) zero filling for processes
that don't receive messages, or (ii) boundary values explicitly provided as an
additional parameter.  Any preferences?
(Fortran 90 allows boundary values to be provided optionally, and does zero
filling if none are provided.)

}

\paragraph*{Broadcast}

{\bf \ \\  MPI\_BCAST( buffer\_handle, tag, group, root )} \\

{\tt MPI\_BCAST} broadcasts a message from the process with rank {\tt root} to
all other processes
of the group. It is called by all members of the group using the same arguments
for {\tt tag, group}, and {\tt root}.
On return, the contents of the buffer of the process with rank {\tt root}
are contained in the buffers of all group members.
\begin{description}
\item[INOUT buffer\_handle]  Handle for the buffer from which the message is
sent or into which it is received.
\item[IN tag] tag of communication operation (integer)
\item[IN group] context of communication (handle)
\item[IN root] rank of broadcast root (integer)
\end{description}


{\bf \ \\  MPI\_BCASTB( buf, len, tag, group, root )} \\

{\tt MPI\_BCASTB} behaves like broadcast, restricted to a block buffer.
It is called by all processes with the same arguments for {\tt len, tag, group}
and {\tt root}.
\begin{description}
\item[INOUT buffer]  Starting address of buffer (choice type)
\item[IN len] Number of words in buffer (integer)
\item[IN tag] tag of communication operation (integer)
\item[IN group] context of communication (handle)
\item[IN root] rank of broadcast root (integer)
\end{description}


{\tt \ \\  MPI\_BCAST( buffer\_handle, tag, group, root )} \\
is
\begin{verbatim}
MPI_SIZE( &size, group);
MPI_RANK( &rank, group);
MPI_IRECV(handle, buffer_handle, root, tag, group);
if (rank==root)
   for (i=0; i < size; i++)
      MPI_SEND(buffer_handle, i, tag, group);
MPI_WAIT(handle);
\end{verbatim}

\paragraph*{Gather}

{\bf \ \\ MPI\_GATHER( inbuf, outbuf, tag, group, root, len) } \\

Each process (including the root process) sends the content of its input
buffer to the root process.  The root process concatenates all the
incoming messages in the order of the senders' rank and places the
results in its output buffer.
It is called by all members of group using the same arguments for
{\tt tag, group}, and {\tt root}.   The input buffer of each process may have
different length.
\begin{description}
\item[IN inbuf] handle to input buffer descriptor
\item[OUT outbuf] handle to output buffer descriptor -- significant only at root
(choice)
\item[IN tag] operation tag (integer)
\item[IN group] group handle
\item[IN root] rank of receiving process (integer)
\item[OUT len] difference between output buffer size (in bytes) and
number of bytes received.
\end{description}

\discuss{

It would be more elegant (but no more convenient) to have a return status
object.
}

{\bf \ \\ MPI\_GATHERB( inbuf, inlen, outbuf, tag, group, root) } \\

{\tt MPI\_GATHERB} behaves like {\tt MPI\_GATHER} restricted to block
buffers, and with the additional restriction that all input buffers should
have the same length.   All processes should provide the same values for
{\tt inlen, tag, group}, and {\tt root}.
\begin{description}
\item[IN inbuf] first variable of input buffer (choice)
\item[IN inlen] Number of (word) variables in input buffer (integer)
\item[OUT outbuf] first variable of output buffer -- significant only at
root (choice)
\item[IN tag] operation tag (integer)
\item[IN group] group handle
\item[IN root] rank of receiving process (integer)
\end{description}


{\tt \ \\ MPI\_GATHERB( inbuf, inlen, outbuf, tag, group, root) } \\
is
\begin{verbatim}
MPI_SIZE( &size, group);
MPI_RANK( &rank, group);
MPI_ISENDB(handle, inbuf, inlen, root, tag, group);
if (rank==root)
   for (i=0; i < size; i++)
   {
      MPI_RECVB(outbuf, inlen, i, tag, group, return_status);
      outbuf += inlen;
   }
MPI_WAIT(handle);
\end{verbatim}

\paragraph*{Scatter}

{\bf \ \\ MPI\_SCATTER( list\_of\_inbufs, outbuf, tag, group, root, len)} \\

The root process sends the content of its {\tt i}-th input buffer
to the process with rank {\tt i}; each process (including the root process)
stores the incoming message in its output buffer.
The difference between the size of
the output buffer (in bytes) and the number of bytes received is returned
in {\tt len}.  The routine is called by all members of the group using the same
arguments for {\tt tag, group}, and {\tt root}.
\begin{description}
\item[IN list\_of\_inbufs] list of buffer descriptor handles
\item[OUT outbuf] buffer descriptor handle
\item[IN tag]  operation tag (integer)
\item[IN group] handle
\item[IN root]  rank of sending process (integer)
\item[OUT len]  number of remaining bytes in the output buffer at each process
(integer)
\end{description}


{\tt \ \\ MPI\_SCATTER( list\_of\_inbufs, outbuf, tag, group, root, len)} \\
is
\begin{verbatim}
MPI_SIZE( &size, group);
MPI_RANK( &rank, group);
MPI_IRECV(handle, outbuf, root, tag, group);
if (rank==root)
   for (i=0; i < size; i++)
      MPI_SEND(inbuf[i], i, tag, group);
MPI_WAIT(handle, return_status);
MPI_RETURN_STATUS(return_status, len, source, tag);
\end{verbatim}


{\bf \ \\ MPI\_SCATTERB( inbuf, outbuf, len, tag, group, root)}
\\

{\tt MPI\_SCATTERB} behaves like {\tt MPI\_SCATTER} restricted to block buffers,
and with the additional restriction that all output buffers have the same
length. The input buffer block of the root process is partitioned into
{\tt n} consecutive blocks,
each consisting of {\tt len} words.  The {\tt i}-th block is sent to the
{\tt i}-th process in the group and stored in its output buffer.
The routine is called by all members of the group using the same
arguments for {\tt tag, group, len}, and {\tt root}.
\begin{description}
\item[IN inbuf] first entry in input buffer -- significant only at root
(choice).
\item[OUT outbuf] first entry in output buffer (choice).
\item[IN len]  number of entries to be stored in output buffer (integer)
\item[IN group] handle
\item[IN root]  rank of sending process (integer)
\end{description}


{\tt \ \\ MPI\_SCATTERB( inbuf, outbuf, outlen, tag, group, root) } \\
is
\begin{verbatim}
MPI_SIZE( &size, group);
MPI_RANK( &rank, group);
MPI_IRECVB( handle, outbuf, outlen, root, tag, group);
if (rank==root)
   for (i=0; i < size; i++)
   {
      MPI_SENDB(inbuf, outlen, i, tag, group, return_status);
      inbuf += outlen;
   }
MPI_WAIT(handle);
\end{verbatim}

\paragraph*{All-to-all scatter}

{\bf \ \\ MPI\_ALLSCATTER( list\_of\_inbufs, outbuf, tag, group, len)} \\

Each process in the group sends its {\tt i}-th buffer in its input buffer list
to the process with rank {\tt i} (itself included); each process concatenates
the incoming messages in its output buffer, in the order of the senders' ranks.
The number of bytes left in the output buffer is returned
in {\tt len}.  The routine is called by all members of the group using the same
arguments for {\tt tag} and {\tt group}.
\begin{description}
\item[IN list\_of\_inbufs] list of buffer descriptor handles
\item[OUT outbuf] buffer descriptor handle
\item[IN tag]  operation tag (integer)
\item[IN group] handle
\item[OUT len]  number of remaining bytes in the output buffer (integer)
\end{description}




{\bf \ \\ MPI\_ALLSCATTERB( inbuf, outbuf, len, tag, group)} \\

{\tt MPI\_ALLSCATTERB} behaves like {\tt MPI\_ALLSCATTER} restricted to
block buffers,
and with the additional restriction that all blocks sent from one process
to another have
the same length. The input buffer block of each process is partitioned
into {\tt n} consecutive blocks,
each consisting of {\tt len} words.  The {\tt i}-th block is sent to the
{\tt i}-th process in the group.  Each process concatenates the incoming
messages, in the order of the senders' ranks, and stores them in its output
buffer. The routine is called by all members of the group using the same
arguments for {\tt tag, group}, and {\tt len}.
\begin{description}
\item[IN inbuf] first entry in input buffer (choice).
\item[OUT outbuf] first entry in output buffer (choice).
\item[IN len]  number of entries sent from each process to each other (integer).
\item[IN tag]  operation tag (integer)
\item[IN group] handle
\end{description}


{\tt \ \\ MPI\_ALLSCATTERB( inbuf, outbuf, len, tag, group)} \\ is
\begin{verbatim}
MPI_SIZE( &size, group);
MPI_RANK( &rank, group);
for (i=0; i < size; i++)
   {
    MPI_IRECVB(recv_handle[i], outbuf, len, tag, group);
    outbuf += len;
   }
for (i=0; i < size; i++)
   {
    MPI_ISENDB(send_handle[i], inbuf, len, i, tag, group);
    inbuf += len;
   }
MPI_WAITALL(send_handle);
MPI_WAITALL(recv_handle);
\end{verbatim}

\paragraph*{All-to-all broadcast}

{\bf \ \\ MPI\_ALLCAST( inbuf, outbuf, tag, group, len)} \\

Each process in the group broadcasts its input buffer
to all processes (including itself);
each process concatenates
the incoming messages in its output buffer, in the order of the senders' ranks.
The number of bytes left in the output buffer is returned
in {\tt len}.  The routine is called by all members of the group using the same
arguments for {\tt tag} and {\tt group}.
\begin{description}
\item[IN inbuf] buffer descriptor handle for input buffer
\item[OUT outbuf] buffer descriptor handle for output buffer
\item[IN tag]  operation tag (integer)
\item[IN group] handle
\item[OUT len]  number of remaining untouched bytes in each output buffer
(integer)
\end{description}




{\bf \ \\ MPI\_ALLCASTB( inbuf, outbuf, len, tag, group)} \\

{\tt MPI\_ALLCASTB} behaves like {\tt MPI\_ALLCAST} restricted to
block buffers,
and with the additional restriction that all blocks sent from one process
to another have the same length.
The routine is called by all members of the group using the same
arguments for {\tt tag, group}, and {\tt len}.
\begin{description}
\item[IN inbuf] first entry in input buffer (choice).
\item[OUT outbuf] first entry in output buffer (choice).
\item[IN len]  number of entries sent from each process to each other
(including itself) (integer).
\item[IN tag]  operation tag (integer)
\item[IN group] handle
\end{description}


{\tt \ \\ MPI\_ALLCASTB( inbuf, outbuf, len, tag, group)} \\ is
\begin{verbatim}
MPI_SIZE( &size, group);
MPI_RANK( &rank, group);
for (i=0; i < size; i++)
   {
    MPI_IRECVB(recv_handle[i], outbuf, len, tag, group);
    outbuf += len;
   }
for (i=0; i < size; i++)
   {
    MPI_ISENDB(send_handle[i], inbuf, len, i, tag, group);
   }
MPI_WAITALL(send_handle);
MPI_WAITALL(recv_handle);
\end{verbatim}


\subsubsection{Global Compute Operations}

\paragraph*{Reduce}

{\bf \ \\ MPI\_REDUCE( inbuf, outbuf, tag, group, root, op)} \\

Combines the values provided in the input buffer of each process in the
group, using the operation {\tt op}, and returns the combined value in
the output buffer of the process with rank {\tt root}.
Each process can provide one value, or a sequence of values, in which case the
combine operation is executed pointwise on each entry of the sequence.
For example, if the operation is {\tt max} and each input buffer contains two
floating point numbers, then outbuf(1) $=$ global max(inbuf(1)) and
outbuf(2) $=$ global max(inbuf(2)). All input
buffers should define sequences of equal length of entries of types
that match the type of the operands of {\tt op}.  The
output buffer should define a sequence of the same length of entries of
types that match the type of the result of {\tt op}.
(Note that,
here as for all other communication operations, the types of entries inserted in
a message depend on the information provided by the input buffer descriptor, and
not on the declarations of these variables in the calling program.   The types
of the variables in the calling program need not match the types defined by the
buffer descriptor, but in such a case the outcome of a reduce operation may be
implementation dependent.)

The operation
defined by {\tt op} is associative and commutative, and the implementation can
take advantage of associativity and commutativity in order to change the
order of evaluation.
The routine is called by all group members using the same arguments
for {\tt tag, group, root} and {\tt op}.
\begin{description}
\item[IN inbuf] handle to input buffer
\item[OUT outbuf] handle to output buffer -- significant only at root
\item[IN tag]  operation tag (integer)
\item[IN group] handle to group
\item[IN root] rank of root process (integer)
\item[IN op] operation (status)
\end{description}

We list below the operations supported for Fortran, each with the
corresponding value of the {\tt op} parameter.
\begin{description}
\item[MPI\_IMAX] integer maximum
\item[MPI\_RMAX] real maximum
\item[MPI\_DMAX] double precision real maximum
\item[MPI\_IMIN] integer minimum
\item[MPI\_RMIN] real minimum
\item[MPI\_DMIN] double precision real minimum
\item[MPI\_ISUM] integer sum
\item[MPI\_RSUM] real sum
\item[MPI\_DSUM] double precision real sum
\item[MPI\_CSUM] complex sum
\item[MPI\_DCSUM] double precision complex sum
\item[MPI\_IPROD] integer product
\item[MPI\_RPROD] real product
\item[MPI\_DPROD] double precision real product
\item[MPI\_CPROD] complex product
\item[MPI\_DCPROD] double precision complex product
\item[MPI\_AND] logical and
\item[MPI\_IAND] integer (bit-wise) and
\item[MPI\_OR] logical or
\item[MPI\_IOR] integer (bit-wise) or
\item[MPI\_XOR] logical xor
\item[MPI\_IXOR] integer (bit-wise) xor
\item[MPI\_MAXLOC] rank of process with maximum integer value
\item[MPI\_MAXRLOC] rank of process with maximum real value
\item[MPI\_MAXDLOC] rank of process with maximum double precision real value
\item[MPI\_MINLOC] rank of process with minimum integer value
\item[MPI\_MINRLOC] rank of process with minimum real value
\item[MPI\_MINDLOC] rank of process with minimum double precision real value
\end{description}

{\bf \ \\ MPI\_REDUCEB( inbuf, outbuf, len, tag, group, root, op)} \\

Is the same as {\tt MPI\_REDUCE}, restricted to a block buffer.
\begin{description}
\item[IN inbuf] first location in input buffer
\item[OUT outbuf] first location in output buffer -- significant only at root
\item[IN len] number of entries in input and output buffer (integer)
\item[IN tag]  operation tag (integer)
\item[IN group] handle to group
\item[IN root] rank of root process (integer)
\item[IN op] operation (status)
\end{description}

\discuss{

If we are to be compatible with the point to point block operations, the
{\tt len} parameter should indicate the number of words in buffer.  But it
might be more natural to have {\tt len} indicate the number of entries in
the buffer, so that if the entries are complex or double precision, {\tt
len} will be half the number of words in the buffer.

}


{\bf \ \\ MPI\_USER\_REDUCE( inbuf, outbuf, tag, group, root, function)} \\

Same as the reduce operation above, except that a user
supplied function is used.  {\tt function} is an associative and commutative
function with two arguments.  The types of the two arguments and of the
returned values all agree.
\begin{description}
\item[IN inbuf] handle to input buffer
\item[OUT outbuf] handle to output buffer -- significant only at root
\item[IN tag]  operation tag (integer)
\item[IN group] handle to group
\item[IN root] rank of root process (integer)
\item[IN function] user provided function
\end{description}

{\bf \ \\ MPI\_USER\_REDUCEB( inbuf, outbuf, len, tag, group, root, function)}
\\
Is the same as {\tt MPI\_USER\_REDUCE}, restricted to a block buffer.
\begin{description}
\item[IN inbuf] first location in input buffer
\item[OUT outbuf] first location in output buffer -- significant only at root
\item[IN len] number of entries in input and output buffer (integer)
\item[IN tag]  operation tag (integer)
\item[IN group] handle to group
\item[IN root] rank of root process (integer)
\item[IN function] user provided function
\end{description}


\discuss{

Do we also want a version of reduce that broadcasts the result to all processes
in the group?  (This can be achieved by a reduce followed by a broadcast, but a
combined function may be somewhat more efficient.)

}

\paragraph*{Scan}

{\bf \ \\  MPI\_SCAN( inbuf, outbuf, tag, group, op )} \\

MPI\_SCAN is used to perform a parallel prefix with respect to
an associative reduction operation on data distributed across the group.
The operation returns in the output buffer of the process with rank {\tt i} the
reduction of the values in the input buffers of processes with ranks {\tt
0,...,i}.  The types of operations supported, their semantics, and the
constraints on input and output buffers are as for {\tt MPI\_REDUCE}.
\begin{description}
\item[IN inbuf] handle to input buffer
\item[OUT outbuf] handle to output buffer
\item[IN tag]  operation tag (integer)
\item[IN group] handle to group
\item[IN op] operation (status)
\end{description}

{\bf \ \\  MPI\_SCANB( inbuf, outbuf, len, tag, group, op )} \\
Same as {\tt MPI\_SCAN}, restricted to block buffers.

\begin{description}
\item[IN inbuf] first input buffer element (choice)
\item[OUT outbuf] first output buffer element (choice)
\item[IN len] number of entries in input and output buffer (integer)
\item[IN tag]  operation tag (integer)
\item[IN group] handle to group
\item[IN op] operation (status)
\end{description}


{\bf \ \\  MPI\_USER\_SCAN( inbuf, outbuf, tag, group, function )} \\

Same as the scan operation above, except that a user
supplied function is used.  {\tt function} is an associative and commutative
function with two arguments.  The types of the two arguments and of the
returned values all agree.
\begin{description}
\item[IN inbuf] handle to input buffer
\item[OUT outbuf] handle to output buffer
\item[IN tag]  operation tag (integer)
\item[IN group] handle to group
\item[IN function] user provided function
\end{description}

{\bf \ \\ MPI\_USER\_SCANB( inbuf, outbuf, len, tag, group, function)}
\\
Is the same as {\tt MPI\_USER\_SCAN}, restricted to a block buffer.
\begin{description}
\item[IN inbuf] first location in input buffer
\item[OUT outbuf] first location in output buffer
\item[IN len] number of entries in input and output buffer (integer)
\item[IN tag]  operation tag (integer)
\item[IN group] handle to group
\item[IN function] user provided function
\end{description}

\discuss{

Do we want scan operations executed by segments? (The HPF definition of prefix
and suffix operation might be handy -- in addition to the scanned vector of
values there is a mask that tells where segments start and end.)
}

\missing{

Nonblocking (immediate) collective operations.  The syntax is obvious:   for
each collective operation  {\tt MPI\_op(params)} one may have a new nonblocking
collective operation of the form {\tt MPI\_Iop(handle, params)}, that initiates
the execution of the corresponding operation.  The execution of the operation
is completed by executing {\tt MPI\_WAIT(handle,...)},  {\tt
MPI\_STATUS(handle,...)},  {\tt MPI\_WAITALL}, {\tt MPI\_WAITANY}, or {\tt
MPI\_STATUSANY}.   There are three issues to consider:

(i) The exact definition of the semantics of these operations (in particular,
constraints on order).

(ii) The complexity of implementation (including the complexity of having the
same {\tt WAIT} or {\tt STATUS} functions apply both to point-to-point and to
collective operations).

(iii) The accrued performance advantage.
}

\subsection{Correctness}

\discuss{ This is still very preliminary}

The semantics of the collective communication operations can be derived from
their operational definition in terms of  point-to-point communication.  It is
assumed that messages pertaining to one
operation cannot be confused with messages pertaining to another operation.
Also messages pertaining to two distinct occurrences of the same operation
cannot be confused, if the two occurrences have distinct parameters.
The relevant parameters for this purpose are {\tt group}, {\tt tag}, {\tt
root} and {\tt op}.
The implementer can, of course, use another, more efficient
implementation, as long as it has the same effect.

\discuss{

This statement does not yet apply to the current, incomplete and
somewhat careless definitions I provided in this draft.

The definition above means that messages pertaining to a collective
communication carry information identifying the operation itself, and the
values of the {\tt tag, group} and,
where relevant, {\tt root} or {\tt op} parameters.
Is this acceptable?

}


A few examples:

\begin{verbatim}
MPI_BCAST(buf, len, tag, group, 0);
MPI_BCAST(buf, len, tag, group, 1);
\end{verbatim}

Two consecutive broadcasts, in the same group, with the same tag, but different
roots.  Since the operations are distinguishable, messages from one broadcast
cannot be confused with messages from the other broadcast; the program is safe
and will execute as expected.

\begin{verbatim}
MPI_BCAST(buf, len, tag, group, 0);
MPI_BCAST(buf, len, tag, group, 0);
\end{verbatim}

Two consecutive broadcasts, in the same group, with the same tag and root.
Since point-to-point communication preserves the order of messages, here
too, messages from one broadcast will not be confused with messages from
the other broadcast; the program is safe and will execute as intended.

\begin{verbatim}
MPI_RANK(&rank, group);
if (rank==0)
  {
   MPI_BCASTB(buf, len, tag, group, 0);
   MPI_SENDB(buf, len, 1, tag, group);
  }
else if (rank==1)
  {
   MPI_RECVB(buf, len, MPI_DONTCARE, tag, group);
   MPI_BCASTB(buf, len, tag, group, 0);
   MPI_RECVB(buf, len, MPI_DONTCARE, tag, group);
  }
else
  {
   MPI_SENDB(buf, len, 1, tag, group);
   MPI_BCASTB(buf, len, tag, group, 0);
  }
\end{verbatim}

Process zero executes a broadcast followed by a send to process one;
process two executes a send to process one, followed by a broadcast;
and process one executes a receive, a broadcast and a receive.
A possible outcome is for the operations to be matched as illustrated by the
diagram below.

\begin{verbatim}


    0                       1                      2

                / - >  receive            / - send
              /                         /
broadcast   /         broadcast       /   broadcast
           /                        /
  send   -             receive  < -


\end{verbatim}

The reason is that broadcast is not a synchronous operation; the call at a
process may return before the other processes have entered the broadcast.
Thus, the message sent by process zero can arrive at process one before the
message sent by process two, and before the call to broadcast on process one.

\end{document}



From owner-mpi-collcomm@CS.UTK.EDU  Tue Mar 16 13:43:41 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA25648; Tue, 16 Mar 93 13:43:41 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA12864; Tue, 16 Mar 93 13:43:08 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Tue, 16 Mar 1993 13:43:07 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA12826; Tue, 16 Mar 93 13:42:10 -0500
Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Tue, 16 Mar 93
 10:35 PST
Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA01248; Tue,
 16 Mar 93 10:33:37 PST
Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA07439; Tue, 16 Mar 93 10:33:34
 PST
Date: Tue, 16 Mar 93 10:33:34 PST
From: rj_littlefield@pnlg.pnl.gov
Subject: Re:  Al's COLLCOMM proposal
To: Rolf.Hempel@gmd.de, mpi-collcomm@cs.utk.edu, mpi-ptop@cs.utk.edu
Cc: gmap10@f1neuman.gmd.de, rj_littlefield@pnlg.pnl.gov
Message-Id: <9303161833.AA07439@sodium.pnl.gov>
X-Envelope-To: mpi-ptop@cs.utk.edu, mpi-collcomm@cs.utk.edu

Rolf Hempel writes:

> I agree to most points of Rik
> Littlefield's comments. The only thing which does not convince me yet
> is the explicit caching mechanism. If the information caching is
> handled consistently between the group management and collective
> communication routines (in order to avoid usage of stale group
> information), I still hope that it could be done without showing up
> at the user interface.

Just a point of clarification.  

I do NOT propose that caching be visible at the interface between
the application program and a collective communication routine that
it calls.  The example I provided was perhaps not explicit enough on
this point.  It said:

   efficient_global_op (grphandle, ...)
   struct group_descriptor_type *grphandle;
     <and so on>

I intended "..." to mean only the arguments that would be provided
to any collective communication routine, e.g., data buffer, number
of elements, and so on.  Nothing about caching there.

I think Rolf would agree that the standard collective communication
routines need an internal facility like this to coordinate with the
standard group management routines, if they are to achieve high
efficiency.  

My proposal is essentially to standardize and export that facility so
as to permit new collective communication routines to run as
efficiently as the built-ins.  In this vein, you may wish to think of
standardized caching as a feature to increase MPI's extensibility.

--Rik

----------------------------------------------------------------------
rj_littlefield@pnl.gov (alias 'd39135')   Rik Littlefield
Tel: 509-375-3927                         Pacific Northwest Lab, MS K1-87
Fax: 509-375-6631                         P.O.Box 999, Richland, WA  99352
From owner-mpi-collcomm@CS.UTK.EDU  Tue Mar 16 14:02:53 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA26216; Tue, 16 Mar 93 14:02:53 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA13784; Tue, 16 Mar 93 14:02:08 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Tue, 16 Mar 1993 14:02:07 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from watson.ibm.com by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA13776; Tue, 16 Mar 93 14:02:06 -0500
Message-Id: <9303161902.AA13776@CS.UTK.EDU>
Received: from YKTVMV by watson.ibm.com (IBM VM SMTP V2R3) with BSMTP id 4245;
   Tue, 16 Mar 93 14:02:05 EST
Date: Tue, 16 Mar 93 14:01:00 EST
From: "Marc Snir" <snir@watson.ibm.com>
X-Addr: (914) 945-3204  (862-3204)
        28-226 IBM T.J. Watson Research Center
        P.O. Box 218 Yorktown Heights NY 10598
To: mpi-collcomm@cs.utk.edu
Subject: draft by Geist and Snir
Reply-To: SNIR@watson.ibm.com

Next message will be the postscript file, for nonlatexers.
From owner-mpi-collcomm@CS.UTK.EDU  Tue Mar 16 14:04:18 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA26260; Tue, 16 Mar 93 14:04:18 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA13847; Tue, 16 Mar 93 14:03:24 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Tue, 16 Mar 1993 14:03:21 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from watson.ibm.com by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA13831; Tue, 16 Mar 93 14:03:16 -0500
Message-Id: <9303161903.AA13831@CS.UTK.EDU>
Received: from YKTVMV by watson.ibm.com (IBM VM SMTP V2R3) with BSMTP id 4261;
   Tue, 16 Mar 93 14:03:15 EST
Date: Tue, 16 Mar 93 14:03:14 EST
From: "Marc Snir" <snir@watson.ibm.com>
To: MPI-COLLCOMM@CS.UTK.EDU

AB1201EA07C0B45A90C7FC127C13277DA21A>125 D E /Fd 56 123 df<903807F83F017FB512
C03A01FC0FE3E03903F01FC7EA07E0D80FC01387ED83C0ED8000A6B612FCA2390FC01F80B2397F
F8FFF8A223237FA221>11 D<13181330136013C01201EA0380120713005A121EA2123E123CA212
7CA3127812F8AD1278127CA3123CA2123E121EA27E7E13801203EA01C012001360133013180D31
7BA416>40 D<12C012607E7E121C7E120F7E1380EA03C0A213E01201A213F0A3120013F8AD13F0
1201A313E0A2120313C0A2EA078013005A120E5A12185A5A5A0D317DA416>I<1238127C12FE12
FFA2127F123B1203A212071206A2120C121C12181270122008117C8610>44
D<EAFFFCA50E057F8D13>I<1238127C12FEA3127C123807077C8610>I<13181378EA01F812FFA2
1201B3A7387FFFE0A213207C9F1C>49 D<EA03FCEA0FFF383C1FC0387007E0007C13F0EAFE0314
F8A21301127CEA3803120014F0A2EB07E014C0EB0F80EB1F00133E13385BEBE018EA01C0EA0380
EA0700000E1338380FFFF05A5A5AB5FCA215207D9F1C>I<EA01FE3807FFC0380F07E0381E03F0
123FEB01F813811301EA1F03000C13F0120014E0EB07C0EB1F803801FE007F380007C0EB01F014
F8EB00FCA214FE127CA212FEA214FCEA7C01007813F8383C07F0380FFFC03803FE0017207E9F1C
>I<1470A214F8A3497EA2497EA3EB06FF80010E7FEB0C3FA201187F141F01387FEB300FA20160
7F140701E07F90B5FCA239018001FCA200038090C7FCA20006147FA23AFFE00FFFF8A225227EA1
2A>65 D<B67E15E03907F001F86E7E157EA2157FA5157E15FE5DEC03F890B55AA29038F001FCEC
007E811680151F16C0A6ED3F80A2ED7F00EC01FEB612F815C022227EA128>I<D903FE13809038
1FFF819038FF01E33901F8003FD803E0131F4848130F48481307121F48C71203A2481401127EA2
00FE91C7FCA8127EED0180127F7E15036C6C1400120F6C6C1306D803F05B6C6C13386CB413F090
381FFFC0D903FEC7FC21227DA128>I<B67E15F03907F003FCEC007E81ED1F80ED0FC0ED07E0A2
16F01503A316F8A916F0A3ED07E0A2ED0FC0ED1F80ED3F00157EEC03FCB612F0158025227EA12B
>I<B612FCA23807F000153C151C150C150EA215061418A3150014381478EBFFF8A2EBF0781438
1418A21503A214001506A3150EA2151E153EEC01FCB6FCA220227EA125>I<B612F8A23807F001
EC007815381518151CA2150CA21418A21500A214381478EBFFF8A2EBF07814381418A491C7FCA8
B512E0A21E227EA123>I<D903FE134090391FFFC0C090387F00F1D801F8133F4848130FD807C0
1307000F1403485A48C71201A2481400127EA200FE1500A791380FFFFC127E007F9038001FC0A2
7EA26C7E6C7E6C7E6C7ED801FC133F39007F80E790381FFFC30103130026227DA12C>I<B53883
FFFEA23A07F0001FC0AD90B6FCA29038F0001FAFB53883FFFEA227227EA12C>I<B512E0A23803
F800B3ACB512E0A213227FA115>I<B538803FFCA23A07F0000380ED0700150E15185D15E04A5A
4A5A4AC7FC140E1418143814FCEBF1FE13F3EBF77F01FE7FEBF83F496C7E81140F6E7E8114036E
7E816E7E811680ED3FC0B53883FFFCA226227EA12C>75 D<B512E0A2D807F0C7FCB31518A41538
A21570A215F014011407B6FCA21D227EA122>I<D8FFF0EC0FFF6D5C000716E0D806FC1437A301
7E1467A26D14C7A290391F800187A290390FC00307A3903807E006A2903803F00CA2903801F818
A3903800FC30A2EC7E60A2EC3FC0A2EC1F80A3EC0F00D8FFF091B5FC140630227EA135>I<D8FF
F8EB1FFE7F0007EC00C07FEA06FF6D7E6D7E6D7E130F806D7E6D7E6D7E130080EC7F80EC3FC0EC
1FE0EC0FF0140715F8EC03FCEC01FEEC00FF157FA2153F151F150F15071503A2D8FFF013011500
27227EA12C>I<EB07FC90383FFF809038FC07E03903F001F848486C7E4848137E48487FA248C7
EA1F80A24815C0007E140FA200FE15E0A9007E15C0007F141FA26C15806D133F001F15006C6C13
7E6C6C5B6C6C485A3900FC07E090383FFF80D907FCC7FC23227DA12A>I<B6FC15E03907F007F0
EC01FC1400157EA2157FA5157EA215FC1401EC07F090B512E0150001F0C7FCADB57EA220227EA1
26>I<B512FEECFFC03907F007F0EC01F86E7E157E157FA6157E5D4A5AEC07F090B512C05D9038
F00FE06E7E6E7E6E7EA81606EC00FEEDFF0CB538803FF8ED0FF027227EA12A>82
D<3801FC043807FF8C381F03FC383C007C007C133C0078131CA200F8130CA27E1400B4FC13E06C
B4FC14C06C13F06C13F86C13FC000313FEEA003FEB03FFEB007F143FA200C0131FA36C131EA26C
133C12FCB413F838C7FFE00080138018227DA11F>I<007FB61280A2397E03F80F007814070070
14030060140100E015C0A200C01400A400001500B3A20003B512F8A222227EA127>I<B538803F
FCA23A07F0000180B3A60003EC03007F000114066C6C130E017E5B90383F80F890380FFFE00101
90C7FC26227EA12B>I<B53A0FFFF01FFEA2260FF00090C712E000076E14C0A26C6C9138800180
153F6D1503000103C01300A26C6C90387FE006156F7F6D9038C7F00CA20280EBF81C90263F8183
1318A2D91FC36D5A150114E3903A0FE600FE60A202F6EBFFE0D907FC6D5AA201035D4A133FA26D
486DC7FCA20100141E4A130EA237227FA13A>87 D<3A7FFFC1FFF0A23A03FC000C006C6C5B0000
14386D5B90387F8060013F5B14C190381FE380010F90C7FC14F7EB07FE6D5AA26D7E1300808149
7F14BF9038031FE0496C7E130E90380C07F8496C7E133890383001FE496C7E13E04848EB7F8049
EB3FC03AFFFC03FFFEA227227FA12A>I<B538800FFEA2D807F8C712C015016C6C14806C6CEB03
005D6C6C13065D90387F801C90383FC0185D90381FE07090380FF06015E06D6C5A903803FD8014
FF6D90C7FC5C1300AC90381FFFF0A227227FA12A>I<003FB512E0A29038801FC0383E003F003C
14800038EB7F00485B5C1301386003FC5C130700005B495A131F5C133F495A91C7FC5B49136048
5A12035B000714E0485A5B001FEB01C013C0383F8003007F1307EB003FB6FCA21B227DA122>I<
EA07FC381FFF80383F0FC0EB07E0130314F0121E1200A213FF1207EA1FC3EA3F03127E12FCA4EA
7E07EB1DF8381FF8FF3807E07F18167E951B>97 D<B47EA2121FABEB8FE0EBBFF8EBF07CEBC01E
EB801FEC0F80A215C0A81580141F1500EBC03EEB607C381E3FF8381C0FC01A237EA21F>I<EBFF
80000713E0380F83F0EA1F03123E127E387C01E090C7FC12FCA6127C127EA2003E13306C136038
0FC0E03807FF803800FE0014167E9519>I<EB03FEA2EB007EABEA01FCEA07FF380F81FEEA1F00
003E137E127E127C12FCA8127CA27E001E13FEEA0F833907FF7FC0EA01FC1A237EA21F>I<13FE
3807FF80380F87C0381E01E0003E13F0EA7C0014F812FCA2B5FCA200FCC7FCA3127CA2127E003E
13186C1330380FC0703803FFC0C6130015167E951A>I<EB3F80EBFFC03801F3E0EA03E7EA07C7
120FEBC3C0EBC000A6EAFFFCA2EA0FC0B2EA7FFCA213237FA211>I<3801FE1F0007B51280380F
87E7EA1F03391E01E000003E7FA5001E5BEA1F03380F87C0EBFF80D819FEC7FC0018C8FC121CA2
381FFFE014F86C13FE80123F397C003F8048131F140FA3007CEB1F00007E5B381F80FC6CB45A00
0113C019217F951C>I<B47EA2121FABEB87E0EB9FF8EBB8FCEBE07CEBC07EA21380AE39FFF1FF
C0A21A237EA21F>I<120E121FEA3F80A3EA1F00120EC7FCA7EAFF80A2121FB2EAFFF0A20C247F
A30F>I<B47EA2121FABECFF80A2EC38005C14C0EB83800187C7FC138E139E13BE13FFEBDF80EB
8FC0A2EB87E0EB83F0A2EB81F8EB80FC147E39FFF1FFC0A21A237EA21E>107
D<EAFF80A2121FB3ADEAFFF0A20C237FA20F>I<3AFF87F00FE090399FFC3FF83A1FB87E70FC90
39E03EC07C9039C03F807EA201801300AE3BFFF1FFE3FFC0A22A167E952F>I<38FF87E0EB9FF8
381FB8FCEBE07CEBC07EA21380AE39FFF1FFC0A21A167E951F>I<13FE3807FFC0380F83E0381E
00F0003E13F848137CA300FC137EA7007C137CA26C13F8381F01F0380F83E03807FFC03800FE00
17167E951C>I<38FF8FE0EBBFF8381FF07CEBC03E497E1580A2EC0FC0A8EC1F80A2EC3F00EBC0
3EEBE0FCEBBFF8EB8FC00180C7FCA8EAFFF0A21A207E951F>I<EAFF1FEB3FC0381F67E013C7A3
EB83C0EB8000ADEAFFF8A213167E9517>114 D<EA07F3EA1FFFEA780FEA7007EAF003A26CC7FC
B4FC13F0EA7FFC6C7E6C7E120738003F80EAC00F130712E0A200F01300EAFC1EEAEFFCEAC7F011
167E9516>I<13C0A41201A212031207120F121FB5FCA2EA0FC0ABEBC180A51207EBE300EA03FE
C65A11207F9F16>I<38FF83FEA2381F807EAF14FEA2EA0F833907FF7FC0EA01FC1A167E951F>I<
39FFF01FE0A2390FC00600A2EBE00E0007130CEBF01C0003131813F800015BA26C6C5AA2EB7EC0
A2137F6D5AA26DC7FCA2130EA21B167F951E>I<3AFFE3FF87F8A23A1F807C00C0D80FC0EB0180
147E13E0000790387F030014DF01F05B00031486EBF18FD801F913CC13FB9038FF07DC6C14F8EB
FE03017E5BA2EB7C01013C5BEB380001185B25167F9528>I<39FFF07FC0A2390FC01C006C6C5A
6D5A6C6C5A00015B3800FD80017FC7FCA27F6D7E497E80EB67F013E33801C1F8380381FC48C67E
000E137E39FF81FFE0A21B167F951E>I<39FFF01FE0A2390FC00600A2EBE00E0007130CEBF01C
0003131813F800015BA26C6C5AA2EB7EC0A2137F6D5AA26DC7FCA2130EA2130CA25B1278EAFC38
13305BEA69C0EA7F80001FC8FC1B207F951E>I<387FFFF0A2387C07E038700FC0EA601F00E013
8038C03F005B13FEC65A1201485AEBF0301207EA0FE0EBC070EA1F80003F1360EB00E0EA7E03B5
FCA214167E9519>I E /Fe 48 124 df<90380FC3E090387FEFF09038E07C783801C0F8D80380
13303907007000A7B61280A23907007000B0387FE3FFA21D20809F1B>11
D<EB1F80EB7FC03801E0E0EA0381A2EA070190C7FCA6B512E0A2EA0700B0387FC3FEA21720809F
19>I<90380F80F890387FE7FE9038E06E063901C0FC0F380380F8380700F00270C7FCA6B7FCA2
3907007007B03A7FE3FE3FF0A22420809F26>14 D<127012F812FCA2127C120CA31218A2123812
3012601240060E7C9F0D>39 D<136013C0EA0180EA03005A12065A121C12181238A212301270A3
1260A212E0AC1260A21270A312301238A21218121C120C7E12077EEA0180EA00C013600B2E7DA1
12>I<12C012607E7E121C120C7E12077E1380A2120113C0A31200A213E0AC13C0A21201A31380
1203A213005A12065A121C12185A5A5A0B2E7DA112>I<127012F812FCA2127C120CA31218A212
38123012601240060E7C840D>44 D<EAFFC0A30A037F8A0F>I<127012F8A3127005057C840D>I<
EA03F0EA0FFCEA1E1EEA1C0E487E00781380EA7003A300F013C0AD00701380A3EA780700381300
EA1C0EEA1E1EEA0FFCEA03F0121F7E9D17>48 D<EA03F0487EEA1E1CEA380E7F1270EB038012F0
A214C0A5EA7007A2EA380F121CEA1FFBEA07F338000380A2130714001230EA780EA2EA701CEA30
78EA1FF0EA0FC0121F7E9D17>57 D<127012F8A312701200AA127012F8A3127005147C930D>I<
EA0FC0EA3FF0EA7078EA6038EAE03C12F0A212601200137813F013E0EA01C0138012031300A7C7
FCA51207EA0F80A3EA07000E207D9F15>63 D<EB0380A3497EA3EB0DE0A3EB18F0A3EB3078A349
7EA3EBE01E13C0EBFFFE487FEB800FA200031480EB0007A24814C01403EA0F8039FFE03FFEA21F
207F9F22>65 D<B512E014F83807803E80801580A515005C143E5CEBFFF880EB801E8015801407
15C0A51580140FEC1F00143EB512FC14F01A1F7E9E20>I<B512E014FC3807803E140FEC0780EC
03C015E0140115F01400A215F8A915F0A2140115E0A2EC03C0EC0780EC0F00143EB512FC14E01D
1F7E9E23>68 D<B6FCA23807801F140780A215801401A214C1A2ECC000A2138113FFA213811380
A491C7FCA8EAFFFEA2191F7E9E1E>70 D<39FFF8FFF8A23907800F00AC90B5FCA2EB800FAD39FF
F8FFF8A21D1F7E9E22>72 D<EAFFFCA2EA0780B3A9EAFFFCA20E1F7F9E10>I<39FF807FF813C0
0007EB07809038E00300A2EA06F0A21378133CA2131EA2130FA2EB078314C31303EB01E3A2EB00
F3A2147BA2143F80A280A2000F7FEAFFF0801D1F7E9E22>78 D<B512E014F83807807C141E141F
801580A515005C141E147CEBFFF814E00180C7FCACEAFFFCA2191F7E9E1F>80
D<007FB512E0A238780F010070130000601460A200E0147000C01430A400001400B23807FFFEA2
1C1F7E9E21>84 D<EA1FE0487EEA78387FEA300E1200A3EA03FE121FEA3E0E127812F800F01330
A3131E38783F70383FEFE0380F878014147E9317>97 D<120E12FEA2120EA9133FEBFF80380FC3
C0EB00E0000E13F014701478A7147014F0120FEB01E0EBC3C0380CFF80EB3E0015207F9F19>I<
EA03F8EA0FFCEA1E1E123CEA380CEA7800127012F0A612701278EA3803123CEA1F0EEA0FFCEA03
F010147E9314>I<EB0380133FA21303A9EA03E3EA0FFBEA1E0FEA3C07EA7803A2127012F0A612
70A2EA78071238EA1E1F380FFBF8EA03E315207E9F19>I<EA03F0EA0FFCEA1E1E487EEA380712
783870038012F0B5FCA200F0C7FCA31270127838380180EA1C03380F0700EA07FEEA01F811147F
9314>I<133C13FEEA01CFEA038F1306EA0700A7EAFFF0A2EA0700B0EA7FF0A21020809F0E>I<EB
01E03803E3F0380FFF70EA1C1C383C1E00EA380EEA780FA4EA380EEA3C1EEA1C1CEA3FF8EA33E0
0030C7FCA21238EA3FFE381FFF804813C0387003E0EB00F0481370A36C13F0387801E0383E07C0
380FFF00EA03FC141F7F9417>I<120E12FEA2120EA9133E13FF380FC380EB01C0A2120EAD38FF
E7FCA216207F9F19>I<121C121E123E121E121CC7FCA6120E127EA2120EAFEAFFC0A20A1F809E
0C>I<13E0EA01F0A3EA00E01300A61370EA07F0A212001370B3A21260EAF0E0EAF1C0EA7F80EA
3E000C28829E0E>I<120E12FEA2120EA9EB1FF0A2EB0F80EB0E00130C5B5B137013F0EA0FF813
38EA0E1C131E130E7F1480130314C038FFCFF8A215207F9F18>I<120E12FEA2120EB3A9EAFFE0
A20B20809F0C>I<390E3F03F039FEFF8FF839FFC1DC1C390F80F80EEB00F0000E13E0AD3AFFE7
FE7FE0A223147F9326>I<EA0E3EEAFEFF38FFC380380F01C0A2120EAD38FFE7FCA216147F9319>
I<EA01F8EA07FE381E0780383C03C0EA3801387000E0A200F013F0A6007013E0EA7801003813C0
EA3C03381E07803807FE00EA01F814147F9317>I<EA0E3F38FEFF8038FFC3C0380F01E0380E00
F0A21478A7147014F0120FEB01E0EBC3C0380EFF80EB3E0090C7FCA7EAFFE0A2151D7F9319>I<
EA0E78EAFEFCEAFF9EEA0F1E130C1300120EACEAFFE0A20F147F9312>114
D<EA1F90EA3FF0EA7070EAE030A3EAF0001278EA7F80EA3FE0EA0FF01200EAC0781338A212E0A2
EAF070EADFE0EA8F800D147E9312>I<1206A4120EA2121E123EEAFFF8A2EA0E00AA1318A5EA07
3013E0EA03C00D1C7F9B12>I<380E01C0EAFE1FA2EA0E01AC1303A2EA070FEBFDFCEA01F11614
7F9319>I<38FF87F8A2381E01E0000E13C01480A238070300A3EA0386A2138EEA01CCA213FC6C
5AA21370A315147F9318>I<39FF9FF3FCA2391C0780F01560ECC0E0D80E0F13C0130C14E00007
EBE180EB186114713903987300EBB033A2143F3801F03EEBE01EA20000131CEBC00C1E147F9321
>I<387FC7FCA2380703E0148038038300EA01C7EA00EE13EC13781338133C137C13EEEA01C713
8738030380380701C0000F13E038FF87FEA21714809318>I<38FF87F8A2381E01E0000E13C014
80A238070300A3EA0386A2138EEA01CCA213FC6C5AA21370A31360A35B12F0EAF18012F3007FC7
FC123C151D7F9318>I<EA3FFFA2EA380EEA301CEA703CEA6038137013F0EA01E013C0EA0380EA
0783EA0F03120EEA1C07EA3C061238EA701EEAFFFEA210147F9314>I<B512FCA21602808C17>I
E /Ff 10 118 df<1238127C12FEA3127C12381200A61238127C12FEA3127C123807147C930F>
58 D<B512FEECFFC03907F007F0EC01F86E7E157E81A2ED1F80A316C0A91680A3ED3F00A2157E
5D4A5AEC07F0B612C04AC7FC221F7E9E28>68 D<D8FFF0EC7FF86D14FF00071600D806FCEB01BF
A3017EEB033FA26D1306A290381F800CA390380FC018A2903807E030A2903803F060A3903801F8
C0A2903800FD80A2EC7F00A2143EA33BFFF01C07FFF8A22D1F7E9E32>77
D<EA01FE3807FF80381F0FC0123EA2127CEB030000FCC7FCA6127C127E003E1360003F13C0EA1F
813807FF00EA01FC13147E9317>99 D<3801FC3C3807FFFE380F07DEEA1E03003E13E0A5001E13
C0380F0780EBFF00EA19FC0018C7FCA2121C381FFF8014F06C13F8003F13FC387C007C0070133E
00F0131EA30078133CA2383F01F8380FFFE000011300171E7F931A>103
D<121C123F5AA37E121CC7FCA6B4FCA2121FB0EAFFE0A20B217EA00E>105
D<38FE0FC0EB3FE0381E61F0EBC0F8EA1F801300AD38FFE3FFA218147D931D>110
D<48B4FC000713C0381F83F0383E00F8A248137CA200FC137EA6007C137CA26C13F8A2381F83F0
3807FFC00001130017147F931A>I<EA0FE6EA3FFEEA701EEA600EEAE006A2EAF800EAFFC0EA7F
F8EA3FFCEA1FFE1203EA001FEAC007A212E0EAF006EAF81EEAFFFCEAC7F010147E9315>115
D<38FF07F8A2EA1F00AD1301A2EA0F073807FEFFEA03F818147D931D>117
D E /Fg 3 21 df<B612FCA21E027C8C27>0 D<EA03F0EA0FFC487E487E481380A2B512C0A86C
1380A26C13006C5A6C5AEA03F012147D9519>15 D<150C153C15F0EC03C0EC0F00143C14F0EB07
C0011FC7FC1378EA01E0EA0780001EC8FC127812E01278121EEA0780EA01E0EA0078131FEB07C0
EB00F0143C140FEC03C0EC00F0153C150C1500A8B612FCA21E277C9F27>20
D E /Fh 70 124 df<90380F83E090387FE7F09038F07E783801C0F8EA0380EC7000EA0700A8B6
12C0A23907007000B1397FE3FF80A21D2380A21C>11 D<EB0FC0EB3FE0EBF0703801C038380380
78A23807003091C7FCA7B512F8A2380700781438B0397FE1FF80A2192380A21B>I<EB0FF8133F
EBF078EA01C0EA03801438EA0700A8B512F8A238070038B1397FF3FF80A2192380A21B>I<9038
07E03F90393FF0FF809039F03BC1C03A01C01F00E03903803E01A23A07001C00C01600A7B712E0
A23907001C011500B03A7FF1FFCFFEA2272380A229>I<127012F812FCA2127C120CA41218A212
30A212601240060F7CA20E>39 D<1330136013C0EA0180EA03005A1206120E120C121C12181238
A212301270A3126012E0AE12601270A312301238A21218121C120C120E120612077EEA0180EA00
C0136013300C327DA413>I<12C012607E7E7E120E120612077E1380120113C0A2120013E0A313
601370AE136013E0A313C01201A21380120313005A1206120E120C5A5A5A5A0C327DA413>I<49
7EB0B612FEA23900018000B01F227D9C26>43 D<127012F812FCA2127C120CA41218A21230A212
601240060F7C840E>I<EAFFE0A30B037F8B10>I<127012F8A3127005057C840E>I<EB0180A213
031400A25B1306A2130E130CA2131C1318A313381330A213701360A213E05BA212015BA2120390
C7FCA25A1206A2120E120CA3121C1218A212381230A212701260A212E05AA211317DA418>I<EA
01F0EA07FCEA0E0E487E38380380A2007813C0EA7001A300F013E0AE007013C0A3EA7803003813
80A2381C0700EA0E0EEA07FCEA01F013227EA018>I<EA01801203120F12FF12F31203B3A8EAFF
FEA20F217CA018>I<EA03F0EA0FFCEA1C1F38300F80EA6007EB03C012C000F013E0EAF801A3EA
2003120014C0A2EB0780A2EB0F00131E131C5B5B5B485A485A38070060120E120C4813E04813C0
EA7FFFB5FCA213217EA018>I<EA03F0EA0FFCEA1C1F383007801270007813C0A21303EA380712
001480A2EB0F00130E133CEA03F8A2EA001E7FEB078014C0130314E01220127012F8A200F013C0
1260EB07801230381C1F00EA0FFCEA03F013227EA018>I<130EA2131EA2133EA2136E13EE13CE
1201138EEA030E12071206120E120C1218A212301270126012E0B512F8A238000E00A73801FFF0
A215217FA018>I<00101380EA1C07381FFF005B5B13F00018C7FCA613F8EA1BFEEA1F0F381C07
80EA180314C0EA000114E0A4126012F0A214C0EAC0031260148038300700EA1C1EEA0FFCEA03F0
13227EA018>I<137E48B4FC3803C180380701C0EA0E03121CEB018048C7FCA2127812701320EA
F1FCEAF3FEEAF60738FC038000F813C0130112F014E0A51270A3003813C0130300181380381C07
00EA0E0EEA07FCEA01F013227EA018>I<12601270387FFFE0A214C0EA600038E0018038C00300
A21306C65AA25BA25BA25BA213E0A3485AA51203A86C5A13237DA118>I<EA01F0EA07FCEA0E0F
38180780EA3803383001C01270A31278EB0380123E383F0700EA1FCEEA0FFCEA03F87FEA0F7F38
1C3F80EA380F387007C0130338E001E01300A5387001C0A238380380381E0F00EA0FFEEA03F013
227EA018>I<EA01F0EA07FCEA0E0E487E383803801278127038F001C0A314E0A5127013031278
EA3807EA1C0DEA0FF9EA07F1380081C0130113031480A2383007001278130EEA701C6C5AEA1FF0
EA0FC013227EA018>I<127012F8A312701200AB127012F8A3127005157C940E>I<127012F8A312
701200AB127012F8A312781218A41230A3126012E01240051F7C940E>I<B612FEA2C9FCA8B612
FEA21F0C7D9126>61 D<497E497EA3497EA3497E130CA2EB1CF8EB1878A2EB383C1330A2497EA3
497EA348B51280A2EB800739030003C0A30006EB01E0A3000EEB00F0001F130139FFC00FFFA220
237EA225>65 D<B512F814FE3907800F80EC07C0EC03E0140115F0A515E01403EC07C0EC0F8090
B512005C9038801F80EC07C0EC03E0EC01F0140015F8A6EC01F0140315E0EC0FC0B6120014FC1D
227EA123>I<90380FE01090383FF8309038F81C703801E0063903C003F03807800148C7FC121E
003E1470123C127C15301278A212F81500A700781430A2127CA2003C1460123E121E6C14C06C7E
3903C001803901E003003800F80EEB3FF8EB0FE01C247DA223>I<B512F014FE3807801FEC07C0
1403EC01E0EC00F015F81578157C153CA3153EA9153CA2157C1578A215F0EC01E01403EC07C0EC
1F00B512FE14F81F227EA125>I<B612C0A23807800F14031401140015E0A215601460A3150014
E0138113FFA2138113801460A21518A214001530A4157015F01401EC07E0B6FCA21D227EA121>
I<B612C0A23807800F14031401140015E0A21560A21460A21500A214E0138113FFA21381138014
60A491C7FCA8EAFFFEA21B227EA120>I<903807F00890383FFC189038FC0E383801E0033903C0
01F83807800048C71278121E15385AA2007C14181278A212F81500A6EC1FFF1278007CEB0078A2
123CA27EA27E6C7E6C6C13F83801F0013900FC079890383FFE08903807F80020247DA226>I<39
FFFC3FFFA239078001E0AD90B5FCA2EB8001AF39FFFC3FFFA220227EA125>I<EAFFFCA2EA0780
B3ACEAFFFCA20E227EA112>I<EAFFFEA2EA0780B3EC0180A41403A215005CA25C143FB6FCA219
227EA11E>76 D<D8FFC0EB03FF6D5B000715E0A2D806F0130DA301781319A36D1331A36D1361A3
6D13C1A29038078181A3903803C301A3EB01E6A3EB00FCA31478EA1F80D8FFF0EB3FFF14302822
7EA12D>I<39FF800FFF13C00007EB01F89038E000607F12061378A27F133E131E7FA2EB078014
C01303EB01E0A2EB00F01478A2143CA2141E140FA2EC07E0A214031401A2381F8000EAFFF01560
20227EA125>I<EB0FE0EB7FFCEBF83E3903E00F8039078003C0390F0001E0A2001EEB00F0003E
14F8003C1478007C147CA20078143CA200F8143EA9007C147CA3003C1478003E14F8001E14F06C
EB01E0EB80033907C007C03903E00F803900F83E00EB7FFCEB0FE01F247DA226>I<B512F014FC
3807803FEC0F801407EC03C0A215E0A515C0A2EC0780140FEC3F00EBFFFC14F00180C7FCADEAFF
FCA21B227EA121>I<B512E014F83807803E140F6E7E816E7EA64A5A5D4AC7FC143EEBFFF85CEB
80788080140E140FA481A3ED818015C114073AFFFC03E300EC01FEC8127C21237EA124>82
D<3803F020380FFC60381C0EE0EA3803EA7001A2EAE000A21460A36C1300A21278127FEA3FF0EA
1FFE6C7E0003138038003FC0EB07E01301EB00F0A2147012C0A46C136014E06C13C0EAF80138EF
038038C7FF00EA81FC14247DA21B>I<007FB512F8A2387C07800070143800601418A200E0141C
00C0140CA500001400B3A20003B5FCA21E227EA123>I<3BFFF03FFC07FEA23B0F0007C001F002
03EB00E01760D807806D13C0A33B03C007F001801406A216032701E00C781300A33A00F0183C06
A3903978383E0CEC301EA2161C90393C600F18A390391EC007B0A3010F14E0EC8003A36D486C5A
A32F237FA132>87 D<EA0FE0EA1FF8EA3C1C7FEA18071200A25BEA03FF120FEA3F07127C127812
F01418A2130F1278387C3FB8383FF3F0380FC3C015157E9418>97 D<120E12FEA2121E120EAAEB
1F80EB7FE0380FC0F0EB0078000E1338143C141C141EA7141C143C000F1338EB8070EBC1F0380C
7FC0EB1F0017237FA21B>I<EA01FEEA07FF380F0780121C383803000078C7FC127012F0A71278
14C07E381E0180380F0300EA07FEEA01F812157E9416>I<14E0130FA213011300AAEA03F0EA07
FEEA1F07EA3C01EA38001278127012F0A712701278EA3801EA3C03381E0EF0380FFCFEEA03F017
237EA21B>I<EA01FCEA07FF380F0780381C03C0EA3801007813E0EA7000B5FCA200F0C7FCA512
7814607E6C13C0380F83803807FF00EA00FC13157F9416>I<133C13FEEA01CFEA038FA2EA0700
A9EAFFF8A2EA0700B1EA7FF8A2102380A20F>I<14F03801F1F83807FFB8380F1F38381E0F00EA
1C07003C1380A5001C1300EA1E0FEA0F1EEA1FFCEA19F00018C7FCA2121CEA1FFF6C13C04813E0
383801F038700070481338A400701370007813F0381E03C0380FFF803801FC0015217F9518>I<
120E12FEA2121E120EAAEB1F80EB7FC0380FC1E0EB80F0EB0070120EAE38FFE7FFA218237FA21B
>I<121C121E123E121E121CC7FCA8120E12FEA2121E120EAFEAFFC0A20A227FA10E>I<EA01C0EA
03E0A3EA01C0C7FCA8EA01E0120FA212011200B3A4EA60C012F11380EA7F00123E0B2C82A10F>
I<120E12FEA2121E120EAAEB0FFCA2EB07E0EB0380EB0700130E13185B137813F8EA0F9C131EEA
0E0E7F1480EB03C0130114E014F038FFE3FEA217237FA21A>I<120E12FEA2121E120EB3ABEAFF
E0A20B237FA20E>I<390E1FC07F3AFE7FE1FF809039C0F303C03A1F807E01E0390F003C00000E
1338AE3AFFE3FF8FFEA227157F942A>I<380E1F8038FE7FC038FFC1E0381F80F0380F0070120E
AE38FFE7FFA218157F941B>I<EA01FCEA07FF380F0780381C01C0383800E0007813F000701370
00F01378A700701370007813F0003813E0381C01C0380F07803807FF00EA01FC15157F9418>I<
380E1F8038FE7FE038FFC1F0380F0078120E143CA2141EA7143CA2000F1378EB8070EBC1F0380E
7FC0EB1F0090C7FCA8EAFFE0A2171F7F941B>I<3801F060EA07FCEA1F06381C03E0EA3C01EA78
00A25AA712781301123C1303EA1F0EEA0FFCEA03F0C7FCA8EB0FFEA2171F7E941A>I<EA0E3CEA
FEFEEAFFCFEA1F8FEA0F061300120EADEAFFF0A210157F9413>I<EA0F88EA3FF8EA7078EAE038
1318A3EAF000127FEA3FE0EA1FF0EA01F8EA003CEAC01CA212E0A2EAF018EAF878EADFF0EA8FC0
0E157E9413>I<1206A5120EA3121E123EEAFFF8A2EA0E00AA130CA51308EA0718EA03F0EA01E0
0E1F7F9E13>I<000E137038FE07F0A2EA1E00000E1370AC14F01301380703783803FE7FEA01F8
18157F941B>I<38FFC3FEA2381E00F8000E1360A26C13C0A338038180A213C300011300A2EA00
E6A3137CA31338A217157F941A>I<39FF8FF9FFA2391E01C07CD81C031338000EEBE030A2EB06
600007EB7060A2130E39038C30C01438139C3901D81980141DA2EBF00F00001400A2497EEB6006
20157F9423>I<387FC1FFA2380780F8000313E03801C1C014803800E3001377133E133C131C13
3E13771367EBC3803801C1C0380380E0380700F0EA0F8038FFC1FFA2181580941A>I<38FFC3FE
A2381E00F8000E1360A26C13C0A338038180A213C300011300A2EA00E6A3137CA31338A21330A2
13701360A2EAF0C012F1EAF380007FC7FC123E171F7F941A>I<383FFFC0A2383C038038380700
EA300EEA701EEA603C13385BEA00F0485A3803C0C01380EA07005AEA1E01001C1380EA3803EA70
07B5FCA212157F9416>I<B512FEA21702808D18>I E /Fi 22 118 df<121C127FEAFF80A5EA7F
00121C09097B8813>46 D<13075B137FEA07FFB5FCA212F8C6FCB3AB007F13FEA317277BA622>
49 D<EBFF80000713F0001F13FC383F03FFD87C001380007FEB7FC0EAFF80EC3FE0A3141FEA7F
00001C133FC7FC15C0A2EC7F80A2ECFF00495A5CEB03F0495A495A495A90383E00E05B13789038
F001C0EA01C0EA038048B5FC5A5A5A481480B6FCA31B277DA622>I<EB7F803801FFF0000713FC
380F81FE381F80FF487E9038E07F80A5381FC0FFD807001300C7FC495AEB03F8495AEBFFC014F0
EB01FC6DB4FCEC7F8015C0143F15E0121EEA7F80A2EAFFC0A315C0147FD87F801380387E00FF6C
481300380FFFFC000313F0C613801B277DA622>I<14075C5C5C5C5CA25B5B497E130F130E131C
1338137013F013E0EA01C0EA0380EA07005A120E5A5A5A5AB612F8A3C71300A7017F13F8A31D27
7EA622>I<91393FF00180903903FFFE07010FEBFF8F90393FF007FF9038FF80014848C7127FD8
07FC143F49141F4848140F485A003F15075B007F1503A3484891C7FCAB6C7EEE0380A2123F7F00
1F15076C6C15006C6C5C6D141ED801FE5C6C6C6C13F890393FF007F0010FB512C0010391C7FC90
38003FF829297CA832>67 D<B712C0A33903FE003FED0FE015031501A21500A316F09138038070
A31600A21407140F90B5FCA3EBFE0F14071403A591C8FCA9B512FEA324297DA82B>70
D<91387FE003903903FFFC0F011FEBFF1F90397FF00FFF9038FF8001D803FEC7FC484880484880
4980485A003F815B007F81A3484891C7FCA90203B512F8A2EA7FC0DA00011300A2123F7F121F6C
7E7F6C7E6C6C5B3800FF8090387FF00F011FB5123F0103EBFC0F9039007FE0032D297CA836>I<
B512FEA300011300B3B1B512FEA317297FA81A>73 D<48B47E000F13F0381F81FC486C7E147FA2
EC3F80A2EA0F00C7FCA2EB0FFF90B5FC3807FC3FEA1FE0EA3F80127F130012FEA3147F7E6CEBFF
C0393F83DFFC380FFF0F3801FC031E1B7E9A21>97 D<EB1FF0EBFFFE3803F03F390FE07F80EA1F
C0EA3F80A2127F9038001E004890C7FCA97E7F003FEB01C013C0001F1303390FE007803903F01F
003800FFFCEB1FE01A1B7E9A1F>99 D<EC3FF8A31403ACEB1FE3EBFFFB3803F03F380FE00F381F
C007383F8003A2127F13005AA97EA2EA3F801407381FC00F380FE01F3A03F03FFF803800FFF3EB
3FC3212A7EA926>I<EB3FE03801FFF83803F07E380FE03F391FC01F80393F800FC0A2EA7F00EC
07E05AA390B5FCA290C8FCA47E7F003F14E01401D81FC013C0380FE0033903F81F803900FFFE00
EB1FF01B1B7E9A20>I<1207EA1FC013E0123FA3121F13C0EA0700C7FCA7EAFFE0A3120FB3A3EA
FFFEA30F2B7DAA14>105 D<3BFFC07F800FF0903AC1FFE03FFC903AC783F0F07E3B0FCE03F9C0
7F903ADC01FB803F01F8D9FF00138001F05BA301E05BAF3CFFFE1FFFC3FFF8A3351B7D9A3A>
109 D<38FFC07F9038C1FFC09038C787E0390FCE07F09038DC03F813F813F0A313E0AF3AFFFE3F
FF80A3211B7D9A26>I<EB3FE03801FFFC3803F07E390FC01F80391F800FC0003F14E0EB000748
14F0A34814F8A86C14F0A2393F800FE0A2001F14C0390FC01F803907F07F003801FFFC38003FE0
1D1B7E9A22>I<38FFE1FE9038E7FF809038FE07E0390FF803F8496C7E01E07F140081A2ED7F80
A9EDFF00A25DEBF0014A5A01F85B9038FE0FE09038EFFF80D9E1FCC7FC01E0C8FCA9EAFFFEA321
277E9A26>I<38FFC3F0EBCFFCEBDC7E380FD8FF13F85BA3EBE03C1400AFB5FCA3181B7E9A1C>
114 D<3803FE30380FFFF0EA3E03EA7800127000F01370A27E6C1300EAFFE013FE387FFFC06C13
E06C13F0000713F8C613FC1303130000E0137C143C7EA26C13787E38FF01F038F7FFC000C11300
161B7E9A1B>I<1370A413F0A312011203A21207381FFFF0B5FCA23807F000AD1438A73803F870
000113F03800FFE0EB1F8015267FA51B>I<39FFE03FF8A3000F1303B11407A2140F0007131F3A
03F03BFF803801FFF338003FC3211B7D9A26>I E /Fj 13 119 df<EB01E01303130F137FEA1F
FFB5FCA213BFEAE03F1200B3B0007FB512F0A41C2F7AAE29>49 D<913A03FF800380023FEBF007
49B5EAFC0F0107ECFF1F011F9038803FBF903A3FF80007FFD9FFE07F48497F48497F4890C8127F
4848153F49151F121F49150F123F5B007F1607A34992C7FC12FFAB127F7FEF0780A2123F7F001F
160F6D1600120F6D5D6C6C153E6C6D5C6C6D14FC6C6D495AD93FF8495A903A1FFF801FC0010790
B55A01014AC7FCD9003F13F80203138031337BB13C>67 D<EB7FF80003B5FC000F14C0391FE01F
F09038F007F88114036E7EEA0FE0EA07C0EA0100C7FCA2EB01FF133F3801FFF13807FE01EA1FF0
EA3FE0EA7FC0138012FF1300A3EB800314076C6C487E263FF03E13F8391FFFF87F0007EBF03FC6
EB801F25207E9F28>97 D<EB07FF017F13E048B512F83903FC03FC3807F807EA0FF0EA1FE0EA3F
C0EC03F8007FEB01F0903880004000FF1400AA6C7EA2003F141E7F001F143E6C6C137C6C6C13F8
3903FE03F06CB512E06C6C1380903807FC001F207D9F25>99 D<EB0FFE90387FFFC048B57E3903
FE0FF03907F801F848486C7E48487F4848137FA2007F80491480A212FFA290B6FCA30180C8FCA3
127FA27F003FEC07807F001F140F6C6CEB1F006C6C133E3903FF01FCC6EBFFF8013F13E0010790
C7FC21207E9F26>101 D<EA03C0EA0FF0487EA37F5BA36C5AEA03C0C8FCA8EA01F812FFA4120F
1207B3A4B51280A411337DB217>105 D<EA01F812FFA4120F1207B3B3A4B512C0A412327DB117>
108 D<2703F007F8EB0FF000FFD93FFFEB7FFE4A6DB5FC903CF1F03FC3E07F80903CF3C01FE780
3FC0260FF780EBEF0000079026000FFEEB1FE001FE5C495CA2495CB2B500C1B50083B5FCA44020
7D9F45>I<3903F007F800FFEB3FFF4A7F9039F1F03FC09039F3C01FE0380FF7800007496C7E13
FE5BA25BB2B500C1B51280A429207D9F2E>I<EB07FE90383FFFC090B512F03903FC03FC3907F0
00FE4848137F4848EB3F80003F15C0A24848EB1FE0A300FF15F0A8007F15E0A36C6CEB3FC0A26C
6CEB7F80000F15003907F801FE3903FE07FC6CB55AD8003F13C0D907FEC7FC24207E9F29>I<13
78A513F8A41201A212031207120F381FFFFEB5FCA33807F800AF140FA7141F3803FC1EEBFE3E38
01FFFC38007FF0EB1FC0182E7EAD20>116 D<D801F8EB03F000FFEB01FFA4000FEB001F000714
0FB1151FA2153F157F6C6C497E903AFE03EFFF806CB512CF6C6C130FEB0FFC29207D9F2E>I<B5
38803FFEA43A07F80003C06D1307000315806D130F000115006D5B6C141EA26D6C5AA2ECC07C01
3F1378ECE0F8011F5B14F1010F5B14F3903807FBC0A214FF6D5BA26D90C7FCA26D5AA2147CA227
207E9F2C>I E /Fk 19 117 df<1238127C12FEA212FF127F123B1203A41206A2120CA2121812
381270122008137B8611>44 D<1318133813F8120712FF12F81200B3AD487E387FFFF0A214287C
A71E>49 D<137F3801FFC0380781F0380E00F80018137C121E003F137EEB803EA3381F007E000E
137CC7FCA25C5C495AEB07C001FFC7FCA2EB01E06D7E147C80A280A21580123C127EB4FCA31500
485B007C133E00305B001C5B380F01F06CB45AC690C7FC19297EA71E>51
D<EB0FE0EB3FF0EBF8383801E00C3803803E0007137EEA0F00120E121E001C133C003C90C7FCA2
127C1278130438F87FC0EBFFF038F9807838FB003C00FE131C141E48131F805A1580A41278A312
7C003C1400A2001C131E121E000E5B6C5B3803C0F03801FFC06C6CC7FC19297EA71E>54
D<137F3801FFC03807C1E0380F0070001E7F001C133C003C131C48131EA200F87FA41580A41278
141F127C003C133F121C001E136F6C13CF3807FF8F0001130FD8001013001300A2141EA2121E00
3F5BA25C1470003E5B381801C0380E0780D807FEC7FCEA01F819297EA71E>57
D<1418143CA3147EA314FFA3903801BF80149FA29038030FC0A390380607E0A3496C7EA3496C7E
A3496C7EA2EB3FFF497F903860007EA2497FA20001158049131FA2000315C090C7120F487ED81F
C0EB1FE026FFF801B5FCA2282A7EA92D>65 D<02FF13100107EBE03090391FC0707090387E001C
01F8EB0EF048481303485A4848130148481300A248C812705A123E1630127E127CA200FC1500A8
4AB5FC127C007E90380007F01503123EA2123F7E6C7EA26C7E6C7E6C6C13076C7E017E131C9039
1FC07870903907FFE0100100EB8000282B7DA92F>71 D<D8FFF0913807FFC06D5C0007EEF80000
035E017C141BA36D1433A36D1463A26D6C13C3A3903907C00183A3903903E00303A2903801F006
A3903800F80CA3EC7C18A3EC3E30A2EC1F60A3EC0FC0A33907800780D80FC04A7ED8FFFC91B512
C06E5A32297EA837>77 D<EBFE013803FF83380781E7381E0077001C133F487F00787F127000F0
7FA280A27EA26C90C7FC127EEA7FC0EA3FFCEBFFC06C13F06C7F6C7F00017F38001FFF01011380
EB003F140F15C0140712C01403A37E1580A26C13076C14006C130E00EF5B38E3C07838C1FFF038
803FC01A2B7DA921>83 D<EA07FC381FFF80383E07C0383F01E06D7E1478121EC7FCA3EB0FF8EA
01FF3807F878EA1FC0EA3F00127CA2481460A314F8A2EA7C01393F077CC0391FFE3F803907F01F
001B1A7E991E>97 D<EB7FE03801FFF83807C07C380F00FC121E123E003C1378007C1300127812
                     Collective Communication

                            Al Geist
                           Marc Snir

                         March 16, 1993


1  Collective Communication

1.1  Introduction

This section is a draft of the current proposal for collective
communication.  Collective communication is defined to be
communication that involves a group of processes.  Examples are
broadcast and global sum.  A collective operation is executed by
having all processes in the group call the communication routine, with
matching parameters.  Routines can (but are not required to) return as
soon as their participation in the collective communication is
complete.  The completion of a call indicates that the caller is now
free to access the locations in the communication buffer, or any other
location that can be referenced by the collective operation.  It does
not indicate that other processes in the group have started the
operation (unless otherwise indicated in the description of the
operation).  However, the successful completion of a collective
communication call may depend on the execution of a matching call at
all processes in the group.

The syntax and semantics of the collective operations are defined so
as to be consistent with the syntax and semantics of the
point-to-point operations.

The reader is referred to the point-to-point communication section of
the current MPI draft for information concerning groups (aka contexts)
and group formation operations, and for general information on the
types of objects used by the MPI library.

The collective communication routines are built above the
point-to-point routines.  While vendors may optimize certain
collective routines for their
architectures, a complete library of the collective communication
routines can be written entirely using point-to-point communication
functions.  We are using naive implementations of the collective calls
in terms of point-to-point operations in order to provide an
operational definition of their semantics.

The following communication functions are proposed.

  - Broadcast from one member to all members of a group.

  - Barrier across all group members.

  - Gather data from all group members to one member.

  - Scatter data from one member to all members of a group.

  - Global operations such as sum, max, min, etc., where the result is
    known by all group members, and a variation where the result is
    known by only one member.  The ability to have user-defined global
    operations.

  - Simultaneous shift of data around the group, the simplest example
    being all members sending their data to (rank+1) with wrap-around.

  - Scan across all members of a group (also called parallel prefix).

  - Broadcast from all members to all members of a group.

  - Scatter data from all members to all members of a group (also
    called complete exchange or index).

To simplify the collective communication interface it is designed with
two layers.  The low-level routines have all the generality of, and
make use of, the buffer descriptor routines of the point-to-point
section, which allows arbitrarily complex messages to be constructed.
The second-level routines are similar to the upper-level
point-to-point routines in that they send only a contiguous buffer.

Missing:
The current draft does not include the nonblocking collective
communication calls that were discussed at the last meeting.
1.2  Group Functions

The point-to-point document discusses the use of groups (aka
contexts), and describes the operations available for the creation and
manipulation of groups and group objects.  For the sake of
completeness, we list them anew here.

MPI_CREATE(handle, type, persistence)
Create a new opaque object.

  OUT handle       handle to object
  IN  type         state value that identifies the type of object to
                   be created
  IN  persistence  state value; either MPI_PERSISTENT or MPI_EPHEMERAL

MPI_FREE(handle)
Destroy the object associated with the handle.

  IN  handle       handle to object

MPI_ASSOCIATED(handle, type)
Returns the type of the object the handle is currently associated
with, if such exists.  Returns the special type MPI_NULL if the handle
is not currently associated with any object.

  IN  handle       handle to object
  OUT type         state

MPI_COPY_CONTEXT(newcontext, context)
Create a new context that includes all processes in the old context.
The rank of the processes in the previous context is preserved.  The
call must be executed by all processes in the old context.  It is a
blocking call: no call returns until all processes have called the
function.

  OUT newcontext   handle to newly created context.  The handle should
                   not be associated with an object before the call.
  IN  context      handle to old context

MPI_NEW_CONTEXT(newcontext, context, key, index)
A new context is created for each distinct value of key; this context
is shared by all processes that made the call with this key value.
Within each new context the processes are ranked according to the
order of the index values they provided; in case of ties, processes
are ranked according to their rank in the old context.  This call is
blocking: no call returns until all processes in the old context have
executed the call.

  OUT newcontext   handle to newly created context at the calling
                   process.  This handle should not be associated with
                   an object before the call.
  IN  context      handle to old context
  IN  key          integer
  IN  index        integer

MPI_RANK(rank, context)
Return the rank of the calling process within the specified context.

  OUT rank         integer
  IN  context      context handle

MPI_SIZE(size, context)
Return the number of processes that belong to the specified context.

  OUT size         integer
  IN  context      context handle

Extensions.  Possible extensions for dynamic process spawning (MPI2):

MPI_PROCESS(process, context, rank)
Returns a handle to the process identified by the rank and context
parameters.
  OUT process      handle to process object
  IN  context      handle to context object
  IN  rank         integer

MPI_CREATE_CONTEXT(newcontext, list_of_process_handles)
Creates a new context out of an explicit list of members and ranks
them in their order of occurrence in the list.

  OUT newcontext   handle to newly created context.  The handle should
                   not be associated with an object before the call.
  IN  list_of_process_handles
                   list of handles to processes to be included in the
                   new group

This, coupled with a mechanism for requiring the spawning of new
processes into the computation, will allow the creation of a new
all-inclusive context that includes the additional processes.

1.3  Communication Functions

The proposed communication functions are divided into two layers.  The
lowest level uses the same buffer descriptor objects available in
point-to-point to create noncontiguous, multiple-data-type messages.
The second level is similar to the block send/receive point-to-point
operations in that it supports only contiguous buffers of arithmetic
storage units.  For each communication operation, we list these two
levels of calls together.

1.3.1  Synchronization

Barrier synchronization

MPI_BARRIER( group, tag )

MPI_BARRIER blocks the calling process until all group members have
called it; the call returns at any process only after all group
members have entered the call.

  IN  group        group handle
  IN  tag          communication tag (integer)

MPI_BARRIER( group, tag )

is

  MPI_CREATE(buffer_handle, MPI_BUFFER, MPI_PERSISTENT);
  MPI_SIZE( &size, group);
  MPI_RANK( &rank, group);
  if (rank==0)
  {
     for (i=1; i < size; i++)
        MPI_RECV(buffer_handle, i, tag, group);
     for (i=1; i < size; i++)
        MPI_SEND(buffer_handle, i, tag, group);
  }
  else
  {
     MPI_SEND(buffer_handle, 0, tag, group);
     MPI_RECV(buffer_handle, 0, tag, group);
  }
  MPI_FREE(buffer_handle);
1.3.2  Data move functions

Circular shift

MPI_CSHIFT( inbuf, outbuf, tag, group, shift )

Process with rank i sends the data in its input buffer to the process
with rank (i + shift) mod group_size, which receives the data in its
output buffer.  All processes make the call with the same values for
tag, group, and shift.  The shift value can be positive, zero, or
negative.

  IN  inbuf        handle to input buffer descriptor
  OUT outbuf       handle to output buffer descriptor
  IN  tag          operation tag (integer)
  IN  group        handle to group
  IN  shift        integer

MPI_CSHIFTB( inbuf, outbuf, len, tag, group, shift )

Behaves like MPI_CSHIFT, with buffers restricted to be blocks of
numeric units.  All processes make the call with the same values for
len, tag, group, and shift.

  IN  inbuf        initial location of input buffer
  OUT outbuf       initial location of output buffer
  IN  len          number of entries in input (and output) buffers
  IN  tag          operation tag (integer)
  IN  group        handle to group
  IN  shift        integer

MPI_CSHIFT( inbuf, outbuf, tag, group, shift )

is

  MPI_SIZE( &size, group);
  MPI_RANK( &rank, group);
  MPI_ISEND( handle, inbuf, mod(rank+shift, size), tag, group);
  MPI_RECV( outbuf, mod(rank-shift, size), tag, group);
  MPI_WAIT(handle);

Discussion:
Do we want to support the case inbuf = outbuf somehow?
End-off shift

MPI_EOSHIFT( inbuf, outbuf, tag, group, shift )

Process with rank i, max(0, -shift) <= i < min(size, size - shift),
sends the data in its input buffer to the process with rank i + shift,
which receives the data in its output buffer.  The output buffer of
processes which do not receive data is left unchanged.  All processes
make the call with the same values for tag, group, and shift.

  IN  inbuf        handle to input buffer descriptor
  OUT outbuf       handle to output buffer descriptor
  IN  tag          operation tag (integer)
  IN  group        handle to group
  IN  shift        integer

MPI_EOSHIFTB( inbuf, outbuf, len, tag, group, shift )

Behaves like MPI_EOSHIFT, with buffers restricted to be blocks of
numeric units.  All processes make the call with the same values for
len, tag, group, and shift.

  IN  inbuf        initial location of input buffer
  OUT outbuf       initial location of output buffer
  IN  len          number of entries in input (and output) buffers
  IN  tag          operation tag (integer)
  IN  group        handle to group
  IN  shift        integer
Discussion:
Two other possible definitions for end-off shift: (i) zero filling for
processes that don't receive messages, or (ii) boundary values
explicitly provided as an additional parameter.  Any preferences?
(Fortran 90 allows boundary values to be provided optionally, and does
zero filling if none were provided.)

Broadcast

MPI_BCAST( buffer_handle, tag, group, root )

MPI_BCAST broadcasts a message from the process with rank root to all
other processes of the group.  It is called by all members of the
group using the same arguments for tag, group, and root.  On return
the contents of the buffer of the process with rank root is contained
in the buffer of all group members.

  INOUT buffer_handle
                   handle for the buffer from which the message is
                   sent or in which it is received
  IN  tag          tag of communication operation (integer)
  IN  group        context of communication (handle)
  IN  root         rank of broadcast root (integer)

MPI_BCASTB( buf, len, tag, group, root )

MPI_BCASTB behaves like broadcast, restricted to a block buffer.  It
is called by all processes with the same arguments for len, tag,
group, and root.

  INOUT buffer     starting address of buffer (choice type)
  IN  len          number of words in buffer (integer)
  IN  tag          tag of communication operation (integer)
  IN  group        context of communication (handle)
  IN  root         rank of broadcast root (integer)

MPI_BCAST( buffer_handle, tag, group, root )

is

  MPI_SIZE( &size, group);
  MPI_RANK( &rank, group);
  MPI_IRECV(handle, buffer_handle, root, tag, group);
  if (rank==root)
     for (i=0; i < size; i++)
        MPI_SEND(buffer_handle, i, tag, group);
  MPI_WAIT(handle);

Gather

MPI_GATHER( inbuf, outbuf, tag, group, root, len )

Each process (including the root process) sends the content of its
input buffer to the root process.  The root process concatenates all
the incoming messages in the order of the senders' ranks and places
the results in its output buffer.  It is called by all members of the
group using the same arguments for tag, group, and root.  The input
buffers of the processes may have different lengths.

  IN  inbuf        handle to input buffer descriptor
  OUT outbuf       handle to output buffer descriptor; significant
                   only at root (choice)
  IN  tag          operation tag (integer)
  IN  group        group handle
  IN  root         rank of receiving process (integer)
  OUT len          difference between the output buffer size (in
                   bytes) and the number of bytes received

Discussion:
It would be more elegant (but no more convenient) to have a return
status object.

MPI_GATHERB( inbuf, inlen, outbuf, tag, group, root )

MPI_GATHERB behaves like MPI_GATHER restricted to block buffers, and
with the additional restriction that all input buffers should have the
same length.  All processes should provide the same values for inlen,
tag, group, and root.

  IN  inbuf        first variable of input buffer (choice)
  IN  inlen        number of (word) variables in input buffer
                   (integer)
  OUT outbuf       first variable of output buffer; significant only
                   at root (choice)
  IN  tag          operation tag (integer)
  IN  group        group handle
  IN  root         rank of receiving process (integer)

MPI_GATHERB( inbuf, inlen, outbuf, tag, group, root )

is

  MPI_SIZE( &size, group);
  MPI_RANK( &rank, group);
  MPI_ISENDB(handle, inbuf, inlen, root, tag, group);
  if (rank==root)
     for (i=0; i < size; i++)
     {
        MPI_RECVB(outbuf, inlen, i, tag, group, return_status);
        outbuf += inlen;
     }
  MPI_WAIT(handle);

Scatter

MPI_SCATTER( list_of_inbufs, outbuf, tag, group, root, len )

The root process sends the content of its i-th input buffer to the
process with rank i; each process (including the root process) stores
the incoming message in its output buffer.  The difference between the
size of the output buffer (in bytes) and the number of bytes received
is returned in len.  The routine is called by all members of the group
using the same arguments for tag, group, and root.

  IN  list_of_inbufs
                   list of buffer descriptor handles
  OUT outbuf       buffer descriptor handle
  IN  tag          operation tag (integer)
  IN  group        handle
  IN  root         rank of sending process (integer)
  OUT len          number of remaining bytes in the output buffer at
                   each process (integer)

MPI_SCATTER( list_of_inbufs, outbuf, tag, group, root, len )

is

  MPI_SIZE( &size, group);
  MPI_RANK( &rank, group);
  MPI_IRECV(handle, outbuf, root, tag, group);
  if (rank==root)
     for (i=0; i < size; i++)
        MPI_SEND(inbuf[i], i, tag, group);
  MPI_WAIT(handle, return_status);
  MPI_RETURN_STATUS(return_status, len, source, tag);
MPI_SCATTERB( inbuf, outbuf, len, tag, group, root )

MPI_SCATTERB behaves like MPI_SCATTER restricted to block buffers, and
with the additional restriction that all output buffers have the same
length.  The input buffer block of the root process is partitioned
into n consecutive blocks, each consisting of len words.  The i-th
block is sent to the i-th process in the group and stored in its
output buffer.  The routine is called by all members of the group
using the same arguments for tag, group, len, and root.

  IN  inbuf        first entry in input buffer; significant only at
                   root (choice)
  OUT outbuf       first entry in output buffer (choice)
  IN  len          number of entries to be stored in output buffer
                   (integer)
  IN  group        handle
  IN  root         rank of sending process (integer)

MPI_SCATTERB( inbuf, outbuf, outlen, tag, group, root )

is

  MPI_SIZE( &size, group);
  MPI_RANK( &rank, group);
  MPI_IRECVB( handle, outbuf, outlen, root, tag, group);
  if (rank==root)
     for (i=0; i < size; i++)
     {
        MPI_SENDB(inbuf, outlen, i, tag, group, return_status);
        inbuf += outlen;
     }
  MPI_WAIT(handle);

All-to-all scatter

MPI_ALLSCATTER( list_of_inbufs, outbuf, tag, group, len )
%%Page: 14 14
bop 237 307 a Fh(Eac)o(h)22 b(pro)q(cess)h(in)e(the)h(group)h(sends)f(its)f
Fc(i)p Fh(-th)h(bu\013er)h(in)e(its)h(input)f(bu\013er)i(list)164
367 y(to)d(the)g(pro)q(cess)g(with)g(rank)g Fc(i)f Fh(\(itself)g(included\);)
g(eac)o(h)h(pro)q(cess)g(concatenates)g(the)164 428 y(incoming)e(messages)i
(in)f(its)h(output)g(bu\013er,)h(in)e(the)h(order)g(of)g(the)g(senders')f
(ranks.)164 488 y(The)14 b(n)o(um)o(b)q(er)d(of)j(b)o(ytes)f(left)g(in)g(the)
g(output)i(bu\013er)f(is)f(returned)g(in)g Fc(len)p Fh(.)20
b(The)13 b(routine)164 548 y(is)j(called)e(b)o(y)i(all)f(mem)o(b)q(ers)e(of)j
(the)g(group)h(using)f(the)g(same)e(argumen)o(ts)h(for)i Fc(tag)d
Fh(and)164 608 y Fc(group)p Fh(.)164 698 y Fd(IN)k(list)p 324
698 17 2 v 22 w(of)p 391 698 V 20 w(in)n(bufs)26 b Fh(list)15
b(of)i(bu\013er)g(descriptor)e(handles)164 796 y Fd(OUT)k(outbuf)25
b Fh(bu\013er)16 b(descriptor)g(handle)164 894 y Fd(IN)i(tag)25
b Fh(op)q(eration)17 b(tag)g(\(in)o(teger\))164 992 y Fd(IN)h(group)25
b Fh(handle)164 1090 y Fd(OUT)19 b(len)24 b Fh(n)o(um)o(b)q(er)15
b(of)h(remaining)f(b)o(ytes)h(in)g(the)g(output)g(bu\013er)h(\(in)o(teger\))
MPI_ALLSCATTERB( inbuf, outbuf, len, tag, group)

MPI_ALLSCATTERB behaves like MPI_ALLSCATTER restricted to block
buffers, and with the additional restriction that all blocks sent from
one process to another have the same length. The input buffer block of
each process is partitioned into n consecutive blocks, each consisting
of len words. The i-th block is sent to the i-th process in the group.
Each process concatenates the incoming messages, in the order of the
senders' ranks, and stores them in its output buffer. The routine is
called by all members of the group using the same arguments for tag,
group, and len.

IN inbuf first entry in input buffer (choice)
OUT outbuf first entry in output buffer (choice)
IN len number of entries sent from each process to each other (integer)
IN tag operation tag (integer)
IN group handle

MPI_ALLSCATTERB( inbuf, outbuf, len, tag, group)

is
MPI_SIZE( &size, group);
MPI_RANK( &rank, group);
for (i=0; i < size; i++)
   {
   MPI_IRECVB( recv_handles[i], outbuf, len, i, tag, group);
   outbuf += len;
   }
for (i=0; i < size; i++)
   {
   MPI_ISENDB( send_handles[i], inbuf, len, i, tag, group);
   inbuf += len;
   }
MPI_WAITALL(send_handles);
MPI_WAITALL(recv_handles);

All-to-all broadcast

MPI_ALLCAST( inbuf, outbuf, tag, group, len)
Each process in the group broadcasts its input buffer to all processes
(including itself); each process concatenates the incoming messages in
its output buffer, in the order of the senders' ranks. The number of
bytes left in the output buffer is returned in len. The routine is
called by all members of the group using the same arguments for tag and
group.

IN inbuf buffer descriptor handle for input buffer
OUT outbuf buffer descriptor handle for output buffer
IN tag operation tag (integer)
IN group handle
OUT len number of remaining untouched bytes in each output buffer (integer)

MPI_ALLCASTB( inbuf, outbuf, len, tag, group)
MPI_ALLCASTB behaves like MPI_ALLCAST restricted to block buffers, and
with the additional restriction that all blocks sent from one process
to another have the same length. The routine is called by all members
of the group using the same arguments for tag, group, and len.

IN inbuf first entry in input buffer (choice)
OUT outbuf first entry in output buffer (choice)
IN len number of entries sent from each process to each other (including itself)
IN tag operation tag (integer)
IN group handle

MPI_ALLCASTB( inbuf, outbuf, len, tag, group)

is

MPI_SIZE( &size, group);
MPI_RANK( &rank, group);
for (i=0; i < size; i++)
   {
   MPI_IRECVB( recv_handles[i], outbuf, len, i, tag, group);
   outbuf += len;
   }
for (i=0; i < size; i++)
   {
   MPI_ISENDB( send_handles[i], inbuf, len, i, tag, group);
   }
MPI_WAITALL(send_handles);
MPI_WAITALL(recv_handles);
1.3.3  Global Compute Operations

Reduce

MPI_REDUCE( inbuf, outbuf, tag, group, root, op)

Combines the values provided in the input buffer of each process in the
group, using the operation op, and returns the combined value in the
output buffer of the process with rank root. Each process can provide
one value, or a sequence of values, in which case the combine operation
is executed pointwise on each entry of the sequence. For example, if
the operation is max and the input buffers contain two floating point
numbers, then outbuf(1) = global max(inbuf(1)) and outbuf(2) = global
max(inbuf(2)). All input buffers should define sequences of equal
length of entries of types that match the type of the operands of op.
The output buffer should define a sequence of the same length of
entries of types that match the type of the result of op. (Note that,
here as for all other communication operations, the types of entries
inserted in a message depend on the information provided by the input
buffer descriptor, and not on the declarations of these variables in
the calling program. The types of the variables in the calling program
need not match the types defined by the buffer descriptor, but in such
a case the outcome of a reduce operation may be implementation
dependent.)

The operation defined by op is associative and commutative, and the
implementation can take advantage of associativity and commutativity in
order to change the order of evaluation. The routine is called by all
group members using the same arguments for tag, group, root and op.

IN inbuf handle to input buffer
OUT outbuf handle to output buffer -- significant only at root
IN tag operation tag (integer)
IN group handle to group
IN root rank of root process (integer)
IN op operation (status)
We list below the operations supported for Fortran, each with the
corresponding value of the op parameter.

MPI_IMAX integer maximum
MPI_RMAX real maximum
MPI_DMAX double precision real maximum
MPI_IMIN integer minimum
MPI_RMIN real minimum
MPI_DMIN double precision real minimum
MPI_ISUM integer sum
MPI_RSUM real sum
MPI_DSUM double precision real sum
MPI_CSUM complex sum
MPI_DCSUM double precision complex sum
MPI_IPROD integer product
MPI_RPROD real product
MPI_DPROD double precision real product
MPI_CPROD complex product
MPI_DCPROD double precision complex product
MPI_AND logical and
MPI_IAND integer (bit-wise) and
MPI_OR logical or
MPI_IOR integer (bit-wise) or
MPI_XOR logical xor
MPI_IXOR integer (bit-wise) xor
MPI_MAXLOC rank of process with maximum integer value
MPI_MAXRLOC rank of process with maximum real value
MPI_MAXDLOC rank of process with maximum double precision real value
MPI_MINLOC rank of process with minimum integer value
MPI_MINRLOC rank of process with minimum real value
MPI_MINDLOC rank of process with minimum double precision real value

MPI_REDUCEB( inbuf, outbuf, len, tag, group, root, op)

Is same as MPI_REDUCE, restricted to a block buffer.

IN inbuf first location in input buffer
OUT outbuf first location in output buffer -- significant only at root
IN len number of entries in input and output buffer (integer)
IN tag operation tag (integer)
IN group handle to group
IN root rank of root process (integer)
IN op operation (status)

Discussion:
If we are to be compatible with the point to point block operations,
the len parameter should indicate the number of words in the buffer.
But it might be more natural to have len indicate the number of entries
in the buffer, so that if the entries are complex or double precision,
len will be half the number of words in the buffer.
MPI_USER_REDUCE( inbuf, outbuf, tag, group, root, function)

Same as the reduce operation above except that a user supplied function
is used. function is an associative and commutative function with two
arguments. The types of the two arguments and of the returned values
all agree.

IN inbuf handle to input buffer
OUT outbuf handle to output buffer -- significant only at root
IN tag operation tag (integer)
IN group handle to group
IN root rank of root process (integer)
IN function user provided function

MPI_USER_REDUCEB( inbuf, outbuf, len, tag, group, root, function)

Is same as MPI_USER_REDUCE, restricted to a block buffer.

IN inbuf first location in input buffer
OUT outbuf first location in output buffer -- significant only at root
IN len number of entries in input and output buffer (integer)
IN tag operation tag (integer)
IN group handle to group
IN root rank of root process (integer)
IN function user provided function

Discussion:
Do we also want a version of reduce that broadcasts the result to all
processes in the group? (This can be achieved by a reduce followed by a
broadcast, but a combined function may be somewhat more efficient.)
Scan

MPI_SCAN( inbuf, outbuf, tag, group, op )

MPI_SCAN is used to perform a parallel prefix with respect to an
associative reduction operation on data distributed across the group.
The operation returns in the output buffer of the process with rank i
the reduction of the values in the input buffers of the processes with
ranks 0,...,i. The types of operations supported and their semantics,
and the constraints on input and output buffers, are as for MPI_REDUCE.

IN inbuf handle to input buffer
OUT outbuf handle to output buffer
IN tag operation tag (integer)
IN group handle to group
IN op operation (status)

MPI_SCANB( inbuf, outbuf, len, tag, group, op )

Same as MPI_SCAN, restricted to block buffers.

IN inbuf first input buffer element (choice)
OUT outbuf first output buffer element (choice)
IN len number of entries in input and output buffer (integer)
IN tag operation tag (integer)
IN group handle to group
IN op operation (status)

MPI_USER_SCAN( inbuf, outbuf, tag, group, function )

Same as the scan operation above except that a user supplied function
is used. function is an associative and commutative function with two
arguments. The types of the two arguments and of the returned values
all agree.
IN inbuf handle to input buffer
OUT outbuf handle to output buffer
IN tag operation tag (integer)
IN group handle to group
IN function user provided function

MPI_USER_SCANB( inbuf, outbuf, len, tag, group, function)

Is same as MPI_USER_SCAN, restricted to a block buffer.

IN inbuf first location in input buffer
OUT outbuf first location in output buffer
IN len number of entries in input and output buffer (integer)
IN tag operation tag (integer)
IN group handle to group
IN function user provided function

Discussion:
Do we want scan operations executed by segments? (The HPF definition of
prefix and suffix operations might be handy -- in addition to the
scanned vector of values there is a mask that tells where segments
start and end.)

Missing:
Nonblocking (immediate) collective operations. The syntax is obvious:
for each collective operation MPI_op(params) one may have a new
nonblocking collective operation of the form MPI_Iop(handle, params)
that initiates the execution of the corresponding operation. The
execution of the operation is completed by executing
MPI_WAIT(handle,...), MPI_STATUS(handle,...), MPI_WAITALL,
MPI_WAITANY, or MPI_STATUSANY. There are three issues to consider:

(i) The exact definition of the semantics of these operations (in
particular, constraints on order).

(ii) The complexity of implementation (including the complexity of
having the same WAIT or STATUS functions apply both to point-to-point
and to collective operations).

(iii) The accrued performance advantage.

1.4  Correctness

Discussion: This is still very preliminary.
The semantics of the collective communication operations can be derived
from their operational definition in terms of point-to-point
communication. It is assumed that messages pertaining to one operation
cannot be confused with messages pertaining to another operation. Also,
messages pertaining to two distinct occurrences of the same operation
cannot be confused if the two occurrences have distinct parameters. The
relevant parameters for this purpose are group, tag, root and op. The
implementer can, of course, use another, more efficient implementation,
as long as it has the same effect.

Discussion:
This statement does not yet apply to the current, incomplete and
somewhat careless definitions I provided in this draft.

The definition above means that messages pertaining to a collective
communication carry information identifying the operation itself, and
the values of the tag, group and, where relevant, root or op
parameters. Is this acceptable?
A few examples:

MPI_BCAST(buf, len, tag, group, 0);
MPI_BCAST(buf, len, tag, group, 1);

Two consecutive broadcasts, in the same group, with the same tag, but
different roots. Since the operations are distinguishable, messages
from one broadcast cannot be confused with messages from the other
broadcast; the program is safe and will execute as expected.

MPI_BCAST(buf, len, tag, group, 0);
MPI_BCAST(buf, len, tag, group, 0);

Two consecutive broadcasts, in the same group, with the same tag and
root. Since point-to-point communication preserves the order of
messages here, too, messages from one broadcast will not be confused
with messages from the other broadcast; the program is safe and will
execute as intended.

MPI_RANK(&rank, group);
if (rank==0)
   {
   MPI_BCASTB(buf, len, tag, group, 0);
   MPI_SENDB(buf, len, 1, tag, group);
   }
else if (rank==1)
   {
   MPI_RECVB(buf, len, MPI_DONTCARE, tag, group);
   MPI_BCASTB(buf, len, tag, group, 0);
   MPI_RECVB(buf, len, MPI_DONTCARE, tag, group);
   }
else
   {
   MPI_SENDB(buf, len, 1, tag, group);
   MPI_BCASTB(buf, len, tag, group, 0);
   }
Process zero executes a broadcast followed by a send to process one;
process two executes a send to process one, followed by a broadcast;
and process one executes a receive, a broadcast and a receive. A
possible outcome is for the operations to be matched as illustrated by
the diagram below.

0                          1                          2

          / - >  receive                 / -  send
         /                              /
broadcast          /         broadcast        /      broadcast
                  /                          /
    send  -                   receive  < -

The reason is that broadcast is not a synchronous operation; the call
at a process may return before the other processes have entered the
broadcast. Thus, the message sent by process zero can arrive at process
one before the message sent by process two, and before the call to
broadcast on process one.
From owner-mpi-collcomm@CS.UTK.EDU  Wed Mar 17 03:33:29 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA11551; Wed, 17 Mar 93 03:33:29 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA00308; Wed, 17 Mar 93 03:33:01 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Wed, 17 Mar 1993 03:33:00 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from gmdzi.gmd.de by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA00300; Wed, 17 Mar 93 03:32:54 -0500
Received: from f1neuman.gmd.de (f1neuman) by gmdzi.gmd.de with SMTP id AA10257
  (5.65c/IDA-1.4.4 for <mpi-collcomm@cs.utk.edu>); Wed, 17 Mar 1993 09:31:20 +0100
Received: by f1neuman.gmd.de id AA15351; Wed, 17 Mar 1993 09:32:45 GMT
Date: Wed, 17 Mar 1993 09:32:45 GMT
From: Rolf.Hempel@gmd.de
Message-Id: <9303170932.AA15351@f1neuman.gmd.de>
To: mpi-collcomm@cs.utk.edu
Subject: information cacheing
Cc: gmap10@f1neuman.gmd.de


Thanks to Rick for the clarification! Without any doubt the proposed
cacheing mechanism is very useful for implementors of global
communication routines. The remaining question is whether we want to
export it to writers of custom-made collective routines, and therefore
put it into the standard. If we decide so, then we have to mark this
section such that the regular MPI user knows that he does not have to
read it.

Rolf Hempel
From owner-mpi-collcomm@CS.UTK.EDU  Wed Mar 17 05:28:26 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA20171; Wed, 17 Mar 93 05:28:26 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA09098; Wed, 17 Mar 93 05:27:37 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Wed, 17 Mar 1993 05:27:36 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from dino.conicit.ve by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA09089; Wed, 17 Mar 93 05:27:32 -0500
Received: by dino.conicit.ve (4.1/SMI-4.1/RP-1.2)
	id AA06437; Wed, 17 Mar 93 06:27:47-040
From: mcuttin@conicit.ve (Marco Cuttin (USB))
Message-Id: <9303171027.AA06437@dino.conicit.ve>
Subject: MPI Group information required
To: gst@ornl.gov, mpi-collcomm@cs.utk.edu
Date: Wed, 17 Mar 93 6:27:47 AST
Cc: cuttin@usb.ve (Marco Cuttin (USB-PDP-SUN))
X-Mailer: ELM [version 2.2 PL13]

Mr. Geist
We at the Simon Bolivar University of Caracas, Venezuela, are trying to
implement the MPI standard on a transputer platform. We have been
reading the different mails circulated by the MPI committee,
but we need to have more information about the concept of groups. We
have seen this concept on the original draft standard (A proposal for a
User-level Message-Passing interface in a distributed memory
environment, October 1992).
Please let us know what you mean by the concept of a group of processes,
and any other information you think will help us in our implementation. 
Hoping to read you soon, and thanking you in advance

sincerely,

Marco Cuttin
cuttin@usb.ve, mcuttin@conicit.ve
FAX: +58-2-238-1816
Phone: +58-2-238-7749

From owner-mpi-collcomm@CS.UTK.EDU  Wed Mar 17 09:25:09 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA15942; Wed, 17 Mar 93 09:25:09 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA19426; Wed, 17 Mar 93 09:24:05 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Wed, 17 Mar 1993 09:24:03 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from gmdzi.gmd.de by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA19418; Wed, 17 Mar 93 09:23:58 -0500
Received: from f1neuman.gmd.de (f1neuman) by gmdzi.gmd.de with SMTP id AA29985
  (5.65c/IDA-1.4.4 for <mpi-collcomm@cs.utk.edu>); Wed, 17 Mar 1993 15:22:23 +0100
Received: by f1neuman.gmd.de id AA15159; Wed, 17 Mar 1993 15:23:46 GMT
Date: Wed, 17 Mar 1993 15:23:46 GMT
From: Rolf.Hempel@gmd.de
Message-Id: <9303171523.AA15159@f1neuman.gmd.de>
To: mpi-collcomm@cs.utk.edu
Subject: New draft
Cc: gmap10@f1neuman.gmd.de


The distribution of the second version of the COLLCOMM draft after
just a few days came as a surprise to me. There are some changes,
and I would like to throw in a few comments:

1. What I said yesterday about the shift function still holds with the
   new draft. If the shift is based on the group topology, the end-off
   version comes as a special case of the (single) shift function,
   depending on whether the cartesian topology is periodic in the
   shift direction or not. I still propose to add the "direction"
   argument.

2. I do not like to return via the "len" argument the difference of
   the buffer length and the actual message length. It is common
   practice to return the message length, and that's what most users
   will expect. If we choose the other definition, this will lead to
   frequent user errors.

3. In Marc's definition, a list of handles contains the number of
   elements as the first entry. I would prefer an additional argument
   over putting together the handles and their number into one vector
   (at least in Fortran). As far as I understand the definition code of
   routine MPI_SCATTER, in the loop
    
      for (i=0; i < size; i++)
         MPI_SEND(inbuf[i], i, tag, group);

   inbuf[i] must be replaced by something like list_of_inbufs.inbuf[i].

4. In function MPI_REDUCE (or in an additional function) I would like
   to see the possibility of specifying different operations for
   different elements of the input buffer. So, it would be possible
   to have a buffer of two reals, and to compute the global sum on the
   first entry and the maximum on the second. In the current proposal
   I don't see how this can be done, at least not in Fortran, even if
   one resorts to the MPI_USER_REDUCE function.

Rolf Hempel
From owner-mpi-collcomm@CS.UTK.EDU  Wed Mar 17 11:25:17 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA19014; Wed, 17 Mar 93 11:25:17 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA26161; Wed, 17 Mar 93 11:21:59 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Wed, 17 Mar 1993 11:21:57 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from watson.ibm.com by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA26153; Wed, 17 Mar 93 11:21:56 -0500
Message-Id: <9303171621.AA26153@CS.UTK.EDU>
Received: from YKTVMV by watson.ibm.com (IBM VM SMTP V2R3) with BSMTP id 2519;
   Wed, 17 Mar 93 11:21:53 EST
Date: Wed, 17 Mar 93 11:09:52 EST
From: "Marc Snir" <snir@watson.ibm.com>
X-Addr: (914) 945-3204  (862-3204)
        28-226 IBM T.J. Watson Research Center
        P.O. Box 218 Yorktown Heights NY 10598
To: mpi-collcomm@cs.utk.edu
Reply-To: SNIR@watson.ibm.com
Subject:  New draft

Reference:  Attached note from Rolf.Hempel at gmd.de




*************** Forwarded Note ***************

Received: from CS.UTK.EDU by watson.ibm.com (IBM VM SMTP V2R3) with TCP;
   Wed, 17 Mar 93 09:27:56 EST
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA19426; Wed, 17 Mar 93 09:24:05 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Wed, 17 Mar 1993 09:24:03 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from gmdzi.gmd.de by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA19418; Wed, 17 Mar 93 09:23:58 -0500
Received: from f1neuman.gmd.de (f1neuman) by gmdzi.gmd.de with SMTP id AA29985
  (5.65c/IDA-1.4.4 for <mpi-collcomm@cs.utk.edu>); Wed, 17 Mar 1993 15:22:23 +0100
Received: by f1neuman.gmd.de id AA15159; Wed, 17 Mar 1993 15:23:46 GMT
Date: Wed, 17 Mar 1993 15:23:46 GMT
From: Rolf.Hempel at gmd.de
Message-Id: <9303171523.AA15159@f1neuman.gmd.de>
To: mpi-collcomm@cs.utk.edu
Subject: New draft
Cc: gmap10@f1neuman.gmd.de


The distribution of the second version of the COLLCOMM draft after
just a few days came as a surprise to me. There are some changes,
and I would like to throw in a few comments:

1. What I said yesterday about the shift function still holds with the
   new draft. If the shift is based on the group topology, the end-off
   version comes as a special case of the (single) shift function,
   depending on whether the cartesian topology is periodic in the
   shift direction or not. I still propose to add the "direction"
   argument.

>>> Is this an argument for "topological shift" functions, that use
>>> CSHIFT or EOSHIFT as appropriate, or is this an argument for
>>> different group shift functions?


2. I do not like to return via the "len" argument the difference of
   the buffer length and the actual message length. It is common
   practice to return the message length, and that's what most users
   will expect. If we choose the other definition, it will lead to
   frequent user errors.

>>> The reason for my choice is that I believe that most of the time
>>> people will check for a match (len=0) or mismatch (len >0).  I
>>> wanted this test to be easy.  But I am willing to bow to "accepted
>>> practice" if, indeed, there is an entrenched practice.



3. In Marc's definition, a list of handles contains the number of
   elements as the first entry. I would prefer an additional argument
   over putting together the handles and their number into one vector
   (at least in Fortran). As far as I understand the definition code of
   routine MPI_SCATTER, in the loop

      for (i=0; i < size; i++)
         MPI_SEND(inbuf[i], i, tag, group);

   inbuf[i] must be replaced by something like list_of_inbufs.inbuf[i].

>>> I don't see the virtue of an additional argument.  The definition
>>> code is, indeed, messier than what I provided, but this makes the
>>> user interface simpler, which is goodness.  I don't think there
>>> will be any significant difference in efficiency.  In any case, if we
>>> change here the definition of "list of handles", it should be
>>> done consistently across MPI, including for the WAITALL, WAITANY
>>> functions. I don't view this as a major issue.
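>>> [Editorial aside: to make the trade-off concrete, here is a minimal C
>>> sketch of the two conventions under discussion -- the count embedded as
>>> the first entry of the vector versus the count as a separate argument.
>>> All names are hypothetical illustrations, not proposed MPI syntax.]

```c
#include <assert.h>
#include <stddef.h>

/* Style A: handles[0] holds the number of handles; the payload then
 * lives in handles[1..handles[0]], so every access is offset by one. */
static size_t count_style_a(const int *handles)
{
    return (size_t)handles[0];
}

/* Style B: the count travels in its own argument, so the vector holds
 * only handles and off-by-one indexing slips are harder to make. */
static size_t count_style_b(const int *handles, size_t nhandles)
{
    (void)handles;   /* the vector itself carries no count */
    return nhandles;
}
```

The separate-argument form keeps the vector homogeneous, which is what makes the off-by-one errors mentioned in this thread harder to write.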

4. In function MPI_REDUCE (or in an additional function) I would like
   to see the possibility of specifying different operations for
   different elements of the input buffer. So, it would be possible
   to have a buffer of two reals, and to compute the global sum on the
   first entry and the maximum on the second. In the current proposal
   I don't see how this can be done, at least not in Fortran, even if
   one resorts to the MPI_USER_REDUCE function.

>>> A proposal will be appreciated.



Rolf Hempel
>>> Thanks for the comments
>>> Marc Snir
From owner-mpi-collcomm@CS.UTK.EDU  Wed Mar 17 15:19:59 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA24555; Wed, 17 Mar 93 15:19:59 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA07968; Wed, 17 Mar 93 15:19:12 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Wed, 17 Mar 1993 15:19:10 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from gstws.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA07952; Wed, 17 Mar 93 15:19:09 -0500
Received: by gstws.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03)
          id AA16341; Wed, 17 Mar 1993 15:19:07 -0500
Date: Wed, 17 Mar 1993 15:19:07 -0500
From: geist@gstws.epm.ornl.gov (Al Geist)
Message-Id: <9303172019.AA16341@gstws.epm.ornl.gov>
To: mpi-collcomm@cs.utk.edu
Subject: Re: New draft



>What I said yesterday about the shift function still holds with the
>new draft. I still propose to add the "direction" argument.

Point taken. I sent out the revised draft before I saw your comments, Rolf.
There is the question of what "direction" means if the user
has not specified a topology. Can he use the collective functions
without invoking topology routines?

>I do not like to return via the "len" argument the difference of
>the buffer length and the actual message length. It is common
>practice to return the message length, and that's what most users
>will expect.

We should discuss this in Dallas, I suspect most will favor your 
"common practice" approach.

>In Marc's definition, a list of handles contains the number of
>elements as the first entry. I would prefer an additional argument
>over putting together the handles and their number into one vector.

What are the other subcommittee members' thoughts?

>to see the possibility of specifying different operations for
>different elements of the input buffer
>I don't see how this can be done, at least not in Fortran,

We should make sure the draft reads that the MPI_USER_REDUCE function
specified by the user takes the buffer as an argument so that the
user can manipulate the buffer any way he wishes before returning
it to MPI_USER_REDUCE. PICL contains a routine like this so it is possible.
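[Editorial aside: Rolf's two-real example becomes straightforward under
this whole-buffer approach. A hedged C sketch, with a hypothetical function
name and signature rather than anything from the draft:]

```c
#include <assert.h>

/* Illustrative user combine function that receives whole buffers, so
 * different operations can be applied to different elements: here a
 * global sum on element 0 and a maximum on element 1 of a two-real
 * buffer, as in Rolf's example. */
static void mixed_combine(const double *in1, const double *in2,
                          double *out, int nelems)
{
    assert(nelems == 2);                         /* sketch: fixed layout */
    out[0] = in1[0] + in2[0];                    /* sum on first entry   */
    out[1] = in1[1] > in2[1] ? in1[1] : in2[1];  /* max on second entry  */
}
```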

Al
From owner-mpi-collcomm@CS.UTK.EDU  Wed Mar 17 15:28:05 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA24602; Wed, 17 Mar 93 15:28:05 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA08481; Wed, 17 Mar 93 15:27:00 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Wed, 17 Mar 1993 15:26:58 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from sampson.ccsf.caltech.edu by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA08473; Wed, 17 Mar 93 15:26:56 -0500
Received: from elephant (elephant.parasoft.com) by sampson.ccsf.caltech.edu with SMTP id AA22235
  (5.65c/IDA-1.4.4 for mpi-collcomm@cs.utk.edu); Wed, 17 Mar 1993 12:26:52 -0800
Received: from lion.parasoft by elephant (4.1/SMI-4.1)
	id AA20937; Wed, 17 Mar 93 12:19:02 PST
Received: by lion.parasoft (4.1/SMI-4.1)
	id AA02454; Wed, 17 Mar 93 12:19:42 PST
Date: Wed, 17 Mar 93 12:19:42 PST
From: jwf@lion.Parasoft.COM (Jon Flower)
Message-Id: <9303172019.AA02454@lion.parasoft>
To: mpi-collcomm@cs.utk.edu


I'm definitely in favor of having the number of entries in the list
and the list as separate arguments. Although I don't think it's
confusing either way, as long as it's documented clearly, it's
very easy to make simple "off by one" errors if you put everything
together. 

I would like MPI to be consistent in having distinct arguments
for lists of things and the number of them.

	Jon

From owner-mpi-collcomm@CS.UTK.EDU  Wed Mar 17 15:31:37 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA24665; Wed, 17 Mar 93 15:31:37 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA08743; Wed, 17 Mar 93 15:30:50 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Wed, 17 Mar 1993 15:30:49 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from watson.ibm.com by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA08725; Wed, 17 Mar 93 15:30:47 -0500
Message-Id: <9303172030.AA08725@CS.UTK.EDU>
Received: from YKTVMV by watson.ibm.com (IBM VM SMTP V2R3) with BSMTP id 5629;
   Wed, 17 Mar 93 15:30:36 EST
Date: Wed, 17 Mar 93 15:25:59 EST
From: "Marc Snir" <snir@watson.ibm.com>
X-Addr: (914) 945-3204  (862-3204)
        28-226 IBM T.J. Watson Research Center
        P.O. Box 218 Yorktown Heights NY 10598
To: mpi-collcomm@cs.utk.edu
Subject: direction for shifts
Reply-To: SNIR@watson.ibm.com

The current definition of shift in the draft assumes no topology information
for the underlying group -- just an ordering of the processes in the group.
Thus, direction is not meaningful in this context.   A "grid shift"
operation that would act on grids (i.e., on topological objects, rather than
groups) could take advantage of this additional parameter.

I don't think it's a good idea to force each group object to be associated with
a topology.   More to come on this.
From owner-mpi-collcomm@CS.UTK.EDU  Wed Mar 17 17:52:04 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA27117; Wed, 17 Mar 93 17:52:04 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA18194; Wed, 17 Mar 93 17:51:27 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Wed, 17 Mar 1993 17:51:25 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA18183; Wed, 17 Mar 93 17:51:21 -0500
Received: from fermi.pnl.gov (130.20.182.50) by pnlg.pnl.gov; Wed, 17 Mar 93
 14:49 PST
Received: by fermi.pnl.gov (4.1/SMI-4.1) id AA22140; Wed, 17 Mar 93 14:47:28 PST
Date: Wed, 17 Mar 93 14:47:27 -0800
From: Robert J Harrison <d3g681@fermi.pnl.gov>
Subject: Re: New draft
To: mpi-collcomm@cs.utk.edu
Message-Id: <9303172247.AA22140@fermi.pnl.gov>
In-Reply-To: Your message of "Wed, 17 Mar 93 15:19:07 EST."
 <9303172019.AA16341@gstws.epm.ornl.gov>
X-Envelope-To: mpi-collcomm@cs.utk.edu

In message <9303172019.AA16341@gstws.epm.ornl.gov> you write:
> 

...

> 
> >I do not like to return via the "len" argument the difference of
> >the buffer length and the actual message length. It is common
> >practice to return the message length, and that's what most users
> >will expect.
> 
> We should discuss this in Dallas, I suspect most will favor your 
> "common practice" approach.

I certainly strongly endorse the common practice argument in this
instance.  Also, in FORTRAN, there is no special interpretation of
zero being equivalent to FALSE, as there is in C.

> 
> >In Marc's definition, a list of handles contains the number of
> >elements as the first entry. I would prefer an additional argument
> >over putting together the handles and their number into one vector.
> 
> What are the other subcommittee members' thoughts?

Certainly, in FORTRAN again, it would be much easier, and also far more
consistent with current practice, to manipulate them separately than
together.  It is possible to use an EQUIVALENCE to treat the
array reference as a true scalar variable, but this is generally
deprecated practice.

> 
> >to see the possibility of specifying different operations for
> >different elements of the input buffer
> >I don't see how this can be done, at least not in Fortran,
> 
> We should make sure the draft reads that the MPI_USER_REDUCE function
> specified by the user takes the buffer as an argument so that the
> user can manipulate the buffer any way he wishes before returning
> it to MPI_USER_REDUCE. PICL contains a routine like this so it is possible.

If this functionality of operating on different pieces of the data
vector with different functions is to be supported, it should not
compromise the possible efficiency of the simpler single function
operations.  By requiring that the entire vector be available before
the function operates we preclude some *major* optimizations (e.g.
pipelining, recursive splitting, ...) that can transform, for
example,  naive O(N log P) algorithms to effective O(N) algorithms
(for some N >> P).

I propose therefore that two interfaces be provided.  One that is
capable of functioning by applying a user supplied function on
   arbitrary (subject to item-size constraints) chunks of the vector.
Another, as described by Al, that is given the whole vector.
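[Editorial aside: the chunked interface can be sketched as below. This is
illustrative C with hypothetical names only; a real implementation would
interleave the chunks with communication to obtain the pipelining benefit
Robert describes, rather than loop over them locally.]

```c
#include <assert.h>

/* Signature assumed for a user-supplied combine function over a chunk. */
typedef void (*combine_fn)(const double *in1, const double *in2,
                           double *out, int nelems);

/* Apply the combine function chunk by chunk: because no chunk depends on
 * the whole vector being present, an implementation is free to pipeline
 * or recursively split the work. */
static void chunked_combine(const double *in1, const double *in2,
                            double *out, int n, int chunk, combine_fn fn)
{
    int i;
    for (i = 0; i < n; i += chunk) {
        int len = (n - i < chunk) ? n - i : chunk;
        fn(in1 + i, in2 + i, out + i, len);
    }
}

/* Example elementwise operation: addition. */
static void sum_fn(const double *a, const double *b, double *out, int n)
{
    int i;
    for (i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}
```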

I also propose that we further discuss whether MPI-1 should worry
about providing this second routine.

Robert.
From owner-mpi-collcomm@CS.UTK.EDU  Thu Mar 18 02:44:37 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA06810; Thu, 18 Mar 93 02:44:37 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA13098; Thu, 18 Mar 93 02:44:01 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Thu, 18 Mar 1993 02:44:00 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from gmdzi.gmd.de by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA13089; Thu, 18 Mar 93 02:43:57 -0500
Received: from f1neuman.gmd.de (f1neuman) by gmdzi.gmd.de with SMTP id AA08455
  (5.65c/IDA-1.4.4 for <mpi-collcomm@cs.utk.edu>); Thu, 18 Mar 1993 08:42:23 +0100
Received: by f1neuman.gmd.de id AA15304; Thu, 18 Mar 1993 08:43:49 GMT
Date: Thu, 18 Mar 1993 08:43:49 GMT
From: Rolf.Hempel@gmd.de
Message-Id: <9303180843.AA15304@f1neuman.gmd.de>
To: mpi-collcomm@cs.utk.edu
Subject: more on New Draft
Cc: gmap10@f1neuman.gmd.de


Just a few more thoughts about the new COLLCOMM proposal:

1. Commenting on my proposed change to MPI_SHIFT Marc asks:

   >>> Is this an argument for "topological shift" functions, that use
   >>> CSHIFT or EOSHIFT as appropriate, or is this an argument for
   >>> different group shift functions?

   Well, it depends on whether we can agree on a default topology for
   a group (this is the topology of a group which is not created by
   a topology definition function like MPI_CART). If we define this
   default to be a ring topology, then we need only one shift function.
   I think there is some reason for this approach. After all, the
   linear ordering of processes by rank (with wrap-around as in some
   examples we have seen) is nothing but a logical ring topology.

2. Yesterday I proposed that we should have a reduce function with
   the capability of applying different operations on different 
   elements of the buffer. Al's suggestion to define the operands
   of the MPI_USER_REDUCE function as being blocks instead of single
   elements seems to resolve my problem. Bob Harrison then said that
   both versions should be available for the sake of efficient
   implementations. Perhaps the algorithms he mentioned (pipelining,
   recursive splitting,...) could work on the level of blocks in a
   vector type buffer. Would this work as a compromise?

Rolf Hempel
From owner-mpi-collcomm@CS.UTK.EDU  Thu Mar 18 08:26:19 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA10327; Thu, 18 Mar 93 08:26:19 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA04137; Thu, 18 Mar 93 08:25:33 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Thu, 18 Mar 1993 08:25:31 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA04106; Thu, 18 Mar 93 08:25:24 -0500
Date: Thu, 18 Mar 93 13:25:15 GMT
Message-Id: <9144.9303181325@subnode.epcc.ed.ac.uk>
From: L J Clarke <lyndon@epcc.ed.ac.uk>
Subject: document of March 16
To: mpi-collcomm@cs.utk.edu
Reply-To: lyndon@epcc.ed.ac.uk

Hi all

I just got back from approximately one week of leave and customer visits,
and what a lovely load of email I find!

I have just read the document of March 16, "Collective Communication",
of Al and Marc.  On first skim-read, this looks generally great,
although I do have a couple of problems with it. 

I suggest that certain sections be deleted from this document, as they
do not appear to be within the remit of the collective communication
subcommittee. The material is:

a) Section 1.2 from "MPI_COPY_CONTEXT(" to the end of section 1.2, as
this would appear to be within the remit of the context subcommittee. 

Comments? Flames??

Best Wishes
Lyndon

         /--------------------------------------------------------\
    e||) | Lyndon J Clarke    Edinburgh Parallel Computing Centre | e||) 
    c||c | Tel: 031 650 5021  Email: lyndon@epcc.edinburgh.ac.uk  | c||c 
         \--------------------------------------------------------/


From owner-mpi-collcomm@CS.UTK.EDU  Thu Mar 18 08:56:56 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA10924; Thu, 18 Mar 93 08:56:56 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA05271; Thu, 18 Mar 93 08:54:32 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Thu, 18 Mar 1993 08:54:31 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from watson.ibm.com by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA05262; Thu, 18 Mar 93 08:54:29 -0500
Message-Id: <9303181354.AA05262@CS.UTK.EDU>
Received: from YKTVMV by watson.ibm.com (IBM VM SMTP V2R3) with BSMTP id 1801;
   Thu, 18 Mar 93 08:54:30 EST
Date: Thu, 18 Mar 93 08:51:17 EST
From: "Marc Snir" <snir@watson.ibm.com>
X-Addr: (914) 945-3204  (862-3204)
        28-226 IBM T.J. Watson Research Center
        P.O. Box 218 Yorktown Heights NY 10598
To: mpi-collcomm@cs.utk.edu
Reply-To: SNIR@watson.ibm.com
Subject: reduce

*************** Referenced Note ***************

Received: from CS.UTK.EDU by watson.ibm.com (IBM VM SMTP V2R3) with TCP;
   Thu, 18 Mar 93 02:47:11 EST
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA13098; Thu, 18 Mar 93 02:44:01 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Thu, 18 Mar 1993 02:44:00 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from gmdzi.gmd.de by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA13089; Thu, 18 Mar 93 02:43:57 -0500
Received: from f1neuman.gmd.de (f1neuman) by gmdzi.gmd.de with SMTP id AA08455
  (5.65c/IDA-1.4.4 for <mpi-collcomm@cs.utk.edu>); Thu, 18 Mar 1993 08:42:23 +0100
Received: by f1neuman.gmd.de id AA15304; Thu, 18 Mar 1993 08:43:49 GMT
Date: Thu, 18 Mar 1993 08:43:49 GMT
From: Rolf.Hempel@gmd.de
Message-Id: <9303180843.AA15304@f1neuman.gmd.de>
To: mpi-collcomm@cs.utk.edu
Subject: more on New Draft
Cc: gmap10@f1neuman.gmd.de


Just a few more thoughts about the new COLLCOMM proposal:

1. Commenting on my proposed change to MPI_SHIFT Marc asks:

   >>> Is this an argument for "topological shift" functions, that use
   >>> CSHIFT or EOSHIFT as appropriate, or is this an argument for
   >>> different group shift functions?

   Well, it depends on whether we can agree on a default topology for
   a group (this is the topology of a group which is not created by
   a topology definition function like MPI_CART). If we define this
   default to be a ring topology, then we need only one shift function.
   I think there is some reason for this approach. After all, the
   linear ordering of processes by rank (with wrap-around as in some
   examples we have seen) is nothing but a logical ring topology.

*** I think this discussion has to be postponed until we resolve the
*** question of the status of topologies in MPI.


2. Yesterday I proposed that we should have a reduce function with
   the capability of applying different operations on different
   elements of the buffer. Al's suggestion to define the operands
   of the MPI_USER_REDUCE function as being blocks instead of single
   elements seems to resolve my problem. Bob Harrison then said that
   both versions should be available for the sake of efficient
   implementations. Perhaps the algorithms he mentioned (pipelining,
   recursive splitting,...) could work on the level of blocks in a
   vector type buffer. Would this work as a compromise?

*** The proposal, I assume, is to have two user defined reduce functions:
*** one is elemental, and applies to each element in the input buffer;
*** the other applies to the entire input buffer, as one argument.



Rolf Hempel

*** Marc


From owner-mpi-collcomm@CS.UTK.EDU  Thu Mar 18 09:09:45 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA11145; Thu, 18 Mar 93 09:09:45 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA05806; Thu, 18 Mar 93 09:08:55 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Thu, 18 Mar 1993 09:08:54 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from sampson.ccsf.caltech.edu by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA05798; Thu, 18 Mar 93 09:08:51 -0500
Received: from elephant (elephant.parasoft.com) by sampson.ccsf.caltech.edu with SMTP id AA17821
  (5.65c/IDA-1.4.4 for mpi-collcomm@cs.utk.edu); Thu, 18 Mar 1993 06:08:49 -0800
Received: from lion.parasoft by elephant (4.1/SMI-4.1)
	id AA05586; Thu, 18 Mar 93 06:00:54 PST
Received: by lion.parasoft (4.1/SMI-4.1)
	id AA02012; Thu, 18 Mar 93 06:01:37 PST
Date: Thu, 18 Mar 93 06:01:37 PST
From: jwf@lion.Parasoft.COM (Jon Flower)
Message-Id: <9303181401.AA02012@lion.parasoft>
To: mpi-collcomm@cs.utk.edu
Subject: Default topology


I agree with Rolf. Whether we like it or not, there is
definitely a default topology associated with every group
of nodes. In fact it's the one that most people take advantage
of in their programs whenever they make use of processor
numbers that are merely ranks in this group.

I think it would be eminently sensible to take advantage
of this default topology and endow it with whatever
properties all the other topologies will have. That way
we can have opaque node identifiers and still easy
access to rank information without inventing yet another
set of routines.

I think the shift function (and its partner, exchange)
are extremely useful.

I also agree with the comments about the user level REDUCE 
operation. We have implemented the "Blocks" approach in
Express and it seems to work fine. Of course the user
can cause it to break by having extremely large blocks
but that's always going to be true.

	Jon
From owner-mpi-collcomm@CS.UTK.EDU  Thu Mar 18 09:43:46 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA11672; Thu, 18 Mar 93 09:43:46 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA07311; Thu, 18 Mar 93 09:43:18 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Thu, 18 Mar 1993 09:43:17 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from marge.meiko.com by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA07302; Thu, 18 Mar 93 09:43:14 -0500
Received: from hub.meiko.co.uk by marge.meiko.com with SMTP id AA13129
  (5.65c/IDA-1.4.4 for <mpi-collcomm@cs.utk.edu>); Thu, 18 Mar 1993 09:43:10 -0500
Received: from float.co.uk (float.meiko.co.uk) by hub.meiko.co.uk (4.1/SMI-4.1)
	id AA11776; Thu, 18 Mar 93 14:43:06 GMT
Date: Thu, 18 Mar 93 14:43:06 GMT
From: jim@meiko.co.uk (James Cownie)
Message-Id: <9303181443.AA11776@hub.meiko.co.uk>
Received: by float.co.uk (5.0/SMI-SVR4)
	id AA01402; Thu, 18 Mar 93 14:39:55 GMT
To: mpi-collcomm@cs.utk.edu
In-Reply-To: Jon Flower's message of Thu, 18 Mar 93 06:01:37 PST <9303181401.AA02012@lion.parasoft>
Subject: User reduction functions
Content-Length: 1422

From a performance point of view it is important to allow the
vectorisation of these calls, especially if they need a whole
transition into user space (as may be the case in some
implementations). [See Nessett's work on data conversion for the
effects of removing similarly cheap subroutine calls from the loop].

I would therefore suggest that the user function operations should
look like this.

MPI_USER_XXX(inbuf,outbuf,tag,group,function,BLOCKSIZE)

where BLOCKSIZE must be a factor of the length of inbuf, and is the
smallest chunk which will be passed to the function. [This lets the
implementation split the buffer if that is beneficial, while allowing
the user to ensure that suitable contiguous chunks are kept together
if that is a requirement, say because the buffer is really an array of
structures. ]

The user function should ALWAYS look something like

void reduceFunction(inbuf1, inbuf2, outbuf, nelems)
{
   register int i;

   for (i=0; i < nelems; i++)
     outbuf[i] = inbuf1[i] OP inbuf2[i];
}	

Questions :--
1) Should nelems be passed in, or can the user function obtain this
   from the buffer descriptors (cheaply!)?

-- Jim
James Cownie 
Meiko Limited			Meiko Inc.
650 Aztec West			Reservoir Place
Bristol BS12 4SD		1601 Trapelo Road
England				Waltham
				MA 02154

Phone : +44 454 616171		+1 617 890 7676
FAX   : +44 454 618188		+1 617 890 5042
E-Mail: jim@meiko.co.uk   or    jim@meiko.com

From owner-mpi-collcomm@CS.UTK.EDU  Thu Mar 18 11:56:59 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA14768; Thu, 18 Mar 93 11:56:59 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA13527; Thu, 18 Mar 93 11:55:38 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Thu, 18 Mar 1993 11:55:36 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA13498; Thu, 18 Mar 93 11:55:30 -0500
Received: from fermi.pnl.gov (130.20.182.50) by pnlg.pnl.gov; Thu, 18 Mar 93
 08:54 PST
Received: by fermi.pnl.gov (4.1/SMI-4.1) id AA23964; Thu, 18 Mar 93 08:53:04 PST
Date: Thu, 18 Mar 93 08:53:03 -0800
From: Robert J Harrison <d3g681@fermi.pnl.gov>
Subject: Re: User reduction functions
To: mpi-collcomm@cs.utk.edu
Message-Id: <9303181653.AA23964@fermi.pnl.gov>
In-Reply-To: Your message of "Thu, 18 Mar 93 14:43:06 GMT."
 <9303181443.AA11776@hub.meiko.co.uk>
X-Envelope-To: mpi-collcomm@cs.utk.edu

In message <9303181443.AA11776@hub.meiko.co.uk> you write:

Cf. the previous discussion about supporting multiple functions
     to operate on disjoint sections: Jim's syntax might be slightly
     adjusted to include an additional argument, base, the offset
     of this array chunk in the entire vector.
> 
> The user function should ALWAYS look something like
> 
> void reduceFunction(inbuf1, inbuf2, outbuf, nelems)

  void reduceFunction(inbuf1, inbuf2, outbuf, nelems, base)

> {
>    register int i;
> 
>    for (i=0; i < nelems; i++)
>      outbuf[i] = inbuf1[i] OP inbuf2[i];

       outbuf[i] = inbuf1[i] OP[i+base] inbuf2[i];

> }	

OP[i+base] could of course be independent of its argument.

> 
> Questions :--
> 1) Should nelems be passed in, or can the user function obtain this
>    from the buffer descriptors (cheaply !)

I would recommend that all required information be passed directly
in as arguments.  Since FORTRAN does not typically support macros
or inline functions in as clean a fashion as C or C++, there is
little chance to optimize away any subroutine call overhead.

Robert.


Robert J. Harrison

Mail Stop K1-90                             tel: 509-375-2037
Battelle Pacific Northwest Laboratory       fax: 509-375-6631
P.O. Box 999, Richland WA 99352          E-mail: rj_harrison@pnl.gov

From owner-mpi-collcomm@CS.UTK.EDU  Mon Mar 22 13:28:31 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA28628; Mon, 22 Mar 93 13:28:31 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA19822; Mon, 22 Mar 93 13:27:26 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Mon, 22 Mar 1993 13:27:25 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA19813; Mon, 22 Mar 93 13:27:23 -0500
Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Mon, 22 Mar 93
 10:23 PST
Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA06637; Mon,
 22 Mar 93 10:22:02 PST
Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA16699; Mon, 22 Mar 93 10:21:58
 PST
Date: Mon, 22 Mar 93 10:21:58 PST
From: rj_littlefield@pnlg.pnl.gov
Subject: inbuf == outbuf
To: geist@gstws.epm.ornl.gov, mpi-collcomm@cs.utk.edu
Cc: d39135@carbon.pnl.gov
Message-Id: <9303221821.AA16699@sodium.pnl.gov>
X-Envelope-To: mpi-collcomm@cs.utk.edu

The March 16 collective communication draft asks:

> Do we want to support the case {\tt inbuf = outbuf} somehow?

Yes -- this is important to some of our applications.

If inbuf=outbuf is not permitted, then these applications
have to copy data explicitly and allocate extra storage.
The extra storage may also have to be smaller than the
buffer that the application would otherwise handle, due to
hard limits on process memory.

The resulting application code is certainly longer and
probably slower than it would be with inbuf=outbuf.

However, I believe it would be a mistake to allow arbitrary
overlap, due to the difficulty of writing correct code,
particularly in user reduction routines.  It would be OK if
inbuf and outbuf had to be either disjoint or coincident.
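[Editorial aside: the disjoint-or-coincident rule is also cheap to check.
A hypothetical C sketch, not taken from any draft:]

```c
#include <assert.h>
#include <stddef.h>

/* Accept buffers that are either exactly coincident (in-place operation)
 * or fully disjoint; reject the partial overlaps that make correct user
 * reduction code hard to write.  Assumes both pointers address the same
 * underlying allocation when they overlap, so the comparisons are valid. */
static int buffers_legal(const char *inbuf, const char *outbuf, size_t len)
{
    if (inbuf == outbuf)
        return 1;                              /* coincident: inbuf = outbuf */
    if (inbuf + len <= outbuf || outbuf + len <= inbuf)
        return 1;                              /* fully disjoint             */
    return 0;                                  /* partial overlap: rejected  */
}
```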

--Rik

----------------------------------------------------------------------
rj_littlefield@pnl.gov (alias 'd39135')   Rik Littlefield
Tel: 509-375-3927                         Pacific Northwest Lab, MS K1-87
Fax: 509-375-6631                         P.O.Box 999, Richland, WA  99352
From owner-mpi-collcomm@CS.UTK.EDU  Tue Mar 23 14:27:00 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA25676; Tue, 23 Mar 93 14:27:00 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA25694; Tue, 23 Mar 93 14:26:33 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Tue, 23 Mar 1993 14:26:32 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from almaden.ibm.com by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA25686; Tue, 23 Mar 93 14:26:30 -0500
Message-Id: <9303231926.AA25686@CS.UTK.EDU>
Received: from almaden.ibm.com by almaden.ibm.com (IBM VM SMTP V2R2)
   with BSMTP id 4942; Tue, 23 Mar 93 11:26:55 PST
Date: Tue, 23 Mar 93 11:24:18 PST
From: "Ching-Tien (Howard) Ho" <ho@almaden.ibm.com>
To: mpi-collcomm@cs.utk.edu
Subject: No tag for a CC routine?

Hi,
  I would like to revisit an old issue regarding the Collective Communication (CC)
proposal to MPI.

Do we really need a user-supplied tag (aka type) in a CC call?

I know most of you believe a tag is needed, and I am ready to get a lot of
objections.  I remember the issue was raised in the first MPI meeting but not
really resolved.  I didn't attend the 2nd one and don't know if that was
discussed.  FYI, the original design of Venus, by Bala
and Kipnis, took a tag as well.  However, the
newer version of Venus also removed a tag from a call to CC.
In the Collective Communication Library (CCL)
which is part of the External User Interface (EUI) of IBM's Scalable Parallel
Systems, we decided not to take "tags" for CC routines after various
discussions.  (See some supporting arguments below.  The receive-by-source
methodology is described in another part of a forthcoming paper of ours.)

========================================================================
\subsection{No Tags for CCL Routines}

Certain communication libraries
require the user to supply a user tag to each CCL call.  There
are a few disadvantages to this approach.  Consider two typical cases
for the semantics of this user's tag.  One is that the user needs to
guarantee that the tag is uniquely matched within the given group
instance and cannot be matched with any other group instances existing
at the same time, for all possible program runs.  The other case is
that the user only guarantees that the tag is matched within the given
group instance, but may not be unique at a given time.

Consider the semantics of the first case for the user's tag.  Although
this helps to simplify the implementation, it is inconvenient and
tedious, if not impossible, for the user to guarantee such a property of
the tag, given that the receive-by-source methodology used in
the implementation of CCL solves the
matching problem gracefully.  For the second case, it means that the
implementation cannot use the user's tag alone in selecting the
expected incoming message, as confusion may occur.  In fact, using
(gid, tag) to select an incoming message still cannot guarantee
correctness.  Thus, the receive-by-source implementation is still
needed to guarantee correctness.  In other words, the tag is
really a redundant field which does not add more functionality and
cannot substitute the receive-by-source
implementation.  Furthermore, the semantics of tag here are not
consistent with those in the context of point-to-point communication.
Specifically, the
tag here is used for matching a message from a given source.  (That is, if
the tag of the expected message does not match with the tag of the
first message from the specified source, CCL returns an error
flag.)  In contrast, the tag in point-to-point routines is used in
selecting a message from a given source.  (That is, if the tag of the
expected message does not match with the tag of the first message from
the specified source, the receive call simply blocks until there is
one that matches.)  In summary, a tag is a redundant argument to the CCL
routines: it is not required for a correct implementation of CCL, and it may
confuse users.  The only advantage of
having a tag field in collective communication routines is that, in
an incorrect program where the collective communication routines
are mismatched, one may be able to locate the mismatch
earlier than otherwise (assuming the user does not introduce
tag-mismatch errors).

===========================================================================

Any comments?

-- Howard







From owner-mpi-collcomm@CS.UTK.EDU  Tue Mar 23 14:38:00 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA25900; Tue, 23 Mar 93 14:38:00 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA26157; Tue, 23 Mar 93 14:37:32 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Tue, 23 Mar 1993 14:37:31 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA26148; Tue, 23 Mar 93 14:37:27 -0500
Date: Tue, 23 Mar 93 19:37:21 GMT
Message-Id: <16027.9303231937@subnode.epcc.ed.ac.uk>
From: L J Clarke <lyndon@epcc.ed.ac.uk>
Subject: Re: No tag for a CC routine?
To: "Ching-Tien (Howard) Ho" <ho@almaden.ibm.com>, mpi-collcomm@cs.utk.edu
In-Reply-To: Howard's message of Tue, 23 Mar 93 11:24:18 PST
Reply-To: lyndon@epcc.ed.ac.uk

> Hi,
>   I like to revisit an old issue regarding the Collective Communication (CC)
> proposal to MPI.

I support the specification of collective communications without use of a
message tag. I just cannot see that it is needed there.

Best Wishes
Lyndon

         /--------------------------------------------------------\
    e||) | Lyndon J Clarke    Edinburgh Parallel Computing Centre | e||) 
    c||c | Tel: 031 650 5021  Email: lyndon@epcc.edinburgh.ac.uk  | c||c 
         \--------------------------------------------------------/


From owner-mpi-collcomm@CS.UTK.EDU  Wed Mar 24 00:12:23 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA04586; Wed, 24 Mar 93 00:12:23 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA19034; Wed, 24 Mar 93 00:11:46 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Wed, 24 Mar 1993 00:11:44 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from watson.ibm.com by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA19026; Wed, 24 Mar 93 00:11:41 -0500
Message-Id: <9303240511.AA19026@CS.UTK.EDU>
Received: from YKTVMV by watson.ibm.com (IBM VM SMTP V2R3) with BSMTP id 2163;
   Wed, 24 Mar 93 00:11:41 EST
Date: Wed, 24 Mar 93 00:11:40 EST
From: "Marc Snir" <snir@watson.ibm.com>
To: MPI-COLLCOMM@CS.UTK.EDU

\documentstyle[12pt]{article}


\newcommand{\discuss}[1]{
\ \\ \ \\ {\small {\bf Discussion:} #1} \ \\ \ \\
}

\newcommand{\missing}[1]{
\ \\ \ \\ {\small {\bf Missing:} #1} \\ \ \\
}

\begin{document}

\title{ Collective Communication}


\author{Al Geist \\ Marc Snir}
\maketitle

\section{Collective Communication}
\subsection{Introduction}

This section is a draft of the current proposal for collective communication.
Collective communication is defined to be communication that involves
a group of processes.  Examples are broadcast and global sum.
A collective operation is executed by having all processes in the group call the
communication routine, with matching parameters.
Routines can (but are not required to) return as soon as their
participation in the collective communication is complete.  Completion
of a call indicates that the caller is now free to access the locations in the
communication buffer, or any other location that can be referenced by the
collective operation.  It does not indicate that other processes in
the group have started the operation (unless otherwise indicated in the
description of the operation).  However, the successful completion of
a collective communication call may depend on the execution of a matching call
at all processes in the group.

The syntax and semantics of the collective operations are
defined so as to be consistent with the syntax and semantics of the point to
point operations.

The reader is referred to the point-to-point communication section of the current
MPI draft for information concerning groups (aka contexts) and group formation
operations, and for general information on types of objects used by the MPI
library.

The collective communication routines are built above the point-to-point
routines.  While vendors may optimize certain collective routines for
their architectures, a complete library of the collective communication
routines can be written entirely using point-to-point communication
functions.  We are using naive implementations of the collective calls in terms
of point to point operations in order to provide an operational definition of
their semantics.

The following communication functions are proposed.
\begin{itemize}
\item
Broadcast from one member to all members of a group.
\item
Barrier across all group members
\item
Gather data from all group members to one member.
\item
Scatter data from one member to all members of a group.
\item
Global operations such as sum, max, min, etc., where the result
is known by all group members, and a variation where the result is
known by only one member; also the ability to have user defined
global operations.
\item
Simultaneous shift of data around the group, the simplest example
being all members sending their data to (rank+1) with wrap around.
\item
Scan across all members of a group (also called parallel prefix).
\item
Broadcast from all members to all members of a group.
\item
Scatter data from all members to all members of a group
(also called complete exchange or index).
\end{itemize}

To simplify the collective communication interface, it is
designed with two layers. The low level routines have all the
generality of, and make use of, the buffer descriptor routines
of the point-to-point section, which allow arbitrarily complex
messages to be constructed. The second level routines are
similar to the upper level point-to-point routines in that they send
only a contiguous buffer.

\missing {

The current draft does not include the nonblocking collective communication
calls that were discussed at the last meeting.
}

\discuss{

The current proposal assumes that a group carries no ``topology''
information; it is just an ordered set of processes.
}

\subsection{Group Functions}

The point to point document discusses the use of groups (aka contexts), and
describes the operations available for the creation and manipulation of
groups and group objects. For the sake of completeness, we list
them anew here.


{\bf \ \\ MPI\_CREATE(handle, type, persistence)} \\
Create new opaque object
\begin{description}
\item[OUT handle] handle to object
\item[IN type] state value that identifies the type of object to be created
\item[IN persistence] state value; either {\tt MPI\_PERSISTENT} or {\tt
MPI\_EPHEMERAL}.
\end{description}

{\bf \ \\ MPI\_FREE(handle)} \\
Destroy object associated with handle.
\begin{description}
\item[IN handle] handle to object
\end{description}


{\bf \ \\ MPI\_ASSOCIATED(handle, type)}  \\
Returns the type of the object the handle is currently associated with, if
such exists.  Returns the special type {\tt MPI\_NULL} if the handle is
not currently associated with any object.
\begin{description}
\item[IN handle] handle to object
\item[OUT type] state
\end{description}


{\bf \ \\ MPI\_COPY\_CONTEXT(newcontext, context)}  \\

Create a new context that includes all processes in the old context.
The rank of the processes in the previous context is preserved.  The call must
be executed by all processes in the old context.  It is a blocking call:  No
call returns until all processes have called the function.
\begin{description}
\item[OUT newcontext]  handle to newly created context.  The handle should not
be associated with an object before the call.
\item[IN context] handle to old context
\end{description}

{\bf \ \\ MPI\_NEW\_CONTEXT(newcontext, context, key, index)} \\
A new context is created for
each distinct value of {\tt key}; this context is shared by all processes that
made the call with this key value.  Within each new context the processes are
ranked according to the order of the {\tt index} values they provided; in case
of ties, processes are ranked according to their rank in the old context.
This call is blocking:  No call returns until all processes in the old context
executed the call.
\begin{description}
\item[OUT newcontext] handle to newly created context at calling process.   This
handle should not be associated with an object before the call.
\item[IN context] handle to old context
\item[IN key] integer
\item[IN index] integer
\end{description}
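As an illustration (not part of the proposal), consider splitting a 12-process
context into the rows of a $3 \times 4$ process grid; the grid shape is an
invented example.  Each process could derive its {\tt key} and {\tt index}
arguments from its old rank as in the following C sketch:
\begin{verbatim}
#include <stdio.h>

int main(void)
{
   int size = 12, ncols = 4;          /* invented 3x4 grid            */
   for (int rank = 0; rank < size; rank++)
   {
      int key   = rank / ncols;       /* row: one new context per row */
      int index = rank % ncols;       /* rank within the row context  */
      printf("old rank %2d -> key %d, index %d\n", rank, key, index);
   }
   return 0;
}
\end{verbatim}
Processes supplying the same {\tt key} end up in the same new context,
ranked by {\tt index}.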

{\bf \ \\ MPI\_RANK(rank, context)} \\
Return the rank of the calling process within the specified context.
\begin{description}
\item[OUT rank] integer
\item[IN context] context handle
\end{description}


{\bf \ \\ MPI\_SIZE(size, context)} \\
Return the number of processes that belong to the specified context.
\begin{description}
\item[OUT size] integer
\item[IN context] context handle
\end{description}

\paragraph*{Extensions}
Possible extensions:

{\bf \ \\ MPI\_CREATE\_CONTEXT(newcontext, oldcontext,
list\_of\_ranks)} \\
creates a new context out of an explicit list of members
and ranks them in their order of occurrence in the list.
\begin{description}
\item[OUT newcontext] handle to newly created context.  Handle should not
be associated with an object before the call.
\item[IN oldcontext] handle to previous context.
\item[IN list\_of\_ranks]
List of the ranks, in the old group, of the
processes to be included in the new group.
\end{description}

The function is called by all processes in the list, and all
supply the same parameters.


{\bf \ \\ MPI\_EXTEND\_CONTEXT(context, number)} \\
Add processes to an existing context.  The new processes are ranked above
the old context members.
\begin{description}
\item[INOUT context] handle to context object
\item[IN number] number of additional processes (integer)

\end{description}
\subsection{Communication Functions}

The proposed communication functions are divided into two layers.
The lowest level uses the same buffer descriptor objects
available in point-to-point to create noncontiguous, multiple data type
messages. The second level is similar to the block send/receive
point-to-point operations in that it supports only contiguous buffers of
arithmetic storage units.   For each communication operation, we list these two
level of calls together.


\subsubsection{Synchronization}

\paragraph*{Barrier synchronization}

{\bf \ \\ MPI\_BARRIER( group, tag )} \\

MPI\_BARRIER blocks the calling process until all group members have called
it; the call returns at any process only after all group members have
entered the call.
\begin{description}
\item[IN group] group handle
\item[IN tag] communication tag (integer)
\end{description}

{\tt \ \\ MPI\_BARRIER( group, tag )}  \\ is
\begin{verbatim}
MPI_CREATE(buffer_handle, MPI_BUFFER, MPI_PERSISTENT);
MPI_SIZE( &size, group);
MPI_RANK( &rank, group);
if (rank==0)
{
   for (i=1; i < size; i++)
      MPI_RECV(buffer_handle, i, tag, group);
   for (i=1; i < size; i++)
      MPI_SEND(buffer_handle, i, tag, group);
}
else
{
   MPI_SEND(buffer_handle, 0, tag, group);
   MPI_RECV(buffer_handle, 0, tag, group);
}
MPI_FREE(buffer_handle);
\end{verbatim}

\subsubsection{Data move functions}

\paragraph*{Circular shift}

{\bf \ \\ MPI\_CSHIFT( inbuf, outbuf, tag, group, shift)} \\

Process with rank {\tt i} sends the data in its input buffer to the
process with rank $\tt (i+ shift) \bmod  group\_size$, which receives the
data in its output buffer. All processes make the call with the same values for
{\tt tag, group}, and {\tt shift}.  The {\tt shift} value can be positive, zero,
or negative.

\begin{description}
\item[IN inbuf] handle to input buffer descriptor
\item[OUT outbuf] handle to output buffer descriptor
\item[IN tag] operation tag (integer)
\item[IN group] handle to group
\item[IN shift] integer
\end{description}


{\bf \ \\ MPI\_CSHIFTB( inbuf, outbuf, len, tag, group, shift)} \\

Behaves like {\tt MPI\_CSHIFT}, with buffers restricted to be blocks of
numeric units.
All processes make the call with the same values for
{\tt len, tag, group}, and {\tt shift}.
\begin{description}
\item[IN inbuf] initial location of input buffer
\item[OUT outbuf] initial location of output buffer
\item[IN len] number of entries in input (and output) buffers
\item[IN tag] operation tag (integer)
\item[IN group] handle to group
\item[IN shift] integer
\end{description}


{\tt \ \\ MPI\_CSHIFT( inbuf, outbuf, tag, group, shift)} \\ is
\begin{verbatim}
MPI_SIZE( &size, group);
MPI_RANK( &rank, group);
MPI_ISEND( handle, inbuf, mod(rank+shift, size), tag, group);
MPI_RECV( outbuf, mod(rank-shift,size), tag, group);
MPI_WAIT(handle);
\end{verbatim}
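The operational definition above assumes a {\tt mod} that always returns a
value in $[0, size)$.  In C the {\tt \%} operator may return a negative
result when {\tt shift} is negative, so an implementation sketch (names
invented) would need a wrapping helper:
\begin{verbatim}
#include <assert.h>

/* map rank+shift into [0, size), even for negative shifts */
static int wrap(int i, int size)
{
   int r = i % size;
   return (r < 0) ? r + size : r;
}

int main(void)
{
   assert(wrap(5 + 2, 8) == 7);   /* ordinary positive shift */
   assert(wrap(0 - 3, 8) == 5);   /* negative shift wraps    */
   assert(wrap(7 + 1, 8) == 0);   /* wrap past the top       */
   return 0;
}
\end{verbatim}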

\discuss{
Do we want to support the case {\tt inbuf = outbuf} somehow?
}

\paragraph*{End-off shift}

{\bf \ \\ MPI\_EOSHIFT( inbuf, outbuf, tag, group, shift)} \\

Process with rank {\tt i}, $\tt \max( 0, -shift) \le i < \min( size, size -
shift)$, sends the data
in its input buffer to the process with rank {\tt i+ shift}, which receives the
data in its output buffer.   The output buffer of processes that do not receive
data is left unchanged.   All processes
make the call with the same values for {\tt tag, group}, and {\tt shift}.

\begin{description}
\item[IN inbuf] handle to input buffer descriptor
\item[OUT outbuf] handle to output buffer descriptor
\item[IN tag] operation tag (integer)
\item[IN group] handle to group
\item[IN shift] integer
\end{description}
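To make the participation range concrete, the following C sketch (an
illustration, not part of the proposal) checks whether rank {\tt i} sends a
message in an end-off shift, per the bound
$\max(0, -shift) \le i < \min(size, size - shift)$ above:
\begin{verbatim}
#include <assert.h>

/* does rank i send in an end-off shift of the given size/shift? */
static int sends(int i, int size, int shift)
{
   int lo = (-shift > 0) ? -shift : 0;
   int hi = (size - shift < size) ? size - shift : size;
   return (lo <= i) && (i < hi);
}

int main(void)
{
   /* size 8, shift -2: ranks 2..7 send (to 0..5) */
   assert(!sends(1, 8, -2) && sends(2, 8, -2) && sends(7, 8, -2));
   /* size 8, shift  3: ranks 0..4 send (to 3..7) */
   assert(sends(0, 8, 3) && sends(4, 8, 3) && !sends(5, 8, 3));
   return 0;
}
\end{verbatim}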


{\bf \ \\ MPI\_EOSHIFTB( inbuf, outbuf, len, tag, group, shift)} \\

Behaves like {\tt MPI\_EOSHIFT}, with buffers restricted to be blocks of
numeric units.
All processes make the call with the same values for
{\tt len, tag, group}, and {\tt shift}.
\begin{description}
\item[IN inbuf] initial location of input buffer
\item[OUT outbuf] initial location of output buffer
\item[IN len] number of entries in input (and output) buffers
\item[IN tag] operation tag (integer)
\item[IN group] handle to group
\item[IN shift] integer
\end{description}

\discuss{

Two other possible definitions for end-off shift: (i) zero filling for processes
that don't receive messages, or (ii) boundary values explicitly provided as an
additional parameter.  Any preferences?
(Fortran 90 allows boundary values to be provided optionally, and does zero
filling if none are provided.)

}

\paragraph*{Broadcast}

{\bf \ \\  MPI\_BCAST( buffer\_handle, tag, group, root )} \\

{\tt MPI\_BCAST} broadcasts a message from the process with rank {\tt root} to
all other processes
of the group. It is called by all members of the group using the same arguments
for {\tt tag, group}, and {\tt root}.
On return, the contents of the buffer of the process with rank {\tt root}
is contained in the buffer of all group members.
\begin{description}
\item[INOUT buffer\_handle]  Handle for the buffer from which the message is
sent or in which it is received.
\item[IN tag] tag of communication operation (integer)
\item[IN group] context of communication (handle)
\item[IN root] rank of broadcast root (integer)
\end{description}


{\bf \ \\  MPI\_BCASTB( buf, len, tag, group, root )} \\

{\tt MPI\_BCASTB} behaves like broadcast, restricted to a block buffer.
It is called by all processes with the same arguments for {\tt len, tag, group}
and {\tt root}.
\begin{description}
\item[INOUT buffer]  Starting address of buffer (choice type)
\item[IN len] Number of words in buffer (integer)
\item[IN tag] tag of communication operation (integer)
\item[IN group] context of communication (handle)
\item[IN root] rank of broadcast root (integer)
\end{description}


{\tt \ \\  MPI\_BCAST( buffer\_handle, tag, group, root )} \\
is
\begin{verbatim}
MPI_SIZE( &size, group);
MPI_RANK( &rank, group);
MPI_IRECV(handle, buffer_handle, root, tag, group);
if (rank==root)
   for (i=0; i < size; i++)
      MPI_SEND(buffer_handle, i, tag, group);
MPI_WAIT(handle);
\end{verbatim}
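The linear loop above is only an operational definition; a vendor
implementation might, for example, broadcast along a binomial tree in
$\lceil \log_2 size \rceil$ rounds.  The following C sketch (illustrative
only, root taken as rank 0) computes each process's parent in such a tree:
\begin{verbatim}
#include <assert.h>

/* parent of `rank' in a binomial broadcast tree rooted at 0;
   returns -1 for the root itself */
static int bcast_parent(int rank, int size)
{
   int mask;
   for (mask = 1; mask < size; mask <<= 1)
      if (rank & mask)
         return rank - mask;
   return -1;
}

int main(void)
{
   assert(bcast_parent(0, 8) == -1);   /* root            */
   assert(bcast_parent(4, 8) ==  0);   /* 0 sends to 4    */
   assert(bcast_parent(6, 8) ==  4);   /* 4 sends to 6    */
   assert(bcast_parent(5, 8) ==  4);   /* 4 sends to 5    */
   return 0;
}
\end{verbatim}
After receiving, a process forwards to ranks {\tt rank + mask} for decreasing
{\tt mask}, so the broadcast takes logarithmically many steps instead of
{\tt size} sends from the root.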

\paragraph*{Gather}

{\bf \ \\ MPI\_GATHER( inbuf, outbuf, tag, group, root, len) } \\

Each process (including the root process) sends the content of its input
buffer to the root process.  The root process concatenates all the
incoming messages in the order of the senders' rank and places the
results in its output buffer.
It is called by all members of group using the same arguments for
{\tt tag, group}, and {\tt root}.   The input buffer of each process may have
a different length.
\begin{description}
\item[IN inbuf] handle to input buffer descriptor
\item[OUT outbuf] handle to output buffer descriptor -- significant only at root
(choice)
\item[IN tag] operation tag (integer)
\item[IN group] group handle
\item[IN root] rank of receiving process (integer)
\item[OUT len] difference between output buffer size (in bytes) and
number of bytes received.
\end{description}

\discuss{

It would be more elegant (but no more convenient) to have a return status
object.

If we follow ``accepted practice'' we shall return number of bytes
received.   The choice here and in subsequent similar functions
should be consistent with similar choice for point to point routines.

}

{\bf \ \\ MPI\_GATHERB( inbuf, inlen, outbuf, tag, group, root) } \\

{\tt MPI\_GATHERB} behaves like {\tt MPI\_GATHER} restricted to block
buffers, and with the additional restriction that all input buffers should
have the same length.   All processes should provide the same values for
{\tt inlen, tag, group}, and {\tt root}.
\begin{description}
\item[IN inbuf] first variable of input buffer (choice)
\item[IN inlen] Number of (word) variables in input buffer (integer)
\item[OUT outbuf] first variable of output buffer -- significant only at
root (choice)
\item[IN tag] operation tag (integer)
\item[IN group] group handle
\item[IN root] rank of receiving process (integer)
\end{description}


{\tt \ \\ MPI\_GATHERB( inbuf, inlen, outbuf, tag, group, root) } \\
is
\begin{verbatim}
MPI_SIZE( &size, group);
MPI_RANK( &rank, group);
MPI_ISENDB(handle, inbuf, inlen, root, tag, group);
if (rank==root)
   for (i=0; i < size; i++)
   {
      MPI_RECVB(outbuf, inlen, i, tag, group, return_status);
      outbuf += inlen;
   }
MPI_WAIT(handle);
\end{verbatim}

\paragraph*{Scatter}

{\bf \ \\ MPI\_SCATTER( list\_of\_inbufs, outbuf, tag, group, root, len)} \\

The root process sends the content of its {\tt i}-th input buffer
to the process with rank {\tt i}; each process (including the root process)
stores the incoming message in its output buffer.
The difference between the size of
the output buffer (in bytes) and the number of bytes received is returned
in {\tt len}.  The routine is called by all members of the group using the same
arguments for {\tt tag, group}, and {\tt root}.
\begin{description}
\item[IN list\_of\_inbufs] list of buffer descriptor handles
\item[OUT outbuf] buffer descriptor handle
\item[IN tag]  operation tag (integer)
\item[IN group] handle
\item[IN root]  rank of sending process (integer)
\item[OUT len]  number of remaining bytes in the output buffer at each process
(integer)
\end{description}


{\tt \ \\ MPI\_SCATTER( list\_of\_inbufs, outbuf, tag, group, root, len)} \\
is
\begin{verbatim}
MPI_SIZE( &size, group);
MPI_RANK( &rank, group);
MPI_IRECV(handle, outbuf, root, tag, group);
if (rank==root)
   for (i=0; i < size; i++)
      MPI_SEND(inbuf[i], i, tag, group);
MPI_WAIT(handle, return_status);
MPI_RETURN_STATUS(return_status, len, source, tag);
\end{verbatim}


{\bf \ \\ MPI\_SCATTERB( inbuf, outbuf, len, tag, group, root)}
\\

{\tt MPI\_SCATTERB} behaves like {\tt MPI\_SCATTER} restricted to block buffers,
and with the additional restriction that all output buffers have the same
length. The input buffer block of the root process is partitioned into
{\tt n} consecutive blocks,
each consisting of {\tt len} words.  The {\tt i}-th block is sent to the
{\tt i}-th process in the group and stored in its output buffer.
The routine is called by all members of the group using the same
arguments for {\tt tag, group, len}, and {\tt root}.
\begin{description}
\item[IN inbuf] first entry in input buffer -- significant only at root
(choice).
\item[OUT outbuf] first entry in output buffer (choice).
\item[IN len]  number of entries to be stored in output buffer (integer)
\item[IN group] handle
\item[IN root]  rank of sending process (integer)
\end{description}


{\tt \ \\ MPI\_SCATTERB( inbuf, outbuf, outlen, tag, group, root) } \\
is
\begin{verbatim}
MPI_SIZE( &size, group);
MPI_RANK( &rank, group);
MPI_IRECVB( handle, outbuf, outlen, root, tag, group);
if (rank==root)
   for (i=0; i < size; i++)
   {
      MPI_SENDB(inbuf, outlen, i, tag, group, return_status);
      inbuf += outlen;
   }
MPI_WAIT(handle);
\end{verbatim}

\paragraph*{All-to-all scatter}

{\bf \ \\ MPI\_ALLSCATTER( list\_of\_inbufs, outbuf, tag, group, len)} \\

Each process in the group sends its {\tt i}-th buffer in its input buffer list
to the process with rank {\tt i} (itself included); each process concatenates
the incoming messages in its output buffer, in the order of the senders' ranks.
The number of bytes left in the output buffer is returned
in {\tt len}.  The routine is called by all members of the group using the same
arguments for {\tt tag} and {\tt group}.
\begin{description}
\item[IN list\_of\_inbufs] list of buffer descriptor handles
\item[OUT outbuf] buffer descriptor handle
\item[IN tag]  operation tag (integer)
\item[IN group] handle
\item[OUT len]  number of remaining bytes in the output buffer (integer)
\end{description}




{\bf \ \\ MPI\_ALLSCATTERB( inbuf, outbuf, len, tag, group)} \\

{\tt MPI\_ALLSCATTERB} behaves like {\tt MPI\_ALLSCATTER} restricted to
block buffers,
and with the additional restriction that all blocks sent from one process
to another have
the same length. The input buffer block of each process is partitioned
into {\tt n} consecutive blocks,
each consisting of {\tt len} words.  The {\tt i}-th block is sent to the
{\tt i}-th process in the group.  Each process concatenates the incoming
messages, in the order of the senders' ranks, and stores them in its output
buffer. The routine is called by all members of the group using the same
arguments for {\tt tag, group}, and {\tt len}.
\begin{description}
\item[IN inbuf] first entry in input buffer (choice).
\item[OUT outbuf] first entry in output buffer (choice).
\item[IN len]  number of entries sent from each process to each other (integer).
\item[IN tag]  operation tag (integer)
\item[IN group] handle
\end{description}


{\tt \ \\ MPI\_ALLSCATTERB( inbuf, outbuf, len, tag, group)} \\ is
\begin{verbatim}
MPI_SIZE( &size, group);
MPI_RANK( &rank, group);
for (i=0; i < size; i++)
   {
    MPI_IRECVB(recv_handle[i], outbuf, len, i, tag, group);
    outbuf += len;
   }
for (i=0; i < size; i++)
   {
    MPI_ISENDB(send_handle[i], inbuf, len, i, tag, group);
    inbuf += len;
   }
MPI_WAITALL(send_handle);
MPI_WAITALL(recv_handle);
\end{verbatim}

\paragraph*{All-to-all broadcast}

{\bf \ \\ MPI\_ALLCAST( inbuf, outbuf, tag, group, len)} \\

Each process in the group broadcasts its input buffer
to all processes (including itself);
each process concatenates
the incoming messages in its output buffer, in the order of the senders' ranks.
The number of bytes left in the output buffer is returned
in {\tt len}.  The routine is called by all members of the group using the same
arguments for {\tt tag} and {\tt group}.
\begin{description}
\item[IN inbuf] buffer descriptor handle for input buffer
\item[OUT outbuf] buffer descriptor handle for output buffer
\item[IN tag]  operation tag (integer)
\item[IN group] handle
\item[OUT len]  number of remaining untouched bytes in each output buffer
(integer)
\end{description}




{\bf \ \\ MPI\_ALLCASTB( inbuf, outbuf, len, tag, group)} \\

{\tt MPI\_ALLCASTB} behaves like {\tt MPI\_ALLCAST} restricted to
block buffers,
and with the additional restriction that all blocks sent from one process
to another have the same length.
The routine is called by all members of the group using the same
arguments for {\tt tag, group}, and {\tt len}.
\begin{description}
\item[IN inbuf] first entry in input buffer (choice).
\item[OUT outbuf] first entry in output buffer (choice).
\item[IN len]  number of entries sent from each process to each other process
(including itself) (integer).
\item[IN tag]  operation tag (integer)
\item[IN group] handle
\end{description}


{\tt \ \\ MPI\_ALLCASTB( inbuf, outbuf, len, tag, group)} \\ is
\begin{verbatim}
MPI_SIZE( &size, group);
MPI_RANK( &rank, group);
for (i=0; i < size; i++)
   {
    MPI_IRECVB(recv_handle[i], outbuf, len, i, tag, group);
    outbuf += len;
   }
for (i=0; i < size; i++)
   {
    MPI_ISENDB(send_handle[i], inbuf, len, i, tag, group);
   }
MPI_WAITALL(send_handle);
MPI_WAITALL(recv_handle);
\end{verbatim}


\subsubsection{Global Compute Operations}

\paragraph*{Reduce}

{\bf \ \\ MPI\_REDUCE( inbuf, outbuf, tag, group, root, op)} \\

Combines the values provided in the input buffer of each process in the
group, using the operation {\tt op}, and returns the combined value in
the output buffer of the process with rank {\tt root}.
Each process can provide one value, or a sequence of values, in which case the
combine operation is executed pointwise on each entry of the sequence.
For example, if the operation is {\tt max} and the input buffers contain two
floating point numbers each, then outbuf(1) $=$ global max(inbuf(1)) and
outbuf(2) $=$ global max(inbuf(2)). All input
buffers should define sequences of equal length of entries of types
that match the type of the operands of {\tt op}.  The
output buffer should define a sequence of the same length of entries of
types that match the type of the result of {\tt op}.
(Note that,
here as for all other communication operations, the type of the entries inserted
in a message depends on the information provided by the input buffer descriptor, and
not on the declarations of these variables in the calling program.   The types
of the variables in the calling program need not match the types defined by the
buffer descriptor, but in such case the outcome of a reduce operation may be
implementation dependent.)

The operation
defined by {\tt op} is associative and commutative, and the implementation can
take advantage of associativity and commutativity in order to change
the order of evaluation.
The routine is called by all group members using the same arguments
for {\tt tag, group, root} and {\tt op}.
\begin{description}
\item[IN inbuf] handle to input buffer
\item[OUT outbuf] handle to output buffer -- significant only at root
\item[IN tag]  operation tag (integer)
\item[IN group] handle to group
\item[IN root] rank of root process (integer)
\item[IN op] operation (status)
\end{description}
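The pointwise semantics can be illustrated by the following self-contained C
sketch (invented data, {\tt op} $=$ {\tt max}), which simulates three
processes' two-entry input buffers:
\begin{verbatim}
#include <assert.h>

/* fold one input buffer into the accumulator, entry by entry */
static void combine_max(const double *in, double *acc, int len)
{
   int j;
   for (j = 0; j < len; j++)
      if (in[j] > acc[j])
         acc[j] = in[j];
}

int main(void)
{
   double p0[2] = {1.0, 9.0};   /* process 0's input buffer */
   double p1[2] = {4.0, 2.0};   /* process 1's input buffer */
   double p2[2] = {3.0, 5.0};   /* process 2's input buffer */
   double out[2] = {p0[0], p0[1]};
   combine_max(p1, out, 2);
   combine_max(p2, out, 2);
   assert(out[0] == 4.0 && out[1] == 9.0);  /* entrywise maxima */
   return 0;
}
\end{verbatim}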

We list below the operations supported for Fortran, each with the
corresponding value of the {\tt op} parameter.
\begin{description}
\item[MPI\_IMAX] integer maximum
\item[MPI\_RMAX] real maximum
\item[MPI\_DMAX] double precision real maximum
\item[MPI\_IMIN] integer minimum
\item[MPI\_RMIN] real minimum
\item[MPI\_DMIN] double precision real minimum
\item[MPI\_ISUM] integer sum
\item[MPI\_RSUM] real sum
\item[MPI\_DSUM] double precision real sum
\item[MPI\_CSUM] complex sum
\item[MPI\_DCSUM] double precision complex sum
\item[MPI\_IPROD] integer product
\item[MPI\_RPROD] real product
\item[MPI\_DPROD] double precision real product
\item[MPI\_CPROD] complex product
\item[MPI\_DCPROD] double precision complex product
\item[MPI\_AND] logical and
\item[MPI\_IAND] integer (bit-wise) and
\item[MPI\_OR] logical or
\item[MPI\_IOR] integer (bit-wise) or
\item[MPI\_XOR] logical xor
\item[MPI\_IXOR] integer (bit-wise) xor
\item[MPI\_MAXLOC] rank of process with maximum integer value
\item[MPI\_MAXRLOC] rank of process with maximum real value
\item[MPI\_MAXDLOC] rank of process with maximum double precision real value
\item[MPI\_MINLOC] rank of process with minimum integer value
\item[MPI\_MINRLOC] rank of process with minimum real value
\item[MPI\_MINDLOC] rank of process with minimum double precision real value
\end{description}
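One way to realize the {\tt MPI\_MAXLOC}-style operations is to reduce
(value, rank) pairs with a combine that keeps the pair holding the larger
value.  The pair layout and the tie-break toward the lower rank below are
illustrative assumptions, not taken from this proposal:
\begin{verbatim}
#include <assert.h>

struct intloc { int value; int rank; };

/* associative, commutative combine for a MAXLOC-style reduce */
static struct intloc maxloc(struct intloc a, struct intloc b)
{
   if (a.value > b.value) return a;
   if (b.value > a.value) return b;
   return (a.rank < b.rank) ? a : b;   /* tie: lower rank wins */
}

int main(void)
{
   struct intloc v0 = {7, 0}, v1 = {9, 1}, v2 = {9, 2};
   struct intloc r = maxloc(maxloc(v0, v1), v2);
   assert(r.value == 9 && r.rank == 1);
   return 0;
}
\end{verbatim}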

{\bf \ \\ MPI\_REDUCEB( inbuf, outbuf, len, tag, group, root, op)} \\

Is the same as {\tt MPI\_REDUCE}, restricted to a block buffer.
\begin{description}
\item[IN inbuf] first location in input buffer
\item[OUT outbuf] first location in output buffer -- significant only at root
\item[IN len] number of entries in input and output buffer (integer)
\item[IN tag]  operation tag (integer)
\item[IN group] handle to group
\item[IN root] rank of root process (integer)
\item[IN op] operation (status)
\end{description}

\discuss{

If we are to be compatible with the point to point block operations, the
{\tt len} parameter should indicate the number of words in buffer.  But it
might be more natural to have {\tt len} indicate the number of entries in
the buffer, so that if the entries are complex or double precision, {\tt
len} will be half the number of words in the buffer.

}


{\bf \ \\ MPI\_USER\_REDUCE( inbuf, outbuf, tag, group, root, function)} \\

Same as the reduce operation function above except that a user
supplied function is used.  {\tt function} is an associative and commutative
function with two arguments.  The types of the two arguments and of the
returned value of the function, and the types of all entries in the
input and output buffers all agree.  The output buffer has the same
length as the input buffer.
\begin{description}
\item[IN inbuf] handle to input buffer
\item[OUT outbuf] handle to output buffer -- significant only at root
\item[IN tag]  operation tag (integer)
\item[IN group] handle to group
\item[IN root] rank of root process (integer)
\item[IN function] user provided function
\end{description}

{\bf \ \\ MPI\_USER\_REDUCEB( inbuf, outbuf, len, tag, group, root, function)}
\\
Is the same as {\tt MPI\_USER\_REDUCE}, restricted to a block buffer.
\begin{description}
\item[IN inbuf] first location in input buffer
\item[OUT outbuf] first location in output buffer -- significant only at root
\item[IN len] number of entries in input and output buffer (integer)
\item[IN tag]  operation tag (integer)
\item[IN group] handle to group
\item[IN root] rank of root process (integer)
\item[IN function] user provided function
\end{description}
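As a concrete illustration of the user-supplied combine (names invented, a
sum on doubles), two calling conventions are possible: a pure function
returning the combined value, or a procedure that accumulates into its
second argument in place:
\begin{verbatim}
#include <assert.h>

/* (i) pure function: two IN arguments, one returned value */
static double user_fn(double a, double b) { return a + b; }

/* (ii) procedure overwriting its second argument: b = a op b */
static void user_proc(double a, double *b) { *b += a; }

int main(void)
{
   double acc = 1.0;
   acc = user_fn(2.0, acc);   /* style (i)  */
   user_proc(3.0, &acc);      /* style (ii) */
   assert(acc == 6.0);
   return 0;
}
\end{verbatim}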


\discuss{

Do we also want a version of reduce that broadcasts the result to all processes
in the group?  (This can be achieved by a reduce followed by a broadcast, but a
combined function may be somewhat more efficient.)

Do we want a user provided {\em function} (two IN parameters, one OUT
value), or a user provided procedure that overwrites the second input
(i.e., one IN param, one INOUT param, the equivalent of a C {\tt a op= b}
type assignment)?  The second choice may allow a
more efficient implementation, without changing the semantics of the
MPI call.

Various people have suggested an {\tt MPI\_GLOBAL\_USER\_REDUCE} function
where the user function is applied to the entire buffer as one argument, rather
than piecewise to each entry in the buffer.
A possible definition is given below.

{\bf \ \\ MPI\_GLOBAL\_USER\_REDUCE( inbuf, outbuf, tag,
group, root, routine)} \\

Same as the user reduce operation above, except that the user
supplied routine applies to the entire buffer at once.
{\tt routine} has {\tt 2n} parameters:
{\tt routine( a1, ..., an, b1, ... bn)}.
Each argument {\tt ai} has
intent {\tt IN} and each argument {\tt bi} is intent {\tt INOUT}.
The routine assigns to {\tt bi} the value {\tt ai $op_i$ bi}, where
$op_i$ is a commutative and associative operator (possibly distinct
for each $i$).   Both the input buffer and the output buffer have {\tt n}
entries, and the type of the {\tt i}-th entry in each agrees with the type
of {\tt ai} and of {\tt bi}.
\begin{description}
\item[IN inbuf] handle to input buffer
\item[OUT outbuf] handle to output buffer -- significant only at root
\item[IN tag]  operation tag (integer)
\item[IN group] handle to group
\item[IN root] rank of root process (integer)
\item[IN routine] user provided routine
\end{description}

A similar ``block'' function can be defined.  Note that, in Fortran 77, there
is no straightforward mechanism for passing a heterogeneous structure as one
argument to a function, or for having a function return a heterogeneous
structure as a result.

A more ``reasonable'' design for a global user reduce function is possible in
the case where all buffer entries have the same type.

}

\paragraph*{Scan}

{\bf \ \\  MPI\_SCAN( inbuf, outbuf, tag, group, op )} \\

MPI\_SCAN is used to perform a parallel prefix with respect to
an associative reduction operation on data distributed across the group.
The operation returns in the output buffer of the process with rank {\tt i} the
reduction of the values in the input buffers of processes with ranks {\tt
0,...,i}.  The types of operations supported, their semantics, and the
constraints on input and output buffers are as for {\tt MPI\_REDUCE}.
\begin{description}
\item[IN inbuf] handle to input buffer
\item[OUT outbuf] handle to output buffer
\item[IN tag]  operation tag (integer)
\item[IN group] handle to group
\item[IN op] operation (state)
\end{description}

{\bf \ \\  MPI\_SCANB( inbuf, outbuf, len, tag, group, op )} \\
Same as {\tt MPI\_SCAN}, restricted to block buffers.

\begin{description}
\item[IN inbuf] first input buffer element (choice)
\item[OUT outbuf] first output buffer element (choice)
\item[IN len] number of entries in input and output buffer (integer)
\item[IN tag]  operation tag (integer)
\item[IN group] handle to group
\item[IN op] operation (state)
\end{description}


{\bf \ \\  MPI\_USER\_SCAN( inbuf, outbuf, tag, group, function )} \\

Same as the scan operation above, except that a user
supplied function is used.  {\tt function} is an associative and commutative
function with two arguments.  The types of the two arguments and of the
returned value all agree.
\begin{description}
\item[IN inbuf] handle to input buffer
\item[OUT outbuf] handle to output buffer
\item[IN tag]  operation tag (integer)
\item[IN group] handle to group
\item[IN function] user provided function
\end{description}

{\bf \ \\ MPI\_USER\_SCANB( inbuf, outbuf, len, tag, group, function)}
\\
Same as {\tt MPI\_USER\_SCAN}, restricted to a block buffer.
\begin{description}
\item[IN inbuf] first location in input buffer
\item[OUT outbuf] first location in output buffer
\item[IN len] number of entries in input and output buffer (integer)
\item[IN tag]  operation tag (integer)
\item[IN group] handle to group
\item[IN function] user provided function
\end{description}

\discuss{

Do we want scan operations executed by segments? (The HPF definition of prefix
and suffix operation might be handy -- in addition to the scanned vector of
values there is a mask that tells where segments start and end.)
}

\missing{

Nonblocking (immediate) collective operations.  The syntax is obvious:   for
each collective operation  {\tt MPI\_op(params)} one may have a new nonblocking
collective operation of the form {\tt MPI\_Iop(handle, params)}, that initiates
the execution of the corresponding operation.  The execution of the operation
is completed by executing {\tt MPI\_WAIT(handle,...)},  {\tt
MPI\_STATUS(handle,...)},  {\tt MPI\_WAITALL}, {\tt MPI\_WAITANY}, or {\tt
MPI\_STATUSANY}.   There are three issues to consider:

(i) The exact definition of the semantics of these operations (in particular,
constraints on ordering).

(ii) The complexity of implementation (including the complexity of having the
same {\tt WAIT} or {\tt STATUS} functions apply both to point-to-point and to
collective operations).

(iii) The accrued performance advantage.
}

\subsection{Correctness}

\discuss{ This is still very preliminary}

The semantics of the collective communication operations can be derived from
their operational definition in terms of point-to-point communication.  It is
assumed that messages pertaining to one
operation cannot be confused with messages pertaining to another operation.
Likewise, messages pertaining to two distinct occurrences of the same operation
cannot be confused if the two occurrences have distinct parameters; the
relevant parameters for this purpose are {\tt group}, {\tt tag}, {\tt
root} and {\tt op}.  The implementer can, of course, use another, more
efficient implementation, as long as it has the same effect.

\discuss{

This statement does not yet apply to the current, incomplete and
somewhat careless definitions I provided in this draft.

The definition above means that messages pertaining to a collective
communication carry information identifying the operation itself, and the
values of the {\tt tag, group} and,
where relevant, {\tt root} or {\tt op} parameters.
Is this acceptable?

}


A few examples:

\begin{verbatim}
MPI_BCAST(buf, len, tag, group, 0);
MPI_BCAST(buf, len, tag, group, 1);
\end{verbatim}

Two consecutive broadcasts, in the same group, with the same tag, but different
roots.  Since the operations are distinguishable, messages from one broadcast
cannot be confused with messages from the other broadcast; the program is safe
and will execute as expected.

\begin{verbatim}
MPI_BCAST(buf, len, tag, group, 0);
MPI_BCAST(buf, len, tag, group, 0);
\end{verbatim}

Two consecutive broadcasts, in the same group, with the same tag and root.
Since point-to-point communication preserves the order of messages, here too
messages from one broadcast will not be confused with messages from
the other broadcast; the program is safe and will execute as intended.

\begin{verbatim}
MPI_RANK(&rank, group);
if (rank==0)
  {
   MPI_BCASTB(buf, len, tag, group, 0);
   MPI_SENDB(buf, len, 1, tag, group);
  }
else if (rank==1)
  {
   MPI_RECVB(buf, len, MPI_DONTCARE, tag, group);
   MPI_BCASTB(buf, len, tag, group, 0);
   MPI_RECVB(buf, len, MPI_DONTCARE, tag, group);
  }
else
  {
   MPI_SENDB(buf, len, 1, tag, group);
   MPI_BCASTB(buf, len, tag, group, 0);
  }
\end{verbatim}

Process zero executes a broadcast followed by a send to process one;
process two executes a send to process one, followed by a broadcast;
and process one executes a receive, a broadcast and a receive.
A possible outcome is for the operations to be matched as illustrated by the
diagram below.

\begin{verbatim}


    0                       1                      2

                / - >  receive            / - send
              /                         /
broadcast   /         broadcast       /   broadcast
           /                        /
  send   -             receive  < -


\end{verbatim}

The reason is that broadcast is not a synchronous operation; the call at a
process may return before the other processes have entered the broadcast.
Thus, the message sent by process zero can arrive at process one before the
message sent by process two, and before the call to broadcast on process one.

\end{document}



From owner-mpi-collcomm@CS.UTK.EDU  Wed Mar 24 00:17:08 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA04629; Wed, 24 Mar 93 00:17:08 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA19214; Wed, 24 Mar 93 00:16:45 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Wed, 24 Mar 1993 00:16:45 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from watson.ibm.com by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA19204; Wed, 24 Mar 93 00:16:43 -0500
Message-Id: <9303240516.AA19204@CS.UTK.EDU>
Received: from YKTVMV by watson.ibm.com (IBM VM SMTP V2R3) with BSMTP id 2223;
   Wed, 24 Mar 93 00:16:44 EST
Date: Wed, 24 Mar 93 00:12:02 EST
From: "Marc Snir" <snir@watson.ibm.com>
X-Addr: (914) 945-3204  (862-3204)
        28-226 IBM T.J. Watson Research Center
        P.O. Box 218 Yorktown Heights NY 10598
To: mpi-collcomm@cs.utk.edu
Subject: new draft
Reply-To: SNIR@watson.ibm.com

Minor changes, some discussion of alternative choices of reduce with
user provided function.  Thanks to Rolf Hempel, Jon Flower, Robert Harrison,
and everybody else for their comments.

By the way, Steve Otto will put out in a day or two (isn't it, Otto?) a new
complete draft in Postscript format -- So you poor dislatexic guys, be
patient.
From owner-mpi-collcomm@CS.UTK.EDU  Wed Mar 24 18:18:20 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA04031; Wed, 24 Mar 93 18:18:20 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA09968; Wed, 24 Mar 93 18:17:26 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Wed, 24 Mar 1993 18:17:25 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from fslg8.fsl.noaa.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA09960; Wed, 24 Mar 93 18:17:22 -0500
Received: by fslg8.fsl.noaa.gov (5.57/Ultrix3.0-C)
	id AA27247; Wed, 24 Mar 93 23:17:16 GMT
Received: by macaw.fsl.noaa.gov (4.1/SMI-4.1)
	id AA01503; Wed, 24 Mar 93 16:15:59 MST
Date: Wed, 24 Mar 93 16:15:59 MST
From: hender@macaw.fsl.noaa.gov (Tom Henderson)
Message-Id: <9303242315.AA01503@macaw.fsl.noaa.gov>
To: mpi-collcomm@cs.utk.edu
Subject: Re:   Revised Collective Draft - consistent with p2p draft


Hi all,

I have a few (mostly minor) comments/questions about the current Collective 
Communication proposal.  

1.  One feature I especially like in the point-to-point proposal is the use of 
    a (start, len, datatype) triplet to describe a sequence of contiguous 
    values (block).  Is this going to appear in collective communication 
    also?  I'd like to see it.  For example, MPI_CSHIFTB() would then look 
    like:  

    MPI_CSHIFTB(inbuf, outbuf, len, datatype, tag, group, shift)


2.  How should the buffer descriptor at the root process be specified during a 
    call to MPI_GATHER()?  (This happens elsewhere as well.)  When considering 
    the simple "BLOCK" buffer components only, I can see two alternatives:  

    A)  outbuf in the root is identical to the inbuf in each of the other 
        processes except for length (ie they all have the same number and type 
        of buffer components).  Each buffer component in the root must have 
        length equal to the sum of lengths of corresponding buffer components 
        in the other processes.  For example, suppose process 0 is the root for 
        an MPI_GATHER() called by processes 0, 1, and 2.  If processes 1 and 2 
        have buffer components with the following characteristics:  

            Process 1

                Buffer Component Number:  0
                Buffer Component Type:    BLOCK
                Data Type:                DOUBLE
                Length:                   100

                Buffer Component Number:  1
                Buffer Component Type:    BLOCK
                Data Type:                INTEGER
                Length:                   5


            Process 2

                Buffer Component Number:  0
                Buffer Component Type:    BLOCK
                Data Type:                DOUBLE
                Length:                   200

                Buffer Component Number:  1
                Buffer Component Type:    BLOCK
                Data Type:                INTEGER
                Length:                   10


        then Process 0 must have buffer components with the following 
        characteristics:  

            Process 0

                Buffer Component Number:  0
                Buffer Component Type:    BLOCK
                Data Type:                DOUBLE
                Length:                   300

                Buffer Component Number:  1
                Buffer Component Type:    BLOCK
                Data Type:                INTEGER
                Length:                   15

        In this case, the routine would behave like a bunch of separate calls 
        to MPI_GATHERB().  

    B)  outbuf contains the sum of all buffer components in all buffer 
        descriptors in the other processes, in the appropriate order.  For 
        the same example, if processes 1 and 2 have buffer components with the 
        following characteristics:  

            Process 1

                Buffer Component Number:  0
                Buffer Component Type:    BLOCK
                Data Type:                DOUBLE
                Length:                   100

                Buffer Component Number:  1
                Buffer Component Type:    BLOCK
                Data Type:                INTEGER
                Length:                   5


            Process 2

                Buffer Component Number:  0
                Buffer Component Type:    BLOCK
                Data Type:                DOUBLE
                Length:                   200

                Buffer Component Number:  1
                Buffer Component Type:    BLOCK
                Data Type:                INTEGER
                Length:                   10


        then Process 0 must have buffer components with the following 
        characteristics:  

            Process 0

                Buffer Component Number:  0
                Buffer Component Type:    BLOCK
                Data Type:                DOUBLE
                Length:                   100

                Buffer Component Number:  1
                Buffer Component Type:    BLOCK
                Data Type:                INTEGER
                Length:                   5

                Buffer Component Number:  2
                Buffer Component Type:    BLOCK
                Data Type:                DOUBLE
                Length:                   200

                Buffer Component Number:  3
                Buffer Component Type:    BLOCK
                Data Type:                INTEGER
                Length:                   10

        In this case, MPI_GATHER() is a bit more flexible.  

    I'm not sure which I prefer...  "A" may be a bit easier.  I think that we 
    should pick one and say so explicitly in the document.  

    When considering the other types of buffer components (VECTOR and INDEX) 
    it looks like "BLOCK" could be replaced by either "VECTOR" or "INDEX" 
    anywhere in the examples above as long as the total length of each buffer 
    component is preserved.  (This is really point-to-point stuff now.  Is 
    mixing of different buffer components permitted?  I can't see how to 
    prevent it without sending extra junk along with each message...)  


3.  MPI_GATHER() returns "len" == difference in bytes between number of bytes 
    expected and number of bytes received at the root.  (Total number of bytes 
    delivered to the root is proposed as an alternative.)  In MPI_GATHERB() 
    there is no equivalent return value and "inlen" refers to words.  In 
    MPI_CSHIFTB() "len" means "number of elements".  I think this might be 
    confusing (I'm confused!  :-).  I would like to see a "status" returned 
    from each of these routines that behaves in the same way (like "0" means 
    success or something).  (Are you suggesting this in the "Discussion"?)  
    Also, do all calling processes get the return value?  


4.  MPI_REDUCE() has the following op parameters:  

    MPI_IMAX integer maximum
    MPI_RMAX real maximum
    MPI_DMAX double precision real maximum
    MPI_IMIN integer minimum
    MPI_RMIN real minimum
    MPI_DMIN double precision real minimum
    MPI_ISUM integer sum
    MPI_RSUM real sum
    MPI_DSUM double precision real sum
    MPI_CSUM complex sum
    MPI_DCSUM double precision complex sum
    MPI_IPROD integer product
    MPI_RPROD real product
    MPI_DPROD double precision real product
    MPI_CPROD complex product
    MPI_DCPROD double precision complex product
    MPI_AND logical and
    MPI_IAND integer (bit-wise) and
    MPI_OR logical or
    MPI_IOR integer (bit-wise) or
    MPI_XOR logical xor
    MPI_IXOR integer (bit-wise) xor
    MPI_MAXLOC rank of process with maximum integer value
    MPI_MAXRLOC rank of process with maximum real value
    MPI_MAXDLOC rank of process with maximum double precision real value
    MPI_MINLOC rank of process with minimum integer value
    MPI_MINRLOC rank of process with minimum real value
    MPI_MINDLOC rank of process with minimum double precision real value

    Since buffer components contain data type information, it seems like these 
    could be reduced to:  

    MPI_MAX    maximum (integer, real, or double)
    MPI_MIN    minimum (integer, real, or double)
    MPI_SUM    sum (integer, real, double, complex, or double complex)
    MPI_PROD   product (integer, real, double, complex, or double complex)
    MPI_AND    and (logical or bit-wise integer)
    MPI_OR     or (logical or bit-wise integer)
    MPI_XOR    xor (logical or bit-wise integer)
    MPI_MAXLOC rank of process with maximum value (integer, real, or double)
    MPI_MINLOC rank of process with minimum value (integer, real, or double)
    (I kind of hate to suggest getting rid of MPI_MINDLOC...  :-)

    This makes sense for MPI_REDUCEB() if datatype is explicitly included in 
    the parameter list as in point 1.  

    MPI_REDUCEB(inbuf, outbuf, len, datatype, tag, group, root, op)

    I'm completely in favor of having "len" refer to number of entries in a 
    buffer for all the MPI_xxxxxB() routines.  

Generally, I like this proposal.  


Tom Henderson
NOAA Forecast Systems Laboratory
hender@fsl.noaa.gov


From owner-mpi-collcomm@CS.UTK.EDU  Thu Mar 25 09:09:46 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA14741; Thu, 25 Mar 93 09:09:46 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA17983; Thu, 25 Mar 93 09:08:58 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Thu, 25 Mar 1993 09:08:57 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from marge.meiko.com by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA17972; Thu, 25 Mar 93 09:08:45 -0500
Received: from hub.meiko.co.uk by marge.meiko.com with SMTP id AA01580
  (5.65c/IDA-1.4.4 for <mpi-collcomm@cs.utk.edu>); Thu, 25 Mar 1993 09:08:42 -0500
Received: from float.co.uk (float.meiko.co.uk) by hub.meiko.co.uk (4.1/SMI-4.1)
	id AA18010; Thu, 25 Mar 93 14:08:39 GMT
Date: Thu, 25 Mar 93 14:08:39 GMT
From: jim@meiko.co.uk (James Cownie)
Message-Id: <9303251408.AA18010@hub.meiko.co.uk>
Received: by float.co.uk (5.0/SMI-SVR4)
	id AA07231; Thu, 25 Mar 93 14:05:11 GMT
To: ho@almaden.ibm.com
Cc: mpi-collcomm@cs.utk.edu
In-Reply-To: "Ching-Tien (Howard) Ho"'s message of Tue, 23 Mar 93 11:24:18 PST <9303231926.AA25686@CS.UTK.EDU>
Subject: No tag for a CC routine?
Content-Length: 916

In the current draft of the CC chapter, the explanation of the way in
which the collective routines function is in terms of point to point,
and it uses the supplied tag to do the necessary selection...

I guess that this conforms to your first semantic (tag unique in the
group, and all other groups which have intersecting members with this
group). [Actually this means program wide unique, since all processes
are in the INITIAL or ALL group !]

Why is this so unpleasant ? It seems to me to be no more than the
normal requirements of a tag, which are that the user's application
understands it and does not incorrectly replicate it.

-- Jim
James Cownie 
Meiko Limited			Meiko Inc.
650 Aztec West			Reservoir Place
Bristol BS12 4SD		1601 Trapelo Road
England				Waltham
				MA 02154

Phone : +44 454 616171		+1 617 890 7676
FAX   : +44 454 618188		+1 617 890 5042
E-Mail: jim@meiko.co.uk   or    jim@meiko.com



From owner-mpi-collcomm@CS.UTK.EDU  Thu Mar 25 12:16:41 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA20707; Thu, 25 Mar 93 12:16:41 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA25872; Thu, 25 Mar 93 12:16:09 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Thu, 25 Mar 1993 12:16:08 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA25864; Thu, 25 Mar 93 12:16:05 -0500
Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Thu, 25 Mar 93
 09:12 PST
Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA10516; Thu,
 25 Mar 93 09:10:33 PST
Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA01146; Thu, 25 Mar 93 09:10:29
 PST
Date: Thu, 25 Mar 93 09:10:29 PST
From: rj_littlefield@pnlg.pnl.gov
Subject: Re:  No tag for a CC routine?
To: ho@almaden.ibm.com, jim@meiko.co.uk
Cc: d39135@carbon.pnl.gov, mpi-collcomm@cs.utk.edu
Message-Id: <9303251710.AA01146@sodium.pnl.gov>
X-Envelope-To: mpi-collcomm@cs.utk.edu

Jim Cownie writes:

> In the current draft of the CC chapter, the explanation of the way in
> which the collective routines function is in terms of point to point,
> and it uses the supplied tag to do the necessary selection...

Yes, the draft does this, but it's arguably an oversight.  See below.

> I guess that this conforms to your first semantic (tag unique in the
> group, and all other groups which have intersecting members with this
> group). [Actually this means program wide unique, since all processes
> are in the INITIAL or ALL group !]
>
> Why is this so unpleasant ? It seems to me to be no more than the
> normal requirements of a tag, which are that the user's application
> understands it and does not incorrectly replicate it.

It's unpleasant because the coding used in the draft would break
if another module in the group happened to use wildcard receive.
(The draft itself acknowledges that the draft example routines
are not bulletproof.)

The preferred way to isolate one collective comm's messages from all
others is to use "context".  All of the context/group proposals
provide mechanisms to make this cheap and effective.  Presumably
a subsequent draft of collective communication will reflect whatever
mechanism the committee selects for context management.

A collective comm routine might use tags internally to keep its
own messages straight.  But then it needs more than one tag, so
passing one in as an argument would not even be adequate.

> -- Jim
> James Cownie 

--Rik
----------------------------------------------------------------------
rj_littlefield@pnl.gov (alias 'd39135')   Rik Littlefield
Tel: 509-375-3927                         Pacific Northwest Lab, MS K1-87
Fax: 509-375-6631                         P.O.Box 999, Richland, WA  99352
From owner-mpi-collcomm@CS.UTK.EDU  Thu Mar 25 15:44:07 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA25543; Thu, 25 Mar 93 15:44:07 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA05674; Thu, 25 Mar 93 15:43:25 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Thu, 25 Mar 1993 15:43:24 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from deepthought.cs.utexas.edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA05664; Thu, 25 Mar 93 15:43:22 -0500
From: rvdg@cs.utexas.edu (Robert van de Geijn)
Received: from grit.cs.utexas.edu by deepthought.cs.utexas.edu (5.64/1.2/relay) with SMTP
	id AA29413; Thu, 25 Mar 93 14:43:20 -0600
Received: by grit.cs.utexas.edu (5.64/Client-v1.3)
	id AA05025; Thu, 25 Mar 93 14:42:58 -0600
Date: Thu, 25 Mar 93 14:42:58 -0600
Message-Id: <9303252042.AA05025@grit.cs.utexas.edu>
To: lyndon@epcc.ed.ac.uk
Cc: ho@almaden.ibm.com, mpi-collcomm@cs.utk.edu
In-Reply-To: L J Clarke's message of Tue, 23 Mar 93 19:37:21 GMT <16027.9303231937@subnode.epcc.ed.ac.uk>
Subject: No tag for a CC routine?

   X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Tue, 23 Mar 1993 14:37:31 EST
   Date: Tue, 23 Mar 93 19:37:21 GMT
   From: L J Clarke <lyndon@epcc.ed.ac.uk>
   Reply-To: lyndon@epcc.ed.ac.uk

   > Hi,
   >   I like to revisit an old issue regarding the Collective Communication (CC)
   > proposal to MPI.

   I support the specification of collective communications without use of
   message tag. I just cannot see that it is needed there.

   Best Wishes
   Lyndon

	    /--------------------------------------------------------\
       e||) | Lyndon J Clarke    Edinburgh Parallel Computing Centre | e||) 
       c||c | Tel: 031 650 5021  Email: lyndon@epcc.edinburgh.ac.uk  | c||c 
	    \--------------------------------------------------------/



Ditto here.

Robert

=====================================================================
  Robert A. van de Geijn                     rvdg@cs.utexas.edu  
  Assistant Professor
  Department of Computer Sciences            (Work)  (512) 471-9720
  The University of Texas                    (Home)  (512) 251-8301 
  Austin, TX 78712                           (FAX)   (512) 471-8885 
=====================================================================
From owner-mpi-collcomm@CS.UTK.EDU  Thu Mar 25 16:20:54 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA26660; Thu, 25 Mar 93 16:20:54 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA07148; Thu, 25 Mar 93 16:20:05 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Thu, 25 Mar 1993 16:20:04 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA07136; Thu, 25 Mar 93 16:20:03 -0500
Received:  by Aurora.CS.MsState.Edu (4.1/6.0s-FWP);
	   id AA07870; Thu, 25 Mar 93 15:13:41 CST
Date: Thu, 25 Mar 93 15:13:41 CST
From: Tony Skjellum <tony@Aurora.CS.MsState.Edu>
Message-Id: <9303252113.AA07870@Aurora.CS.MsState.Edu>
To: lyndon@epcc.ed.ac.uk, rvdg@cs.utexas.edu
Subject: Re: No tag for a CC routine?
Cc: ho@almaden.ibm.com, mpi-collcomm@cs.utk.edu


Yes, one wonders...

Rationale for:
	1) Debugging of erroneous programs (well, what does the tag mean???)
	2) symmetry with point-to-point ???


Rationale against:
	1) prohibits use of some hardware, for certain
	2) no clear value
	3) tag might have role in implementing certain global operations,
		for some implementations

Despite my previous comments in favor of this, I agree that the tag should go.
- Tony
From owner-mpi-collcomm@CS.UTK.EDU  Thu Mar 25 18:13:21 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA01329; Thu, 25 Mar 93 18:13:21 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA11992; Thu, 25 Mar 93 18:12:45 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Thu, 25 Mar 1993 18:12:45 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA11984; Thu, 25 Mar 93 18:12:43 -0500
Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Thu, 25 Mar 93
 15:11 PST
Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA11096; Thu,
 25 Mar 93 15:09:21 PST
Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA01610; Thu, 25 Mar 93 15:09:18
 PST
Date: Thu, 25 Mar 93 15:09:18 PST
From: rj_littlefield@pnlg.pnl.gov
Subject: RE: No tag for a CC routine?
To: lyndon@epcc.ed.ac.uk, rvdg@cs.utexas.edu, tony@Aurora.CS.MsState.Edu
Cc: d39135@carbon.pnl.gov, ho@almaden.ibm.com, mpi-collcomm@cs.utk.edu
Message-Id: <9303252309.AA01610@sodium.pnl.gov>
X-Envelope-To: mpi-collcomm@cs.utk.edu

SUMMARY: I am in favor of tags for collective communication calls,
on the basis that they have more value than cost.

Tony says

> Rationale for:
> 	1) Debugging of erroneous programs (well, what does the tag mean???)
> 	2) symmetry with point-to-point ???
> 
> Rationale against;
> 	1) prohibits use of some hardware, for certain
> 	2) no clear value
> 	3) tag might have role in implementing certain global operations,
> 		for some implementations

Let us specify that the tag value is logically redundant.

That is, let us specify that collective comm calls in separate
processes are actually matched by group and sequence, but that a
program is declared correct only if the tag value is the same for
all matching calls.  

The match can be checked for debugging.

This clarifies and supports reason #1 in favor of tags.

Reason #1 against is not true (under this spec).  Since the tags
are logically redundant, they can be ignored for the sake of
efficiency.

Reason #2 against is countered by the personal observation that
programmers sometimes foul up and match calls they didn't intend to.
Having a facility to detect this foulup would be valuable.

I don't much care about reason #2 for, and I don't understand reason
#3 against.  It bears some resemblance to my previous reply to Jim.
However, all I intended to do in that note was point out that if
there is a tag, it should not be interpreted as meaning anything in
terms of the point-to-point comms used inside the collective comm
routine.

--Rik

----------------------------------------------------------------------
rj_littlefield@pnl.gov (alias 'd39135')   Rik Littlefield
Tel: 509-375-3927                         Pacific Northwest Lab, MS K1-87
Fax: 509-375-6631                         P.O.Box 999, Richland, WA  99352
From owner-mpi-collcomm@CS.UTK.EDU  Thu Mar 25 19:45:48 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA02875; Thu, 25 Mar 93 19:45:48 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA15216; Thu, 25 Mar 93 19:44:59 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Thu, 25 Mar 1993 19:44:59 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA15208; Thu, 25 Mar 93 19:44:57 -0500
Received:  by Aurora.CS.MsState.Edu (4.1/6.0s-FWP);
	   id AA11124; Thu, 25 Mar 93 18:38:18 CST
Date: Thu, 25 Mar 93 18:38:18 CST
From: Tony Skjellum <tony@Aurora.CS.MsState.Edu>
Message-Id: <9303260038.AA11124@Aurora.CS.MsState.Edu>
To: lyndon@epcc.ed.ac.uk, rvdg@cs.utexas.edu, tony@Aurora.CS.MsState.Edu,
        rj_littlefield@pnlg.pnl.gov
Subject: RE: No tag for a CC routine?
Cc: d39135@carbon.pnl.gov, ho@almaden.ibm.com, mpi-collcomm@cs.utk.edu


With regard to hardware problems introduced by tag, it is possible that
a hardware 'maximum' or 'combine' might not be able to handle the extra
tag, without significant additional overhead.  That is all.

I am not strongly against this, and I do value Rik's points of view on
this, provided we do not create an abstraction that limits the (important)
ability to use the emerging hardware-supported SIMD-like operations,
as appropriate.

- Tony
From owner-mpi-collcomm@CS.UTK.EDU  Fri Mar 26 02:16:58 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA09236; Fri, 26 Mar 93 02:16:58 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA28767; Fri, 26 Mar 93 02:16:18 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Fri, 26 Mar 1993 02:16:17 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA28759; Fri, 26 Mar 93 02:16:15 -0500
Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Thu, 25 Mar 93
 23:14 PST
Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA11259; Thu,
 25 Mar 93 23:12:47 PST
Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA01977; Thu, 25 Mar 93 23:12:44
 PST
Date: Thu, 25 Mar 93 23:12:44 PST
From: d39135@sodium.pnl.gov
Subject: RE: No tag for a CC routine?
To: lyndon@epcc.ed.ac.uk, rj_littlefield@pnlg.pnl.gov, rvdg@cs.utexas.edu,
        tony@Aurora.CS.MsState.Edu
Cc: d39135@carbon.pnl.gov, ho@almaden.ibm.com, mpi-collcomm@cs.utk.edu
Message-Id: <9303260712.AA01977@sodium.pnl.gov>
X-Envelope-To: mpi-collcomm@cs.utk.edu

Tony says

> I am not strongly against this...
> ...provided we do not create an abstraction that limits the (important)
> ability to use the emerging hardware-supported SIMD-like operations,
> as appropriate.

I agree completely with Tony's concern.  If we are going to include
a tag for collective comms (as I argue would be desirable),
then we need to be sure that the semantics are defined so as to
not exclude efficient hardware ops.  I believe that the specification
I stated accomplishes this.  If not, please correct me.

--Rik
From owner-mpi-collcomm@CS.UTK.EDU  Fri Apr  2 01:52:44 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA24322; Fri, 2 Apr 93 01:52:44 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA08752; Fri, 2 Apr 93 01:51:45 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Fri, 2 Apr 1993 01:51:43 EST
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA08727; Fri, 2 Apr 93 01:51:13 -0500
Received:  by Aurora.CS.MsState.Edu (4.1/6.0s-FWP);
	   id AA01906; Fri, 2 Apr 93 00:50:53 CST
Date: Fri, 2 Apr 93 00:50:53 CST
From: Tony Skjellum <tony@Aurora.CS.MsState.Edu>
Message-Id: <9304020650.AA01906@Aurora.CS.MsState.Edu>
To: mpi-context@cs.utk.edu
Subject: the gathering
Cc: mpi-collcomm@cs.utk.edu

Dear Context sub-committee members (and observers from collcomm, etc),

	The meeting this week underscored the need for convergence
to a unifying proposal that captures the features of Proposal I, VIII, and
III+VII=X.  The following work will be accomplished before May 12 to
that end, while respecting the current separateness of I and VIII.  I
regret having to leave the current MPI meeting early, but the context
discussions were quite sufficient to put me in a higher gear on the
problems before us...

	.  Rik Littlefield agrees to organize a set of test cases to be coded
		for each proposal; proposers will include codings in their
		proposals.  Deadline for such examples is April 21, 8pm EST.
		This will be discussed on mpi-context over the next three weeks.
	.  I will develop a unified proposal X (with sensible names, and
		rationale, details, performance discussion, and examples). 
	.  I will ask for help, as needed, from Lyndon/Mark/Marc etc, on
		understanding nuances of their proposals,
	.  Marc Snir / Lyndon Clarke 
		will discuss changes/enhancements (if any) to Proposal I
	.  Mark Sears will complete (presumably) a full proposal VIII

(Tacit in this discussion is the accepted merger of III+VII as X,
despite its incomplete state, so we have eliminated some proposals
from consideration this round).  To be considered for a straw vote
(before next meeting), all proposals must be complete in that they
must

	.  Address their interactions with the first-reading of pt2pt, and
		current status of collcomm, including needed changes if any

	.  Provide specific syntax/semantics, as needed for pt2pt & collcomm
		chapters

	.  Describe any known flaws in syntax / semantics

	.  Describe logical subsets, if any, for MPI1

	.  Implement the examples that Rik organizes, and upon which we
		agree together (including those from Wednesday night 
		 discussion session)

	.  Include discussions of how process startup works, and what the
	   spawning semantics must provide to processes (perhaps through an
	   initial message) so that they can work. 

	.  The meaning of the MPI_ALL group in the proposal, if any, or
	   weaker substitutes for same.

	.  The existence/non-existence/requirement for servers or
	   shared-memory locations to effect some features

	.  Include expectations for performance of key operations
	   (eg, how much does it cost to get a new context?, can this
		be done outside of loops and cached?)

	.  Describe their use of a "cacheing facility," if any

	.  Describe their syntax/semantics of a "cacheing facility"

	.  Describe their reliance on any other MPI1 features not specifically
		part of context/group/tag/pid nature

		-	-	-	-	-

Presumably Proposals I, VIII, and X will fill all requirements to
reach the next straw poll deadline.  Whichever do make this Straw poll
deadline, (May 10, 1993, 5pm EST), can be considered by the voting
subcommittee.  A ranking will be developed, with the bottom N-2
proposals dropped.  We will meet on the evening of Wednesday, May 12,
8:00pm CST, for as long as it takes to choose the final proposal,
possibly by further merger of the remaining strong proposals.  On
Thursday, May 13, we will present our first reading of the Context
subcommittee (with possible spill over to Friday, May 14).  Actual
context sub-committee members will vote, only, in all cases.  Please
recall the two-sub-committee voting limit of the MPIF (as well as
sub-committee membership; observers are always welcome).

I will strive not to send fine-grain changes to proposal X's around,
but will wait to circulate my product in complete form, prior to May
10, so there is a lower e-mail burden for next weeks; perhaps others
will like to keep their updates coarse grain, but share important
things with everyone, for sure.  If agreements/compromises occur
between proposals and/or proposers, please share this with me and the
sub-committee in a timely fashion; I do not desire surprises at the
next meeting.  For instance, if Marc Snir were willing to consider a
separate context feature (separate from group) in Proposal I, a lot of
effort could be averted, because his proposal is pretty good otherwise
(except in re inter-group issues).  I think Lyndon will be talking to
Marc about making inter-group communication easier in Proposal I,
also.  If any breakthroughs are made, please let me know.

- Tony

PS Please copy mpi-collcomm on context-related matters for the
duration of MPIF. 

.	.	.	.	.	.	.	.	.      .
"There is no lifeguard at the gene pool." - C. H. Baldwin
"In the end ... there can be only one." - Ramirez (Sean Connery) in <Highlander>

Anthony Skjellum, MSU/ERC, (601)325-8435; FAX: 325-8997; tony@cs.msstate.edu




From owner-mpi-collcomm@CS.UTK.EDU  Tue Apr  6 15:38:54 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA25360; Tue, 6 Apr 93 15:38:54 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA10129; Tue, 6 Apr 93 15:38:13 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Tue, 6 Apr 1993 15:38:12 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA10111; Tue, 6 Apr 93 15:38:09 -0400
Date: Tue, 6 Apr 93 20:38:05 BST
Message-Id: <841.9304061938@subnode.epcc.ed.ac.uk>
From: L J Clarke <lyndon@epcc.ed.ac.uk>
Subject: [L J Clarke: mpi-context: comment and suggestion]
To: mpi-collcomm@cs.utk.edu
Reply-To: lyndon@epcc.ed.ac.uk

---- Start of forwarded text ----

Dear MPI context colleagues.

I'd like to say something about contexts and groups and the extant
proposals ...

First off, we have two major concepts floating around, which I need to
define here for purpose of the discussion below. 

Group --- is an ordered collection of distinct processes, or formally of
references to distinct processes.  It provides a naming scheme for
processes in terms of a group name and rank of process within group. 

Context --- is a distinct space of messages, or more formally of message
tags.  It provides management of messages as a message in context A
cannot be received as a message in context B. 

Within these definitions there are exactly two themes in the extant
proposals. 

Marc Snir, in Proposal I, views Group and Context as identical.  This
reduces the number of concepts in MPI, but means that we can have
intragroup communication yet no intergroup communication at all within
the above definitions of Group and Context.  

Rik and I amusingly coined the term "grountext" to describe the
group/context entity in this proposal. 

Tony Skjellum, in Proposal III, views Group and Context as independent. 
This means two concepts instead of one, but does mean that we can allow
intragroup communication and some intergroup communication with
restriction on how flexible we can make such communication. 

Proposals VIII and X are identical to III in the manner in which they
treat Context and Group as independent concepts.  Please consider
Proposal VII as not compliant with the above definitions of Group and
Context. 

We need to decide:

1) Are context and group identical or different?

2) Is intergroup communication provided?

Now I want to point out something about intergroup communication which
we have in our system and find most expressive and convenient, but does
not fit in with the above frameworks and the assumption that the message
envelope always contains just (context, process, tag). 

Receive in intergroup communication can wildcard on (sender group and
sender rank) or (sender rank), in addition to message tag. 

We (at EPCC) do, and want to do (in MPI) (written out in longhand
notation)

receive(group, group', rank, tag)

where group is the receiver group, group' is the sender group,
rank is the sender rank in group' and tag is the message tag.
The receiver can never wildcard group.
The receiver can always wildcard tag.
The receiver can always wildcard either (rank) or (group' and rank).

(In fact, group and group' in this expression are more like the
grountext of Marc's proposal or the "context" of historical proposal
VII, but never mind on that point.)
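The matching rules just listed can be made concrete with a small C sketch. This is purely illustrative, not proposed MPI syntax: `ANY`, the struct, and its field names are all invented for this note.

```c
#include <assert.h>

#define ANY (-1)  /* hypothetical wildcard value, not MPI syntax */

/* The longhand receive(group, group', rank, tag) described above. */
struct envelope {
    int group;    /* receiver group: never wildcarded */
    int sgroup;   /* group': sender group             */
    int rank;     /* sender rank within group'        */
    int tag;      /* message tag: always wildcardable */
};

/* The rules permit wildcarding (rank) or (group' and rank),
 * but never group alone, and never group' without rank. */
static int recv_is_legal(const struct envelope *r)
{
    if (r->group == ANY)
        return 0;                         /* group never wildcarded   */
    if (r->sgroup == ANY && r->rank != ANY)
        return 0;                         /* group' alone not allowed */
    return 1;
}

/* Does an incoming message match a (legal) posted receive? */
static int env_match(const struct envelope *msg, const struct envelope *r)
{
    assert(recv_is_legal(r));
    return msg->group == r->group
        && (r->sgroup == ANY || msg->sgroup == r->sgroup)
        && (r->rank   == ANY || msg->rank   == r->rank)
        && (r->tag    == ANY || msg->tag    == r->tag);
}
```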

In the framework of Marc we can reasonably do intergroup communication
without wildcard on group'.  To do this we transmit group information in
messages and form a group which is the union of group and group'.  We
cannot add wildcard on group' by saying that to do that one forms a
union of group and all cases of group'.  This requires the sender to
always know too much about the detail of the receive call with which it
is to match (i.e., that the receiver is or is not doing a wildcard).  If
you disbelieve this, then you should probably argue that we do not need
source selection in point-to-point as you can use tag to choose the
source, as it is the same argument (and bogus in my opinion). 

In the framework of Tony we can reasonably do intergroup communication
without wildcard on group'.  To do this we transmit group information in
messages and choose a context for the pair of groups to use for
intergroup communication.  We cannot add wildcard on group' by using a
context agreed for such use between group and all cases of group'.  The
argument is the same as that above after a little substitution. 

If we are serious about intergroup communication then in my opinion we
really should provide the facility to wildcard on sender group.  This
throws up a small number of issues, some of which I now address. 

No process addressed: I didn't mention process addressed communication
at all.  Perhaps the demons of speed are bothered by this.  Well, we
could do such as (context,process,tag), and the above does not exclude
it.  We can fit it in, of course. 

Size of point-to-point section: I said above "longhand notation".  Well
that is the most expressive and convenient notation, and if you ask me
then I think that (group,group,rank,tag) or (NULL,group,rank,tag) are
both acceptable for intragroup communication.  On the other hand one can
introduce some grunge syntax for intergroup communication which uses the
same framework as intragroup communication and replaces group in
(group,rank,tag) with some glob object which is "shorthand" for (group,
group').  This is not the best syntax in the world but we can live with
it.  We can even fit in the process addressed stuff with this kind of
syntax as I have shown in Proposal X. 

Message envelope: You probably spot that this needs the sender group id
to go into the message envelope.  Perhaps the demons of speed are
bothered by this.  Well, you could have a different envelope for
groupless communication, intragroup communication and intergroup
communication, and only pay the cost of the bigger envelope when you
need it.  This is going to take two bits for envelope identification. 
Big deal! It will anyway be natural not to match communications of
different kinds (e.g.  intergroup cannot match with intragroup,
groupless cannot match with intergroup) so the extra header bits would
be useful anyway. 

Unknown group: You probably also spot that the receive with wildcard on
group can pick up a group that the receiver knows nothing of.  I would
be happiest if the implementation of MPI at the receiver asked the
implementation of MPI at the sender about the group in this case, so
that the receiver never has to bother about the eventuality.  We (at
EPCC) could accept that the returned group identifier is a NULL
identifier.  This means that groups have to exchange flattened group
descriptions in messages in a reasonable way before they can make a
great deal of sense of intergroup communication.  Not ideal, but we can
live with it. 

Comments please?

---- End of forwarded text ----
         /--------------------------------------------------------\
    e||) | Lyndon J Clarke    Edinburgh Parallel Computing Centre | e||) 
    c||c | Tel: 031 650 5021  Email: lyndon@epcc.edinburgh.ac.uk  | c||c 
         \--------------------------------------------------------/


From owner-mpi-collcomm@CS.UTK.EDU  Thu Apr  8 10:58:23 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA11790; Thu, 8 Apr 93 10:58:23 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA15413; Thu, 8 Apr 93 10:57:34 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Thu, 8 Apr 1993 10:57:33 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA15386; Thu, 8 Apr 93 10:56:27 -0400
Date: Thu, 8 Apr 93 15:56:22 BST
Message-Id: <2310.9304081456@subnode.epcc.ed.ac.uk>
From: L J Clarke <lyndon@epcc.ed.ac.uk>
Subject: mpi-context: context and group (medium)
To: mpi-context@cs.utk.edu
Reply-To: lyndon@epcc.ed.ac.uk
Cc: mpi-collcomm@cs.utk.edu

Dear MPI Colleagues

This letter is about groups, contexts, independence and coupling
thereof, the kinds of point-to-point communication which we have talked
about, and to brief extent libraries. 

Before embarking on the guts of the letter, I should like to express
very strong support for the suggestion that MPI users can cleanly
program in the host-node model.  In my opinion, this model of
programming is of considerable commercial significance, and I observe
that there are a number of important programs around which use this
model. 


			o--------------------o

I understand three different kinds of point-to-point communication which
have been discussed by various people in MPIF.  I write these out with
separate group and context concepts, as per a previous message to
mpi-context [Subject: mpi-context: comment and suggestion].  I will then
discuss coupling of group and context.  I refer the reader to my previous
message to mpi-comm which described classes of MPI user libraries
[Subject: mpi-comm: various (long)], as there is some follow-on
discussion below. 


Groupless (process addressed)
-----------------------------
(process, context, tag)
Wildcard on process, tag.
No wildcard on context

Intragroup (closed group)
-------------------------
(group, rank, context, tag)
Wildcard on rank, tag. 
No wildcard on group, context.

Intergroup (open group)
-----------------------
(lgroup, rgroup, rank, context, tag)
Wildcard on rgroup, rank, tag. 
No wildcard on lgroup, context.
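One way to picture the three addressing forms side by side is the following speculative C layout. All names are invented for illustration and are not proposed MPI syntax; the kind field reflects the natural rule that messages of different kinds should never match one another.

```c
/* Speculative picture of the three envelope forms listed above. */
enum env_kind { GROUPLESS, INTRAGROUP, INTERGROUP };

struct msg_envelope {
    enum env_kind kind;  /* which of the three forms this is       */
    int context;         /* present in all forms, never wildcarded */
    int tag;             /* present in all forms, wildcardable     */
    union {
        struct { int process; } gl;                /* groupless:  wildcard process */
        struct { int group, rank; } ia;            /* intragroup: wildcard rank    */
        struct { int lgroup, rgroup, rank; } ig;   /* intergroup: wildcard rgroup, rank */
    } u;
};

/* Envelopes of different kinds never match one another. */
static int kinds_compatible(const struct msg_envelope *a,
                            const struct msg_envelope *b)
{
    return a->kind == b->kind;
}
```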

Observe that "group" in intragroup and "lgroup" in intergroup are the
same thing: they are the group of the calling process.

Since neither "group" nor "context" in intragroup can be wildcarded,
there may appear to be appeal in some coupling of them in order to
provide shorter syntax and easier context/group management.  This
implies that we couple context to the group of the calling process.
Now this coupling is not compatible with intergroup, since the two
calling processes have different groups, thus different contexts, thus
the send and receive can never match.  We can resolve this difficulty
by a more careful statement of where the context of the message is
coupled.  In particular we can state that the context of the message
is coupled to the group of the message receiver.  In this way we would
express intragroup as a coupling of (group, context), and we would
express intergroup as a pair of such couplings.

The claim we have heard that context and group must be strongly
coupled, resulting in a proposal which asserts that context and group
are identical, is possibly nothing more than a consequence of an
assumption that messages may only be distinguished on the basis of
(process, context, tag) (here process is a process label which can be
a rank within a group).  Given that assumption, we can only use
context to distinguish messages within different groups and the two
entities become strongly coupled.  Examining records of the early
meetings of MPI, I find that this "decision" was made by the
point-to-point subcommittee in a straw poll which rejected selection
by group by a narrow majority of 10 to 11.  Please note also that the
same meeting rejected context modifying process identifier, a
"decision" which we are already often ignoring.  These "decisions"
predate the existence of the contexts subcommittee and the vigorous
discussion of contexts and groups which has been and continues to take
place.  We should uniformly be open-minded enough to allow ourselves
to question all such "decisions", and to change them if we see fit.

The description of MPI user libraries which has been given by Mark
Sears and myself strongly suggests that context and group must be
independent entities. 

Provision of the process addressed communication immediately suggests
that a context can appear without coupling to a group, in which case it
seems (to me) that they are independent entities.

There is an argument against process addressed communication which
says that process addressed communication gains nothing in performance
over intragroup communication in the group of all processes.  The
process description in process addressed communication will, for the
sake of generality and thus portability, have to be some kind of
pointer to a process description object which contains whatever
information is needed to route a message to the intended recipient.
It could be just that (in C, at least): a pointer.  Sometimes, on some
machines, it will actually be implementable with some other kind of
magic which is more scalable, but it must always appear the same way.
It could be an index, representable as an integer in the host
language, into a table of process description objects (better for F77,
for sure).  It could be a rank in a group of (all) processes, used as
an index into a process description object table, which is just fine
for a static process model (and reflects existing practice).  It could
be some kind of globally unique process identifier which is again used
as a table index somewhere.  If tables grow too large in either of the
latter cases, then there may be some hashing and/or caching involved.

There are counter arguments.  I give one, and invite you to give more.
On some machines, the globally unique process identifier is sufficient
to route the message, and is representable as an integer in the host
language.  For example, the global process id can be a composite of
two bit fields (nodeid, procid), where nodeid is a physical processor
node number and procid is a process number on the node, and the nodeid
bit field is sufficient to route.  In these cases, there is no need
for a process description object table, and no need to do a table
lookup.  We have probably all used machines just like this.
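Such a composite id can be sketched in a few lines of C; the field widths and macro names here are arbitrary inventions for illustration.

```c
/* Hypothetical composite process id: (nodeid, procid) packed into one
 * integer, where the nodeid bit field alone is enough to route the
 * message.  A width of 10 bits for procid is an arbitrary choice. */
#define PROCID_BITS 10
#define PROCID_MASK ((1 << PROCID_BITS) - 1)

#define MAKE_PID(nodeid, procid) (((nodeid) << PROCID_BITS) | (procid))
#define PID_NODEID(pid) ((pid) >> PROCID_BITS)   /* routing field     */
#define PID_PROCID(pid) ((pid) & PROCID_MASK)    /* process on node   */
```

No table lookup is needed to route: the nodeid field is extracted directly from the integer.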

For me the arguments have piled up in favour of context and group
being separate and independent entities.  This letter therefore makes
the recommendation that context and group are separate and independent
entities.  In that light I propose further discussion on management of
contexts within and between processes, and within and between groups,
and on the use of objects which bind one or more contexts and one or
more groups in order to keep the communication syntax compact by
overloading.  I shall post another letter to you tomorrow.

			o--------------------o

Comments, questions, (flames :-) please?!

Best Wishes
Lyndon


         /--------------------------------------------------------\
    e||) | Lyndon J Clarke    Edinburgh Parallel Computing Centre | e||) 
    c||c | Tel: 031 650 5021  Email: lyndon@epcc.edinburgh.ac.uk  | c||c 
         \--------------------------------------------------------/


From owner-mpi-collcomm@CS.UTK.EDU  Thu Apr  8 14:31:08 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA15535; Thu, 8 Apr 93 14:31:08 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA26254; Thu, 8 Apr 93 14:30:02 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Thu, 8 Apr 1993 14:30:01 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from ssd.intel.com by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA26243; Thu, 8 Apr 93 14:29:58 -0400
Received: from ernie.ssd.intel.com by SSD.intel.com (4.1/SMI-4.1)
	id AA01330; Thu, 8 Apr 93 11:29:43 PDT
Message-Id: <9304081829.AA01330@SSD.intel.com>
To: lyndon@epcc.ed.ac.uk
Cc: mpi-context@cs.utk.edu, mpi-collcomm@cs.utk.edu, prp@SSD.intel.com
Subject: Re: mpi-context: context and group (longer) 
In-Reply-To: Your message of "Thu, 08 Apr 93 15:56:22 BST."
             <2310.9304081456@subnode.epcc.ed.ac.uk> 
Date: Thu, 08 Apr 93 11:29:42 -0700
From: prp@SSD.intel.com


> From: L J Clarke <lyndon@epcc.ed.ac.uk>
> Subject: mpi-context: context and group (medium)
>
> ...
> 
> For  me the arguments  have  piled up  in favour  of context and group
> being separate and  independent entities.
>
> Lyndon

I agree, although I also see merit in associating a context with a group.

I would like to share my thoughts about context which lead me to think we need
two differently managed forms of context. Some of you have already heard this.

Most of the discussion about context has revolved around protecting two
different entities: libraries and groups. I think this is
required, but I think they need very differently managed contexts. One form
is not adequate to cover both needs without sacrificing performance.

Consider a SPMD program with these calls. Assume the calls are loosely
synchronous.

		call to LibA (Group1)
		call to LibB (Group1)
		call to LibB (Group2)

In a loosely synchronous environment, messages for the next call can come in
before the previous one has completed. Here we see two forms of overlap.

Within the call to LibA, we might get messages from processes which have already
entered LibB. If LibA and LibB are independently written, they might use some
of the same tags. To avoid messages from LibB matching receives in LibA, we
must use different contexts. If we have static contexts, allocated when the
libraries are initialized, each call in the library can quickly provide the
context to its point-to-point calls. If we only have dynamic contexts,
especially if contexts are carried inside groups, then a library must be
prepared to dynamically allocate a new context on any call when it sees a new
group. I know we discussed ways to do this locally, so the context could be
created and cached locally on the fly without communication, but I find the
idea of incorporating such code into every library call horrifying.

Within the first call to LibB, we might get messages from processes which have
entered the second call to LibB. Since these calls are in different groups, it
might be difficult to code LibB in such a way that messages could not
intermix, since a process' position in Group2 might be quite different from
its position in Group1. (I would hope that libraries would be coded so that
multiple sequential calls to the same library with the same group would be
safe. That seems to be current practice.) To keep the two calls from
interfering, it would be convenient to have a different context for each
group. If each group contains a dynamically allocated context, that's easy. But
if contexts are statically allocated, especially if they require a name
server, getting a new context for each new group might be a global operation
that wouldn't scale well.

So I propose that we need two forms of context, one that is quite static for
protecting code, and one that is more dynamic for protecting groups.

The only mechanism I know of that is adequate for protecting code is context
allocated via a nameserver. In MIMD programs, one cannot say much about the
order in which libraries are initialized. Thus, if context is statically
allocated at initialization time, there must be a way to obtain the global
context value for a piece of code independently of other processes. A more
static method, such as an MPI registry or a "dollar bill server", has the
disadvantage of requiring a much larger value range for context. That uses
precious bits in the envelope of every message. Once a context is allocated to
a piece of code, it can be safely stored in a global variable without
endangering thread safety or shared memory implementations, because no matter
how many instantiations of the library store into the variable, they will
always store the same value.

There are nice dynamic mechanisms for allocating context for groups, which
require only communication within the group. This can piggyback on the
communication which is probably required to set up and synchronize the group
when it is created. For instance, one might set aside a small number of
context values for use by groups. When a group is created, every process in
the group could provide its current set of free context values, possibly as a
bit vector. After a groupwide reduction, each process chooses the smallest
value from the intersection, resulting in every process choosing the same
value.
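The dynamic scheme described above can be sketched in C. The group-wide reduction itself is stubbed out as a local loop over the contributed masks; names, the 32-value context pool, and the bit-vector representation are assumptions of this sketch, not part of any proposal.

```c
/* Each process contributes a bit vector of its free context values.
 * A group-wide AND-reduction intersects them; every process then
 * picks the lowest set bit, so all choose the same context value. */

/* Stand-in for the group-wide AND-reduction MPI would perform. */
static unsigned intersect_masks(const unsigned *masks, int nprocs)
{
    unsigned common = ~0u;
    for (int i = 0; i < nprocs; i++)
        common &= masks[i];
    return common;
}

/* Lowest set bit = smallest context value free on every process;
 * returns -1 if the group has exhausted its context pool. */
static int choose_context(unsigned common)
{
    for (int c = 0; c < 32; c++)
        if (common & (1u << c))
            return c;
    return -1;
}
```

Because every process computes the same intersection and applies the same deterministic rule, no further communication is needed after the reduction.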

Other forms of context protection might be required in the future. I don't
predict any, and expect that with both a static and a dynamic form, it is
likely that future needs would be covered.

The point-to-point calls might be configured to accept (group, rank, context).
In this configuration, the static context protecting the code is passed in
explicitly, and the context protecting the group is inside the group object.

I'm not sure how this interacts with cross-group message passing. Perhaps the
simplest solution is to use a well-known group context in such cases, which
effectively disables group protection.

Those are my thoughts on context. Although I think the methods outlined here
are simple enough, I would be happy to see simpler mechanisms that solve the
same problems. I am not comfortable with any solution that requires active
participation by every library call, no matter how local.

Paul
From owner-mpi-collcomm@CS.UTK.EDU  Fri Apr  9 11:48:57 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA29408; Fri, 9 Apr 93 11:48:57 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA24729; Fri, 9 Apr 93 11:48:37 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Fri, 9 Apr 1993 11:48:36 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA24704; Fri, 9 Apr 93 11:48:22 -0400
Date: Fri, 9 Apr 93 16:48:19 BST
Message-Id: <3201.9304091548@subnode.epcc.ed.ac.uk>
From: L J Clarke <lyndon@epcc.ed.ac.uk>
Subject: Re: mpi-context: context and group (medium)
To: mpi-context@cs.utk.edu
In-Reply-To: L J Clarke's message of Thu, 8 Apr 93 15:56:22 BST
Reply-To: lyndon@epcc.ed.ac.uk
Cc: mpi-collcomm@cs.utk.edu

Dear MPI Colleagues

This is a short letter. 

First: a colleague here pointed out to me that I left an unfinished
point, i.e. failed to draw a conclusion, in the mail message "Subject:
mpi-context: context and group (medium)".  Apologies to you all for my
slipshod work.  I conclude that discussion here. 

Regarding the process identifier: there are arguments for and against
its appearance as a globally unique process identifier and as a
process-local handle to an opaque process descriptor object.  The
discussion in the referenced letter should have concluded that MPI
should say that it is a process-local identifier of a process expressed
as an integer, and no more.  This allows the implementation of MPI to
choose the "best" form, which may be a globally unique process
identifier, or a process-local opaque reference to a process
description object, or an index into a table of such objects
describing all processes. 

Second: the letter I sent to you all "Subject: mpi-context: comment and
suggestion" contained errors.  Apologies again.  I correct those errors here. 

* The claim that the conceptual framework of Tony regarding Group and 
  Context restricts the possibilities for inter(group)communication is 
  false.  It is the restriction of the message envelope to 
  (context,process,tag) which creates the limitation in this case.

* When I explained how intergroup communication can be done within the
  conceptual framework of Marc (Snir) I should have said that this 
  is a method for *simulating* intergroup communication without wildcard
  on group'.

* When I explained how intergroup communication can be done within the
  framework of Tony I should have said that this is a method for
  *implementing* intergroup communication without wildcard on group'.

Final: regarding the same letter, which really deals with the subject of
inter(group)communication, I may have made errors or at least unhelpful
assumptions in the latter couple of paragraphs of the message.  Again I
apologise.  I plan to go into deep thought on the subject of
inter(group,context)communication, and promise to deliver some quality
discussion to you all next week.  Please bear with me.  Until such time
I shall omit inter(group,context)communication from my discussions. 

Best Wishes
Lyndon


         /--------------------------------------------------------\
    e||) | Lyndon J Clarke    Edinburgh Parallel Computing Centre | e||) 
    c||c | Tel: 031 650 5021  Email: lyndon@epcc.edinburgh.ac.uk  | c||c 
         \--------------------------------------------------------/


From owner-mpi-collcomm@CS.UTK.EDU  Fri Apr  9 12:21:22 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA29775; Fri, 9 Apr 93 12:21:22 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA26245; Fri, 9 Apr 93 12:20:56 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Fri, 9 Apr 1993 12:20:56 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA26150; Fri, 9 Apr 93 12:20:12 -0400
Date: Fri, 9 Apr 93 17:20:03 BST
Message-Id: <3227.9304091620@subnode.epcc.ed.ac.uk>
From: L J Clarke <lyndon@epcc.ed.ac.uk>
Subject: mpi-context: Why scarce contexts?
To: mpi-context@cs.utk.edu
Reply-To: lyndon@epcc.ed.ac.uk
Cc: mpi-collcomm@cs.utk.edu

Dear MPI Colleagues.

This question is primarily directed at Mark Sears.  

Mark, in Proposal VII you say that contexts will be a scarce resource;
in fact you suggest 16, which to my mind is very scarce indeed. 

Why do you say this? It will help me/us if I/we understand, I am sure. 
Please reply. 

Best Wishes
Lyndon

         /--------------------------------------------------------\
    e||) | Lyndon J Clarke    Edinburgh Parallel Computing Centre | e||) 
    c||c | Tel: 031 650 5021  Email: lyndon@epcc.edinburgh.ac.uk  | c||c 
         \--------------------------------------------------------/


From owner-mpi-collcomm@CS.UTK.EDU  Fri Apr  9 13:32:48 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA01040; Fri, 9 Apr 93 13:32:48 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA29417; Fri, 9 Apr 93 13:32:23 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Fri, 9 Apr 1993 13:32:22 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA29393; Fri, 9 Apr 93 13:31:43 -0400
Date: Fri, 9 Apr 93 18:31:38 BST
Message-Id: <3385.9304091731@subnode.epcc.ed.ac.uk>
From: L J Clarke <lyndon@epcc.ed.ac.uk>
Subject: Re: mpi-context: Why scarce contexts? 
To: mpsears@newton.cs.sandia.gov, mpi-context@cs.utk.edu
In-Reply-To: mpsears@newton.cs.sandia.gov's message of Fri, 09 Apr 93 11:06:58 MST
Reply-To: lyndon@epcc.ed.ac.uk
Cc: mpi-collcomm@cs.utk.edu


Dear Mark

First, apologies for getting the proposal number wrong.

> 
> Lyndon asks why I think context values will be a scarce resource.
>

[stuff deleted]

> I think there are several reasons. The first is that a context
> requires underlying resources in the implementation (e.g. queues)
> which may be limited. A message arrives at a process, it goes
> into a queue matching the assigned context value in the
> envelope. Both support for the queue and the matching function
> take some effort. (16 queues is not too bad; 1000 is a lot.)
> One way to limit the effort required is to
> limit the number of supported contexts.

What you seem to be asking in this argument is that each process should
use a limited number of contexts, which is different to asking that the
system as a whole should use a limited number of contexts. Okay, that is
just perhaps a subtle point.

You are assuming details of an implementation of context.  For example,
in a different approach there could be just one queue which is searched
through (in some fashion) in receive for a matching message, testing for
context in no different way to testing for tag and sender.  In that
implementation contexts do not require resources, and the number of
contexts is bounded only by the bit length of the context identifier. 
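The single-queue alternative described above can be sketched in C. The envelope structure, function names, and wildcard constants below are illustrative assumptions, not taken from any MPI draft; the point is only that context is matched exactly while tag and sender may be wildcarded, using one queue and no per-context resources.

```c
#include <stdlib.h>
#include <assert.h>

#define ANY_TAG    -1   /* tag and sender may be wildcarded...      */
#define ANY_SENDER -1   /* ...but context may not (no wildcard)     */

/* one received-message envelope in the single shared queue */
typedef struct envelope {
    int context, sender, tag;
    struct envelope *next;
} envelope;

static envelope *queue_head = NULL;

/* append an arriving message to the single shared queue */
static void enqueue(int context, int sender, int tag)
{
    envelope *e = malloc(sizeof *e), **p = &queue_head;
    e->context = context; e->sender = sender; e->tag = tag; e->next = NULL;
    while (*p)
        p = &(*p)->next;
    *p = e;
}

/* search the queue for the first envelope matching (context, sender, tag);
   context is tested exactly, in no different way to tag and sender except
   that it may not be a wildcard.  Unlinks and returns the match, or NULL. */
static envelope *match(int context, int sender, int tag)
{
    envelope **p;
    for (p = &queue_head; *p; p = &(*p)->next) {
        envelope *e = *p;
        if (e->context == context &&
            (sender == ANY_SENDER || e->sender == sender) &&
            (tag == ANY_TAG || e->tag == tag)) {
            *p = e->next;
            return e;
        }
    }
    return NULL;
}
```

In this sketch the number of contexts is indeed bounded only by the width of the `context` field, at the cost of a linear search per receive.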

I imagine that you must have good reasons for the assumed implementation
of context.  Please do let me/us know why you make the assumption.  I am
sure that I am not alone in my concern that contexts should be so
scarce, but perhaps you know of very good reasons why they should be so. 

> Second, the bits in the envelope that support the context value
> have to come from somewhere, probably the existing tag field. If
> the tag field is only 16 bits to begin with (for argument's sake),
> then taking more than 4 bits for a context value might have a
> large impact.

I must be missing something here again.  This seems to say that the bit
length of the envelope is fixed to some number of bits and the more
fields we want to cram into the envelope the shorter the bit lengths of
fields must be.  Is there a good reason why the bit length of the
envelope should be fixed in this fashion, or perhaps are you arguing
that the bit length of the envelope should be as short as possible?

> This is a question vendors might answer: how many
> context values and tag values are you willing to support on future
> platforms and how many are you willing to back fit on existing ones?
> 

Yes, this would be a good question for the vendors indeed.  

VENDORS - PLEASE PLEASE PLEASE DO ADVISE US ON THIS ONE. 

> Last, I don't see a need for billions of contexts. My model calls
> for most programs to use handfuls, not thousands.

Yes, your model demands that programs use a handful; the concern which
I have is that complex and highly modular software will not be able to
conform to your model, inhibiting the development of third party
software. 

> I would also like to
> think (this is a hopeless cause, but here goes) that much of
> MPI could be implemented in hardware, not just the communications
> part but the part that we now think of as overhead. This would
> greatly extend the class of programs that could benefit from
> parallelization, and I oppose for this reason things which add
> unnecessary complexity to the communications process. 

I am sure that vendors do take very seriously the possibility of
implementing relevant parts of MPI in hardware. 

Best Wishes
Lyndon

         /--------------------------------------------------------\
    e||) | Lyndon J Clarke    Edinburgh Parallel Computing Centre | e||) 
    c||c | Tel: 031 650 5021  Email: lyndon@epcc.edinburgh.ac.uk  | c||c 
         \--------------------------------------------------------/


From owner-mpi-collcomm@CS.UTK.EDU  Fri Apr  9 15:33:50 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA05151; Fri, 9 Apr 93 15:33:50 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA04547; Fri, 9 Apr 93 15:33:05 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Fri, 9 Apr 1993 15:33:04 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA04510; Fri, 9 Apr 93 15:32:25 -0400
Date: Fri, 9 Apr 93 20:32:21 BST
Message-Id: <3457.9304091932@subnode.epcc.ed.ac.uk>
From: L J Clarke <lyndon@epcc.ed.ac.uk>
Subject: mpi-context: context management and group binding (long)
To: mpi-context@cs.utk.edu
Reply-To: lyndon@epcc.ed.ac.uk
Cc: mpi-collcomm@cs.utk.edu

Dear MPI Colleagues

I now discuss context management and group binding. As promised I omit
inter(group,context)communication  for  the  present.  This letter  is
further to letters of today and yesterday to mpi-comm and mpi-context.

Some of the people I talked with  about contexts at the recent meeting
wanted to be able  to  generate some contexts values  themselves, i.e.
not by calling context constructor procedures. This  is accommodated in
the recommendations of this letter.

In  my letter to mpi-comm  "Subject: various (long)"  I suggested that
the  question  of   secure/insecure   point-to-point  and   collective
communications  could  be described  as a property of  a context, with
some advantage. In this letter I will incorporate this feature.

I will also be discussing communicator objects as  in  Proposal X, but
with sensible names.  Tony Skjellum has  made the valued suggestion to
me,  privately,  that it  is better  to  attributise the  communicator
object with the secure/insecure  stuff,  rather  than  the context.  I
shall  adopt  this suggestion  in  this  letter  and  attributise  the
communicator object rather than the context.

			o--------------------o

Message Contexts
----------------

In this  proposal  a (message) context identifier  is (like a  message
tag) just an integer which is used in message selection and  (unlike a
message tag) may not be wildcard.

The interval of context identifiers (1, ...,  MPI_NUMUSR_CONTEXTS) is
reserved for  the  MPI user to manage as she sees  fit.  Use  of these
contexts  allows the user to  write programs  which do not make use of
the provided context creation and deletion facilities.  How big should
MPI_NUMUSR_CONTEXTS be? Say  1, 2, 4, 8, 16,  32, 64, 128, ...   Steve
and Tom, and friends, can you advise?


The  MPI system provides a procedure which  creates  a  unique context
outside  the  interval  of reserved user  context identifiers,  and  a
procedure  which deletes a created context  (it does  not delete  user
reserved contexts).  For example:

            context = mpi_create_context()
            mpi_delete_context(context)


There  may  be  advantage  in defining  the context  create and delete
functions such that they create and delete more than one context at  a
time, in order to amortise creation/deletion overhead. 

Please note that  these  context generation calls are made by a single
process and are  asynchronous.  They  can be implemented  as a process
local  operation  by  attaching  the  global  process identifier to a
process local context  allocator, at the  expense  of needing a lot of
bits in  the  context.   They can  also be implemented  via access  to
shared data (or a reactive server) in which case the bit length of the
context can be made smaller.  [I view this as an implementation detail
which we  should not dwell on in MPI, and should be the freedom of the
implementor  to  choose any  formally  correct method  which hopefully
optimises execution on the target platform.]
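The first, process local implementation option can be illustrated as follows. This is only a sketch of one formally correct method: the names, the 16/16 bit split, and the 32-bit context width are assumptions, not part of the proposal.

```c
#include <stdint.h>

/* Hypothetical process-local context allocator: the global process
   identifier occupies the high bits of the context value and a local
   counter the low bits, so two processes can never allocate the same
   context without ever needing to communicate.  The price is bits:
   16 bits of process id + 16 bits of counter = 32-bit contexts. */

#define LOCAL_BITS 16
#define LOCAL_MASK 0xFFFFu

static uint32_t local_counter = 0;

/* allocate a context unique across the whole application,
   using only process local state */
static uint32_t create_context(uint32_t my_process_id)
{
    return (my_process_id << LOCAL_BITS) | (local_counter++ & LOCAL_MASK);
}
```

A shared-data or server-based allocator would instead hand out small dense values, shrinking the context field at the cost of communication on creation.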

The  user program may make  use of the user reserved  contexts. ClassB
libraries (encapsulated  objects) are  expected  to use system created
contexts. These can  be created  as above or through  the Communicator
object constructors described below.

Communicator Objects
--------------------

The context  acquired by the  user in either of  the above ways is not
valid  for  communication.  Communication  is  effected by  use  of  a
Communicator  object,  which is a binding  of  context, zero  or  more
groups (just zero or one in this letter),  and communicator attributes
(just one in this letter).

Two classes of communicator are described in this letter:

* WorldCommunicator - an instance of a WorldCommunicator is a binding of
  context   to  nothing.   This   communicator   allows   the   user  to
  intracommunicate within  the world  of  processes comprising  the user
  application,  labelling  processes with their (process  local) process
  identifier.

* GroupCommunicator - an instance of a GroupCommunicator is a binding of
  context to  a process  group.  This  communicator  allows the user  to
  intracommunicate within the  group of  processes comprising the group,
  labelling processes with their (group global) rank within group.

Communicator   creation  defines   the   SECURITY  attribute   of  the
communicator to be created, which may be any of the following:

* MPI_DEFAULT_COMMUNICATOR - the default Security attribute
  specified in environmental management.

* MPI_REGULAR_COMMUNICATOR - the regular Security attribute
  which provides regular point-to-point and collective semantics

* MPI_SECURE_COMMUNICATOR  - the secure Security attribute
  which provides secure point-to-point and collective semantics

Communicator objects are opaque objects of undefined size, referenced
by an object handle which is expressed as an integer in the host
language.

Communicator  creation will create a context  for the Communicator, or
will  accept and  bind  a  user  managed  context. MPI should  provide
procedures for creation of each class of Communicator objects, and for
deletion of any class of Communicator object.  
For example,
            handle = mpi_create_world_communicator(context, security)
            handle = mpi_create_group_communicator(group, context, security)
            mpi_delete_communicator(handle)

In  each  creation  procedure  "security" is  the  security  attribute
described above. It is  the responsibility  of the user to ensure that
all communicators with the same context also have the same security.

In each creation procedure "context"  may be a user managed context or
may take the value  MPI_NULL_CONTEXT  (or  something  like that :-) in
which  case  the creation  procedure also creates  a context  for  the
communicator.  If the creation procedure creates  a  context  then the
procedure  synchronises the  calling processes  (all  processes for  a
WorldCommunicator and the  group of processes for a GroupCommunicator)
and returns the same context to each copy of the  communicator object.
If a user managed context was supplied then the  procedure is  process
local  and it is the responsibility of the  user  to ensure that  each
user managed context is bound to no more than one  communicator at any
time.

In the GroupCommunicator  creation procedure "group"  is a handle to a
group description.

The  communicator deletion procedure deletes the bound context if that
context  was created in the communicator creation  procedure  but does
not delete a user managed context.

Short Examples
--------------

A user program which only makes use of  two user reserved contexts and
makes no  use  of process  groupings  can "enable" the  user  reserved
contexts by creating WorldCommunicator objects.
For example,
            c0 = mpi_create_world_communicator(1,MPI_DEFAULT_COMMUNICATOR)
            c1 = mpi_create_world_communicator(2,MPI_DEFAULT_COMMUNICATOR)

A ClassA library can accept a communicator object as argument.
For example,
            void class_a_procedure(int communicator, ...) 
            {
              /* do it */
            }

A ClassB library can accept a group as argument and create private
GroupCommunicator objects.
For example,
            void class_b_procedure(int group, ...)
            {
              static int communicator = MPI_NULL_COMMUNICATOR;

              if (communicator == MPI_NULL_COMMUNICATOR) 
              {
                  communicator = mpi_create_group_communicator(group,
                                                    MPI_NULL_CONTEXT, 
                                            MPI_SECURE_COMMUNICATOR);
              }

              /* do it */
            }
This example could  be generalised by adding a group "cache"  facility
as described by Rik Littlefield.
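A minimal sketch of such a group cache follows. The stub constructor stands in for the proposal's mpi_create_group_communicator (which also takes a context and security attribute) so the fragment is self-contained, and the fixed-size linear table is purely illustrative: the point is that a call with a different group gets its own communicator rather than reusing the last one.

```c
#include <assert.h>

#define MAX_CACHED_GROUPS 32

/* Stand-in for the proposed constructor, so this sketch compiles on
   its own; it merely hands out distinct communicator handles. */
static int next_handle = 100;
static int create_group_communicator_stub(int group)
{
    (void)group;
    return next_handle++;
}

/* Per-library cache mapping group handles to communicator handles.
   The first call with a given group creates a communicator for it;
   later calls with the same group reuse the cached one. */
static int cached_group[MAX_CACHED_GROUPS];
static int cached_comm[MAX_CACHED_GROUPS];
static int n_cached = 0;

static int communicator_for_group(int group)
{
    int i;
    for (i = 0; i < n_cached; i++)
        if (cached_group[i] == group)
            return cached_comm[i];
    assert(n_cached < MAX_CACHED_GROUPS);
    cached_group[n_cached] = group;
    cached_comm[n_cached] = create_group_communicator_stub(group);
    return cached_comm[n_cached++];
}
```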

Point-to-point communication
----------------------------

The  point-to-point  (intra)communication  procedures  have a  generic
process     and     message     addressing     form     (communicator,
process_label, message_label).  I  shall  deal with  Send  and  Receive
separately.

Send(communicator, process-label, message-label)
----

* communicator  is   a WorldCommunicator or a GroupCommunicator

* process-label is { the (process local) identifier of the receiver when
                   {                 communicator is a WorldCommunicator
                   {
                   { the rank in communicator.group of the receiver when
                   {                 communicator is a GroupCommunicator

* message-label is   the message tag in communicator.context.

The point-to-point  communication is  REGULAR if communicator.security
has    the    value    MPI_REGULAR_COMMUNICATOR,    and    SECURE   if
communicator.security has the value MPI_SECURE_COMMUNICATOR.


Recv(communicator, process-label, message-label)
----

* communicator  is   a WorldCommunicator or a GroupCommunicator

* process-label is { the (process local) identifier of the receiver when
                   {                 communicator is a WorldCommunicator
                   {
                   { the rank in communicator.group of the receiver when
                   {                 communicator is a GroupCommunicator
                   {
                   { a wildcard value in either case

* message-label is   the message tag in communicator.context or a
  wildcard value

The point-to-point  communication is  REGULAR if communicator.security
has    the    value    MPI_REGULAR_COMMUNICATOR,    and    SECURE   if
communicator.security has the value MPI_SECURE_COMMUNICATOR.

Collective communication
------------------------

The WorldCommunicator is not valid for MPI collective communication.

The  GroupCommunicator  is  valid  for  MPI  collective  communication
procedures.     The   collective    communication   is   REGULAR    if
communicator.security  has  the  value  MPI_REGULAR_COMMUNICATOR,  and
SECURE if communicator.security has the value MPI_SECURE_COMMUNICATOR.

			o--------------------o


Comments, questions, (flames :-), please!

For your convenience, my plan now is to go into a session of deep thought
regarding intercommunication, the work we have done at EPCC, and MPI.  I
will then discuss these thoughts with my colleagues here, and promise to
return quality discussion of intercommunication to you sometime next
week. 

[If anyone wants to discuss intercommunication with me, I prefer to do
so privately until I have really thought longer and harder than before.]

I have an outstanding reply to Paul Pierce's recent letter, which I shall
make now.  I'll be off-line for a while, probably come on-line again
Sunday, and will reply to letters which I hope you will write in a
reactive and less prolific fashion. 

Happy reading :-)

Best Wishes
Lyndon

         /--------------------------------------------------------\
    e||) | Lyndon J Clarke    Edinburgh Parallel Computing Centre | e||) 
    c||c | Tel: 031 650 5021  Email: lyndon@epcc.edinburgh.ac.uk  | c||c 
         \--------------------------------------------------------/


From owner-mpi-collcomm@CS.UTK.EDU  Fri Apr  9 16:01:34 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA06769; Fri, 9 Apr 93 16:01:34 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA05808; Fri, 9 Apr 93 16:01:03 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Fri, 9 Apr 1993 16:01:02 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA05800; Fri, 9 Apr 93 16:01:00 -0400
Date: Fri, 9 Apr 93 21:00:58 BST
Message-Id: <3497.9304092000@subnode.epcc.ed.ac.uk>
From: L J Clarke <lyndon@epcc.ed.ac.uk>
Subject: mpi-context: CORRECTION to previous message
To: mpi-collcomm@cs.utk.edu
Reply-To: lyndon@epcc.ed.ac.uk

Dear MPI Colleagues

An astute colleague here has pointed out two silly errors and some
exceptionally bad phrasing in my previous letter "Subject: mpi-context:
context management and group binding (long)"

When describing point-to-point receive, please replace the two erroneous
occurrences of "receiver" by "sender".  Cut and paste errors, sorry. 

In the final paragraph I am inviting your replies and informing you that
I personally will be in a reactive and less prolific mode of operation. 
The wording implies that I am asking you to be reactive and less
prolific, which of course I would not ask.  Tired and hungry errors (it's
9pm here now, Easter Friday), sorry. 

Best Wishes
Lyndon "the prolific" 

ps thanks Al :-)

         /--------------------------------------------------------\
    e||) | Lyndon J Clarke    Edinburgh Parallel Computing Centre | e||) 
    c||c | Tel: 031 650 5021  Email: lyndon@epcc.edinburgh.ac.uk  | c||c 
         \--------------------------------------------------------/


From owner-mpi-collcomm@CS.UTK.EDU  Fri Apr  9 16:20:54 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA07116; Fri, 9 Apr 93 16:20:54 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA06566; Fri, 9 Apr 93 16:20:22 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Fri, 9 Apr 1993 16:20:22 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA06500; Fri, 9 Apr 93 16:19:36 -0400
Date: Fri, 9 Apr 93 21:19:33 BST
Message-Id: <3512.9304092019@subnode.epcc.ed.ac.uk>
From: L J Clarke <lyndon@epcc.ed.ac.uk>
Subject: Re: mpi-context: context and group (longer) 
To: prp@SSD.intel.com
In-Reply-To: prp@SSD.intel.com's message of Thu, 08 Apr 93 11:29:42 -0700
Reply-To: lyndon@epcc.ed.ac.uk
Cc: mpi-context@cs.utk.edu, mpi-collcomm@cs.utk.edu

Paul Pierce writes:

> > For  me the arguments  have  piled up  in favour  of context and group
> > being separate and  independent entities.
> >
> > Lyndon
> 
> I agree, although I also see merit in associating a context with a group.
> 

Hey, some consensus here.  Magic!

BTW, Paul, I wanted to ask what you thought about my suggestions for
secure send/receive being bound to a context kind of thing (now
communicator object as of last mail to mpi-context), as opposed to
having different calls.  I think you are right about the microscopic
effect on the code.  I just tried to give both global default control
and per module instance control over the security question. 

> Consider a SPMD program with these calls. Assume the calls are loosely
> synchronous.
> 
> 		call to LibA (Group1)
> 		call to LibB (Group1)
> 		call to LibB (Group2)
> 
> In a loosely synchronous environment, messages for the next call can come in
> before the previous one has completed. Here we see two forms of overlap.
> 
> Within the call to LibA, we might get messages from processes which have already
> entered LibB. If LibA and LibB are independently written, they might use some
> of the same tags. To avoid messages from LibB matching receives in LibA, we
> must use different contexts. If we have static contexts, allocated when the
> libraries are initialized, each call in the library can quickly provide the
> context to its point-to-point calls. If we only have dynamic contexts,
> especially if contexts are carried inside groups, then a library must be
> prepared to dynamically allocate a new context on any call when it sees a new
> group. I know we discussed ways to do this locally, so the context could be
> created and cached locally on the fly without communication, but I find the
> idea of incorporating such code into every library call horrifying.

Paul, I have a model for libraries like this, which in my mail to
mpi-comm "Subject: mpi-comm: various (long)" I referred to as ClassB
libraries, and which you might want to think about.  It's quite
simple.  We write libraries just like this, which are akin to
encapsulated objects. 

We think in terms of library instances.  The library provides an
instance constructor which accepts a group, creates context(s) for the
instance, and constructs the instance, returning an instance id which
the user then passes to all subsequent calls, up to and including the
instance destructor, which asks the instance to destruct itself. 

Our experience is that users do not find it difficult to manage this
model for ClassB libraries. 

> 
> So I propose that we need two forms of context, one that is quite static for
> protecting code, and one that is more dynamic for protecting groups.

I cannot see any difference between the latter of these two contexts
"more dynamic for protecting groups" and a global group identifier. 

> The only mechanism I know of that is adequate for protecting code is context
> allocated via a nameserver. In MIMD programs, one cannot say much about the
> order in which libraries are initialized. Thus, if context is statically
> allocated at initialization time, there must be a way to obtain the global
> context value for a piece of code independently of other processes. A more
> static method, such as a MPI registry or a "dollar bill server" has the
> disadvantage of requiring a much larger value range for context. That uses
> precious bits in the envelope of every message. Once a context is allocated to
> a piece of code, it can be safely stored in a global variable without
> endangering thread safety or shared memory implementations, because no matter
> how many instantiations of the library store into the variable, they will
> always store the same value.

We find that with regard to operations within a process group, and in
particular to library instance construction and destruction described
above, the main user program has a highly SPMD nature.  So we can
exploit sequencing.  This is a most valuable learning experience,
because we had similar thoughts to those you express here, implemented a
name server, and really didn't need it once (for this purpose). 

> 
> The point-to-point calls might be configured to accept (group, rank, context).
> In this configuration, the static context protecting the code is passed in
> explicitly, and the context protecting the group is inside the group object.
> 
> I'm not sure how this interacts with cross-group message passing. Perhaps the
> simplest solution is to use a well-known group context in such cases, which
> effectively disables group protection.

As I point out above, your "group protecting context hidden inside
group" really does just seem to me to be a global group identifier. 
Within this definition of context I see no reason why it would
necessarily cause a problem with intercommunication. 

When you say "use a well-known group context in such cases" I take it you
mean a common ancestor like the "group context" of all processes or
something? 

As promised, I will return quality discussion on intercommunication
next week. 

Did the points in this reply letter help, Paul?

Best Wishes
Lyndon "the temporarily less prolific"

         /--------------------------------------------------------\
    e||) | Lyndon J Clarke    Edinburgh Parallel Computing Centre | e||) 
    c||c | Tel: 031 650 5021  Email: lyndon@epcc.edinburgh.ac.uk  | c||c 
         \--------------------------------------------------------/


From owner-mpi-collcomm@CS.UTK.EDU  Fri Apr  9 18:44:27 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA08721; Fri, 9 Apr 93 18:44:27 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA12880; Fri, 9 Apr 93 18:43:52 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Fri, 9 Apr 1993 18:43:51 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from ssd.intel.com by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA12872; Fri, 9 Apr 93 18:43:49 -0400
Received: from ernie.ssd.intel.com by SSD.intel.com (4.1/SMI-4.1)
	id AA24612; Fri, 9 Apr 93 15:43:36 PDT
Message-Id: <9304092243.AA24612@SSD.intel.com>
To: lyndon@epcc.ed.ac.uk
Cc: prp@SSD.intel.com, mpi-context@cs.utk.edu, mpi-collcomm@cs.utk.edu
Subject: Re: mpi-context: context and group (longer) 
In-Reply-To: Your message of "Fri, 09 Apr 93 21:19:33 BST."
             <3512.9304092019@subnode.epcc.ed.ac.uk> 
Date: Fri, 09 Apr 93 15:43:35 -0700
From: prp@SSD.intel.com


> From: L J Clarke <lyndon@epcc.ed.ac.uk>
> 
> Paul Pierce writes:
>
> > Consider a SPMD program with these calls. Assume the calls are loosely
> > synchronous.
> > 
> > 		call to LibA (Group1)
> > 		call to LibB (Group1)
> > 		call to LibB (Group2)
> > 
> > In a loosely synchronous environment, messages for the next call can come in
> > before the previous one has completed. Here we see two forms of overlap.
> > 
> > Within the call to LibA, we might get messages from processes which have already
> > entered LibB.
> 
> Paul, I have a model for libraries like this, which in my mail to
> mpi-comm "Subject: mpi-comm: various (long)" I referred to as ClassB
> libraries ...

Yes, that is a matching concept.

> We think in terms of library instances. ...

I found this part hard to understand. However, if you propose to use the
mechanisms in your just previous mail:

> A ClassB library can accept a group as argument and create private
> GroupCommunicator objects.
> For example,
>            void class_b_procedure(int group, ...)
>            {
>              static int communicator = MPI_NULL_COMMUNICATOR;
>
>              if (communicator == MPI_NULL_COMMUNICATOR) 
>              {
>                  communicator = mpi_create_group_communicator(group,
>                                                    MPI_NULL_CONTEXT, 
>                                            MPI_SECURE_COMMUNICATOR);
>              }
>
>              /* do it */
>            }
> This example could  be generalised by adding a group "cache"  facility
> as described by Rik Littlefield.

First of all this code doesn't work - if the library is called with a
different group (see the LibB(Group{1,2}) example above) it will mistakenly
use the communicator for Group1 when called for Group2. This problem can be
fixed using caching. But...

This is exactly the sort of too-dynamic, too-intrusive mechanism I find
horrifying. I can't conceive of unleashing on the unsuspecting world a
standard that requires you to put code like that in every library call
(it's even more complex with caching.) We _must_ come up with a
better mechanism.

The example mechanism I talked about might look like this:

	int my_context;

	void class_b_initialize() /* Called once at the beginning of time */
	{
		my_context = create_and_or_lookup_context("mylib");
	}

	void class_b_procedure(int group, ...)
	{

		/* do it using (group, rank, my_context) */
	}

Note the total absence of context maintenance in the arbitrary library
procedure. For group protection, the group must contain an additional embedded
context.
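For illustration, a process-local sketch of what create_and_or_lookup_context might do is given below. All names, table sizes, and the starting context value are assumptions; in particular this sketch takes no position on how the returned values are made to agree across processes, which is exactly the nameserver-versus-sequencing question at issue.

```c
#include <assert.h>
#include <string.h>

#define MAX_NAMES          64
#define FIRST_CODE_CONTEXT 1000  /* above any user reserved contexts */

/* Process-local registry of static "code protecting" contexts keyed
   by library name: the first lookup of a name allocates the next free
   context; every later lookup of that name returns the same value, so
   it may safely be stored in a global variable. */
static const char *names[MAX_NAMES];
static int n_names = 0;

static int create_and_or_lookup_context(const char *name)
{
    int i;
    for (i = 0; i < n_names; i++)
        if (strcmp(names[i], name) == 0)
            return FIRST_CODE_CONTEXT + i;
    assert(n_names < MAX_NAMES);
    names[n_names] = name;
    return FIRST_CODE_CONTEXT + n_names++;
}
```

For these values to be globally consistent, either every process must perform its lookups in the same order, or a nameserver must arbitrate the name-to-context mapping.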

> > So I propose that we need two forms of context, one that is quite static for
> > protecting code, and one that is more dynamic for protecting groups.
> 
> I cannot see any difference between the latter of these two contexts
> "more dynamic for protecting groups" and a global group identifier.

You are right, it's the same. My point is that group context is necessary but
_not_ sufficient.

> > The only mechanism I know of that is adequate for protecting code is context
> > allocated via a nameserver. ...
> 
> We find that with regard to operations within a process group, and in
> particular to library instance construction and destruction described
> above, the main user program has a highly SPMD nature.  So we can
> exploit sequencing.  This is a most valuable learning experience,
> because we had similar thoughts to those you express here, implemented a
> name server, and really didn't need it once (for this purpose).

You learned that you have a SPMD universe. We have a mostly SPMD universe, but
we have customers already with MPMD applications.

One can argue that sequencing is acceptable for a static-process model, but it
is not adequate for dynamic processes. We have talked about defining MPI in
such a way that it is complete for a static process model but without limiting
its extension to a dynamic process model. So we must be careful - if we assume
sequencing now, we must do it in a way that allows for a nameserver later.


> Lyndon "the temporarily less prolific"

Paul
From owner-mpi-collcomm@CS.UTK.EDU  Fri Apr  9 22:58:35 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA11840; Fri, 9 Apr 93 22:58:35 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA20945; Fri, 9 Apr 93 22:58:12 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Fri, 9 Apr 1993 22:58:12 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA20883; Fri, 9 Apr 93 22:56:53 -0400
Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Fri, 9 Apr 93
 19:45 PST
Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA09922; Fri,
 9 Apr 93 19:43:30 PDT
Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA22459; Fri, 9 Apr 93 19:43:26 PDT
Date: Fri, 9 Apr 93 19:43:26 PDT
From: rj_littlefield@pnlg.pnl.gov
Subject: proposal -- context and tag limits
To: lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu, mpsears@newton.cs.sandia.gov
Cc: d39135@carbon.pnl.gov, gropp@mcs.anl.gov, mpi-collcomm@cs.utk.edu,
        mpi-envir@cs.utk.edu, mpi-pt2pt@cs.utk.edu
Message-Id: <9304100243.AA22459@sodium.pnl.gov>
X-Envelope-To: mpi-pt2pt@cs.utk.edu, mpi-envir@cs.utk.edu,
 mpi-context@cs.utk.edu, mpi-collcomm@cs.utk.edu

Lyndon et al. write:

> ...  This seems to say that the bit
> length of the envelope is fixed to some number of bits and the more
> fields we want to cram into the envelope the shorter the bit lengths of
> fields must be.  Is there a good reason why the bit length of the
> envelope should be fixed in this fashion, or perhaps are you arguing
> that the bit length of the envelope should be as short as possible?
> 
> > This is a question vendors might answer: how many
> > context values and tag values are you willing to support on future
> > platforms and how many are you willing to back fit on existing ones?
> 
> Yes, this would be a good question for the vendors indeed.  
> 
> VENDORS - PLEASE PLEASE PLEASE DO ADVISE US ON THIS ONE. 

I wonder what kind of useful advice vendors could really give us.

Hardware support boils down to a question of getting faster
performance in exchange for some relatively small resource limit.

But in almost every case I can think of, such limits are made
functionally transparent to the user by automatic fallback to
some slower mechanism without the resource limit.  Thus we have:
  fixed size register sets with compilers that spill to memory,
  fixed size caches with automatic flush/reload from main memory,
  fixed size TLB's with cpu traps for TLB reload, 
  fixed size physical memory with virtual memory support, 
and so on.

The only counterexample that pops to mind is fixed-length numeric
values, for which reasonably well established conventions exist.

No such conventions currently exist regarding tag and context
values.

============  PROPOSAL TO ENVIRONMENT COMMITTEE ==============

The MPI specification should 

1. require that all MPI implementations provide functional
   support for specified generous limits (e.g., 32 bits) on tag
   and context values, and

2. suggest that vendors provide a system-specific mechanism by
   which the user can optionally specify tag and context limits
   that the program agrees to abide by.  Even the form of
   these limits should remain unspecified since they may vary
   from system to system.
   
======================== END PROPOSAL ========================

Further discussion...

If a vendor wishes to provide hardware support to enhance
performance for some stricter limits, and if some people are able
and willing to write programs within those limits, that's great.
Those people on those machines will be happy as larks.  If the
performance increase is substantial, and I'm on one of those
machines, and my program is simple enough, I'll probably be one
of those people.

However, I am not aware of any system on which generous limits
could not be supported, albeit with some loss of performance
compared to staying within the (currently hypothetical)
hardware-supported limits.

Everyone I know would MUCH prefer suboptimal performance 
over HAVING to rewrite applications to conform to varying and
inconsistent hard limits.

Yes, I recall the many arguments against mandating specific
limits.  But, I claim that those arguments are misdirected.
They are based on analogy to things like word length and memory
size, which I again note are subject to well established
conventions and principles.  (You can't run big programs on small
machines, and we pretty much agree about what "big" and "small"
mean.)  In the case of context and tag values, such conventions
do not exist, and a very wide range of conflicting limits have
been discussed at various times and places.

I believe that we will not meet our goal of portability 
if we do not specify usable limits on tag and context values.

--Rik

----------------------------------------------------------------------
rj_littlefield@pnl.gov (alias 'd39135')   Rik Littlefield
Tel: 509-375-3927                         Pacific Northwest Lab, MS K1-87
Fax: 509-375-6631                         P.O.Box 999, Richland, WA  99352
From owner-mpi-collcomm@CS.UTK.EDU  Mon Apr 12 13:58:09 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA19594; Mon, 12 Apr 93 13:58:09 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA24464; Mon, 12 Apr 93 13:57:40 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Mon, 12 Apr 1993 13:57:38 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA24426; Mon, 12 Apr 93 13:56:31 -0400
Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Mon, 12 Apr 93
 10:42 PST
Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA11608; Mon,
 12 Apr 93 10:40:32 PDT
Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA24711; Mon, 12 Apr 93 10:40:28
 PDT
Date: Mon, 12 Apr 93 10:40:28 PDT
From: rj_littlefield@pnlg.pnl.gov
Subject: contexts examples/problems 1-3
To: jwf@parasoft.com, lyndon@epcc.ed.ac.uk, mpi-collcomm@cs.utk.edu,
        mpi-context@cs.utk.edu, mpsears@cs.sandia.gov, snir@watson.ibm.com,
        tony@cs.msstate.edu
Cc: d39135@carbon.pnl.gov
Message-Id: <9304121740.AA24711@sodium.pnl.gov>
X-Envelope-To: mpi-context@cs.utk.edu, mpi-collcomm@cs.utk.edu

Folks,

As Tony Skjellum noted, I am organizing a set of test cases & issues
to be addressed by the various context proposals.  I have formulated
these as a set of "problems" such as might be found on an essay test.

Here are draft versions of the first three "problem statements"
for the context proposals.

I anticipate that at least one more problem will be submitted.

Please tell me about defects and inadequacies in these problems.

If you have a favorite concern, now is the time to get it
reflected in the problem set.

Thanks,
--Rik Littlefield


BACKGROUND INFO

. Be sure that your point-to-point and group/context control calls
  are specified elsewhere in your proposal.

PROBLEM 1 (simple):

. Specify your calling sequence for an MPI circular-shift routine
  that operates on a contiguous buffer of double precision float
  values.

  E.g. you might specify

    MPI_CSHIFTB (inbuf,outbuf,datatype,len,group,shift)

    where  IN inbuf        input buffer
           OUT outbuf      output buffer
           IN datatype     symbolic constant MPI_DOUBLE
           IN len          length of inbuf (# of elements)
           IN group        handle to group descriptor
           IN shift        number of processes to shift
           
. Assume that a user desires to write a new collective
  communication routine with the same calling sequence as cshift,
  but with different semantics.  

  To be definite, this routine exchanges data in the pattern needed
  for one stage in a butterfly.  I.e., the process of rank i exchanges
  data with the process of rank i + shift*(1 - 2*((i % (2*shift))/shift)),
  where the division is integer division.

  Call this routine bflyexchange.

. Show an implementation of bflyexchange in terms of your
  point-to-point and group/context control calls.

. Specify the conditions necessary to ensure correct operation of
  this implementation.

  E.g., you might say "safe under all conditions", "safe if and
  only if no other routine issues wildcard receives in the same
  group/context", "safe if and only if context and tag are
  unique", or something like that.

  Making these conditions simple and broad is good.  
  Getting caught stating conditions that are too broad is bad.

. Discuss the performance of this implementation.  

  Note that the semantics of bflyexchange require only a single send
  and receive per process.  Explain how this level of performance can
  be achieved or approached by your implementation.  

  If you assert that group control operations can be done without
  communications, explain how this works and what implications it has
  on other system parameters, e.g., the number and range of context
  values.
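As a concreteness check on the butterfly pattern in Problem 1, the
partner computation can be sketched in C (integer division intended;
`bfly_partner` is an illustrative name, not a proposed MPI call):

```c
/* Rank of the exchange partner for one butterfly stage of stride
 * `shift`: within each block of 2*shift consecutive ranks, the lower
 * half partners upward and the upper half partners downward.  When
 * shift is a power of two this reduces to i XOR shift. */
static int bfly_partner(int i, int shift)
{
    return i + shift * (1 - 2 * ((i % (2 * shift)) / shift));
}
```

The pairing is symmetric: each process's partner's partner is itself,
as a correct exchange requires.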


PROBLEM 2 (medium)

. Write a "guidelines for library developers and users" document
  that explains how to write and call libraries in order to maintain
  message-passing isolation between the various libraries and
  between the libraries and the user program.  Be sure to explain
  how to achieve good efficiency.

  Be complete, but brief.

  (Long explanations can be interpreted as indicating a complex
  design.)

  You may wish to describe two or more self-consistent strategies,
  along the lines of Lyndon's "ClassA" and "ClassB" libraries as
  discussed earlier on mpi-context.


PROBLEM 3 (hard?)

This problem is paraphrased from one posed by Jon Flower.  The task
is to simulate the host-node programming model through the use of
"host" and "node" groups.  This is interesting both for backward-
compatibility and for its inter-group communication requirements.

As stated by Jon, this problem really spans subcommittees.  For
the sake of the present discussion, I have reformulated it in
terms of an SPMD programming model in which a black-box function
is used to tell each process whether it's the host or a node.
Note in particular that nodes don't know the id of the host.

Here is pseudo-code for the desired program:

    main()
    {
      if (I_am_the_host())
        host ();
      else
        node ();
    }

    host ()
    {
    /*
     * Form two groups containing:
     *     i)  only the host process.
     *     ii) the node processes.
     */
	host_group = mpi_...;
	node_group = mpi_...;
    /*
     * Broadcast from host to all nodes; using "ALL" group.
     * (It would be nice to have inter-group broadcast for
     *  this since that is more like "current practice".)
     */
	myrank = mpi_...;
	mpi_bcast( ...., myrank, MPI_GROUP_ALL, ...);
    /*
     * Send individual message to each node in turn.
     */
	for(node=0; node < MPI_ORDER(node_group); node++) {
	    mpi_send( ..., (node_group, node), ...);
	}
    /*
     * Receive result from node 0.
     */
	mpi_recv( ..., (node_group, 0), ...);
    }

    node()
    {
    /*
     * Form two groups containing:
     *     i)  only the host process.
     *     ii) the node processes.
     */
	host_group = mpi_...;
	node_group = mpi_...;
    /*
     * Receive bcast from host using ALL group.
     */
	host_rank = mpi_...;
	mpi_bcast(..., host_rank, MPI_GROUP_ALL, ...);
    /*
     * Receive single message from host.
     */
	mpi_recv(..., 0, host_group, ...);
    /*
     * Send point-to-point messages in node group.
     */
        myrank = mpi_... (node_group);
        nnodes = mpi_... (node_group);
        sendhandle = mpi_isend( ..., 
         (node_group,(myrank+1)%nnodes), ...);
        mpi_recv ( ...,
         (node_group,(myrank-1+nnodes)%nnodes), ...);
        mpi_complete (sendhandle);
    /*
     * Compute global sum in nodes only.
     */
	mpi_reduce(...  , node_group, MPI_SUM_OP, ...);
    /*
     * Node 0 sends sum to host.
     */
	 if(myrank == 0) mpi_send(..., 0, host_group, ...);
    }

. Show how to implement this pseudo-code using your point-to-point
  and group calls.  Note that this code wants to think of node
  processes in terms of their rank in the node_group, not the
  ALL group.  Be sure to show all details of any translations
  that are required.

. Discuss how the collective comms and point-to-point messages
  are kept separate, even if the point-to-point calls are
  changed to use wildcards.

----------------------------------------------------------------------
rj_littlefield@pnl.gov (alias 'd39135')   Rik Littlefield
Tel: 509-375-3927                         Pacific Northwest Lab, MS K1-87
Fax: 509-375-6631                         P.O.Box 999, Richland, WA  99352
From owner-mpi-collcomm@CS.UTK.EDU  Mon Apr 12 17:55:01 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA24478; Mon, 12 Apr 93 17:55:01 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA10963; Mon, 12 Apr 93 17:54:05 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Mon, 12 Apr 1993 17:54:04 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA10955; Mon, 12 Apr 93 17:54:02 -0400
Received:  by Aurora.CS.MsState.Edu (4.1/6.0s-FWP);
	   id AA13925; Mon, 12 Apr 93 16:52:04 CDT
Date: Mon, 12 Apr 93 16:52:04 CDT
From: Tony Skjellum <tony@Aurora.CS.MsState.Edu>
Message-Id: <9304122152.AA13925@Aurora.CS.MsState.Edu>
To: tony@aurora@cs.msstate.edu, mpsears@newton.cs.sandia.gov
Subject: Re: the gathering
Cc: mpi-context@cs.utk.edu, mpi-collcomm@cs.utk.edu

Mark,

You should explain what your model implies about the starting of
processes.  If you assume that processes have been started by MPI, that
is OK (generally a tacit assumption of MPI1), but in any event you
should tell us what the process is told at the moment of spawning (eg,
about ALL groups, or its name, etc), that will help it become part of
MPI-based communication.  We need to see how "safe"/"unsafe" it will
be to start MPI in every model.  If it is extremely difficult/simple
to get from the "just-spawned" state to the "MPI-up-and-running" state,
that should be made clear.

I am happy to answer more questions!  Please shoot away.

- Tony
PS Because this is of general interest to all readers, I am echoing to
	the reflector.  I hope that is OK with you.

----- Begin Included Message -----

From mpsears@newton.cs.sandia.gov Mon Apr 12 15:08:18 1993
To: tony@aurora@cs.msstate.edu
Subject: Re: the gathering
Date: Mon, 12 Apr 93 14:10:36 MST
From: mpsears@newton.cs.sandia.gov
Content-Length: 243


Tony, I need a little clarification of what you mean by

"Include discussion of how starting works and what the spawning
semantics must provide them (or through an initial message)
so that they can work."

Starting and spawning what?

mark




----- End Included Message -----

From owner-mpi-collcomm@CS.UTK.EDU  Mon Apr 19 09:47:57 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA01603; Mon, 19 Apr 93 09:47:57 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA09663; Mon, 19 Apr 93 09:47:31 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Mon, 19 Apr 1993 09:47:30 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA09302; Mon, 19 Apr 93 09:45:46 -0400
Date: Mon, 19 Apr 93 14:45:01 BST
Message-Id: <3994.9304191345@subnode.epcc.ed.ac.uk>
From: L J Clarke <lyndon@epcc.ed.ac.uk>
Subject: operation modes
To: mpi-pt2pt@cs.utk.edu, mpi-collcomm@cs.utk.edu
Reply-To: lyndon@epcc.ed.ac.uk

Dear mpi-pt2pt and mpi-collcomm colleagues,

I'm sending this to both subcommittees.  There is a section for pt2pt
and a section for collcomm; however, these sections deal with a subject
which probably should be consistent across both subcommittees, hence I
send to both. 

                           o----------o
pt2pt
-----

Here is (yet another) suggestion which if adopted would help to reduce
the multiplicity of send calls.  In particular the multiplicity derived
from the three extant communication modes REGULAR (STANDARD?), READY and
SECURE (SYNCHRONOUS?). 

Observe that the send call in the case of each mode has the same syntax
class, unlike the multiplicity derived from data buffer nature.  The
suggestion is to have one send procedure which accepts a MODE argument
describing the communication mode, i.e.  is one of: REGULAR (STANDARD?);
READY; SECURE (SYNCHRONOUS?). 

This lets the MPI user make either local code decisions about which mode
is appropriate, by using the above names, or global code decisions by
use of #define in C and use of PARAMETER in Fortran (for example).

I also suggest that we say SYNCHRONOUS rather than SECURE, so as not to
give the impression that REGULAR (or STANDARD) is never secure, since it
may be secure some of the time. 

I propose to the pt2pt subcommittee the suggestions made here.

                           o----------o

collcomm
--------

There is a class of collcomm procedures which may or may not barrier
synchronise the calling group.  The suggestion at the last meeting was
that users have to write code which allows such procedures to barrier,
even though they may not. 

The suggestion here is that those procedures which are not implicitly
barrier synchronising accept a MODE argument which determines whether
they certainly barrier synchronise, or whether they may or may not
barrier synchronise depending on the implementation.  This mode argument
is one of: REGULAR; SYNCHRONOUS.  Obviously I suggest that SYNCHRONOUS
is the mode which forces barrier synchronisation of the group. 

This is consistent with the pt2pt suggestion above, except that READY is
not a collcomm mode, and again lets the MPI user make either local code
decisions about which mode is appropriate, by using the above names, or
global code decisions by use of #define in C and use of PARAMETER in
Fortran (for example). 
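The global-decision idiom can be sketched in C as follows (the mode
names and `mode_name` helper here are illustrative stand-ins, not
settled MPI names):

```c
/* Illustrative mode constants -- the names follow the suggestion in
 * this letter and are not part of any existing MPI draft binding. */
enum comm_mode { MODE_STANDARD, MODE_READY, MODE_SYNCHRONOUS };

/* A global code decision: this one definition switches the mode used
 * by every send (and every maybe-synchronising collective) at once. */
#define MY_COMM_MODE MODE_SYNCHRONOUS

/* Stand-in for a send or collective call that accepts the mode
 * argument; here it just reports which mode it was given. */
static const char *mode_name(enum comm_mode m)
{
    if (m == MODE_READY)       return "READY";
    if (m == MODE_SYNCHRONOUS) return "SYNCHRONOUS";
    return "STANDARD";
}
```

A local code decision is simply a call passing MODE_READY (say)
directly at the one site where that mode is known to be appropriate.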

I propose to the collcomm subcommittee the suggestions made here.

                           o----------o

Comments, questions, (flames :-) please.

Best Wishes
Lyndon

         /--------------------------------------------------------\
    e||) | Lyndon J Clarke    Edinburgh Parallel Computing Centre | e||) 
    c||c | Tel: 031 650 5021  Email: lyndon@epcc.edinburgh.ac.uk  | c||c 
         \--------------------------------------------------------/


From owner-mpi-collcomm@CS.UTK.EDU  Tue Apr 20 13:27:37 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA03009; Tue, 20 Apr 93 13:27:37 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA18519; Tue, 20 Apr 93 13:26:45 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Tue, 20 Apr 1993 13:26:44 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA18365; Tue, 20 Apr 93 13:24:25 -0400
Date: Tue, 20 Apr 93 18:23:45 BST
Message-Id: <4968.9304201723@subnode.epcc.ed.ac.uk>
From: L J Clarke <lyndon@epcc.ed.ac.uk>
To: mpi-context@cs.utk.edu
Reply-To: lyndon@epcc.ed.ac.uk
Cc: mpi-collcomm@cs.utk.edu

Subject: mpi-context; intercommunication etc (long)

Dear mpi-context colleagues

I  previously  wrote  regarding  context  management  and  binding  of
contexts for intracommunication in the  letter 
[Subject:  mpi-context: context management and  group binding (long)]  
and  sent  out a  short correction in the letter 
[Subject: mpi-context: CORRECTION to previous message] 
to which I draw your attention.

In this letter I wish to briefly revisit and recap the above subjects,
then move on to  briefly discuss and  make  a  concrete suggestion for
intercommunication.   This is a long letter. Probably best to print it
and read over a coffee.

I really must clarify  the  nature of the context which I am assuming.
In  this letter contexts are assumed to be global in the sense that if
a process P  creates a context C, it can send C to another  process Q,
and Q can both send and  receive  messages of context C.  This is  the
model  adopted  by Zipcode, which  I view as the  exemplar of existing
practice regarding message context.

Regarding intracommunication I hope  to slightly simplify the  content
of my suggestion compared to the letters referred to above.  Regarding
intercommunication the particular suggestion I  make is  motivated  by
conformity  with intracommunication both in the point-to-point  syntax
class, and in the content of the message envelope.

			o--------------------o

1. Communicator and Communication
=================================

Communicator   objects   provide    point-to-point    and   collective
communication in MPI.  A communicator object is a binding of a message
context   and   one  or   more  process  worlds.   Two  subclasses  of
communicator   object   are  defined  below,   intracommunicator   and
intercommunicator.   Communicator  objects are identified  by  process
local object identifiers.

1.1 Construction, Destruction and Information
---------------------------------------------

MPI  provides subclass specific  communicator  constructors  described
below. MPI provides a subclass generic communicator object  destructor
procedure.

     mpi_delete_communicator(id)

id           is identifier of a communicator

purpose      deletes the communicator object identified by id
             See Note 1) under intracommunicator construction
             and Note 1) under intercommunicator construction

Notes:

1) This procedure could be replaced with MPI_FREE if we wish to fit in
with the manipulation of communication handles and buffer descriptor
handles described in the point-to-point chapter.

MPI provides a subclass generic procedure which returns the context
identifier of a communicator object.

context = mpi_communicator_context(id)

context   is the context bound to the communicator
id        is the identifier of a communicator
          See Note 1) under intracommunicator construction
          and Note 1) under intercommunicator construction

purpose   informs the caller of the context bound to a communicator

1.2 Discussion
--------------

2. Intracommunicator and Intracommunication
===========================================

Intracommunicator objects provide point-to-point communication between
processes of the same process world in MPI.  Intracommunicator objects
also provide collective communication in MPI.

2.1 Construction and Information
--------------------------------

MPI provides a subclass intracommunicator constructor.

id = mpi_create_intracommunicator(context, world)

id           is identifier of created communicator
context      is message context for communications
world        is process world of receiver and sender in both send and recv

purpose      creates an intracommunicator object

Notes:

1) The context of an intracommunicator is either an actual context  or
the null  context (MPI_NULL). If the context is an actual context then
the call does not  synchronise processes in  the process world of  the
intracommunicator.  If  the context is the null  context then the call
synchronises  the process world of  the  communicator  and  creates  a
context for the communicator. In this case the context is deleted when
the communicator is  itself deleted  calling  mpi_delete_communicator,
and that call  will synchronise the  process world.  In this  case the
information procedure mpi_communicator_context will return MPI_NULL to
the caller --- the  caller is  not  allowed to  have knowledge  of the
context created.

2) The  process  world  of  an intracommunicator  object is either  an
actual process group or the null group (MPI_NULL). If the world is  an
actual  process  group then  the world  is  understood to contain  all
processes  composing the process  group  and  the  communicator object
identifies processes in  a relative  sense, i.e. as a  rank within the
process group.   If the world is the  null  group  then  the  world is
understood to contain all processes composing the program and the
communicator  object identifies  processes in an absolute  sense, i.e.
as a process identifier.

MPI  provides  a subclass  information  procedure  which  returns  the
identifier of the world of the intracommunicator.

world = mpi_intracommunicator_world(id)

world        is process world of the communicator
id           is identifier of created communicator

purpose      returns the world identifier of the intracommunicator, 
             either an actual group identifier or the null group
             identifier (MPI_NULL)
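To illustrate the two construction paths, here is a sketch in the same
pseudo-code style as the rest of this letter (argument lists elided):

```
    /* Actual context, e.g. located via the name service described
     * below: no synchronisation of the process world. */
    comm = mpi_create_intracommunicator(context, group);

    /* Null context: the call synchronises the process world and
     * invents a context which the caller never sees. */
    comm2 = mpi_create_intracommunicator(MPI_NULL, group);
    ...
    mpi_delete_communicator(comm2);  /* synchronises the world again */
```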

2.2 Point-to-point
------------------

I deal with generic "send" and "recv" separately, and  can  ignore the
multiple flavours thereof.

send(id, process, label, ...)

id           is identifier of intracommunicator object
process      is identifier of receiver in world of object
label        is message tag in context of object        

recv(id, process, label, ...)

id           is identifier of intracommunicator object,
                and cannot be wildcard
process      is identifier of sender in world of object, 
                and can be wildcard
label        is message tag in context of object, 
                and can be wildcard

Notes:

1) The caller  must be  in the  world of  the intracommunicator,  i.e.
either  it is the  null process  group  or an  actual process group of
which the caller is a member.

2.3 Collective
--------------

I  deal  with a  generic collective "operation",  and  can ignore  the
multiple flavours thereof.


operation(id, ...)

id           is identifier of intracommunicator object

Notes:

1) The intracommunicator must have a world which  is an actual process
group of which the caller is a member.

2.4 Envelope
------------

The message envelope for intracommunication consists of:

* sender identifier within process world of communicator (pid or rank)
* receiver routing (implementation defined)
* message context of communicator
* message tag
* message length   (implementation defined)

The sender and receiver must bind the context to the same process
world in an intracommunicator, thus the world is determinable.

2.5 Discussion
--------------

The facilities for intracommunication, coupled with the context model,
provide a  convenient and powerful interface  for communications which
are  closed  within  the   scope  of  a  group  and   for  the  serial
client-server model.

The ability to create an  intracommunicator without synchronisation of
processes  simplifies the construction  of  libraries  in highly  MIMD
programs, and  can be  used  to  advantage  in  conjunction  with  the
association and location facilities described below.

3. Association, Dissociation, Location, Passivation and Activation
==================================================================

3.1 Association, Dissociation and Location 
------------------------------------------

These facilities allow the user to bind names to process, group, and
context objects.

     mpi_associate(name, id)

name     is a string which is the name bound to the given object 
id       is the object identifier (process, group or context)

purpose  associates name with object identified by id


id = mpi_locate(name, wait)

id       is the object identifier (process, group or context)
name     is a string which is the name bound to the given object
wait     is a boolean value determining whether the caller waits for
         the name to become associated with an object of given class

purpose  creates a copy of the object associated with name

     mpi_dissociate(id)

id       is the object identifier (process, group or context)

purpose  removes the association of name with object id, and can only
         be performed by the process which previously associated name.

Notes:

1) These facilities are a name service. This could be implemented by a
name server process  which can run on  a host or login  node, and need
not consume expensive numerical computation resources.
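A sketch of intended use, in the same style as above (mpi_create_context
is a hypothetical context constructor, not defined in this letter):

```
    /* Owner: create a context, publish it under a well known name. */
    context = mpi_create_context(...);
    mpi_associate("fft-library", context);

    /* Any other process: wait for the name, obtain a local copy. */
    context = mpi_locate("fft-library", TRUE);

    /* Owner, later: withdraw the name. */
    mpi_dissociate(context);
```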

3.2 Passivation and Activation 
------------------------------

These  facilities  allow  the  user  to  transmit a process, group and
context objects.   Passivation and  activation  produce  a  "portable"
description  of the object in  a  memory buffer  (conventionally these
operations produce  a description in a file, but a  memory  buffer  is
more convenient for transmission in a message :-).

     mpi_passivate(id, buf, len)

id       is the object identifier (process, group or context)
buf      is an array of character
len      is the length of the array buf

purpose  writes a portable description of object identified by id in
         the memory buffer buf

id = mpi_activate(buf, len)

id       is the object identifier (process, group or context)
buf      is an array of character
len      is the length of the array buf

purpose  reads a portable description of an object and creates a copy
         of the object

Notes:

1) The detailed type  of the  memory buffer is not of great importance
provided that we define that type.  I have used character above; we
could choose integer, for example.
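For example, a group object might be passed between two processes like
this (send/recv as in section 2.2; LEN and the tag are illustrative):

```
    /* Process P: flatten the group into a message buffer. */
    char buf[LEN];
    mpi_passivate(group, buf, LEN);
    send(comm, Q, GROUP_TAG, buf, LEN);

    /* Process Q: rebuild a local copy (or a new reference) of it. */
    recv(comm, P, GROUP_TAG, buf, LEN);
    group = mpi_activate(buf, LEN);
```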

3.3 Discussion
--------------

I  have assumed  that  MPI  can  distinguish the class  of  the object
(process, group  or  context)  given the object  identifier.   If this
cannot be the case  then we can describe a different set of procedures
for each class or we can add a class argument to the above procedures.

The  name association and location  service is the most manageable way
of  describing  which   groups   communicate  with  one  another.  The
passivation activation facilities are potentially a building block  in
the implementation of the name association and location service.

Deletion  of objects created  by  activation or  location  should only
delete the process local copy of the object.  It should not delete the
original copy. 

When location and activation "create" an object and the object already
exists within the calling process, a  new object should not be created
and the id of the existing object should be returned.  This means that
such objects have multiple references, so we should define the
destructors in  terms of deleting  references to  objects, leaving the
implementation to delete the object when there are zero references.

4. Intercommunicator and Intercommunication
===========================================

Intercommunicator objects provide point-to-point communication between
processes  of  different  process  worlds  in  MPI.  Intercommunicator
objects do not provide collective communication in MPI (yet :-).

4.1 Construction
----------------

id = mpi_create_intercommunicator(context, local_world, remote_world)

id           is identifier of created communicator
context      is message context for communications
local_world  is process world of sender in send and receiver in recv
remote_world is process world of receiver in send and sender in recv

purpose      creates an intercommunicator object

Notes:

1)  The  context  can  be  an  actual  context  or  the  null  context
(MPI_NULL).  If  the context is an actual  context then the  call does
not  synchronise  processes  within  the  two  process  worlds of  the
communicator.  If  the  context  is the  null  context then  the  call
synchronises the two process worlds of  the communicator and creates a
context for the communicator. In this case the context is deleted when
the communicator  is  itself deleted calling  mpi_delete_communicator,
and that call  will synchronise the  process world.   In this case the
information procedure mpi_communicator_context will return MPI_NULL to
the caller ---  the caller is not  allowed to have  knowledge  of  the
context created.

2)  Each process world of  an  intercommunicator object  is  either an
actual process  group or the null group (MPI_NULL). If the world is an
actual process group  then  the world  is  understood to  contain  all
processes  composing  the  process  group and  the communicator object
identifies processes in that world in a relative sense, i.e. as a rank
within the process group.  If  the world is  the null  group  then the
world is understood to contain all processes composing the program and
the communicator  object  identifies  processes  in  that  world in an
absolute sense,  i.e.   as a  global process identifier.  

MPI  provides  subclass  information  procedures  which  return  the
identifier of the local_world and remote_world of the intercommunicator.

world = mpi_intercommunicator_local_world(id)

world        is local process world of the communicator
id           is identifier of created communicator

purpose      returns the local world identifier of the intercommunicator, 
             either an actual group identifier or the null group
             identifier (MPI_NULL)


world = mpi_intercommunicator_remote_world(id)

world        is remote process world of the communicator
id           is identifier of created communicator

purpose      returns the remote world identifier of the intercommunicator, 
             either an actual group identifier or the null group
             identifier (MPI_NULL)
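
The two info procedures can be sketched as follows. This is a minimal
illustrative model, not draft text: the struct layout and C names are
assumptions made for the example; a "world" field holds either an actual
group or the null group identifier, exactly as note 2 above describes.

```c
#include <stddef.h>
#include <assert.h>

/* Hypothetical model of the proposed intercommunicator object.
   A world is either an actual process group or the null group (MPI_NULL). */
#define MPI_NULL NULL

typedef struct group { int nprocs; } group_t;

typedef struct intercommunicator {
    void    *context;       /* actual context, or MPI_NULL */
    group_t *local_world;   /* actual group, or MPI_NULL = all processes */
    group_t *remote_world;  /* actual group, or MPI_NULL = all processes */
} intercomm_t;

/* Sketch of the proposed info procedures: each returns the world
   identifier, which may be the null group identifier (MPI_NULL). */
group_t *intercommunicator_local_world(const intercomm_t *ic)
{
    return ic->local_world;
}

group_t *intercommunicator_remote_world(const intercomm_t *ic)
{
    return ic->remote_world;
}
```

Either call may legitimately return MPI_NULL, so callers must be prepared
for both cases.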


4.2 Point-to-point
------------------

I deal with generic "send" and "recv" separately, and can ignore the
multiple flavours thereof.

send(id, process, label, ...)

id           is identifier of intercommunicator object
process      is identifier of receiver in remote_world of object
label        is message tag in context of object        

recv(id, process, label, ...)

id           is identifier of intercommunicator object,
                and cannot be wildcard
process      is identifier of sender in remote_world of object, 
                and can be a wildcard
label        is message tag in context of object, 
                and can be a wildcard

1) The caller must be in the local_world of the intercommunicator,
i.e. the local_world is either the null process group or an actual
process group of which the caller is a member.
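
The "process" argument is interpreted against the remote_world as note 2
of section 4.1 describes. A small sketch of that resolution rule (the
names, the pid table, and the function itself are assumptions made for
illustration, not part of the proposal):

```c
#include <assert.h>
#include <stddef.h>

/* Assumed model: an actual group is an ordered set whose ranks map to
   global process identifiers; the null group (NULL here) means that
   processes are named by global pid directly. */
typedef struct group { int nprocs; const int *pids; } group_t;

/* Interpret the "process" argument of send/recv against a world:
   a rank within the group when the world is actual, a global pid
   when the world is the null group. */
int resolve_process(const group_t *world, int process)
{
    if (world == NULL)
        return process;           /* null world: already a global pid */
    return world->pids[process];  /* actual world: rank -> global pid */
}
```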

4.3 Envelope
------------

The message envelope for intercommunication consists of:

* sender identifier within process world of communicator (pid or rank)
* receiver routing (implementation defined)
* message context of communicator
* message tag
* message length   (implementation defined)

The sender and receiver must bind the context to the same process
worlds in the intercommunicator; thus both the local_world and the
remote_world are determinable.

This is identical to the envelope of intracommunication.
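
As a concrete reading of the envelope fields above, one might model
matching as follows. This is a sketch only: the field types and the
wildcard encoding (-1) are assumptions for the example, and the
implementation-defined fields (receiver routing, message length) are
left out of the matching rule.

```c
#include <assert.h>

/* Illustrative sketch of the envelope fields listed above. */
typedef struct envelope {
    int sender;   /* pid or rank within the sender's process world */
    int context;  /* message context of the communicator */
    int tag;      /* message tag */
    /* receiver routing and message length are implementation defined */
} envelope_t;

/* Matching rule for recv: the context must match exactly, while the
   sender and tag may be wildcards (encoded here as -1). */
int envelope_matches(const envelope_t *e, int context, int sender, int tag)
{
    return e->context == context
        && (sender == -1 || e->sender == sender)
        && (tag == -1 || e->tag == tag);
}
```

Note that because the envelope is identical for intra- and
intercommunication, one matching rule serves both cases.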

4.4 Discussion
--------------

The facilities for intercommunication, coupled with the context model,
and the name service, provide a convenient interface for  the parallel
client-server   model  and   parallel  modular-application   software,
provided    that   the   WAIT_ANY()   facilities   of   point-to-point
communication are fair.

The ability  to create an intercommunicator without synchronisation of
processes  simplifies  the   programming  of  parallel   client-server
software, and avoids a dependency graph problem when writing  parallel
modular-application software in which the module graph contains loops.

5. Discussion
=============

I find  it  a  wee  bit  amusing that an  intercommunicator  in  which
local_world  and  remote_world  are  the same  is  no different to  an
intracommunicator.  This  suggests to me that either  (a) there should
only  be  an   intercommunicator  class  or  (b)   we  think  of   the
intracommunicator   class   as  simply   syntactic  sugar  around  the
intercommunicator class. 

The  communicator  object  class  names   are  rather  long.   Perhaps
programmers would prefer shorter names  in programs. We could take the
approach  of  deriving  names  from  the  list  of  objects  which   a
communicator binds, for example: "intracommunicator"  becomes "CW"  as
it is a binding of a context and  a world; "intercommunicator" becomes
"CWW" as it is a binding of a context  and  a world and another world.
On the other hand we  could take  collections of letters from the long
names,  for  example: "intracommunicator"  becomes  "RACO"  or  "ACO";
"intercommunicator" becomes "ERCO" or "ECO".


			o--------------------o

Comments, questions, flames, please!

Best Wishes
Lyndon 

         /--------------------------------------------------------\
    e||) | Lyndon J Clarke    Edinburgh Parallel Computing Centre | e||) 
    c||c | Tel: 031 650 5021  Email: lyndon@epcc.edinburgh.ac.uk  | c||c 
         \--------------------------------------------------------/


From owner-mpi-collcomm@CS.UTK.EDU  Tue Apr 20 14:07:45 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA04411; Tue, 20 Apr 93 14:07:45 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA22213; Tue, 20 Apr 93 14:07:04 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Tue, 20 Apr 1993 14:07:02 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA22052; Tue, 20 Apr 93 14:06:10 -0400
Date: Tue, 20 Apr 93 19:06:06 BST
Message-Id: <5045.9304201806@subnode.epcc.ed.ac.uk>
From: L J Clarke <lyndon@epcc.ed.ac.uk>
Subject: Re: proposal -- context and tag limits
To: rj_littlefield@pnlg.pnl.gov, mpi-context@cs.utk.edu
In-Reply-To: rj_littlefield@pnlg.pnl.gov's message of Fri, 9 Apr 93 19:43:26 PDT
Reply-To: lyndon@epcc.ed.ac.uk
Cc: d39135@carbon.pnl.gov, gropp@mcs.anl.gov, mpi-collcomm@cs.utk.edu,
        mpi-envir@cs.utk.edu, mpi-pt2pt@cs.utk.edu

Rik writes:

> ============  PROPOSAL TO ENVIRONMENT COMMITTEE ==============

Yes, I support the spirit and detail of the proposal.

> Everyone I know would MUCH prefer suboptimal performance 
> over HAVING to rewrite applications to conform to varying and
> inconsistent hard limits.

Yes, this claim is true of everyone I know except for one very small
community of academic scientists who will write their relatively simple
programs from scratch for every machine on which they will do major
scientific production runs.  I know a whole lot more academic and
commercial users who just will not write programs from scratch in this
way.

> Yes, I recall the many arguments against mandating specific
> limits.  But, I claim that those arguments are misdirected.

Indeed I believe that your claim is valid.

> I believe that we will not meet our goal of portability 
> if we do not specify usable limits on tag and context values.

I have the same belief.  I also believe that if we fail on portability
then we fail period. 

Best Wishes
Lyndon

         /--------------------------------------------------------\
    e||) | Lyndon J Clarke    Edinburgh Parallel Computing Centre | e||) 
    c||c | Tel: 031 650 5021  Email: lyndon@epcc.edinburgh.ac.uk  | c||c 
         \--------------------------------------------------------/


From owner-mpi-collcomm@CS.UTK.EDU  Wed Apr 21 12:39:43 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA02902; Wed, 21 Apr 93 12:39:43 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA03050; Wed, 21 Apr 93 12:38:34 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Wed, 21 Apr 1993 12:38:32 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from almaden.ibm.com by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA03027; Wed, 21 Apr 93 12:38:01 -0400
Message-Id: <9304211638.AA03027@CS.UTK.EDU>
Received: from almaden.ibm.com by almaden.ibm.com (IBM VM SMTP V2R2)
   with BSMTP id 3186; Wed, 21 Apr 93 09:38:37 PDT
Date: Wed, 21 Apr 93 09:19:33 PDT
From: "Ching-Tien (Howard) Ho" <ho@almaden.ibm.com>
To: mpi-collcomm@cs.utk.edu
Subject: The CCL Common Group Structures paper by Bruck et al.

Hi,
  Here is another recent related paper by us (A Proposal for Common
Group Structures in a Collective Communication Library), which I have
distributed to some people on various occasions.  It has appeared as
IBM RJ 9241, March 1993.

In this paper, we tried NOT to change the semantics of process groups
from their original definition: an ordered set of processes.  Also, our
assumption was to IGNORE the machine topology altogether, for various
reasons.  Under these two assumptions, the process topology is treated
in an implicit way (say, based on creating subgroups and performing
+1/-1 shifts within a subgroup).  That is, we mainly provide a set of
macros which conveniently create various subgroups from a specified
group, based on the commonly used algorithm structures imposed on the
group by the user.
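
The "implicit topology" idea can be illustrated with a small sketch.
Everything here is an assumption made for the example, not the paper's
actual macro set: a group is taken as an ordered set of ranks 0..n-1, a
"row" subgroup is cut from an ncols-wide grid, and the +1/-1 shift is a
cyclic shift of a rank within a subgroup of size n.

```c
#include <assert.h>

/* Cyclic shift of `rank` by `delta` (e.g. +1 or -1) within a subgroup
   of size n; the double-modulo keeps the result in 0..n-1 even for
   negative deltas. */
int shift(int rank, int delta, int n)
{
    return ((rank + delta) % n + n) % n;
}

/* Parent-group rank of member `i` of row `r` in a hypothetical
   ncols-wide grid ordering of the parent group. */
int row_member(int r, int i, int ncols)
{
    return r * ncols + i;
}
```

The point of such macros is that no machine topology enters: everything
is derived from the ordering of the group itself.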

As usual, all comments are welcome.

Regards,

-- Howard

[PostScript attachment omitted: dvips output of "grid.dvi", 24 pages
-- the paper "A Proposal for Common Group Structures in a Collective
Communication Library" (IBM RJ 9241).]
07C000FC07C001FC07C003F807C007F0FFFFFFE0FFFFFF001F1F7E9E25>I<0007FC02003FFF0E
00FE03DE03F000FE07E0003E0FC0001E1F80001E3F00000E3F00000E7F0000067E0000067E0000
06FE000000FE000000FE000000FE000000FE000000FE000000FE0000007E0000007E0000067F00
00063F0000063F00000C1F80000C0FC0001807E0003803F0007000FE01C0003FFF800007FC001F
1F7D9E26>I<FFFFFE0000FFFFFFC00007E007F00007E001F80007E000FC0007E0007E0007E000
3F0007E0003F0007E0001F8007E0001F8007E0001F8007E0001FC007E0001FC007E0001FC007E0
001FC007E0001FC007E0001FC007E0001FC007E0001FC007E0001FC007E0001F8007E0001F8007
E0001F8007E0003F0007E0003F0007E0007E0007E000FC0007E001F80007E007F000FFFFFFC000
FFFFFE0000221F7E9E28>I<FFFFFFE0FFFFFFE007E007E007E001E007E000E007E0006007E000
7007E0003007E0003007E0603007E0603007E0600007E0E00007E1E00007FFE00007FFE00007E1
E00007E0E00007E0600007E0600C07E0600C07E0000C07E0001807E0001807E0001807E0003807
E0007807E000F807E003F0FFFFFFF0FFFFFFF01E1F7E9E22>I<FFFFFFE0FFFFFFE007E007E007
E001E007E000E007E0006007E0007007E0003007E0003007E0603007E0603007E0600007E0E000
07E1E00007FFE00007FFE00007E1E00007E0E00007E0600007E0600007E0600007E0000007E000
0007E0000007E0000007E0000007E0000007E0000007E00000FFFF8000FFFF80001C1F7E9E21>
I<0007FC0200003FFF0E0000FE03DE0003F000FE0007E0003E000FC0001E001F80001E003F0000
0E003F00000E007F000006007E000006007E00000600FE00000000FE00000000FE00000000FE00
000000FE00000000FE003FFFE0FE003FFFE07E00007E007E00007E007F00007E003F00007E003F
00007E001F80007E000FC0007E0007E0007E0003F000FE0000FE01FE00003FFF8E000007FC0600
231F7D9E29>I<FFFFFFFF07E007E007E007E007E007E007E007E007E007E007E007E007E007E0
07E007E007E007E007E007E007E007E007E007E007E007E007E0FFFFFFFF101F7E9E14>73
D<FFE000003FF8FFF000007FF807F000007F0006F80000DF0006F80000DF0006F80000DF00067C
00019F00067C00019F00063E00031F00063E00031F00061F00061F00061F00061F00060F800C1F
00060F800C1F000607C0181F000607C0181F000607C0181F000603E0301F000603E0301F000601
F0601F000601F0601F000600F8C01F000600F8C01F0006007D801F0006007D801F0006003F001F
0006003F001F0006003F001F0006001E001F00FFF01E03FFF8FFF00C03FFF82D1F7E9E32>77
D<FFE000FFF0FFF000FFF007F000060007F800060006FC000600067E000600063F000600063F80
0600061F800600060FC006000607E006000603F006000601F806000601FC06000600FC06000600
7E060006003F060006001F860006001FC60006000FE600060007E600060003F600060001FE0006
0000FE00060000FE000600007E000600003E000600001E000600000E00FFF0000600FFF0000600
241F7E9E29>I<001FF80000FFFF0001F81F8007E007E00FC003F01F8001F81F0000F83F0000FC
7F0000FE7E00007E7E00007EFE00007FFE00007FFE00007FFE00007FFE00007FFE00007FFE0000
7FFE00007FFE00007F7E00007E7F0000FE7F0000FE3F0000FC3F8001FC1F8001F80FC003F007E0
07E001F81F8000FFFF00001FF800201F7D9E27>I<FFFFFE00FFFFFF8007E00FE007E003F007E0
01F807E001F807E001FC07E001FC07E001FC07E001FC07E001FC07E001F807E001F807E003F007
E00FE007FFFF8007FFFE0007E0000007E0000007E0000007E0000007E0000007E0000007E00000
07E0000007E0000007E0000007E0000007E00000FFFF0000FFFF00001E1F7E9E24>I<FFFFF800
00FFFFFF000007E01FC00007E007E00007E003F00007E003F00007E003F80007E003F80007E003
F80007E003F80007E003F00007E003F00007E007E00007E01FC00007FFFF000007FFFC000007E0
3E000007E01F000007E00F800007E00F800007E00FC00007E00FC00007E00FC00007E00FE00007
E00FE00007E00FE00007E00FE03007E007F03007E003F860FFFF01FFC0FFFF007F80241F7E9E27
>82 D<03FC080FFF381E03F83800F8700078700038F00038F00018F00018F80000FC00007FC000
7FFE003FFF801FFFC00FFFF007FFF000FFF80007F80000FC00007C00003CC0003CC0003CC0003C
E00038E00078F80070FE01E0E7FFC081FF00161F7D9E1D>I<FFFF01FFE0FFFF01FFE007E0000C
0007E0000C0007E0000C0007E0000C0007E0000C0007E0000C0007E0000C0007E0000C0007E000
0C0007E0000C0007E0000C0007E0000C0007E0000C0007E0000C0007E0000C0007E0000C0007E0
000C0007E0000C0007E0000C0007E0000C0007E0000C0007E0000C0003E000180001F000180001
F000300000F8006000007E03C000001FFF80000003FC0000231F7E9E28>85
D<07FC001FFF003F0F803F07C03F03E03F03E00C03E00003E0007FE007FBE01F03E03C03E07C03
E0F803E0F803E0F803E0FC05E07E0DE03FF8FE0FE07E17147F9319>97 D<FF0000FF00001F0000
1F00001F00001F00001F00001F00001F00001F00001F00001F00001F1FC01F7FF01FE0F81F807C
1F007E1F003E1F003E1F003F1F003F1F003F1F003F1F003F1F003F1F003E1F003E1F007C1F807C
1EC1F81C7FE0181F8018207E9F1D>I<01FE0007FF801F0FC03E0FC03E0FC07C0FC07C0300FC00
00FC0000FC0000FC0000FC0000FC00007C00007E00003E00603F00C01F81C007FF0001FC001314
7E9317>I<0007F80007F80000F80000F80000F80000F80000F80000F80000F80000F80000F800
00F801F8F80FFEF81F83F83E01F87E00F87C00F87C00F8FC00F8FC00F8FC00F8FC00F8FC00F8FC
00F87C00F87C00F87E00F83E01F81F07F80FFEFF03F8FF18207E9F1D>I<01FE0007FF800F83C0
1E01E03E00F07C00F07C00F8FC00F8FFFFF8FFFFF8FC0000FC0000FC00007C00007C00003E0018
1E00180F807007FFE000FF8015147F9318>I<001F8000FFC001F3E003E7E003C7E007C7E007C3
C007C00007C00007C00007C00007C000FFFC00FFFC0007C00007C00007C00007C00007C00007C0
0007C00007C00007C00007C00007C00007C00007C00007C00007C00007C0003FFC003FFC001320
7F9F10>I<01FC3C07FFFE0F079E1E03DE3E03E03E03E03E03E03E03E03E03E01E03C00F07800F
FF0009FC001800001800001C00001FFF800FFFF007FFF81FFFFC3C007C70003EF0001EF0001EF0
001E78003C78003C3F01F80FFFE001FF00171E7F931A>I<FF0000FF00001F00001F00001F0000
1F00001F00001F00001F00001F00001F00001F00001F0FC01F3FE01F61F01FC0F81F80F81F00F8
1F00F81F00F81F00F81F00F81F00F81F00F81F00F81F00F81F00F81F00F81F00F81F00F8FFE3FF
FFE3FF18207D9F1D>I<1C003E003F007F003F003E001C00000000000000000000000000FF00FF
001F001F001F001F001F001F001F001F001F001F001F001F001F001F001F001F00FFE0FFE00B21
7EA00E>I<FF0000FF00001F00001F00001F00001F00001F00001F00001F00001F00001F00001F
00001F01FE1F01FE1F00F01F00C01F03801F07001F0C001F18001F7C001FFC001F9E001F0F001E
0F801E07C01E03C01E01E01E01F01E00F8FFC3FFFFC3FF18207E9F1C>107
D<FF00FF001F001F001F001F001F001F001F001F001F001F001F001F001F001F001F001F001F00
1F001F001F001F001F001F001F001F001F001F001F00FFE0FFE00B207E9F0E>I<FE0FE03F80FE
1FF07FC01E70F9C3E01E407D01F01E807E01F01F807E01F01F007C01F01F007C01F01F007C01F0
1F007C01F01F007C01F01F007C01F01F007C01F01F007C01F01F007C01F01F007C01F01F007C01
F01F007C01F0FFE3FF8FFEFFE3FF8FFE27147D932C>I<FE0FC0FE3FE01E61F01EC0F81E80F81F
00F81F00F81F00F81F00F81F00F81F00F81F00F81F00F81F00F81F00F81F00F81F00F81F00F8FF
E3FFFFE3FF18147D931D>I<01FF0007FFC01F83F03E00F83E00F87C007C7C007CFC007EFC007E
FC007EFC007EFC007EFC007E7C007C7C007C3E00F83E00F81F83F007FFC001FF0017147F931A>
I<FF1FC0FF7FF01FE1F81F80FC1F007E1F007E1F003E1F003F1F003F1F003F1F003F1F003F1F00
3F1F003E1F007E1F007C1F80FC1FC1F81F7FE01F1F801F00001F00001F00001F00001F00001F00
001F0000FFE000FFE000181D7E931D>I<FE3E00FE7F801ECFC01E8FC01E8FC01F8FC01F03001F
00001F00001F00001F00001F00001F00001F00001F00001F00001F00001F0000FFF000FFF00012
147E9316>114 D<0FE63FFE701E600EE006E006F800FFC07FF83FFC1FFE03FE001FC007C007E0
07F006F81EFFFCC7F010147E9315>I<01800180018003800380038007800F803F80FFFCFFFC0F
800F800F800F800F800F800F800F800F800F800F860F860F860F860F8607CC03F801F00F1D7F9C
14>I<FF07F8FF07F81F00F81F00F81F00F81F00F81F00F81F00F81F00F81F00F81F00F81F00F8
1F00F81F00F81F00F81F01F81F01F80F06F807FCFF03F8FF18147D931D>I<FFE7FE1FE0FFE7FE
1FE01F00F003001F00F803000F80F806000F80F8060007C1BC0C0007C1BC0C0007C1BE0C0003E3
1E180003E31E180001F60F300001F60F300001F60FB00000FC07E00000FC07E000007803C00000
7803C000007803C000003001800023147F9326>119 D<FFE1FF00FFE1FF000F80700007C0E000
07E0C00003E1800001F3800000FF0000007E0000003E0000003F0000007F8000006F800000C7C0
000183E0000381F0000701F8000E00FC00FF81FF80FF81FF8019147F931C>I<FFE07F80FFE07F
801F001C000F8018000F80180007C0300007C0300003E0600003E0600001F0C00001F0C00001F9
C00000F9800000FF8000007F0000007F0000003E0000003E0000001C0000001C00000018000000
18000078300000FC300000FC600000C0E00000E1C000007F8000001E000000191D7F931C>I<3F
FFE03FFFE03C07C0380F80701F80603F00603E00607C0000F80001F80003F00003E06007C0600F
80601F80E03F00C03E01C07C03C0FFFFC0FFFFC013147F9317>I E /Fj
2 51 df<03000700FF000700070007000700070007000700070007000700070007000700070007
00070007007FF00C157E9412>49 D<0F8030E040708030C038E0384038003800700070006000C0
0180030006000C08080810183FF07FF0FFF00D157E9412>I E /Fk 80 123
df<001F83E000F06E3001C078780380F8780300F0300700700007007000070070000700700007
0070000700700007007000FFFFFF80070070000700700007007000070070000700700007007000
070070000700700007007000070070000700700007007000070070000700700007007000070070
0007007000070070007FE3FF001D20809F1B>11 D<003F0000E0C001C0C00381E00701E00701E0
070000070000070000070000070000070000FFFFE00700E00700E00700E00700E00700E00700E0
0700E00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E07FC3FE
1720809F19>I<003FE000E0E001C1E00381E00700E00700E00700E00700E00700E00700E00700
E00700E0FFFFE00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E00700
E00700E00700E00700E00700E00700E00700E00700E07FE7FE1720809F19>I<001F81F80000F0
4F040001C07C06000380F80F000300F00F000700F00F0007007000000700700000070070000007
0070000007007000000700700000FFFFFFFF000700700700070070070007007007000700700700
070070070007007007000700700700070070070007007007000700700700070070070007007007
000700700700070070070007007007000700700700070070070007007007007FE3FE3FF0242080
9F26>I<7038F87CFC7EFC7E743A0402040204020804080410081008201040200F0E7E9F17>34
D<70F8FCFC74040404080810102040060E7C9F0D>39 D<0020004000800100020006000C000C00
180018003000300030007000600060006000E000E000E000E000E000E000E000E000E000E000E0
00E0006000600060007000300030003000180018000C000C000600020001000080004000200B2E
7DA112>I<800040002000100008000C00060006000300030001800180018001C000C000C000C0
00E000E000E000E000E000E000E000E000E000E000E000E000C000C000C001C001800180018003
000300060006000C00080010002000400080000B2E7DA112>I<01800180018001800180C183F1
8F399C0FF003C003C00FF0399CF18FC1830180018001800180018010147DA117>I<0006000000
060000000600000006000000060000000600000006000000060000000600000006000000060000
00060000000600000006000000060000FFFFFFF0FFFFFFF0000600000006000000060000000600
000006000000060000000600000006000000060000000600000006000000060000000600000006
0000000600001C207D9A23>I<70F8FCFC74040404080810102040060E7C840D>I<FFC0FFC00A02
7F8A0F>I<70F8F8F87005057C840D>I<000100030003000600060006000C000C000C0018001800
1800300030003000600060006000C000C000C00180018001800300030003000600060006000C00
0C000C00180018001800300030003000600060006000C000C000C000102D7DA117>I<03F0000E
1C001C0E00180600380700700380700380700380700380F003C0F003C0F003C0F003C0F003C0F0
03C0F003C0F003C0F003C0F003C0F003C0F003C0F003C070038070038070038078078038070018
06001C0E000E1C0003F000121F7E9D17>I<018003800F80F38003800380038003800380038003
800380038003800380038003800380038003800380038003800380038003800380038007C0FFFE
0F1E7C9D17>I<03F0000C1C00100E00200700400780800780F007C0F803C0F803C0F803C02007
C00007C0000780000780000F00000E00001C0000380000700000600000C0000180000300000600
400C00401800401000803FFF807FFF80FFFF80121E7E9D17>I<03F0000C1C00100E00200F0078
0F80780780780780380F80000F80000F00000F00000E00001C0000380003F000003C00000E0000
0F000007800007800007C02007C0F807C0F807C0F807C0F00780400780400F00200E001C3C0003
F000121F7E9D17>I<000600000600000E00000E00001E00002E00002E00004E00008E00008E00
010E00020E00020E00040E00080E00080E00100E00200E00200E00400E00C00E00FFFFF0000E00
000E00000E00000E00000E00000E00000E0000FFE0141E7F9D17>I<1803001FFE001FFC001FF8
001FE00010000010000010000010000010000010000011F000161C00180E001007001007800003
800003800003C00003C00003C07003C0F003C0F003C0E00380400380400700200600100E000C38
0003E000121F7E9D17>I<007C000182000701000E03800C07801C078038030038000078000070
0000700000F1F000F21C00F40600F80700F80380F80380F003C0F003C0F003C0F003C0F003C070
03C07003C07003803803803807001807000C0E00061C0001F000121F7E9D17>I<4000007FFFC0
7FFF807FFF80400100800200800200800400000800000800001000002000002000004000004000
00C00000C00001C000018000038000038000038000038000078000078000078000078000078000
078000078000030000121F7D9D17>I<03F0000C0C001006003003002001806001806001806001
807001807803003E03003F06001FC8000FF00003F80007FC000C7E00103F00300F806003804001
C0C001C0C000C0C000C0C000C0C000806001802001001002000C0C0003F000121F7E9D17>I<03
F0000E18001C0C00380600380700700700700380F00380F00380F003C0F003C0F003C0F003C0F0
03C07007C07007C03807C0180BC00E13C003E3C000038000038000038000070030070078060078
0E00700C002018001070000FC000121F7E9D17>I<70F8F8F8700000000000000000000070F8F8
F87005147C930D>I<7FFFFFE0FFFFFFF000000000000000000000000000000000000000000000
00000000000000000000FFFFFFF07FFFFFE01C0C7D9023>61 D<00010000000380000003800000
0380000007C0000007C0000007C0000009E0000009E0000009E0000010F0000010F0000010F000
00207800002078000020780000403C0000403C0000403C0000801E0000801E0000FFFE0001000F
0001000F0001000F00020007800200078002000780040003C00E0003C01F0007E0FFC03FFE1F20
7F9F22>65 D<FFFFE0000F80380007801E0007801F0007800F0007800F8007800F8007800F8007
800F8007800F8007800F0007801F0007801E0007803C0007FFF00007803C0007801E0007800F00
07800F8007800780078007C0078007C0078007C0078007C0078007C00780078007800F8007800F
0007801F000F803C00FFFFF0001A1F7E9E20>I<000FC040007030C001C009C0038005C0070003
C00E0001C01E0000C01C0000C03C0000C07C0000407C00004078000040F8000000F8000000F800
0000F8000000F8000000F8000000F8000000F8000000F8000000780000007C0000407C0000403C
0000401C0000401E0000800E000080070001000380020001C0040000703800000FC0001A217D9F
21>I<FFFFE0000F803C0007801E000780070007800380078003C0078001E0078001E0078001F0
078000F0078000F0078000F8078000F8078000F8078000F8078000F8078000F8078000F8078000
F8078000F8078000F0078000F0078000F0078001E0078001E0078003C007800380078007000780
0E000F803C00FFFFE0001D1F7E9E23>I<FFFFFF000F800F000780030007800300078001000780
0180078000800780008007800080078080800780800007808000078080000781800007FF800007
818000078080000780800007808000078080000780002007800020078000200780004007800040
07800040078000C0078000C0078001800F800F80FFFFFF801B1F7E9E1F>I<FFFFFF000F800F00
078003000780030007800100078001800780008007800080078000800780008007808000078080
0007808000078080000781800007FF800007818000078080000780800007808000078080000780
0000078000000780000007800000078000000780000007800000078000000FC00000FFFE000019
1F7E9E1E>I<000FE0200078186000E004E0038002E0070001E00F0000E01E0000601E0000603C
0000603C0000207C00002078000020F8000000F8000000F8000000F8000000F8000000F8000000
F8000000F8007FFCF80003E0780001E07C0001E03C0001E03C0001E01E0001E01E0001E00F0001
E0070001E0038002E000E0046000781820000FE0001E217D9F24>I<FFF8FFF80F800F8007800F
0007800F0007800F0007800F0007800F0007800F0007800F0007800F0007800F0007800F000780
0F0007800F0007FFFF0007800F0007800F0007800F0007800F0007800F0007800F0007800F0007
800F0007800F0007800F0007800F0007800F0007800F0007800F000F800F80FFF8FFF81D1F7E9E
22>I<FFFC0FC00780078007800780078007800780078007800780078007800780078007800780
078007800780078007800780078007800780078007800FC0FFFC0E1F7F9E10>I<0FFFC0007C00
003C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003C00
003C00003C00003C00003C00003C00003C00003C00003C00203C00F83C00F83C00F83C00F03800
40780040700030E0000F800012207E9E17>I<FFFC0FFC0FC003E0078001800780010007800200
078004000780080007801000078020000780400007808000078100000783000007878000078F80
000793C0000791E00007A1E00007C0F0000780F0000780780007803C0007803C0007801E000780
1E0007800F000780078007800780078007C00FC007E0FFFC3FFC1E1F7E9E23>I<FFFE000FC000
078000078000078000078000078000078000078000078000078000078000078000078000078000
07800007800007800007800007800007800207800207800207800207800607800407800407800C
07801C0F807CFFFFFC171F7E9E1C>I<FF80001FF80F80001F800780001F0005C0002F0005C000
2F0005C0002F0004E0004F0004E0004F000470008F000470008F000470008F000438010F000438
010F000438010F00041C020F00041C020F00041C020F00040E040F00040E040F00040E040F0004
07080F000407080F000407080F000403900F000403900F000401E00F000401E00F000401E00F00
0E00C00F001F00C01F80FFE0C1FFF8251F7E9E2A>I<FF803FF807C007C007C0038005E0010005
E0010004F001000478010004780100043C0100043C0100041E0100040F0100040F010004078100
040781000403C1000401E1000401E1000400F1000400F1000400790004003D0004003D0004001F
0004001F0004000F0004000700040007000E0003001F000300FFE001001D1F7E9E22>I<001F80
0000F0F00001C0380007801E000F000F000E0007001E0007803C0003C03C0003C07C0003E07800
01E0780001E0F80001F0F80001F0F80001F0F80001F0F80001F0F80001F0F80001F0F80001F0F8
0001F0780001E07C0003E07C0003E03C0003C03C0003C01E0007800E0007000F000F0007801E00
01C0380000F0F000001F80001C217D9F23>I<FFFFE0000F80780007801C0007801E0007800F00
07800F8007800F8007800F8007800F8007800F8007800F8007800F0007801E0007801C00078078
0007FFE00007800000078000000780000007800000078000000780000007800000078000000780
0000078000000780000007800000078000000FC00000FFFC0000191F7E9E1F>I<FFFF80000F80
F0000780780007803C0007801E0007801E0007801F0007801F0007801F0007801F0007801E0007
801E0007803C00078078000780F00007FF80000781C0000780E0000780F0000780700007807800
078078000780780007807C0007807C0007807C0007807C0407807E0407803E040FC01E08FFFC0F
10000003E01E207E9E21>82 D<07E0800C1980100780300380600180600180E00180E00080E000
80E00080F00000F000007800007F00003FF0001FFC000FFE0003FF00001F800007800003C00003
C00001C08001C08001C08001C08001C0C00180C00380E00300F00600CE0C0081F80012217D9F19
>I<7FFFFFE0780F01E0600F0060400F0020400F0020C00F0030800F0010800F0010800F001080
0F0010000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F0000
000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F00
00001F800007FFFE001C1F7E9E21>I<FFFC3FF80FC007C0078003800780010007800100078001
000780010007800100078001000780010007800100078001000780010007800100078001000780
010007800100078001000780010007800100078001000780010007800100078001000380020003
80020001C0020001C0040000E008000070180000382000000FC0001D207E9E22>I<FFF003FE1F
8000F80F0000600F800060078000400780004003C0008003C0008003C0008001E0010001E00100
01F0010000F0020000F0020000F806000078040000780400003C0800003C0800003C0800001E10
00001E1000001F3000000F2000000F20000007C0000007C0000007C00000038000000380000003
8000000100001F207F9E22>I<FFF07FF81FF01F800FC007C00F00078003800F00078001000F00
07C00100078007C00200078007C00200078007C0020003C009E0040003C009E0040003C009E004
0003E010F00C0001E010F0080001E010F0080001F02078080000F02078100000F02078100000F0
403C10000078403C20000078403C20000078C03E2000003C801E4000003C801E4000003C801E40
00001F000F8000001F000F8000001F000F8000001E00078000000E00070000000E00070000000C
000300000004000200002C207F9E2F>I<7FF81FF80FE00FC007C0070003C0020001E0040001F0
0C0000F0080000781000007C1000003C2000003E4000001E4000000F8000000F80000007800000
03C0000007E0000005E0000009F0000018F8000010780000207C0000603C0000401E0000801F00
01800F0001000780020007C0070003C01F8007E0FFE01FFE1F1F7F9E22>I<FFF003FF1F8000F8
0F8000600780004007C0004003E0008001E0008001F0010000F0030000F80200007C0400003C04
00003E0800001E0800001F1000000FB0000007A0000007C0000003C0000003C0000003C0000003
C0000003C0000003C0000003C0000003C0000003C0000003C0000003C0000007C000007FFE0020
1F7F9E22>I<7FFFF87C00F87000F06001E04001E0C003C0C003C0800780800F80800F00001E00
001E00003C00003C0000780000F80000F00001E00001E00003C00403C0040780040F80040F000C
1E000C1E00083C00183C0018780038F801F8FFFFF8161F7D9E1C>I<FEFEC0C0C0C0C0C0C0C0C0
C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0FEFE072D7CA10D
>I<080410082010201040204020804080408040B85CFC7EFC7E7C3E381C0F0E7B9F17>I<FEFE06
060606060606060606060606060606060606060606060606060606060606060606060606060606
06FEFE072D7FA10D>I<1FE000303000781800781C00300E00000E00000E00000E0000FE00078E
001E0E00380E00780E00F00E10F00E10F00E10F01E10781E103867200F83C014147E9317>97
D<0E0000FE00000E00000E00000E00000E00000E00000E00000E00000E00000E00000E00000E3E
000EC3800F01C00F00E00E00E00E00700E00700E00780E00780E00780E00780E00780E00780E00
700E00700E00E00F00E00D01C00CC300083E0015207F9F19>I<03F80E0C1C1E381E380C700070
00F000F000F000F000F000F00070007000380138011C020E0C03F010147E9314>I<000380003F
8000038000038000038000038000038000038000038000038000038000038003E380061B801C07
80380380380380700380700380F00380F00380F00380F00380F00380F003807003807003803803
803807801C07800E1B8003E3F815207E9F19>I<03F0000E1C001C0E0038070038070070070070
0380F00380F00380FFFF80F00000F00000F000007000007000003800801800800C010007060001
F80011147F9314>I<007C00C6018F038F07060700070007000700070007000700FFF007000700
07000700070007000700070007000700070007000700070007000700070007007FF01020809F0E
>I<0000E003E3300E3C301C1C30380E00780F00780F00780F00780F00780F00380E001C1C001E
380033E0002000002000003000003000003FFE001FFF800FFFC03001E0600070C00030C00030C0
0030C000306000603000C01C038003FC00141F7F9417>I<0E0000FE00000E00000E00000E0000
0E00000E00000E00000E00000E00000E00000E00000E3E000E43000E81800F01C00F01C00E01C0
0E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C0
FFE7FC16207F9F19>I<1C001E003E001E001C000000000000000000000000000E007E000E000E
000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E00FFC00A1F809E0C>
I<00E001F001F001F000E0000000000000000000000000007007F000F000700070007000700070
00700070007000700070007000700070007000700070007000700070007000706070F060F0C061
803F000C28829E0E>I<0E0000FE00000E00000E00000E00000E00000E00000E00000E00000E00
000E00000E00000E0FF00E03C00E03000E02000E04000E08000E10000E30000E70000EF8000F38
000E1C000E1E000E0E000E07000E07800E03800E03C00E03E0FFCFF815207F9F18>I<0E00FE00
0E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E
000E000E000E000E000E000E000E000E000E00FFE00B20809F0C>I<0E1F01F000FE618618000E
81C81C000F00F00E000F00F00E000E00E00E000E00E00E000E00E00E000E00E00E000E00E00E00
0E00E00E000E00E00E000E00E00E000E00E00E000E00E00E000E00E00E000E00E00E000E00E00E
000E00E00E00FFE7FE7FE023147F9326>I<0E3E00FE43000E81800F01C00F01C00E01C00E01C0
0E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C0FFE7FC
16147F9319>I<01F800070E001C03803801C03801C07000E07000E0F000F0F000F0F000F0F000
F0F000F0F000F07000E07000E03801C03801C01C0380070E0001F80014147F9317>I<0E3E00FE
C3800F01C00F00E00E00E00E00F00E00700E00780E00780E00780E00780E00780E00780E00700E
00F00E00E00F01E00F01C00EC3000E3E000E00000E00000E00000E00000E00000E00000E00000E
0000FFE000151D7F9319>I<03E0800619801C05803C0780380380780380700380F00380F00380
F00380F00380F00380F003807003807803803803803807801C0B800E138003E380000380000380
000380000380000380000380000380000380003FF8151D7E9318>I<0E78FE8C0F1E0F1E0F0C0E
000E000E000E000E000E000E000E000E000E000E000E000E000E00FFE00F147F9312>I<1F9030
704030C010C010C010E00078007F803FE00FF00070803880188018C018C018E030D0608F800D14
7E9312>I<020002000200060006000E000E003E00FFF80E000E000E000E000E000E000E000E00
0E000E000E000E080E080E080E080E080610031001E00D1C7F9B12>I<0E01C0FE1FC00E01C00E
01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E
03C00603C0030DC001F1FC16147F9319>I<FF83F81E01E01C00C00E00800E00800E0080070100
07010003820003820003820001C40001C40001EC0000E80000E800007000007000007000002000
15147F9318>I<FF9FE1FC3C0780701C0300601C0380200E0380400E0380400E03C0400707C080
0704C0800704E080038861000388710003C8730001D0320001D03A0000F03C0000E01C0000E01C
0000601800004008001E147F9321>I<7FC3FC0F01E00701C007018003810001C20000E40000EC
00007800003800003C00007C00004E000087000107000303800201C00601E01E01E0FF07FE1714
809318>I<FF83F81E01E01C00C00E00800E00800E008007010007010003820003820003820001
C40001C40001EC0000E80000E800007000007000007000002000002000004000004000004000F0
8000F08000F100006200003C0000151D7F9318>I<3FFF380E200E201C40384078407000E001E0
01C00380078007010E011E011C0338027006700EFFFE10147F9314>I E
/Fl 37 122 df<004000800100020006000C000C0018001800300030007000600060006000E000
E000E000E000E000E000E000E000E000E000E000E000600060006000700030003000180018000C
000C00060002000100008000400A2A7D9E10>40 D<800040002000100018000C000C0006000600
03000300038001800180018001C001C001C001C001C001C001C001C001C001C001C001C0018001
800180038003000300060006000C000C00180010002000400080000A2A7E9E10>I<60F0F07010
10101020204080040C7C830C>44 D<FFE0FFE00B0280890E>I<60F0F06004047C830C>I<000600
000006000000060000000F0000000F0000000F00000017800000178000001780000023C0000023
C0000023C0000041E0000041E0000041E0000080F0000080F0000180F8000100780001FFF80003
007C0002003C0002003C0006003E0004001E0004001E000C001F001E001F00FF80FFF01C1D7F9C
1F>65 D<FFFFC00F00F00F00380F003C0F001C0F001E0F001E0F001E0F001E0F001C0F003C0F00
780F01F00FFFE00F00780F003C0F001E0F000E0F000F0F000F0F000F0F000F0F000F0F001E0F00
1E0F003C0F0078FFFFE0181C7E9B1D>I<001F808000E0618001801980070007800E0003801C00
03801C00018038000180780000807800008070000080F0000000F0000000F0000000F0000000F0
000000F0000000F0000000F0000000700000807800008078000080380000801C0001001C000100
0E000200070004000180080000E03000001FC000191E7E9C1E>I<001F808000E0618001801980
070007800E0003801C0003801C00018038000180780000807800008070000080F0000000F00000
00F0000000F0000000F0000000F0000000F000FFF0F0000F807000078078000780780007803800
07801C0007801C0007800E00078007000B800180118000E06080001F80001C1E7E9C21>71
D<FFF00F000F000F000F000F000F000F000F000F000F000F000F000F000F000F000F000F000F00
0F000F000F000F000F000F000F000F00FFF00C1C7F9B0F>73 D<FFF8000F80000F00000F00000F
00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F
00000F00080F00080F00080F00180F00180F00100F00300F00700F01F0FFFFF0151C7E9B1A>76
D<FF8000FF800F8000F8000F8000F8000BC00178000BC00178000BC001780009E002780009E002
780008F004780008F004780008F0047800087808780008780878000878087800083C107800083C
107800083C107800081E207800081E207800081E207800080F407800080F407800080780780008
07807800080780780008030078001C03007800FF8307FF80211C7E9B26>I<07E0801C19803005
80700380600180E00180E00080E00080E00080F00000F800007C00007FC0003FF8001FFE0007FF
0000FF80000F800007C00003C00001C08001C08001C08001C0C00180C00180E00300D00200CC0C
0083F800121E7E9C17>83 D<7FFFFFC0700F01C0600F00C0400F0040400F0040C00F0020800F00
20800F0020800F0020000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F
0000000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F000000
1F800003FFFC001B1C7F9B1E>I<1FC000307000783800781C00301C00001C00001C0001FC000F
1C00381C00701C00601C00E01C40E01C40E01C40603C40304E801F870012127E9115>97
D<FC00001C00001C00001C00001C00001C00001C00001C00001C00001C00001C00001C7C001D86
001E03001C01801C01C01C00C01C00E01C00E01C00E01C00E01C00E01C00E01C00C01C01C01C01
801E030019060010F800131D7F9C17>I<07E00C301878307870306000E000E000E000E000E000
E00060007004300418080C3007C00E127E9112>I<003F00000700000700000700000700000700
00070000070000070000070000070003E7000C1700180F00300700700700600700E00700E00700
E00700E00700E00700E00700600700700700300700180F000C370007C7E0131D7E9C17>I<03E0
0C301818300C700E6006E006FFFEE000E000E000E00060007002300218040C1803E00F127F9112
>I<00F8018C071E061E0E0C0E000E000E000E000E000E00FFE00E000E000E000E000E000E000E
000E000E000E000E000E000E000E000E000E007FE00F1D809C0D>I<00038003C4C00C38C01C38
80181800381C00381C00381C00381C001818001C38000C300013C0001000003000001800001FF8
001FFF001FFF803003806001C0C000C0C000C0C000C06001803003001C0E0007F800121C7F9215
>I<FC00001C00001C00001C00001C00001C00001C00001C00001C00001C00001C00001C7C001C
87001D03001E03801C03801C03801C03801C03801C03801C03801C03801C03801C03801C03801C
03801C03801C0380FF9FF0141D7F9C17>I<18003C003C00180000000000000000000000000000
00FC001C001C001C001C001C001C001C001C001C001C001C001C001C001C001C001C00FF80091D
7F9C0C>I<FC001C001C001C001C001C001C001C001C001C001C001C001C001C001C001C001C00
1C001C001C001C001C001C001C001C001C001C001C00FF80091D7F9C0C>108
D<FC7E07E0001C838838001D019018001E01E01C001C01C01C001C01C01C001C01C01C001C01C0
1C001C01C01C001C01C01C001C01C01C001C01C01C001C01C01C001C01C01C001C01C01C001C01
C01C001C01C01C00FF8FF8FF8021127F9124>I<FC7C001C87001D03001E03801C03801C03801C
03801C03801C03801C03801C03801C03801C03801C03801C03801C03801C0380FF9FF014127F91
17>I<03F0000E1C00180600300300700380600180E001C0E001C0E001C0E001C0E001C0E001C0
6001807003803003001806000E1C0003F00012127F9115>I<FC7C001D86001E03001C01801C01
C01C00C01C00E01C00E01C00E01C00E01C00E01C00E01C01C01C01C01C01801E03001D06001CF8
001C00001C00001C00001C00001C00001C00001C0000FF8000131A7F9117>I<03C1000C330018
0B00300F00700700700700E00700E00700E00700E00700E00700E00700600700700700300F0018
0F000C370007C700000700000700000700000700000700000700000700003FE0131A7E9116>I<
FCE01D301E781E781C301C001C001C001C001C001C001C001C001C001C001C001C00FFC00D127F
9110>I<1F9030704030C010C010E010F8007F803FE00FF000F880388018C018C018E010D0608F
C00D127F9110>I<04000400040004000C000C001C003C00FFE01C001C001C001C001C001C001C
A Proposal for Common Group Structures
in a Collective Communication Library

Jehoshua Bruck, Robert Cypher, Pablo Elustondo, Alex Ho, Ching-Tien Ho
IBM Research Division
Almaden Research Center
650 Harry Road
San Jose, CA 95120

Abstract

A collective communication library includes a set of frequently used
collective communication routines such as broadcast, reduction, scatter,
gather, etc. A library of this nature, CCL, the Collective Communication
Library intended for the line of scalable parallel system products by IBM,
has been designed and implemented. The CCL includes both collective
communication routines that perform collective operations within groups
and process group routines that create and manipulate groups of processes.
In this paper, we present a proposal for a set of Common Group Structure
(CGS) routines which facilitate the programming of applications with grid
and hypercube structures, as an extension to the CCL.

1 Introduction

A collective communication library includes a set of frequently used
collective communication routines such as broadcast, reduction, scatter,
gather, etc. It provides users with the convenience of programming as well
as with communication code efficiency and portability. A library of this
nature, CCL, the Collective Communication Library intended for the line of
scalable parallel system products by IBM, has been designed and
implemented. CCL constitutes a part of the External User Interface [4] for
this line of parallel systems. In fact, CCL has been included in the
recently announced Scalable POWERparallel System (9076 SP1) by IBM.

CCL includes a set of Collective Communication (CC) routines and a set of
Process Group (PG) routines [1] (a process is referred to as a task in
[4]). CC routines provide efficient support for common types of
communication, such as broadcasting a single value from one process to all
of the other processes or gathering different values from all of the
processes to a single process. All CC routines have a process group
identifier (gid) that identifies the group of processes that participate
in the collective operation. PG routines provide users with the capability
of specifying and manipulating process groups. In particular, PG routines
allow users to define new process groups both dynamically and recursively
by partitioning a previously defined process group or a system predefined
group. A system predefined group is the "all" group, which consists of all
the processes available to the user. Once a process group is defined, any
collective communication routine can operate within the process group
using its gid as a handle. For instance, separate broadcasts can be
performed within previously defined disjoint process groups concurrently
and independently.
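As a rough illustration of this group model, the following Python sketch
treats a process group as an ordered member list whose indices are the
ranks, and partitions a parent group into disjoint subgroups. The names
`Group` and `partition` are illustrative only and are not CCL routines.

```python
class Group:
    """Toy model of a CCL-style process group: an ordered member list;
    a process's rank within the group is its index in that list."""

    def __init__(self, members):
        self.members = list(members)  # rank r -> process id members[r]

    def size(self):
        return len(self.members)

    def rank_of(self, pid):
        return self.members.index(pid)


def partition(parent, key):
    """Split `parent` into disjoint subgroups, one per key value; each
    subgroup keeps its members ordered by their rank in the parent."""
    buckets = {}
    for pid in parent.members:  # iterate in parent-rank order
        buckets.setdefault(key(pid), []).append(pid)
    return {k: Group(v) for k, v in buckets.items()}


# Partition a predefined 8-process "all" group into two disjoint halves;
# a collective routine could then run in each half independently, using
# each subgroup's identifier as its handle.
all_group = Group(range(8))
halves = partition(all_group, lambda pid: pid // 4)
```

Each subgroup reassigns ranks 0 through size-1 in parent order, mirroring
how recursive partitioning yields new groups whose gids serve as handles
for subsequent collective calls.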
A process group of n processes specifies the order of these processes by
assigning each process a unique rank from 0 through n-1. This order is
important for some CC routines (such as scatter, gather, prefix, concat,
index and shift) and is irrelevant for other CC routines (such as bcast
and sync). In other words, there is no other structure (or topology)
imposed on process groups besides the implicit structure of a
one-dimensional array on the rank of processes. However, many applications
have algorithms based on a grid (mesh), typically 2D or 3D, or a hypercube
structure.

This paper presents a proposal for a set of Common Group Structure (CGS)
routines which facilitate the programming of applications with grid and
hypercube structures. A few existing communication libraries, such as
Express [3], Zipcode [8, 9] and PARMACS [5], and some current proposals
[6, 7] submitted to the Message-Passing Interface (MPI) Standard Committee
[2] also address the problem of managing process groups with respect to
process topologies. Our proposal for the CGS routines focuses on providing
programming convenience in using the CCL process groups for applications
with grid and hypercube structures, without changing or complicating the
current semantics of the process groups in the CCL.

The remainder of the paper is organized as follows. An overview of the CGS
routines is presented in Section 2. Some examples of implementing a CGS
routine based on PG routines are given in Section 3. Finally, a more
detailed description of the CGS routines, presented in a "man page"
format, is given in Section 4.

2 Overview of CGS Routines

The Common Group Structure (CGS) routines form an extension to the
Collective Communication Library (CCL) and are closely tied to the other
routines in that library. In particular, recall that the CCL routines
include both Collective Communication (CC) routines that perform
collective operations and Process Group (PG) routines that create and
manipulate groups of processes. The CGS routines make use of PG routines
transparently for defining process groups that arise in algorithms with
grid and hypercube structures. Once these grid-structured and
hypercube-structured groups have been defined, the standard CC routines
can be used within the structured groups to perform the collective
communication. The CGS routines also include utilities for converting
between 1-dimensional and higher-dimensional addresses.
y Fd(2.1)56 b(Grid-structured)17 b(routines)74 2264 y Fk(A)f(n)o(um)o(b)q(er)
f(of)g(CGS)g(routines)h(are)f(pro)o(vided)h(for)f(de\014ning)i(pro)q(cess)e
(groups)g(based)h(on)f(a)g(grid)h(structure.)74 2320 y(F)l(or)f(example,)h
(consider)g(a)g(group)f(of)g(pro)q(cesses)h(that)e(store)h(the)h(v)m(alues)g
(of)g(a)f(t)o(w)o(o-dimensional)h(arra)o(y)e(in)74 2377 y(a)j(distributed)i
(manner.)26 b(If)17 b(eac)o(h)h(pro)q(cess)f(holds)h(a)f(subblo)q(c)o(k)i(of)
d(the)i(arra)o(y)l(,)e(it)i(is)f(natural)h(to)e(view)i(the)74
2433 y(pro)q(cesses)i(as)e(forming)h(a)g(t)o(w)o(o-dimensional)h(grid.)32
b(One)20 b(ma)o(y)f(need)h(to)e(p)q(erform)h(op)q(erations)g(\(suc)o(h)h(as)
74 2490 y(broadcasts\))15 b(within)j(eac)o(h)e(column)h(of)f(this)h(grid,)f
(and)h(other)f(op)q(erations)g(\(suc)o(h)g(as)g(reductions\))h(within)74
2546 y(eac)o(h)22 b(ro)o(w)e(of)h(this)h(grid.)39 b(As)22 b(a)f(result,)i(it)
f(w)o(ould)g(b)q(e)g(helpful)h(to)e(ha)o(v)o(e)g(a)h(PG)f(routine)h(that)e
(creates)74 2602 y(pro)q(cess)d(groups)f(corresp)q(onding)i(to)d(the)i
(columns)g(of)f(the)h(grid)g(and)f(pro)q(cess)h(groups)f(corresp)q(onding)i
(to)1000 2727 y(2)p eop
%%Page: 3 3
bop 74 157 a Fk(the)18 b(ro)o(ws)e(of)h(the)h(grid.)27 b(This)19
b(is)f(exactly)f(the)h(purp)q(ose)g(of)f(the)h(routine)g Fi(F)o(ORM2DGRID)p
Fk(.)f(The)h(user)74 214 y(pro)o(vides)i(the)f(gid)g(of)f(the)h(paren)o(t)g
(group)g(\(the)f(group)h(whic)o(h)h(is)f(b)q(eing)h(partitioned\))f(and)h
(the)e(lengths)74 270 y(of)f(the)g Ff(X)j Fk(and)d Ff(Y)28
b Fk(axes.)d(The)17 b(partition)g(is)h(p)q(erformed)f(b)o(y)g(viewing)h(the)f
(rank)g(of)g(eac)o(h)g(pro)q(cess)g(within)74 327 y(the)22
b(paren)o(t)f(group)h(as)f(its)h(ro)o(w-ma)s(jor)e(p)q(osition)j(in)g(the)e
(t)o(w)o(o-dimensional)i(grid.)40 b(More)21 b(sp)q(eci\014cally)m(,)74
383 y Fi(F)o(ORM2DGRID)d Fk(tak)o(es)f(an)g(existing)i(\(user-de\014ned)g(or)
e(system-prede\014ned\))i(group)e(and)h(partitions)74 439 y(it)e(in)o(to)g(a)
f(set)g(of)g(nono)o(v)o(erlapping)i(groups)e(corresp)q(onding)i(to)e(the)g
(columns)i(of)e(a)g(t)o(w)o(o-dimensional)i(grid)74 496 y(and)h(also)g(in)o
(to)g(a)f(set)h(of)f(nono)o(v)o(erlapping)i(groups)f(corresp)q(onding)g(to)g
(the)f(ro)o(ws)g(of)h(a)f(t)o(w)o(o-dimensional)74 552 y(grid.)25
b(Th)o(us,)16 b(eac)o(h)h(pro)q(cess)g(receiv)o(es)g(t)o(w)o(o)e(new)i(gids,)
g(one)g(for)f(the)g(column)i(in)f(whic)o(h)h(it)f(is)g(lo)q(cated)g(and)74
609 y(one)e(for)g(the)g(ro)o(w)g(in)h(whic)o(h)g(it)f(is)h(lo)q(cated)g(in)g
(the)f(t)o(w)o(o-dimensional)h(grid.)74 696 y(Similar)c(routines)e(are)g(pro)
Similar routines are provided for creating groups based on three-dimensional and four-dimensional
grids. All of these routines create groups that correspond to the processes that lie along a single
axis of the grid. For example, if the routine FORM3DGRID is called with X, Y and Z axis
lengths of 8, 8 and 16, each process will receive three new gids corresponding to groups with
8, 8 and 16 members. Once these groups have been created, any of the standard CC routines
can be performed within them. For example, there is a CC routine for shifting (either with or
without wraparound) data within a group. This routine could be used to shift data along any
of the dimensions of the grid.
Although these grid routines cover many of the common uses of grids, they do have some
limitations. In particular, they always create one-dimensional subgroups, but in some cases
it may be necessary to specify higher-dimensional subgroups (such as a plane within a three-
dimensional grid). In order to handle such cases, a more general routine called SUBGRID is
provided. SUBGRID partitions a d1-dimensional parent group into a set of nonoverlapping
d2-dimensional subgroups, where d2 <= d1. The user specifies the dimensions d1 and d2, the
lengths of each dimension, and the list containing the d2 dimensions that are spanned by the
subgroups.

2.2 Hypercube-structured routines
The CGS routines for hypercube structures are similar to the ones for grid structures. In partic-
ular, the routine FORMCUBE creates the groups (each of which is of size two) corresponding
to the processes along each axis of a hypercube. Thus, FORMCUBE is like the routines
FORM2DGRID, FORM3DGRID and FORM4DGRID, except that the length of each axis is
always two (and thus, it is not required as an input parameter from the user). Similarly, the
routine SUBCUBE is analogous to the routine SUBGRID, except that each axis is always
of length two.
In addition, there is one CGS routine for hypercube structures that is not analogous to any
of the routines for grid structures. This routine, called FORMSUBCUBES, creates groups
consisting of the 2^i processes with ranks in the 2^d process parent group that differ in only their
i least significant bits, for all values of i from 1 through d - 1. These groups correspond to the
subcubes that occur in divide-and-conquer hypercube algorithms.
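The subcube groups FORMSUBCUBES describes can be enumerated with bit masks. The sketch below (our own function, assuming a full 2**d process parent group) lists, for the calling rank, the group of 2**i ranks that agree with it in all but the i least significant bits, for each i from 1 through d - 1.

```python
def formsubcubes(rank, d):
    """Model of the groups FORMSUBCUBES creates in a 2**d process group.

    For each i from 1 through d - 1, the calling rank joins the group of
    the 2**i ranks that differ from it only in the i least significant
    bits.  Returns those groups, smallest subcube first.
    """
    groups = []
    for i in range(1, d):
        base = rank & ~((1 << i) - 1)          # clear the i low bits
        groups.append([base + k for k in range(1 << i)])
    return groups
```

In an 8-process group (d = 3), for instance, rank 5 belongs to the 2-process subcube {4, 5} and the 4-process subcube {4, 5, 6, 7}, exactly the subcubes a divide-and-conquer algorithm visits.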
2.3 Utility routines

The remaining CGS routines are utilities for converting addresses between a one-dimensional
array and an n-dimensional grid or hypercube based on the varying-lowest-dimension-first rule.
These routines include MAPGRID1N, MAPGRIDN1, MAPCUBE1N, and MAPCUBEN1.
None of these routines involves communication and none of these routines creates or manipu-
lates groups of processes.
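The varying-lowest-dimension-first rule these utilities share is simply mixed-radix conversion with the first listed dimension varying fastest. A sketch of the two directions (our own function names, standing in for the MAPGRID1N/MAPGRIDN1 pair):

```python
def map_1_to_n(index, lengths):
    """1D address -> n-D coordinates; the first dimension varies fastest."""
    coords = []
    for n in lengths:
        index, c = divmod(index, n)
        coords.append(c)
    return coords

def map_n_to_1(coords, lengths):
    """n-D coordinates -> 1D address (inverse of map_1_to_n)."""
    index, stride = 0, 1
    for c, n in zip(coords, lengths):
        index += c * stride
        stride *= n
    return index
```

In a 2 x 3 grid, address 3 maps to coordinates (1, 1) and back; the hypercube variants are the special case in which every length is two.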
2.4 Common properties of CGS routines

The proposed CGS routines do not impose the restriction that the size of the parent group
be the same as that of the grid or hypercube structure to be mapped. In particular, the size of
the parent group can be larger than or equal to that of the mapped structure. Also, in this proposal,
the semantics of both the grid and hypercube routines do not imply a barrier. Users should not
assume a barrier when any of the Common Group Structures routines are executed. As defined
in [4], a label is associated with each process group. In particular, a label will be generated
by the system for each new child group that is formed as a consequence of calling a Common
Group Structures routine, by applying the following rules for each partitioning separately. (For
example, in FORM2DGRID these rules are applied first to the xgid, then to the ygid.) (1)
Select the leader, which is the process with rank 0 in each child group; (2) sort all the leaders
according to their ranks in the parent group; (3) assign to each leader a label according to
its rank in the sorted order; (4) all other non-leaders get the same label as the leader of their
child group; and (5) all out-of-bound processes (each of which forms a singleton group by itself) are
assigned the label -1.
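These labeling rules can be modeled concisely. In the sketch below (our own representation, not the proposal's interface) each child group is a list of parent-group ranks whose first element is the process with rank 0 in the child group, i.e., the leader.

```python
def assign_labels(child_groups, out_of_bound=()):
    """Model of the CGS labeling rules for one partitioning.

    child_groups: list of child groups, each a list of parent-group
    ranks with the leader (child-group rank 0) first.  Returns a dict
    {parent_rank: label}: leaders are sorted by parent-group rank and
    labeled 0, 1, 2, ...; non-leaders inherit their leader's label;
    out-of-bound singleton processes get label -1.
    """
    leaders = sorted(g[0] for g in child_groups)          # rule (2)
    rank_of = {ldr: i for i, ldr in enumerate(leaders)}   # rule (3)
    labels = {}
    for g in child_groups:
        for member in g:                                  # rules (1), (4)
            labels[member] = rank_of[g[0]]
    for p in out_of_bound:                                # rule (5)
        labels[p] = -1
    return labels
```

Partitioning ranks 0 through 5 into the rows of a 2 x 3 grid, for instance, gives the groups [0,1], [2,3], [4,5] with labels 0, 1 and 2, while a seventh, out-of-bound process would be labeled -1.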
2.5 Examples using the CGS routines

Based on the new CGS routines and existing CC routines in the CCL, the user can easily write a
subroutine for the typical data exchanges between nearest neighbors on a 2D grid. We give two
such examples as follows, for the 5-point and 9-point stencil types, respectively. The examples
are for illustrative purposes only. Here, we assume the parent group pgid is of size nx * ny.
Also, the nearest neighbor is defined in the wraparound sense. In the first subroutine get4nbrs,
the received data from the 4 nearest neighbors will be stored in the local array nbrdata. In
the second subroutine get8nbrs, the received data from the 8 nearest neighbors (including the 4
diagonal and antidiagonal neighbors) as well as the initial local data will be stored in the local
array nbrdata in row-major order within a 3 x 3 window. Finally, it should be noted that
corresponding subroutines for a 3D or 4D grid can be written in a similar way.

      subroutine get4nbrs (nx, ny, pgid, mydata, nbrdata)

      integer nx, ny, pgid, msglen, flag, xgid, ygid, east, west, south, north
      real*8 mydata, nbrdata(4)
      msglen = 8
      call form2dgrid (nx, ny, pgid, xgid, ygid)
C....Flag = 0 for a circulant shift.
      flag = 0
C....Send to the east and receive from the west.
      east = 1
      call shift (mydata, nbrdata(2), msglen, east, flag, xgid)
C....Send to the west and receive from the east.
      west = -1
      call shift (mydata, nbrdata(3), msglen, west, flag, xgid)
C....Send to the south and receive from the north.
      south = 1
      call shift (mydata, nbrdata(1), msglen, south, flag, ygid)
C....Send to the north and receive from the south.
      north = -1
      call shift (mydata, nbrdata(4), msglen, north, flag, ygid)
      return

      subroutine get8nbrs (nx, ny, pgid, mydata, nbrdata)

      integer nx, ny, pgid, msglen, flag, xgid, ygid, east, west, south, north
      real*8 mydata, nbrdata(9)
      msglen = 8
      call form2dgrid (nx, ny, pgid, xgid, ygid)
C....Flag = 0 for a circulant shift.
      flag = 0
C....Send to the east and receive from the west.
      east = 1
      call shift (mydata, nbrdata(4), msglen, east, flag, xgid)
C....Send to the west and receive from the east.
      west = -1
      call shift (mydata, nbrdata(6), msglen, west, flag, xgid)
      msglen = 24
      nbrdata(5) = mydata
C....Send 3 data items to the south and receive from the north.
      south = 1
      call shift (nbrdata(4), nbrdata(1), msglen, south, flag, ygid)
C....Send 3 data items to the north and receive from the south.
      north = -1
      call shift (nbrdata(4), nbrdata(7), msglen, north, flag, ygid)
      return
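The neighbor pattern the shift calls above realize can be checked with plain index arithmetic. The sketch below (our own function; we take east as +1 in X and south as +1 in Y, since the proposal does not fix the sign convention) computes the wraparound 5-point-stencil neighbors of a rank laid out row-major on the grid.

```python
def neighbors4(rank, nx, ny):
    """Wraparound (circulant) nearest neighbors of `rank` on an nx-by-ny
    grid, with ranks in row-major order (X coordinate varying fastest).
    Returns the parent-group ranks of the four neighbors."""
    row, col = divmod(rank, nx)
    return {
        "east":  row * nx + (col + 1) % nx,
        "west":  row * nx + (col - 1) % nx,
        "south": ((row + 1) % ny) * nx + col,
        "north": ((row - 1) % ny) * nx + col,
    }
```

On a 4 x 3 grid, for example, rank 0's neighbors are 1 (east), 3 (west, wrapped), 4 (south) and 8 (north, wrapped); these are the four senders whose data get4nbrs collects.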
3 Implementation Examples

In this section, we give an example of how a CGS routine can be implemented based on a few
PG routines. For easy reference, the PG routines used below are briefly described first. See [4]
for more details.

3.1 The PG routines

The PG routine partition (pgid, key, label, gid) creates new process groups by partitioning a
previously defined parent process group pgid, based on the local value of label supplied by each
process. A system-wide unique process group identifier gid for each new group is returned to
the calling process. Each process will only see the new group in which it resides. The order of
processes in each new group is determined by key in ascending order, with the rank in the
parent group being used as the tie breaker.

The PG routine group (gsize, glist, label, gid) returns as an argument a new process group
identifier gid as a handle to the newly formed group, based on a given process group structure:
gsize (the size of the group), glist (the list of pids in the group), and label (the label of the group
assigned by the user).

The PG routine getsize (gsize, gid) takes a process group identifier gid and returns its size
gsize. The PG routine getrank (rank, pid, gid) takes a process group identifier gid and a
process id pid and returns the corresponding rank of the process in the group. The PG routine
gettaskid (rank, pid, gid) takes a process group identifier gid and a rank in the group and
returns the corresponding process id to pid.

3.2 Implementing a CGS routine using partition

The following piece of pseudocode describes how form2dgrid can be implemented using parti-
tion. Lines (1) through (6) check if the parent group is too small to form an nx x ny grid in it.
Lines (8) through (14) deal with the out-of-bound processes, when the parent group is larger
than the 2D grid. Lines (16) through (19) compute the row index (yid) and use
it as a label to form a row group xgid. Similarly, lines (20) through (22) compute the column
index (xid) and use it as a label to form a column group ygid. All variables are assumed to be of type
integer.

      subroutine form2dgrid (nx, ny, pgid, xgid, ygid)

      integer glist (*)
(1)   call getsize (pgsize, pgid)
(2)   gridsize = nx * ny
(3)   if (pgsize .lt. gridsize) then
(4)       error ("The parent group size is too small")
(5)       return
(6)   endif
(7)   call getrank (myrank, mypid, pgid)
(8)   if (myrank .ge. gridsize) then
(9)       xgsize = 1
(10)      ygsize = 1
(11)      glist (0) = mypid
(12)      label = -1
(13)      call group (xgsize, glist, label, xgid)
(14)      call group (ygsize, glist, label, ygid)
(15)  else
(16)      yid = ceiling (myrank, nx)
(17)      label = yid
(18)      key = 0
(19)      call partition (pgid, key, label, xgid)
(20)      xid = mod (myrank, nx)
(21)      label = xid
(22)      call partition (pgid, key, label, ygid)
(23)  endif
(24)  return
3.3 Implementing a CGS routine using group

Note that the performance of the above code can be improved by using group instead of
partition for lines (15) through (24). This is because a typical implementation of partition
requires a concat (all-to-all bcast) routine, while group can be implemented by a bcast routine
(in guaranteeing the system-wide uniqueness of gid). To do so, one must explicitly compute
the ranks of all the processes which are on the same row (and column) as the calling process,
in order.

(15)  else
(16)      yid = ceiling (myrank, nx)
(17)      xgsize = nx
(18)      do 10 i = 0, nx-1
(19)          rank = yid * nx + i
(20)          call gettaskid (rank, glist (i), pgid)
(21) 10   continue
(22)      label = yid
(23)      call group (xgsize, glist, label, xgid)
(24)      xid = mod (myrank, nx)
(25)      ygsize = ny
(26)      do 20 i = 0, ny-1
(27)          rank = nx * i + xid
(28)          call gettaskid (rank, glist (i), pgid)
(29) 20   continue
(30)      label = xid
(31)      call group (ygsize, glist, label, ygid)
(32)  endif
(33)  return

4 Specification of CGS Routines

This section contains a more detailed description of the CGS routines, presented in a man
page format. The routines for grid structures are presented first, followed by the routines for
hypercube structures and the utility routines. In the calling arguments part, "(I)" denotes an
input argument and "(O)" denotes an output argument.
NAME

FORM2DGRID    Form the X-axis (row) and Y-axis (column) child groups for the
              calling process within a given parent group by viewing the parent
              group as a 2D grid.

SYNOPSIS

      subroutine FORM2DGRID (nx, ny, pgid, xgid, ygid)
      integer nx, ny, pgid, xgid, ygid

CALLING ARGUMENTS

nx      (I) The length of the X axis in the 2D grid mapped to the parent group.

ny      (I) The length of the Y axis in the 2D grid mapped to the parent group.

pgid    (I) The process group identifier of the parent group from which the
        X-axis and Y-axis child groups are formed.

xgid    (O) The process group identifier of the X-axis child group, which is
        formed by all processes along the X axis, i.e., having the same Y
        coordinate as the calling process.

ygid    (O) The process group identifier of the Y-axis child group, which is
        formed by all processes along the Y axis, i.e., having the same X
        coordinate as the calling process.

DESCRIPTION

FORM2DGRID takes the rank of the calling process in a parent group and maps it into
an nx x ny 2-dimensional grid, based on the varying-X-coordinate-first rule. Then, it forms
the X-axis child group (xgid) and the Y-axis child group (ygid) for the calling process. If the
rank of the calling process is outside the range of the 2D grid, then a singleton group of itself
is formed for xgid and ygid.

ERROR CONDITIONS

It is a run-mode error if (1) nx or ny is less than 1, or (2) the product of nx and ny is
greater than the number of processes in the parent group pgid.
NAME

FORM3DGRID    Form the X-axis, Y-axis and Z-axis child groups for the calling
              process within a given parent group by viewing the parent group
              as a 3D grid.

SYNOPSIS

      subroutine FORM3DGRID (nx, ny, nz, pgid, xgid, ygid, zgid)
      integer nx, ny, nz, pgid, xgid, ygid, zgid

CALLING ARGUMENTS

nx      (I) The length of the X axis in the 3D grid mapped to the parent group.

ny      (I) The length of the Y axis in the 3D grid mapped to the parent group.

nz      (I) The length of the Z axis in the 3D grid mapped to the parent group.

pgid    (I) The process group identifier of the parent group from which the
        X-axis, Y-axis and Z-axis child groups are formed.

xgid    (O) The process group identifier of the X-axis child group, which is
        formed by all processes along the X axis, i.e., having the same Y and
        Z coordinates as the calling process.

ygid    (O) The process group identifier of the Y-axis child group, which is
        formed by all processes along the Y axis, i.e., having the same X and
        Z coordinates as the calling process.

zgid    (O) The process group identifier of the Z-axis child group, which is
        formed by all processes along the Z axis, i.e., having the same X and
        Y coordinates as the calling process.

DESCRIPTION

FORM3DGRID takes the rank of the calling process in a parent group and maps it into an
nx x ny x nz 3-dimensional grid, based on the varying-X-Y-Z-in-order rule. Then, it forms the
X-axis, Y-axis and Z-axis child groups for the calling process. If the rank of the calling process
is outside the range of the 3D grid, then a singleton group of itself is formed for xgid, ygid
and zgid.

ERROR CONDITIONS

It is a run-mode error if (1) nx, ny or nz is less than 1, or (2) nx * ny * nz is greater than
the number of processes in the parent group pgid.
NAME

FORM4DGRID    Form the W-axis, X-axis, Y-axis and Z-axis child groups for the
              calling process within a given parent group by viewing the parent
              group as a 4D grid.

SYNOPSIS

      subroutine FORM4DGRID (nw, nx, ny, nz, pgid, wgid, xgid, ygid, zgid)
      integer nw, nx, ny, nz, pgid, wgid, xgid, ygid, zgid

CALLING ARGUMENTS

nw      (I) The length of the W axis in the 4D grid mapped to the parent group.

nx      (I) The length of the X axis in the 4D grid mapped to the parent group.

ny      (I) The length of the Y axis in the 4D grid mapped to the parent group.

nz      (I) The length of the Z axis in the 4D grid mapped to the parent group.

pgid    (I) The process group identifier of the parent group from which the
        W-axis, X-axis, Y-axis and Z-axis child groups are formed.

wgid    (O) The process group identifier of the W-axis child group, which is
        formed by all processes along the W axis, i.e., having the same X, Y
        and Z coordinates as the calling process.

xgid    (O) The process group identifier of the X-axis child group, which is
        formed by all processes along the X axis, i.e., having the same W, Y
        and Z coordinates as the calling process.

ygid    (O) The process group identifier of the Y-axis child group, which is
        formed by all processes along the Y axis, i.e., having the same W, X
        and Z coordinates as the calling process.

zgid    (O) The process group identifier of the Z-axis child group, which is
        formed by all processes along the Z axis, i.e., having the same W, X
        and Y coordinates as the calling process.

DESCRIPTION

FORM4DGRID takes the rank of the calling process in a parent group and maps it into an
nw x nx x ny x nz 4-dimensional grid, based on the varying-W-X-Y-Z-in-order rule. Then, it
forms the W-axis, X-axis, Y-axis and Z-axis child groups for the calling process. If the rank of
the calling process is outside the range of the 4D grid, then a singleton group of itself is formed
for wgid, xgid, ygid and zgid.

ERROR CONDITIONS

It is a run-mode error if (1) nw, nx, ny or nz is less than 1, or (2) nw * nx * ny * nz is
greater than the number of processes in the parent group pgid.
NAME

SUBGRID    Map a pdim-dimensional grid into a parent group and partition the
           parent group into child groups of cdim-dimensional subgrids accord-
           ing to a mask array.

SYNOPSIS

      subroutine SUBGRID (pdim, laxis, cdim, mask, pgid, cgid)
      integer pdim, laxis(*), cdim, mask(*), pgid, cgid

CALLING ARGUMENTS

pdim    (I) The dimensionality of the grid to be mapped into the parent
        group.

laxis   (I) The array of the pdim axis lengths of the grid to be mapped into
        the parent group.

cdim    (I) The dimensionality of the subgrid corresponding to a child group.

mask    (I) An array of cdim masks, where the i-th element specifies the i-th
        coordinate to be spanned, in order.

pgid    (I) The process group identifier of the parent group.

cgid    (O) The process group identifier of the newly formed child group of
        the calling process.

DESCRIPTION

SUBGRID uses the rank of the calling process in a parent group as an index in a one-
dimensional array and maps it to a pdim-dimensional grid, based on the varying-lowest-
dimension-first rule. It then forms a new child group for the calling process according to a
user-supplied mask array. The mask array mask contains cdim elements, where the i-th ele-
ment specifies the i-th coordinate to be spanned, in order. A child group which is a subgrid of
cdim dimensions is formed, and the rank within the child group is determined by the order of
the mask array. If the calling pid is outside the pdim-dimensional grid to be mapped, i.e., the
rank in the parent group pgid is larger than or equal to the size of the grid, then a singleton
group is formed for the calling process.

ERROR CONDITIONS

It is a run-mode error if (1) pdim or cdim is less than 1, (2) pdim is less than cdim, (3)
any element in the array laxis is less than 1, (4) the mask array contains integers which are
duplicated or not in the range of 0 through pdim - 1, or (5) the product of all elements of the
array laxis is greater than the number of processes in the parent group pgid. Note that it is
not an error if (1) pdim = cdim, (2) some element in the array laxis is equal to 1, and (3)
the product of all elements of the array laxis is less than the number of processes in the parent
group pgid.
NAME

FORMCUBE    Form all pdim child groups of 1-dimensional subcubes for the
            calling process within a given parent group by viewing the parent
            group as a pdim-dimensional cube.

SYNOPSIS

subroutine FORMCUBE (pdim, pgid, cgid)
integer pdim, pgid, cgid(*)

CALLING ARGUMENTS

pdim        (I) The dimensionality of the parent group pgid.

pgid        (I) The process group identifier of the parent group.

cgid        (O) An array of pdim process group identifiers where the i-th one
            is the process group identifier of the newly formed child group
            (with 2 processes) along cube dimension i.

DESCRIPTION

FORMCUBE uses the rank of the calling process in a parent group and maps it
into a pdim-dimensional cube, based on the conventional binary encoding.
Then, it forms pdim child groups of 1-dimensional subcubes, one per dimension,
for the calling process, where the i-th child group is formed by the calling
process and its neighbor across dimension i.  If the rank of the calling
process in the parent group pgid is outside the range of the pdim-dimensional
cube, then a singleton group of itself is formed for each element in the cgid
array.

ERROR CONDITIONS

It is a run-mode error if 2^pdim is greater than the number of processes in
the parent group pgid.
NAME

SUBCUBE     Map a pdim-dimensional cube into a parent group and partition the
            parent group into child groups of cdim-dimensional subcubes
            according to a mask array.

SYNOPSIS

subroutine SUBCUBE (pdim, cdim, mask, pgid, cgid)
integer pdim, cdim, mask(*), pgid, cgid

CALLING ARGUMENTS

pdim        (I) The dimensionality of the cube to be mapped to the parent
            group.

cdim        (I) The dimensionality of the subcube corresponding to the child
            group.

mask        (I) An array of cdim masks where the i-th element specifies the
            i-th coordinate to be spanned in order.

pgid        (I) The process group identifier of the parent group.

cgid        (O) The process group identifier of the newly formed child group
            of the calling process.

DESCRIPTION

SUBCUBE uses the rank of the calling process in a parent group as an index in
a one-dimensional array and maps it to a pdim-dimensional cube, based on the
conventional binary encoding.  It then forms a new child group of a subcube
for the calling process according to a user-supplied mask array.  The mask
array mask contains cdim elements, where the i-th element specifies the i-th
coordinate to be spanned in order.  A child group which is a subcube of cdim
dimensions is formed for the calling process and the rank within the child
group is determined by the order of the mask array.  If the calling pid is
outside the pdim-dimensional cube to be mapped, i.e., the rank in the parent
group pgid is larger than or equal to the size of the mapped cube, then a
singleton group is formed for the calling process.

ERROR CONDITIONS

It is a run-mode error if (1) pdim or cdim is less than 1, (2) pdim is less
than cdim, (3) the mask array contains integers which are duplicated or not in
the range of 0 through pdim - 1, or (4) 2^pdim is greater than the number of
processes in the parent group pgid.  Note that it is not an error if (1)
pdim = cdim and (2) 2^pdim is less than the number of processes in the parent
group pgid.
NAME

FORMSUBCUBES    Form all pdim - 1 child groups of subcubes of different sizes
                for the calling process within a given parent group by
                viewing the parent group as a pdim-dimensional cube.

SYNOPSIS

subroutine FORMSUBCUBES (pdim, pgid, cgid)
integer pdim, pgid, cgid(*)

CALLING ARGUMENTS

pdim        (I) The dimensionality of the parent group pgid.

pgid        (I) The process group identifier of the parent group.

cgid        (O) An array of pdim - 1 process group identifiers where the i-th
            one, 1 <= i <= pdim - 1, is the process group identifier of the
            newly formed child group of an i-dimensional subcube spanning
            dimensions 0 through i - 1.

DESCRIPTION

FORMSUBCUBES uses the rank of the calling process in a parent group and maps
it into a pdim-dimensional cube, based on the conventional binary encoding.
Then, it forms pdim - 1 child groups of subcubes of different sizes for the
calling process, where the i-th element in the cgid array,
1 <= i <= pdim - 1, is a process group identifier of the child group formed
by spanning cube dimensions 0 through i - 1.  If the rank of the calling
process in the parent group pgid is outside the range of the pdim-dimensional
cube, then a singleton group of itself is formed for each element in the cgid
array.

ERROR CONDITIONS

It is a run-mode error if 2^pdim is greater than the number of processes in
the parent group pgid.
NAME

MAPGRID1N   Map an index in a one-dimensional array into n indices
            (coordinates) in an n-dimensional grid.

SYNOPSIS

subroutine MAPGRID1N (n, laxis, index, indices)
integer n, laxis(*), index, indices(*)

CALLING ARGUMENTS

n           (I) The dimensionality of the grid.

laxis       (I) The array of axis lengths of the n-dimensional grid.

index       (I) The index in a 1-dimensional array.

indices     (O) The array of n indices in the specified n-dimensional grid.

DESCRIPTION

MAPGRID1N maps an index from a one-dimensional array into n indices in the
specified n-dimensional grid.  All indices start from 0.  The mapping from
the one-dimensional array to the n-dimensional grid follows the
varying-lowest-dimension-first rule.

ERROR CONDITIONS

It is a run-mode error if (1) n is less than 1, (2) any element in the array
laxis is less than 1, or (3) index is less than 0, or greater than or equal
to the size of the n-dimensional grid, i.e., the product of all elements in
the array laxis.
NAME

MAPGRIDN1   Map n indices (coordinates) in an n-dimensional grid into an
            index in a one-dimensional array.

SYNOPSIS

subroutine MAPGRIDN1 (n, laxis, indices, index)
integer n, laxis(*), indices(*), index

CALLING ARGUMENTS

n           (I) The dimensionality of the grid.

laxis       (I) The array of axis lengths of the n-dimensional grid.

indices     (I) The array of n indices (coordinates) in the specified
            n-dimensional grid.

index       (O) The mapped index in a one-dimensional array.

DESCRIPTION

MAPGRIDN1 maps n indices from the specified n-dimensional grid into an index
in the mapped one-dimensional array.  The mapping from the n-dimensional grid
to the one-dimensional array follows the varying-lowest-dimension-first rule.

ERROR CONDITIONS

It is a run-mode error if (1) n is less than 1, (2) any element in the array
laxis is less than 1, or (3) any element in the array indices is less than 0,
or greater than or equal to the corresponding element in the array laxis.
NAME

MAPCUBE1N   Map an index in a one-dimensional array into n indices
            (coordinates) in an n-dimensional cube.

SYNOPSIS

subroutine MAPCUBE1N (n, index, indices)
integer n, index, indices(*)

CALLING ARGUMENTS

n           (I) The dimensionality of the cube.

index       (I) The index in a 1-dimensional array.

indices     (O) The array of n indices in the n-dimensional cube.

DESCRIPTION

MAPCUBE1N maps an index from a one-dimensional array into n indices in the
n-dimensional cube.  The mapping from the one-dimensional array to the
n-dimensional cube follows the conventional binary address traversal.  Thus,
MAPCUBE1N maps an integer of n bits into a bit-array of size n, starting from
the least significant bit.

ERROR CONDITIONS

It is a run-mode error if (1) n is less than 1, or (2) index is less than 0,
or greater than or equal to 2^n.
NAME

MAPCUBEN1   Map n indices (coordinates) in an n-dimensional cube into an
            index in a one-dimensional array.

SYNOPSIS

subroutine MAPCUBEN1 (n, indices, index)
integer n, indices(*), index

CALLING ARGUMENTS

n           (I) The dimensionality of the cube.

indices     (I) The array of n indices (coordinates) in the n-dimensional
            cube.

index       (O) The mapped index in the one-dimensional array.

DESCRIPTION

MAPCUBEN1 maps n indices from the n-dimensional cube into an index in the
one-dimensional array.  The mapping from the n-dimensional cube to the
one-dimensional array follows the conventional binary address traversal.
Thus, MAPCUBEN1 maps a bit-array of size n into an integer of n bits,
starting from the least significant bit.

ERROR CONDITIONS

It is a run-mode error if (1) n is less than 1, or (2) index is less than 0,
or greater than or equal to 2^n.
Acknowledgements

We thank Dan Frye of IBM Kingston and Shlomo Kipnis and Marc Snir of IBM T.J.
Watson Research Center for their helpful comments.

References

[1] V. Bala and S. Kipnis, "Process Groups: a mechanism for the coordination
    of and communication among processes in the Venus collective
    communication library," 7th International Parallel Processing Symposium,
    IEEE, Newport Beach, CA, April 1993.

[2] J. Dongarra, R. Hempel, A. Hay and D. Walker, "A Proposal for a
    User-Level, Message-Passing Interface in a Distributed Memory
    Environment".

[3] Express User's Guide 3.0, Parasoft Corporation.

[4] D. Frye, R. Bryant, C.T. Ho, P. de Jong, R. Lawrence, and M. Snir, "An
    External User Interface for Scalable Parallel Systems FORTRAN Interface",
    Highly Parallel Supercomputing Systems Lab., IBM, November 1992.

[5] R. Hempel, "The ANL/GMD Macros (PARMACS) in FORTRAN for Portable Parallel
    Programming using the Message Passing Programming Model.  User's Guide
    and reference manual", Gesellschaft fur Mathematik und Datenverarbeitung
    mbH, West Germany.

[6] R. Hempel, "A Working Document on Process Topologies in MPI", GMD, German
    National Research Center for Computer Science, January 1993.

[7] R. Hempel, "A Proposal for Virtual Topologies in MPI", GMD, German
    National Research Center for Computer Science, November 1992.

[8] A. Skjellum, S.G. Smith, C.H. Still, A.P. Leung and M. Morari, "The
    Zipcode Message Passing Systems", Technical Report, Lawrence Livermore
    National Laboratory, October 1992.

[9] A. Skjellum and A.P. Leung, "Zipcode: A Portable Multicomputer
    Communication Library on top of the Reactive Kernel", Proceedings of the
    Fifth Distributed Memory Computing Conference (DMCC5), pages 767-776,
    IEEE, April 1990.
From owner-mpi-collcomm@CS.UTK.EDU  Fri Apr 23 11:08:57 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA28391; Fri, 23 Apr 93 11:08:57 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA14199; Fri, 23 Apr 93 11:08:32 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Fri, 23 Apr 1993 11:08:31 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from gw1.fsl.noaa.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA14135; Fri, 23 Apr 93 11:07:22 -0400
Received: by gw1.fsl.noaa.gov (5.57/Ultrix3.0-C)
	id AA19885; Fri, 23 Apr 93 15:07:18 GMT
Received: by macaw.fsl.noaa.gov (4.1/SMI-4.1)
	id AA01641; Fri, 23 Apr 93 09:05:52 MDT
Date: Fri, 23 Apr 93 09:05:52 MDT
From: hender@macaw.fsl.noaa.gov (Tom Henderson)
Message-Id: <9304231505.AA01641@macaw.fsl.noaa.gov>
To: walker@rios2.epm.ornl.gov
Subject: Re: MPI_EXCHANGE?
Cc: mpi-pt2pt@cs.utk.edu, mpi-collcomm@cs.utk.edu


David writes:  

> Was there ever a vote on whether there should be a routine mpi_exchange
> for exchanging data between two processes? I think it would be a useful
> addition to the point-to-point routines. It would look something like this:
> 	
> 	mpi_exchange ( send_bdo_handle,
> 		       recv_bdo_handle,
> 		       other_proc_handle,
> 		       tag,
> 		       context )
> 
> where "bdo"="buffer descriptor object". I think a blocking version would be 
> sufficient, though a nonblocking version is conceivable in which the exchange
> is initiated and we later check if it's completed, or block until completion.
> 
> mpi_exchange will help users avoid writing unsafe programs.
> 
> David
> 

I also like "exchange()".  I buy the argument that exchange() helps users 
avoid writing unsafe programs.  Whenever I use Express, I use exchange() at 
every opportunity.  

/* Begin REHASH of old arguments */
When we last discussed this in point-to-point, we got mired in details.  Can 
send and receive buffers be the same?  Should there be a version with both 
"source" and "destination" processes in the parameter list to support a more 
general "shift" (like Express' exchange())?  If we allow one or more of these 
features, can exchange() be "fast"?  If we can't make exchange() "fast", 
should it be left to users to build it out of the low-level MPI point-to-point 
routines?  
/* End REHASH of old arguments */

On January 6, the point-to-point subcommittee decided to dump this problem on 
the collective communication subcommittee.  In the current collective 
communication draft, there are a whole bunch of group-based "shift()" routines.  

Maybe we need to answer the following questions:  

  Are the collective-communication "shift()" routines general enough?  
  (Examples?  Counter-examples?)  

  We COULD do "exchange()" by making a group of two and using one of the 
  "cshift()" routines.  Is this acceptable?  

  If we had point-to-point versions of both "exchange()" and "shift()", could 
  we avoid "ready-receive"?  (This is a really odd thought... :-)  


I'll stop rambling for now...

Tom Henderson



From owner-mpi-collcomm@CS.UTK.EDU  Tue May  4 10:35:02 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA17133; Tue, 4 May 93 10:35:02 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA09104; Tue, 4 May 93 10:33:38 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Tue, 4 May 1993 10:33:37 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from super.super.org by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA09096; Tue, 4 May 93 10:33:35 -0400
Received: from b124.super.org by super.super.org (4.1/SMI-4.1)
	id AA09851; Tue, 4 May 93 10:33:24 EDT
Received: by b124.super.org (4.1/SMI-4.1)
	id AA02742; Tue, 4 May 93 10:33:21 EDT
Date: Tue, 4 May 93 10:33:21 EDT
From: lederman@super.org (Steve Huss-Lederman)
Message-Id: <9305041433.AA02742@b124.super.org>
To: mpi-collcomm@cs.utk.edu
Subject: comments on draft

Howdy partners (in my best Texas drawl :-)

I would like to make a few observations about the Collective
Communication draft of April 20.  I generally like it, so there is
nothing drastic here.  Since there has not been lots of mail lately,
I'll just include all my comments here instead of sending some
privately.

On the top of p. 4 the MPI_BCAST states that "On return the contents
of the buffer of the process with rank root is contained in buffer of
all group members".  This implies that all processes are done when any
one returns.  Is this what was meant?  I would have thought that only
the buffer on the process that has returned is guaranteed to have the
values from root.

In several places, such as MPI_GATHER, the OUT outbuf is significant
only at root.  Did we decide at the last meeting that all the other
nodes do not need to pass a legitimate handle?  If so, the draft should
state this explicitly.

I'm not sure what the return_status is for some calls.  For example,
MPI_GATHER (p. 7) has a return_status (also MPI_ALLSCATTER,
MPI_ALLCAST).  If it is like the return_status from a MPI_RECV then
I'm not sure which of the many potential receives it refers to.  Was
it supposed to be an array of handles?  If it is a different type of
status handle then I missed the description in the text.

For MPI_ALLSCATTER (p. 9) the input is a list of buffer descriptors
(list_of_inbufs) but the output is a buffer descriptor handle
(outbuf).  Would it be better if the output were also a list instead of
a single buffer?  Each input may be complex and a single handle might
be difficult.  The same question applies to the outbuf for MPI_ALLCAST
(p. 10).  Am I seeing this correctly?

MPI_ALLSCATTERC (p. 9) has "IN inbuf first entry in input buffer
(choice). root (integer)"  What does the root (integer) mean?  I also
don't understand "IN op operation (status)" in MPI_REDUCE (p. 11).
What is the status?

For MPI_REDUCE (p. 11) I have a few questions.  First, how does the op
work on a handle if the buffer does not have the same type of
variables?  All of the reduce operations are typed, so what is the use
of running it on a buffer with mixed types?  Basically, why do we have
the MPI_REDUCE and not just an MPI_REDUCEC?  One thought I had was for
MPI_REDUCE you only state the op without the type and MPI would apply
the correct type for the op based on the buffer descriptor.
Second, we don't have complex for MIN & MAX.  This is the only case
that has real and not complex.  I propose we add this and define it to
be the absolute value for the max and min.  This not only would make
the calls symmetric but I think it can actually be used.  The max/min
modulus of matrix elements is an operation that is used in linear
algebra.

Enough for one message.  Hope this makes sense.  Sorry if this has
duplicated previous comments.

Steve
From owner-mpi-collcomm@CS.UTK.EDU  Tue May  4 14:40:12 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA23773; Tue, 4 May 93 14:40:12 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA26833; Tue, 4 May 93 14:39:36 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Tue, 4 May 1993 14:39:35 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from sun4.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA26825; Tue, 4 May 93 14:39:33 -0400
Received: by sun4.epm.ornl.gov (4.1/1.34)
	id AA24305; Tue, 4 May 93 14:39:31 EDT
Date: Tue, 4 May 93 14:39:31 EDT
From: geist@sun4.epm.ornl.gov (Al Geist)
Message-Id: <9305041839.AA24305@sun4.epm.ornl.gov>
To: mpi-collcomm@cs.utk.edu
Subject: Revised collective draft. Added data type arguments to block funcs...


Hi Gang,

I got some more feedback on the draft and have fixed the
typos they pointed out and added 'type' arguments to MPI_*C functions.

Al
------- latex follows ----- postscript to be sent in separate msg. ----
%     MPI Authors:
% This is MY version of YOUR chapter.  It has a wrapper so that you
% should be able to simply LaTeX this.
%
% Please work from this text so that we are in synch.
%
% --Steve Otto

\documentstyle[twoside,11pt]{report}
\pagestyle{headings}
%\markright{ {\em Draft Document of the MPI Standard,\/ \today} }
\marginparwidth 0pt
\oddsidemargin=.25in
\evensidemargin  .25in
\marginparsep 0pt
\topmargin=-.5in
\textwidth=6.0in
\textheight=9.0in
\parindent=2em

%   ----------------------------------------------------------------------
%   mpi-macs.tex  --- man page macros,
%                discuss, missing, mpifunc macros
%
% ----------------------------------------------------------------------
% a couple of commands from Marc Snir, modified S. Otto

\newlength{\discussSpace}
\setlength{\discussSpace}{.7cm}

\newcommand{\discuss}[1]{\vspace{\discussSpace} {\small {\bf Discussion:} #1} \vspace{\discussSpace}
}

\newcommand{\missing}[1]{\vspace{\discussSpace} {\small {\bf Missing:} #1} \vspace{\discussSpace}
}

\newlength{\codeSpace}
\setlength{\codeSpace}{.3cm}

\newcommand{\mpifunc}[1]{\vspace{\codeSpace} {\bf #1} \vspace{\codeSpace} }

% -----------------------------------------------------------------------
%  A few commands to help in writing MPI man pages
%
\def\twoc#1#2{
\begin{list}
{\hbox to95pt{#1\hfil}}
{\setlength{\leftmargin}{120pt}
 \setlength{\labelwidth}{95pt}
 \setlength{\labelsep}{0pt}
 \setlength{\partopsep}{0pt}
 \setlength{\parskip}{0pt}
 \setlength{\topsep}{0pt}
}
\item
{#2}
\end{list}
}
\outer\long\def\onec#1{
\begin{list}
{}
{\setlength{\leftmargin}{25pt}
 \setlength{\labelwidth}{0pt}
 \setlength{\labelsep}{0pt}
 \setlength{\partopsep}{0pt}
 \setlength{\parskip}{0pt}
 \setlength{\topsep}{0pt}
}
\item
{#1}
\end{list}
}
\def\manhead#1{\noindent{\bf{#1}}}


\hyphenation{RE-DIS-TRIB-UT-ABLE sub-script mul-ti-ple}

\begin{document}

\setcounter{page}{1}
\pagenumbering{roman}

\title{ {\em D R A F T} \\ Document for a Standard Message-Passing Interface}

\author{Scott Berryman, {\em Yale Univ} \\
James Cownie, {\em Meiko Ltd} \\
Jack Dongarra, {\em Univ. of Tennessee and ORNL} \\
Al Geist, {\em ORNL} \\
Bill Gropp, {\em ANL} \\
Rolf Hempel, {\em GMD} \\
Bob Knighten, {\em Intel} \\
Rusty Lusk, {\em ANL} \\
Steve Otto, {\em Oregon Graduate Inst} \\
Tony Skjellum, {\em Mississippi State Univ} \\
Marc Snir, {\em IBM T. J. Watson} \\
David Walker, {\em ORNL} \\
Steve Zenith, {\em Kuck \& Associates}   } 

%\date{April 20, 1993 \\
\date{ \today \\
This work was supported by ARPA and NSF under contract number \#\#\#,
by the National Science Foundation Science and
Technology Center Cooperative Agreement No. CCR-8809615.
}

\maketitle
\hfuzz=5pt
%\tableofcontents

%\begin{abstract}
%We don't have an abstract yet.
%\end{abstract}

\setcounter{page}{1}
\pagenumbering{arabic}

\chapter{Collective Communication}
\label{sec:coll}

\begin{center}
Al Geist \\ Marc Snir
\end{center}

\section{Introduction}

This section is a draft of the current proposal for collective communication.
Collective communication is defined to be communication that involves
a group of processes.  Examples are broadcast and global sum.
A collective operation is executed by having all processes in the group call the
communication routine, with matching parameters.
Routines can (but are not required to) return as soon as their
participation in the collective communication is complete.  The completion
of a call indicates that the caller is now free to access the locations in the
communication buffer, or any other location that can be referenced by the
collective operation.  However, it does not indicate that other processes in
the group have started the operation (unless otherwise indicated in the
description of the operation).  Note, however, that the successful completion
of a collective communication call may depend on the execution of a matching
call at all processes in the group.

The syntax and semantics of the collective operations are
defined so as to be consistent with the syntax and semantics of the
point-to-point operations.

The reader is referred to the point-to-point communication section 
of the current MPI draft for information concerning communication buffers 
and their manipulations. 
The context section describes the formation,
manipulation, and query functions (such as group size) that are
available for groups and group objects.

The collective communication routines are built above the point-to-point
routines.  While vendors may optimize certain collective routines for
their architectures, a complete library of the collective communication
routines can be written entirely using point-to-point communication
functions.  We use naive implementations of the collective calls in terms
of point-to-point operations in order to provide an operational definition of
their semantics.

The following communication functions are proposed.
\begin{itemize}
\item
Broadcast from one member to all members of a group.
\item
Barrier across all group members
\item
Gather data from all group members to one member.
\item
Scatter data from one member to all members of a group.
\item
Global operations such as sum, max, and min, where the result
is known by all group members, and a variation where the result is
known by only one member. The ability to have user-defined
global operations.
\item
Simultaneous shift of data around the group, the simplest example
being all members sending their data to (rank+1) with wraparound.
\item
Scan across all members of a group (also called parallel prefix).
\item
Broadcast from all members to all members of a group.
\item
Scatter data from all members to all members of a group
(also called complete exchange or index).
\end{itemize}

To simplify the collective communication interface, it is
designed with two layers. The low-level routines have all the
generality of, and make use of, the communication buffer routines
of the point-to-point section, which allow arbitrarily complex
messages to be constructed. The second-level routines are
similar to the upper-level point-to-point routines in that they send
only a contiguous buffer.


\section{Group Functions}

A full description of the group formation and manipulation functions
can be found in the context chapter of the MPI document.
Here we describe only those group functions that are used in the
semantic description of the collective communication routines.

An initial group containing all processes is supplied by default in MPI.
MPI provides a procedure that returns the handle to this initial group.
The processes in the initial group each have a unique rank in the group,
represented by the integers 0, 1, 2, \ldots, number-of-processes~-~1.

\mpifunc{MPI\_ALLGROUP(group)} 
Return the descriptor of the initial group containing all processes.
\begin{description}
\item[OUT group] handle to descriptor object of initial group.
\end{description}

\mpifunc{MPI\_RANK(group, rank)} 
Return the rank of the calling process within the specified group.
\begin{description}
\item[IN group] group handle
\item[OUT rank] integer
\end{description}


\mpifunc{MPI\_GSIZE(group, size)} 
Return the number of processes that belong to the specified group.
\begin{description}
\item[IN group] group handle
\item[OUT size] integer
\end{description}

\section{Communication Functions}

The proposed communication functions are divided into two layers.
The lowest level uses the same buffer descriptor objects
available in point-to-point to create noncontiguous, multiple data type
messages. The second level is similar to the block send/receive
point-to-point operations in that it supports only contiguous buffers of data.
For each communication operation, we list these two levels of calls together.


\section{Synchronization}

\subsubsection*{Barrier synchronization}

\mpifunc{MPI\_BARRIER( group )} 

MPI\_BARRIER blocks the calling process until all group members have called
it; the call returns at any process only after all group members have
entered the call.
\begin{description}
\item[IN group] group handle
\end{description}

{\tt MPI\_BARRIER( group )}
is
\begin{verbatim}
MPI_CREATE_BUFFER(buffer_handle, MPI_BUFFER, MPI_PERSISTENT);
MPI_GSIZE( group, &size );
MPI_RANK( group, &rank );
if (rank==0)
{
   for (i=1; i < size; i++)
      MPI_RECV(buffer_handle, i, tag, group, return_handle);
   for (i=1; i < size; i++)
      MPI_SEND(buffer_handle, i, tag, group);
}
else
{
   MPI_SEND(buffer_handle, 0, tag, group);
   MPI_RECV(buffer_handle, 0, tag, group, return_handle);
}
MPI_FREE(buffer_handle);
\end{verbatim}


\section{Data move functions}

\subsubsection*{Broadcast}

\mpifunc{ MPI\_BCAST( buffer\_handle, group, root )} 

{\tt MPI\_BCAST} broadcasts a message from the process with rank {\tt root} to
all other processes
of the group. It is called by all members of the group using the same
arguments for {\tt group} and {\tt root}.
On return, the contents of the buffer of the process with rank {\tt root}
are contained in the buffer of the calling process.
\begin{description}
\item[INOUT buffer\_handle]  Handle for the buffer from which the message is
broadcast or in which it is received.
\item[IN group] group handle
\item[IN root] rank of broadcast root (integer)
\end{description}


\mpifunc{ MPI\_BCASTC( buf, len, type, group, root )} 

{\tt MPI\_BCASTC} behaves like broadcast, restricted to a block buffer.
It is called by all processes with the same arguments for {\tt len, group}
and {\tt root}.
\begin{description}
\item[INOUT buffer]  Starting address of buffer (choice type)
\item[IN len] Number of entries in buffer (integer)
\item[IN type] data type of buffer
\item[IN group] group handle
\item[IN root] rank of broadcast root (integer)
\end{description}


{\tt  MPI\_BCAST( buffer\_handle, group, root )} 
is
\begin{verbatim}
MPI_GSIZE( group, &size );
MPI_RANK( group, &rank );
MPI_IRECV(handle, buffer_handle, root, tag, group, return_handle);
if (rank==root)
   for (i=0; i < size; i++)
      MPI_SEND(buffer_handle, i, tag, group);
MPI_WAIT(handle)
\end{verbatim}

\subsubsection*{Circular shift}

\mpifunc{MPI\_CSHIFT( inbuf, outbuf, group, shift)} 

Process with rank {\tt i} sends the data in its input buffer to
process with rank $\tt (i+ shift) \bmod  group\_size$, who receives the
data in its output buffer. All processes make the call with the same values for
{\tt group}, and {\tt shift}.  The {\tt shift} value can be positive, zero,
or negative.

\begin{description}
\item[IN inbuf] handle to input buffer descriptor
\item[IN outbuf] handle to output buffer descriptor
\item[IN group] handle to group
\item[IN shift] integer
\end{description}

\mpifunc{MPI\_CSHIFT1( buf, group, shift)} 

Process with rank {\tt i} sends the data in its buffer to
process with rank $\tt (i+ shift) \bmod  group\_size$, who receives the
data in the same buffer. All processes make the call with the same values for
{\tt group}, and {\tt shift}.  The {\tt shift} value can be positive, zero,
or negative.

\begin{description}
\item[INOUT buf] handle to buffer descriptor
\item[IN group] handle to group
\item[IN shift] integer
\end{description}


\mpifunc{MPI\_CSHIFTC( inbuf, outbuf, len, type, group, shift)} 

Behaves like {\tt MPI\_CSHIFT}, with buffers restricted to be blocks of
numeric units.
All processes make the call with the same values for
{\tt len, group}, and {\tt shift}.
\begin{description}
\item[IN inbuf] initial location of input buffer
\item[OUT outbuf] initial location of output buffer
\item[IN len] number of entries in input (and output) buffers
\item[IN type] data type of buffer
\item[IN group] handle to group
\item[IN shift] integer
\end{description}


\mpifunc{MPI\_CSHIFTC1( buf, len, type, group, shift)} 

Behaves like {\tt MPI\_CSHIFT1}, with buffers restricted to be blocks of
numeric units.
All processes make the call with the same values for
{\tt len, group}, and {\tt shift}.
\begin{description}
\item[INOUT buf] initial location of buffer
\item[IN len] number of entries in input (and output) buffers
\item[IN type] data type of buffer
\item[IN group] handle to group
\item[IN shift] integer
\end{description}

{\tt MPI\_CSHIFT( inbuf, outbuf, group, shift)} 
is
\begin{verbatim}
MPI_GSIZE( group, &size );
MPI_RANK( group, &rank );
MPI_ISEND( handle, inbuf, mod(rank+shift, size), tag, group);
MPI_RECV( outbuf, mod(rank-shift,size), tag, group, return_handle)
MPI_WAIT(handle);
\end{verbatim}
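The rank arithmetic of the circular shift can be modeled in a few lines of
Python (an illustrative sketch only; the function name is ours and is not part
of the proposed interface). Note that Python's {\tt \%} operator always yields
a nonnegative remainder, matching the mod in the definition above even for
negative shifts.

```python
# Illustrative model of MPI_CSHIFT rank arithmetic (not MPI code).
# dest is the rank this process sends to; src is the rank it
# receives from.  shift may be positive, zero, or negative.

def cshift_partners(rank, size, shift):
    """Return (dest, src) ranks for a circular shift."""
    dest = (rank + shift) % size
    src = (rank - shift) % size
    return dest, src

# Example: in a group of 4, rank 3 with shift 1 sends to rank 0
# (wraparound) and receives from rank 2.
```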


\subsubsection*{End-off shift}

\mpifunc{MPI\_EOSHIFT( inbuf, outbuf, group, shift)} 

Process with rank {\tt i}, $\tt \max( 0, -shift) \le i < \min( size, size -
shift)$, sends the data
in its input buffer to process with rank {\tt i+ shift}, who receives the data
in its output buffer.   The output buffer of processes which do not receive
data is left unchanged.   All processes
make the call with the same values for {\tt group}, and {\tt shift}.

\begin{description}
\item[IN inbuf] handle to input buffer descriptor
\item[IN outbuf] handle to output buffer descriptor
\item[IN group] handle to group
\item[IN shift] integer
\end{description}

\mpifunc{MPI\_EOSHIFT1( buf, group, shift)} 

Process with rank {\tt i}, $\tt \max( 0, -shift) \le i < \min( size, size -
shift)$, sends the data
in its buffer to process with rank {\tt i+ shift}, who receives the data
in the same buffer.   The output buffer of processes which do not receive
data is left unchanged.   All processes
make the call with the same values for {\tt group}, and {\tt shift}.

\begin{description}
\item[INOUT buf] handle to buffer descriptor
\item[IN group] handle to group
\item[IN shift] integer
\end{description}


\mpifunc{MPI\_EOSHIFTC( inbuf, outbuf, len, type, group, shift)} 

Behaves like {\tt MPI\_EOSHIFT}, with buffers restricted to be blocks of
numeric units.
All processes make the call with the same values for
{\tt len, group}, and {\tt shift}.
\begin{description}
\item[IN inbuf] initial location of input buffer
\item[OUT outbuf] initial location of output buffer
\item[IN len] number of entries in input (and output) buffers
\item[IN type] data type of buffer
\item[IN group] handle to group
\item[IN shift] integer
\end{description}

\mpifunc{MPI\_EOSHIFTC1( buf, len, type, group, shift)} 

Behaves like {\tt MPI\_EOSHIFT1}, with buffer restricted to be blocks of
numeric units.
All processes make the call with the same values for
{\tt len, group}, and {\tt shift}.
\begin{description}
\item[INOUT buf] initial location of buffer
\item[IN len] number of entries in buffer
\item[IN type] data type of buffer
\item[IN group] handle to group
\item[IN shift] integer
\end{description}
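The participation condition for the end-off shift can be made concrete with a
small Python model (illustrative only; the function name and pair
representation are ours, not part of the proposal). A rank {\tt i} sends
exactly when $\max(0, -{\tt shift}) \le {\tt i} < \min({\tt size}, {\tt size}
- {\tt shift})$; ranks outside the destination set leave their output buffers
unchanged.

```python
# Illustrative model of which ranks take part in MPI_EOSHIFT
# (not MPI code).  Returns the (sender, receiver) pairs; any rank
# that appears as no receiver keeps its output buffer unchanged.

def eoshift_pairs(size, shift):
    lo = max(0, -shift)
    hi = min(size, size - shift)
    return [(i, i + shift) for i in range(lo, hi)]

# Example: size 4, shift 1 -> ranks 0..2 send to ranks 1..3;
# rank 0 receives nothing.
```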


\subsubsection*{Gather}

\mpifunc{MPI\_GATHER( inbuf, list\_of\_outbufs, group, root, return\_status) } 

Each process (including the root process) sends the content of its input
buffer to the root process.  The root process places all the
incoming messages in the location specified by the output buffer handle
corresponding to the sender's rank. 
For example, the root places the data from process with rank 3
in the location specified by the third buffer descriptor in the 
list of outbufs.
The list\_of\_outbufs argument is ignored for all non-root processes.
The routine is called by all members of group using the same arguments for
{\tt group}, and {\tt root}.   The input buffer of each process may have
a different length.
\begin{description}
\item[IN inbuf] handle to input buffer descriptor
\item[IN list\_of\_outbufs] list of buffer descriptor handles (root)
\item[IN group] group handle
\item[IN root] rank of receiving process (integer)
\item[OUT return\_status] return status handle
\end{description}

\discuss{
Do we want the collective routines to have return status handles?
And if so what information do we want the handle to be able to 
return? 
}

\mpifunc{MPI\_GATHERC( inbuf, outbuf, inlen, type, group, root) } 

{\tt MPI\_GATHERC} behaves like {\tt MPI\_GATHER} restricted to block
buffers, and with the additional restriction that all input buffers should
have the same length.   All processes should provide the same values for
{\tt inlen, group}, and {\tt root}.
\begin{description}
\item[IN inbuf] first variable of input buffer (choice)
\item[OUT outbuf] first variable of output buffer -- significant only at
root (matches type)
\item[IN inlen] Number of (word) variables in input buffer (integer)
\item[IN type] data type of buffer
\item[IN group] group handle
\item[IN root] rank of receiving process (integer)
\end{description}


{\tt MPI\_GATHERC( inbuf, outbuf, inlen, type, group, root) } 
is
\begin{verbatim}
MPI_GSIZE( group, &size );
MPI_RANK( group, &rank );
MPI_ISENDC(handle, inbuf, inlen, root, tag, group);
if (rank==root)
   for (i=0; i < size; i++)
   {
      MPI_RECVC(outbuf, inlen, i, tag, group, return_status);
      outbuf += inlen;
   }
MPI_WAIT(handle);
\end{verbatim}

\subsubsection*{Scatter}

\mpifunc{MPI\_SCATTER( list\_of\_inbufs, outbuf, group, root, return\_status)} 

The root process sends the content of its {\tt i}-th input buffer
to the process with rank {\tt i}; each process (including the root process)
stores the incoming message in its output buffer.
The routine is called by all members of the group using the same
arguments for {\tt group}, and {\tt root}.
\begin{description}
\item[IN list\_of\_inbufs] list of buffer descriptor handles
\item[IN outbuf] buffer descriptor handle
\item[IN group] handle
\item[IN root]  rank of sending process (integer)
\item[OUT return\_status] return status handle
\end{description}


{\tt MPI\_SCATTER( list\_of\_inbufs, outbuf, group, root, return\_status)} 
is
\begin{verbatim}
MPI_GSIZE( group, &size );
MPI_RANK( group, &rank );
MPI_IRECV(handle, outbuf, root, tag, group);
if (rank==root)
   for (i=0; i < size; i++)
      MPI_SEND(inbuf[i], i, tag, group);
MPI_WAIT(handle, return_status);
\end{verbatim}


\mpifunc{MPI\_SCATTERC( inbuf, outbuf, len, type, group, root)}


{\tt MPI\_SCATTERC} behaves like {\tt MPI\_SCATTER} restricted to block buffers,
and with the additional restriction that all output buffers have the same
length. The input buffer block of the root process is partitioned into
{\tt n} consecutive blocks,
each consisting of {\tt len} words.  The {\tt i}-th block is sent to the
{\tt i}-th process in the group and stored in its output buffer.
The routine is called by all members of the group using the same
arguments for {\tt group, len}, and {\tt root}.
\begin{description}
\item[IN inbuf] first entry in input buffer -- significant only at root
(choice).
\item[OUT outbuf] first entry in output buffer (choice).
\item[IN len]  number of entries to be stored in output buffer (integer)
\item[IN type] data type of buffer
\item[IN group] handle
\item[IN root]  rank of sending process (integer)
\end{description}


{\tt MPI\_SCATTERC( inbuf, outbuf, outlen, type, group, root) } 
is
\begin{verbatim}
MPI_GSIZE( group, &size );
MPI_RANK( group, &rank );
MPI_IRECVC( handle, outbuf, outlen, type, root, tag, group, return_handle);
if (rank==root)
   for (i=0; i < size; i++)
   {
      MPI_SENDC(inbuf, outlen, type, i, tag, group);
      inbuf += outlen;
   }
MPI_WAIT(handle);
\end{verbatim}
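The partitioning that the root performs in {\tt MPI\_SCATTERC} can be sketched
in Python (an illustrative model, not MPI code; the function name and list
representation are ours). The root's buffer of {\tt size~*~len} entries is cut
into {\tt size} consecutive blocks of {\tt len} entries, and block {\tt i}
goes to the process with rank {\tt i}.

```python
# Illustrative model of the root-side partitioning in MPI_SCATTERC.
# inbuf holds size * length entries; block i is delivered to rank i.

def scatter_blocks(inbuf, size, length):
    assert len(inbuf) == size * length
    return [inbuf[i * length:(i + 1) * length] for i in range(size)]

# Example: 3 ranks, 2 entries per rank.
```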

\subsubsection*{All-to-all scatter}

\mpifunc{MPI\_ALLSCATTER( list\_of\_inbufs, list\_of\_outbufs, group, return\_status)} 

Each process in the group sends its {\tt i}-th buffer in its input buffer list
to the process with rank {\tt i} (itself included); each process places
the incoming messages in the location specified by output buffer handle
corresponding to the rank of the sender.
For example, each process places the data from process with rank 3
in the location specified by the third buffer descriptor in the 
list of outbufs.
The routine is called by all members of the group using the same
arguments for {\tt group}.
\begin{description}
\item[IN list\_of\_inbufs] list of buffer descriptor handles
\item[IN list\_of\_outbufs] list of buffer descriptor handles
\item[IN group] handle
\item[OUT return\_status] return status handle
\end{description}


\mpifunc{MPI\_ALLSCATTERC( inbuf, outbuf, len, type, group)} 

{\tt MPI\_ALLSCATTERC} behaves like {\tt MPI\_ALLSCATTER} restricted to
block buffers,
and with the additional restriction that all blocks sent from one process
to another have
the same length. The input buffer block of each process is partitioned
into {\tt n} consecutive blocks,
each consisting of {\tt len} words.  The {\tt i}-th block is sent to the
{\tt i}-th process in the group.  Each process concatenates the incoming
messages, in the order of the senders' ranks, and stores them in its output
buffer. The routine is called by all members of the group using the same
arguments for {\tt group}, and {\tt len}.
\begin{description}
\item[IN inbuf] first entry in input buffer (matches type).
\item[OUT outbuf] first entry in output buffer (matches type).
\item[IN len]  number of entries sent from each process to each other (integer).
\item[IN type] data type of buffer
\item[IN group] handle
\end{description}


{\tt MPI\_ALLSCATTERC( inbuf, outbuf, len, type, group)}  
is
\begin{verbatim}
MPI_GSIZE( group, &size );
MPI_RANK( group, &rank );
for (i=0; i < size; i++)
   {
    MPI_IRECVC(recv_handle[i], outbuf, len, type, i, tag, group, return_handle);
    outbuf += len;
   }
for (i=0; i < size; i++)
   {
    MPI_ISENDC(send_handle[i], inbuf, len, type, i, tag, group);
    inbuf += len;
   }
MPI_WAITALL(send_handle);
MPI_WAITALL(recv_handle);
\end{verbatim}
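Viewed globally, the complete exchange is a transpose of blocks: receiver
{\tt q} ends up with piece {\tt q} of every sender's input, concatenated in
sender-rank order. A Python model of this data movement (illustrative only;
the function name and list-of-lists representation are ours, not part of the
proposal):

```python
# Illustrative global model of MPI_ALLSCATTERC (not MPI code).
# inputs[p] is the input block of rank p, holding size * length
# entries; the result gives each rank's output buffer.

def allscatter(inputs, length):
    size = len(inputs)
    out = []
    for q in range(size):             # receiver rank
        buf = []
        for p in range(size):         # sender rank, in rank order
            buf.extend(inputs[p][q * length:(q + 1) * length])
        out.append(buf)
    return out
```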

\subsubsection*{All-to-all broadcast}

\mpifunc{MPI\_ALLCAST( inbuf, list\_of\_outbufs, group, return\_status)} 

Each process in the group broadcasts its input buffer
to all processes (including itself);
each process places
the incoming messages in the location specified by output buffer handle
corresponding to the rank of the sender.
For example, each process places the data from process with rank 3
in the location specified by the third buffer descriptor in the
list of outbufs.
The routine is called by all members of the group using the same
arguments for {\tt group}.
\begin{description}
\item[IN inbuf] buffer descriptor handle for input buffer
\item[IN list\_of\_outbufs] list of buffer descriptor handles
\item[IN group] handle
\item[OUT return\_status] return status handle
\end{description}


\mpifunc{MPI\_ALLCASTC( inbuf, outbuf, len, type, group)} 

{\tt MPI\_ALLCASTC} behaves like {\tt MPI\_ALLCAST} restricted to
block buffers,
and with the additional restriction that all blocks sent from one process
to another have the same length.
Each process concatenates the incoming messages, 
in the order of the senders' ranks, and stores them in its output buffer.
The routine is called by all members of the group using the same
arguments for {\tt group}, and {\tt len}.
\begin{description}
\item[IN inbuf] first entry in input buffer (choice).
\item[OUT outbuf] first entry in output buffer (choice).
\item[IN len]  number of entries sent from each process to each other
(including itself).
\item[IN type] data type of buffer
\item[IN group] group handle
\end{description}


{\tt MPI\_ALLCASTC( inbuf, outbuf, len, type, group)}  
is
\begin{verbatim}
MPI_GSIZE( group, &size );
MPI_RANK( group, &rank );
for (i=0; i < size; i++)
   {
    MPI_IRECVC(recv_handle[i], outbuf, len, type, i, tag, group, return_handle);
    outbuf += len;
   }
for (i=0; i < size; i++)
   {
    MPI_ISENDC(send_handle[i], inbuf, len, type, i, tag, group);
   }
MPI_WAITALL(send_handle);
MPI_WAITALL(recv_handle);
\end{verbatim}


\section{Global Compute Operations}

\subsubsection*{Reduce}

\mpifunc{MPI\_REDUCE( inbuf, outbuf, group, root, op)} 

Combines the values provided in the input buffer of each process in the
group, using the operation {\tt op}, and returns the combined value in
the output buffer of the process with rank {\tt root}.
Each process can provide one value, or a sequence of values, in which case the
combine operation is executed pointwise on each entry of the sequence.
For example, if the operation is {\tt max} and the input buffers contain two
floating point numbers, then outbuf(1) $=$ global max(inbuf(1)) and
outbuf(2) $=$ global max(inbuf(2)). All input
buffers should define sequences of equal length of entries of types
that match the type of the operands of {\tt op}.  The
output buffer should define a sequence of the same length of entries of
types that match the type of the result of {\tt op}.
(Note that,
here as for all other communication operations, the types of entries inserted in
a message depend on the information provided by the input buffer descriptor, and
not on the declarations of these variables in the calling program.   The types
of the variables in the calling program need not match the types defined by the
buffer descriptor, but in that case the outcome of a reduce operation may be
implementation dependent.)

The operation
defined by {\tt op} is associative and commutative, and the implementation can
take advantage of associativity and commutativity in order to change the
order of evaluation.
The routine is called by all group members using the same arguments
for {\tt group, root} and {\tt op}.
\begin{description}
\item[IN inbuf] handle to input buffer
\item[IN outbuf] handle to output buffer -- significant only at root
\item[IN group] handle to group
\item[IN root] rank of root process (integer)
\item[IN op] operation 
\end{description}

The buffer descriptor contains data type information so 
that the correct form of the operation can be performed. 
We list below the operations which are supported.
\begin{description}
\item[MPI\_MAX] maximum
\item[MPI\_MIN] minimum
\item[MPI\_SUM] sum
\item[MPI\_PROD] product
\item[MPI\_AND] and (logical or bit-wise integer)
\item[MPI\_OR] or (logical or bit-wise integer)
\item[MPI\_XOR] xor (logical or bit-wise integer)
\item[MPI\_MAXLOC] rank of process with maximum value
\item[MPI\_MINLOC] rank of process with minimum value
\end{description}
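The pointwise semantics can be sketched in Python (an illustrative model, not
MPI code; the function names are ours). {\tt MPI\_MAXLOC} is modeled here as
yielding, for each entry, the rank of the process holding the maximum value,
which is one plausible reading of the description above.

```python
# Illustrative model of pointwise reduction (not MPI code).
# buffers[rank][j] is entry j contributed by the process with that
# rank; op combines one "column" of contributions into one result.

def reduce_pointwise(buffers, op):
    return [op(column) for column in zip(*buffers)]

def maxloc(column):
    """Rank of the process contributing the maximum value."""
    return max(range(len(column)), key=lambda r: column[r])
```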

\mpifunc{MPI\_REDUCEC( inbuf, outbuf, len, type, group, root, op)} 

Same as {\tt MPI\_REDUCE}, restricted to a block buffer.
\begin{description}
\item[IN inbuf] first location in input buffer
\item[OUT outbuf] first location in output buffer -- significant only at root
\item[IN len] number of entries in input and output buffer (integer)
\item[IN type] data type of buffer
\item[IN group] handle to group
\item[IN root] rank of root process (integer)
\item[IN op] operation 
\end{description}


\mpifunc{MPI\_USER\_REDUCE( inbuf, outbuf, group, root, function)} 

Same as the reduce operation function above except that a user
supplied function is used.  {\tt function} is an associative and commutative
function with two arguments.  The types of the two arguments and of the
returned value of the function, and the types of all entries in the
input and output buffers all agree.  The output buffer has the same
length as the input buffer.
\begin{description}
\item[IN inbuf] handle to input buffer
\item[IN outbuf] handle to output buffer -- significant only at root
\item[IN group] handle to group
\item[IN root] rank of root process (integer)
\item[IN function] user provided function
\end{description}

\mpifunc{MPI\_USER\_REDUCEC( inbuf, outbuf, len, type, group, root, function)}

Same as {\tt MPI\_USER\_REDUCE}, restricted to a block buffer.
\begin{description}
\item[IN inbuf] first location in input buffer
\item[OUT outbuf] first location in output buffer -- significant only at root
\item[IN len] number of entries in input and output buffer (integer)
\item[IN type] data type of buffer
\item[IN group] handle to group
\item[IN root] rank of root process (integer)
\item[IN function] user provided function
\end{description}


\discuss{

Do we also want a version of reduce that broadcasts the result to all processes
in the group?  (This can be achieved by a reduce followed by a broadcast, but a
combined function may be somewhat more efficient.)
These would be respectively:

\mpifunc{MPI\_GOP( inbuf, outbuf, group, op)}

\mpifunc{MPI\_GOPC( inbuf, outbuf, len, type, group, op)}

\mpifunc{MPI\_USER\_GOP( inbuf, outbuf, group, function)}

\mpifunc{MPI\_USER\_GOPC( inbuf, outbuf, len, type, group, function)}

Do we want a user provided {\em function} (two IN parameters, one OUT
value), or a user provided procedure that overwrites the second input
(i.e. one IN param, one INOUT param, the equivalent of C {\tt a op= b}
type assignment)?  The second choice may allow a
more efficient implementation, without changing the semantics of the
MPI call.

}

\subsubsection*{Scan}

\mpifunc{ MPI\_SCAN( inbuf, outbuf, group, op )} 

MPI\_SCAN is used to perform a parallel prefix with respect to
an associative reduction operation on data distributed across the group.
The operation returns in the output buffer of the process with rank {\tt i} the
reduction of the values in the input buffers of processes with ranks {\tt
0,...,i}.  The types of operations supported, their semantics, and the
constraints on input and output buffers are as for {\tt MPI\_REDUCE}.
\begin{description}
\item[IN inbuf] handle to input buffer
\item[IN outbuf] handle to output buffer
\item[IN group] handle to group
\item[IN op] operation 
\end{description}

\mpifunc{ MPI\_SCANC( inbuf, outbuf, len, type, group, op )} 
Same as {\tt MPI\_SCAN}, restricted to block buffers.

\begin{description}
\item[IN inbuf] first input buffer element (choice)
\item[OUT outbuf] first output buffer element (choice)
\item[IN len] number of entries in input and output buffer (integer)
\item[IN type] data type of buffer
\item[IN group] handle to group
\item[IN op] operation 
\end{description}
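The result {\tt MPI\_SCAN} delivers can be modeled in Python (an illustrative
sketch, not MPI code; the function name is ours): the output of rank {\tt i}
is the pointwise reduction of the input buffers of ranks {\tt 0,...,i}, i.e.\
an inclusive prefix.

```python
# Illustrative model of MPI_SCAN (not MPI code).  buffers[rank] is
# one process's input buffer; the result lists each rank's output,
# the running pointwise reduction over ranks 0..i.

def scan(buffers, op):
    out = [list(buffers[0])]
    for buf in buffers[1:]:
        out.append([op(a, b) for a, b in zip(out[-1], buf)])
    return out

# Example: a summing scan over three ranks with two entries each.
```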


\mpifunc{ MPI\_USER\_SCAN( inbuf, outbuf, group, function )} 

Same as the scan operation function above except that a user
supplied function is used.  {\tt function} is an associative and commutative
function with two arguments.  The types of the two arguments and of the
returned value all agree.
\begin{description}
\item[IN inbuf] handle to input buffer
\item[IN outbuf] handle to output buffer
\item[IN group] handle to group
\item[IN function] user provided function
\end{description}

\mpifunc{MPI\_USER\_SCANC( inbuf, outbuf, len, type, group, function)}

Same as {\tt MPI\_USER\_SCAN}, restricted to a block buffer.
\begin{description}
\item[IN inbuf] first location in input buffer
\item[OUT outbuf] first location in output buffer
\item[IN len] number of entries in input and output buffer (integer)
\item[IN type] data type of buffer
\item[IN group] handle to group
\item[IN function] user provided function
\end{description}

\discuss{

Do we want scan operations executed by segments? (The HPF definition of prefix
and suffix operation might be handy -- in addition to the scanned vector of
values there is a mask that tells where segments start and end.)
}


\section{Correctness}

\discuss{ This is still very preliminary}

The semantics of the collective communication operations can be derived from
their operational definition in terms of  point-to-point communication.  It is
assumed that messages pertaining to one
operation cannot be confused with messages pertaining to another operation.
Also messages pertaining to two distinct occurrences of the same operation
cannot be confused, if the two occurrences have distinct parameters.
The relevant parameters for this purpose are {\tt group}, {\tt root},
and {\tt op}.   The implementer can, of course, use another, more efficient
implementation, as long as it has the same effect.

\discuss{

This statement does not yet apply to the current, incomplete and
somewhat careless definitions I provided in this draft.

The definition above means that messages pertaining to a collective
communication carry information identifying the operation itself, and the
values of the {\tt group} and,
where relevant, {\tt root} or {\tt op} parameters.
Is this acceptable?

}


A few examples:

\begin{verbatim}
MPI_BCASTC(buf, len, type, group, 0);
MPI_BCASTC(buf, len, type, group, 1);
\end{verbatim}

Two consecutive broadcasts, in the same group, with the same tag, but different
roots.  Since the operations are distinguishable, messages from one broadcast
cannot be confused with messages from the other broadcast; the program is safe
and will execute as expected.

\begin{verbatim}
MPI_BCASTC(buf, len, type, group, 0);
MPI_BCASTC(buf, len, type, group, 0);
\end{verbatim}

Two consecutive broadcasts, in the same group, with the same tag and root.
Since point-to-point communication preserves the order of messages, here
too, messages from one broadcast will not be confused with messages from
the other broadcast; the program is safe and will execute as intended.

\begin{verbatim}
MPI_RANK(group, &rank);
if (rank==0)
  {
   MPI_BCASTC(buf, len, type, group, 0);
   MPI_SENDC(buf, len, type, 2, tag, group);
  }
else if (rank==1)
  {
   MPI_RECVC(buf, len, type, MPI_DONTCARE, tag, group);
   MPI_BCASTC(buf, len, type, group, 0);
   MPI_RECVC(buf, len, type, MPI_DONTCARE, tag, group);
  }
else
  {
   MPI_SENDC(buf, len, type, 2, tag, group);
   MPI_BCASTC(buf, len, type, group, 0);
  }
\end{verbatim}

Process zero executes a broadcast followed by a send to process one;
process two executes a send to process one, followed by a broadcast;
and process one executes a receive, a broadcast and a receive.
A possible outcome is for the operations to be matched as illustrated by the
diagram below.

\begin{verbatim}


    0                       1                      2

                / - >  receive            / - send
              /                         /
broadcast   /         broadcast       /   broadcast
           /                        /
  send   -             receive  < -


\end{verbatim}

The reason is that broadcast is not a synchronous operation; the call at a
process may return before the other processes have entered the broadcast.
Thus, the message sent by process zero can arrive at process one before the
message sent by process two, and before the call to broadcast on process one.

\end{document}

29 I[<00F8FC03FFFE07FFFE0F8F8C0E03801E03C01C01C01C01C01C01C01E03C00E03800F8F80
0FFF001FFE001CF8001C00001C00001E00000FFF801FFFF03FFFF87C00FC70001CF0001EE0000E
E0000EE0000EF0001E78003C3F01F81FFFF00FFFE001FF00>23 33 127
148 29 103 D[<01F00007FC001FFF003E0F803C07807803C07001C0E000E0E000E0E000E0E000
E0E000E0E000E0F001E07001C07803C03C07803E0F801FFF0007FC0001F000>19
21 125 148 29 111 D[<FE3F00FEFFC0FFFFE00FE1F00F80700F00780F00380E003C0E001C0E
001C0E001C0E001C0E001C0F003C0F00380F00780F80F00FC3E00FFFC00EFF800E7E000E00000E
00000E00000E00000E00000E00000E00000E0000FFE000FFE000FFE000>22
32 127 148 29 I[<FF87F0FF9FF8FFBFFC03FC3C03F01803E00003C00003C00003C000038000
038000038000038000038000038000038000038000038000FFFF00FFFF80FFFF00>22
21 126 148 29 114 D[<00C00001C00001C00001C00001C00001C00001C0007FFFE0FFFFE0FF
FFE001C00001C00001C00001C00001C00001C00001C00001C00001C00001C00001C07001C07001
C07001C0F001E1E000FFE0007FC0003F00>20 28 127 155 29 116 D[<FE0FE0FE0FE0FE0FE0
0E00E00E00E00E00E00E00E00E00E00E00E00E00E00E00E00E00E00E00E00E00E00E00E00E01E0
0E01E00F07E007FFFE03FFFE01FCFE>23 21 127 148 29 I E /Fb 7 118
400 360 dfs[<007C01C2030307070E0F1C0F3C003800780078007800F000F000F000F000F001
70037006301C18380FC0>16 21 123 148 26 99 D[<00007C0000CE00019E00039E00038C0003
00000700000700000700000700000E00000E00000E00000E0001FFF001FFF0001C00001C00001C
00001C00001C0000380000380000380000380000380000700000700000700000700000700000E0
0000E00000E00000E00001C00001C00001C00001C000038000738000F30000F300006600003C00
00>23 45 130 162 17 102 D[<00E000E001E000C00000000000000000000000000000000000
001E00330063806380C380C700C70007000E000E000E001C001C001C40386038C070C070803180
31001E00>11 34 124 161 17 105 D[<1E07803318E063B06063E070C3C070C38070C3807007
00E00700E00700E00700E00E01C00E01C00E03820E03831C03861C07061C070C1C030838031818
01E0>24 21 124 148 31 110 D[<007C0001C6000303000603800E03C01C03C03C03C03803C0
7803C07803C07803C0F00780F00780F00780F00F00F00E00701E00701C003038001860000F8000
>18 21 123 148 28 I[<006000E000E000E000E001C001C001C001C00380FFF8FFF803800700
0700070007000E000E000E000E001C001C001C101C18383038303860186018C00F00>13
31 124 158 19 116 D[<0F003011807021C07061C0E0C1C0E0C380E0C380E00381C00701C007
01C00701C00E03800E03800E03840E03860E070C0C070C0E070C0E0B1806131003E1E0>23
21 124 148 30 I E /Fc 46 124 400 360 dfs[<000FC0000078300000E0080001803C000380
7C0007007C0007007C0007003800070000000700000007000000070000000700000007000000FF
FFFC00FFFFFC0007003C0007001C0007001C0007001C0007001C0007001C0007001C0007001C00
07001C0007001C0007001C0007001C0007001C0007001C0007001C0007001C0007001C007FF1FF
C07FF1FFC0>26 35 128 162 31 12 D[<000FC03F00007031E0C000E00B802001803E00F00380
7E01F007007C01F007007C01F007003C00E007001C000007001C000007001C000007001C000007
001C000007001C0000FFFFFFFFF0FFFFFFFFF007001C00F007001C007007001C007007001C0070
07001C007007001C007007001C007007001C007007001C007007001C007007001C007007001C00
7007001C007007001C007007001C007007001C007007001C00707FF1FFC7FF7FF1FFC7FF>40
35 128 162 47 14 D[<001000200040008001000300060004000C001800180018003000300030
007000600060006000E000E000E000E000E000E000E000E000E000E000E000E000600060006000
70003000300030001800180018000C0004000600030001000080004000200010>12
50 125 164 21 40 D[<800040002000100008000C0006000200030001800180018000C000C000
C000E0006000600060007000700070007000700070007000700070007000700070006000600060
00E000C000C000C00180018001800300020006000C0008001000200040008000>12
50 125 164 21 I[<70F8FCFC7404040404080810102040>6 15 124 132
16 44 D[<FFF0FFF0>12 2 127 139 19 I[<70F8F8F870>5 5 124 132
16 I[<70F8F8F870000000000000000000000070F8F8F870>5 21 124 148
16 58 D[<07F000181C00200E00400700F00780F80780F80780F80780700780000F00000E0000
1C0000380000700000600000C00000800000800001800001000001000001000001000001000001
000000000000000000000000000000000003800007C00007C00007C000038000>17
35 125 162 27 63 D[<0001800000018000000180000003C0000003C0000003C0000005E00000
05E000000DF0000008F0000008F0000010F800001078000010780000203C0000203C0000203C00
00401E0000401E0000401E0000800F0000FFFF0000FFFF000100078001000780030007C0020003
C0020003C0040003E0040001E00C0001E01E0001F0FFC01FFFFFC01FFF>32
34 126 161 41 65 D[<0007F008003FFC1800FC073801F001B803C000F8078000780F0000381E
0000183E0000183C0000187C0000087C00000878000008F8000000F8000000F8000000F8000000
F8000000F8000000F8000000F8000000780000007C0000087C0000083C0000083E0000081E0000
100F0000100780002003C0004001F0018000FC0700003FFE000007F000>29
34 125 161 40 67 D[<FFFFF800FFFFFE0007800F80078003C0078001E0078000F00780007807
8000780780003C0780003C0780001E0780001E0780001E0780001F0780001F0780001F0780001F
0780001F0780001F0780001F0780001F0780001F0780001E0780001E0780003E0780003C078000
3C07800078078000F0078001E0078003C007800F80FFFFFF00FFFFF800>32
34 126 161 42 I[<FFFFFFE0FFFFFFE0078003E0078000E00780006007800020078000300780
0030078000100780001007802010078020100780200007802000078060000780E00007FFE00007
FFE0000780E0000780600007802000078020000780200007802000078000000780000007800000
0780000007800000078000000780000007800000FFFE0000FFFE0000>28
34 126 161 37 70 D[<FFFC3FFFFFFC3FFF078001E0078001E0078001E0078001E0078001E007
8001E0078001E0078001E0078001E0078001E0078001E0078001E0078001E007FFFFE007FFFFE0
078001E0078001E0078001E0078001E0078001E0078001E0078001E0078001E0078001E0078001
E0078001E0078001E0078001E0078001E0078001E0FFFC3FFFFFFC3FFF>32
34 126 161 41 72 D[<FFFCFFFC07800780078007800780078007800780078007800780078007
8007800780078007800780078007800780078007800780078007800780078007800780FFFCFFFC
>14 34 126 161 20 I[<FF800001FF80FF800001FF8007800001F00005C00002F00005C00002
F00004E00004F00004E00004F00004E00004F00004700008F00004700008F00004380010F00004
380010F00004380010F000041C0020F000041C0020F000041C0020F000040E0040F000040E0040
F00004070080F00004070080F00004070080F00004038100F00004038100F00004038100F00004
01C200F0000401C200F0000400E400F0000400E400F0000400E400F00004007800F00004007800
F0001F003000F000FFE0301FFF80FFE0301FFF80>41 34 126 161 51 77
D[<FF8007FFFFC007FF07C000F805E0002004F0002004F0002004780020047C0020043C002004
1E0020041E0020040F002004078020040780200403C0200401E0200401E0200400F0200400F820
0400782004003C2004003C2004001E2004000F2004000F20040007A0040003E0040003E0040001
E0040001E0040000E01F000060FFE00060FFE00020>32 34 126 161 41
I[<000FF00000781E0000E0070003C003C0078001E00F0000F01E0000781E0000783C00003C3C
00003C7C00003E7800001E7800001EF800001FF800001FF800001FF800001FF800001FF800001F
F800001FF800001F7800001E7C00003E7C00003E3C00003C3E00007C1E0000781E0000780F0000
F0078001E003C003C000E0070000781E00000FF000>32 34 125 161 43
I[<FFFFF800FFFFFE0007801F00078007C0078003C0078001E0078001E0078001F0078001F007
8001F0078001F0078001F0078001E0078003E0078003C00780078007801F0007FFFC0007800000
078000000780000007800000078000000780000007800000078000000780000007800000078000
00078000000780000007800000FFFC0000FFFC0000>28 34 126 161 38
I[<7FFFFFFC7FFFFFFC7803C03C6003C00C4003C0044003C004C003C006C003C0068003C00280
03C0028003C0028003C0020003C0000003C0000003C0000003C0000003C0000003C0000003C000
0003C0000003C0000003C0000003C0000003C0000003C0000003C0000003C0000003C0000003C0
000003C0000003C0000003C00001FFFF8001FFFF80>31 34 126 161 40
84 D[<FFFC07FFFFFC07FF078000F8078000200780002007800020078000200780002007800020
078000200780002007800020078000200780002007800020078000200780002007800020078000
20078000200780002007800020078000200780002007800020078000200380004003C0004001C0
008000E0018000F00300003C0E00001FFC000007F000>32 34 126 161
41 I[<1FF000381C007C06007C07007C0380380380000380000380007F8007C3801E03803C0380
780380780380F00384F00384F00384F00784780B843C11C80FE0F0>22 21
126 148 28 97 D[<0E0000FE0000FE00001E00000E00000E00000E00000E00000E00000E0000
0E00000E00000E00000E00000E1F800E60E00E80300F00380E001C0E001E0E000E0E000F0E000F
0E000F0E000F0E000F0E000F0E000F0E000E0E001E0E001C0F00380C80700C60E0081F80>24
35 127 162 31 I[<01FE000707000C0F801C0F80380F80780700700000F00000F00000F00000
F00000F00000F00000F000007000007800403800401C00800C010007060001F800>18
21 126 148 24 I[<0000700007F00007F00000F0000070000070000070000070000070000070
00007000007000007000007001F8700706700E01701C00F0380070780070700070F00070F00070
F00070F00070F00070F00070F000707000707800703800701C00F00C017807067F01F87F>24
35 126 162 31 I[<01FC000707000C03801C01C03801C07800E07000E0F000E0FFFFE0F00000
F00000F00000F00000F000007000007800203800201C00400E008007030000FC00>19
21 127 148 24 I[<003E0000E30001C780038F80030F80070700070000070000070000070000
070000070000070000070000FFF800FFF800070000070000070000070000070000070000070000
0700000700000700000700000700000700000700000700000700000700007FF8007FF800>17
35 128 162 17 I[<01F078071D9C0E0E1C1C07001C07003C07803C07803C07803C07801C0700
1C07000E0E000F1C0019F0001000001000001800001C00001FFF000FFFE00FFFF03800F8600018
40001CC0000CC0000CC0000C6000186000183800700E01C001FE00>22 32
127 148 28 I[<0E000000FE000000FE0000001E0000000E0000000E0000000E0000000E000000
0E0000000E0000000E0000000E0000000E0000000E0000000E1F80000E60E0000E8070000F0038
000F0038000E0038000E0038000E0038000E0038000E0038000E0038000E0038000E0038000E00
38000E0038000E0038000E0038000E0038000E003800FFE3FF80FFE3FF80>25
35 127 162 31 I[<1C003E003E003E001C00000000000000000000000000000000000E00FE00
FE001E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E00FFC0FFC0>
10 34 127 161 16 I[<0E0000FE0000FE00001E00000E00000E00000E00000E00000E00000E00
000E00000E00000E00000E00000E03FC0E03FC0E01E00E01800E02000E04000E08000E10000E38
000EF8000F1C000E1E000E0E000E07000E07800E03C00E01C00E01E00E01F0FFE3FEFFE3FE>23
35 127 162 29 107 D[<0E00FE00FE001E000E000E000E000E000E000E000E000E000E000E00
0E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E00FF
E0FFE0>11 35 127 162 16 I[<0E1FC07F00FE60E18380FE807201C01F003C00E00F003C00E0
0E003800E00E003800E00E003800E00E003800E00E003800E00E003800E00E003800E00E003800
E00E003800E00E003800E00E003800E00E003800E00E003800E00E003800E0FFE3FF8FFEFFE3FF
8FFE>39 21 127 148 47 I[<0E1F8000FE60E000FE8070001F0038000F0038000E0038000E00
38000E0038000E0038000E0038000E0038000E0038000E0038000E0038000E0038000E0038000E
0038000E0038000E003800FFE3FF80FFE3FF80>25 21 127 148 31 I[<00FC000703800E01C0
1C00E0380070780078700038F0003CF0003CF0003CF0003CF0003CF0003CF0003C700038780078
3800701C00E00E01C007038000FC00>22 21 127 148 28 I[<0E1F80FE60E0FE80700F00380E
001C0E001E0E001E0E000F0E000F0E000F0E000F0E000F0E000F0E000F0E001E0E001E0E001C0F
00380E80700E60E00E1F800E00000E00000E00000E00000E00000E00000E00000E0000FFE000FF
E000>24 31 127 148 31 I[<01F8200704600E02601C01603801E07800E07800E0F000E0F000
E0F000E0F000E0F000E0F000E0F000E07000E07800E03801E01C01E00C02E0070CE001F0E00000
E00000E00000E00000E00000E00000E00000E00000E0000FFE000FFE>23
31 126 148 29 I[<0E1E00FE6300FE87801E87800F03000F00000E00000E00000E00000E0000
0E00000E00000E00000E00000E00000E00000E00000E00000E0000FFF000FFF000>17
21 127 148 22 I[<0FC4303C600CC00CC004C004E004F0007F803FF00FF800FC001E800E8006
C006C006C004E00CD81887E0>15 21 126 148 22 I[<02000200020002000200060006000600
0E001E003FF8FFF80E000E000E000E000E000E000E000E000E000E000E040E040E040E040E040E
040708030801F0>14 31 127 158 21 I[<0E003800FE03F800FE03F8001E0078000E0038000E
0038000E0038000E0038000E0038000E0038000E0038000E0038000E0038000E0038000E003800
0E0038000E0078000E0078000700BC0003833F8000FC3F80>25 21 127
148 31 I[<FFC1FEFFC1FE1E00700E00200E002007004007004003808003808003808001C10001
C10000E20000E20000E200007400007400003800003800003800001000>23
21 127 148 29 I[<FF8FF87F80FF8FF87F801E01C01E000E00C00C000E00E008000E01E00800
0701601000070170100007023030000382382000038238200001C418400001C41C400001C41C40
0000E80C800000E80E800000E80E80000070070000007007000000700700000020020000>33
21 127 148 40 I[<FF83FEFF83FE0F01E007008003810003830001C20000E400007800007000
003800003C00004E00008E000187000103800201C00601C01E00E0FF03FEFF03FE>23
21 127 148 29 I[<FFC1FEFFC1FE1E00700E00200E0020070040070040038080038080038080
01C10001C10000E20000E20000E200007400007400003800003800003800001000001000002000
002000002000F84000F84000F88000B980006300003E0000>23 31 127
148 29 I[<FFFFFF>24 1 128 140 28 123 D E /Fd 30 122 400 360
dfs[<000C0038007000E001C003C0038007800F000F001E001E003E003C003C007C007C007C00
7800F800F800F800F800F800F800F800F800F800F800F80078007C007C007C003C003C003E001E
001E000F000F000780038003C001C000E000700038000C>14 49 124 164
24 40 D[<C000700038001C000E000F000700078003C003C001E001E001F000F000F000F800F8
00F80078007C007C007C007C007C007C007C007C007C007C007C007800F800F800F800F000F001
F001E001E003C003C0078007000F000E001C0038007000C000>14 49 125
164 24 I[<3C007E00FF00FF00FF80FF807F803D800180018003000300070006000C001C003800
2000>9 18 124 135 18 44 D[<3C7EFFFFFFFF7E3C0000000000003C7EFFFFFFFF7E3C>8
22 124 149 18 58 D[<0001FF0040001FFFC1C0007F80F3C001FC001FC003F0000FC007E00007
C00FC00003C01FC00003C03F800001C03F800001C07F800000C07F000000C07F000000C0FF0000
0000FF00000000FF00000000FF00000000FF00000000FF00000000FF00000000FF000000007F00
0000007F000000C07F800000C03F800000C03F800001C01FC00001800FC000018007E000030003
F000060001FC001C00007F807800001FFFE0000001FF0000>34 34 125
161 46 67 D[<FFFFFF8000FFFFFFF80007F001FC0007F0007F0007F0003F8007F0000FC007F0
000FE007F00007E007F00007F007F00003F007F00003F807F00003F807F00003F807F00003FC07
F00003FC07F00003FC07F00003FC07F00003FC07F00003FC07F00003FC07F00003FC07F00003FC
07F00003F807F00003F807F00003F807F00007F007F00007F007F0000FE007F0000FC007F0001F
8007F0007F0007F001FE00FFFFFFF800FFFFFFC000>38 34 126 161 49
I[<FFFFFFFC00FFFFFFFC0007F000FC0007F0003E0007F0001E0007F0000E0007F000060007F0
00060007F000060007F00C030007F00C030007F00C030007F00C000007F01C000007F03C000007
FFFC000007FFFC000007F03C000007F01C000007F00C000007F00C000007F00C018007F00C0180
07F000018007F000030007F000030007F000030007F000070007F000070007F0000F0007F0001F
0007F000FE00FFFFFFFE00FFFFFFFE00>33 34 126 161 42 I[<0001FF0020001FFFE0E0007F
8079E001FC001FE003F80007E007E00003E00FC00001E01FC00001E03F800000E03F800000E07F
800000607F000000607F00000060FF00000000FF00000000FF00000000FF00000000FF00000000
FF00000000FF0007FFFEFF0007FFFE7F00000FE07F00000FE07F80000FE03F80000FE03F80000F
E01FC0000FE00FE0000FE007E0000FE003F8000FE001FC001FE0007F8073E0001FFFE1E00001FF
8060>39 34 125 161 50 71 D[<FFFFE0FFFFE003F80003F80003F80003F80003F80003F80003
F80003F80003F80003F80003F80003F80003F80003F80003F80003F80003F80003F80003F80003
F80003F80003F80003F80003F80003F80003F80003F80003F80003F80003F800FFFFE0FFFFE0>
19 34 127 161 24 73 D[<FFF000001FFEFFF800003FFE07F800003FC007F800003FC006FC00
006FC006FC00006FC0067E0000CFC0067E0000CFC0063F00018FC0063F00018FC0063F00018FC0
061F80030FC0061F80030FC0060FC0060FC0060FC0060FC00607E00C0FC00607E00C0FC00607E0
0C0FC00603F0180FC00603F0180FC00601F8300FC00601F8300FC00600FC600FC00600FC600FC0
0600FC600FC006007EC00FC006007EC00FC006003F800FC006003F800FC006001F000FC006001F
000FC006001F000FC0FFF00E01FFFEFFF00E01FFFE>47 34 125 161 60
77 D[<0007FE0000003FFFC00000FE07F00003F801FC0007F000FE000FE0007F001FC0003F801F
80001F803F80001FC03F80001FC07F00000FE07F00000FE07F00000FE0FF00000FF0FF00000FF0
FF00000FF0FF00000FF0FF00000FF0FF00000FF0FF00000FF0FF00000FF0FF00000FF07F00000F
E07F80001FE07F80001FE03F80001FC01FC0003F801FC0003F800FE0007F0007F000FE0003F801
FC0000FE07F000003FFFC0000007FE0000>36 34 125 161 48 79 D[<FFFFFF8000FFFFFFF000
07F003F80007F001FC0007F000FE0007F0007F0007F0007F0007F0007F8007F0007F8007F0007F
8007F0007F8007F0007F8007F0007F0007F0007F0007F000FE0007F001FC0007F003F80007FFFF
F00007FFFF800007F000000007F000000007F000000007F000000007F000000007F000000007F0
00000007F000000007F000000007F000000007F000000007F000000007F0000000FFFF800000FF
FF800000>33 34 126 161 43 I[<FFFFFF0000FFFFFFE00007F007F80007F001FC0007F000FE
0007F0007F0007F0007F8007F0007F8007F0007F8007F0007F8007F0007F8007F0007F8007F000
7F0007F000FE0007F001FC0007F007F80007FFFFE00007FFFF800007F00FE00007F007F00007F0
03F80007F001FC0007F001FC0007F001FC0007F001FC0007F001FE0007F001FE0007F001FE0007
F001FE0307F001FF0307F000FF0707F000FF8EFFFF803FFCFFFF800FF8>40
34 126 161 48 82 D[<01FE020007FFCE001F01FE003C007E003C001E0078000E0078000E00F8
000600F8000600FC000600FC000000FF000000FFF000007FFF80003FFFE0003FFFF8001FFFFC00
07FFFE0003FFFF00003FFF000001FF0000003F8000001F8000001F80C0000F80C0000F80C0000F
80E0000F00E0000F00F0001E00FC001C00FF807800E7FFF000807FC000>25
34 125 161 36 I[<FFFF801FFEFFFF801FFE07F00000C007F00000C007F00000C007F00000C0
07F00000C007F00000C007F00000C007F00000C007F00000C007F00000C007F00000C007F00000
C007F00000C007F00000C007F00000C007F00000C007F00000C007F00000C007F00000C007F000
00C007F00000C007F00000C007F00000C007F00001C003F000018003F800018001F800038000FC
000700007E000E00003F807C00000FFFF0000000FF8000>39 34 126 161
49 85 D[<FF800000FF8000001F8000001F8000001F8000001F8000001F8000001F8000001F80
00001F8000001F8000001F8000001F8000001F87F0001FBFFC001FF03E001FC01F001F800F801F
800FC01F8007C01F8007E01F8007E01F8007E01F8007E01F8007E01F8007E01F8007E01F8007C0
1F8007C01F800FC01F800F801FC01F001E707E001C3FFC00180FE000>27
35 126 162 36 98 D[<00FF8007FFE00F83F01F03F03E03F07E03F07C01E07C0000FC0000FC00
00FC0000FC0000FC0000FC00007C00007E00007E00003F00301F00600FC0E007FF8000FE00>20
22 126 149 28 I[<00FE0007FF800F83E01F01E03E00F07E00F07C00F8FC00F8FC0078FFFFF8
FFFFF8FC0000FC0000FC0000FC00007E00007E00183E00381F00300F80F003FFC000FF00>21
22 126 149 29 101 D[<001F8000FFE001F1F003E3F007E3F00FC3F00FC1E00FC0000FC0000F
C0000FC0000FC0000FC000FFFE00FFFE000FC0000FC0000FC0000FC0000FC0000FC0000FC0000F
C0000FC0000FC0000FC0000FC0000FC0000FC0000FC0000FC0000FC0000FC0007FFC007FFC00>
20 35 126 162 20 I[<00FE0F8003FF9FC00F83E3C01F01F3C01E00F0003E00F8003E00F8003E
00F8003E00F8003E00F8001E00F0001F01F0000F83E0000BFF800008FE00001800000018000000
1C0000001FFFE0001FFFFC000FFFFF0007FFFF001FFFFF807C001FC078000FC0F80007C0F80007
C0F80007C07C000F803E001F001F807E000FFFFC0001FFE000>26 33 127
149 32 I[<0E003F807F807F807F807F803F800E00000000000000000000000000FF80FF801F80
1F801F801F801F801F801F801F801F801F801F801F801F801F801F801F801F801F80FFF0FFF0>
12 36 126 163 17 105 D[<FF80FF801F801F801F801F801F801F801F801F801F801F801F801F
801F801F801F801F801F801F801F801F801F801F801F801F801F801F801F801F801F801F801F80
FFF0FFF0>12 35 126 162 17 108 D[<FF03F000FF0FFC001F187E001F203E001F403F001F40
3F001F803F001F803F001F803F001F803F001F803F001F803F001F803F001F803F001F803F001F
803F001F803F001F803F001F803F001F803F00FFF1FFE0FFF1FFE0>27 22
125 149 36 110 D[<00FF0007FFE00F81F01F00F83E007C7C003E7C003E7C003EFC003FFC003F
FC003FFC003FFC003FFC003FFC003F7C003E7E007E3E007C1F00F80F81F007FFE000FF00>24
22 126 149 32 I[<FF87F000FFBFFC001FF07E001FC01F001F800F801F800FC01F800FC01F80
07E01F8007E01F8007E01F8007E01F8007E01F8007E01F8007E01F8007C01F800FC01F800FC01F
801F801FC01F001FF07E001FBFFC001F8FE0001F8000001F8000001F8000001F8000001F800000
1F8000001F8000001F800000FFF00000FFF00000>27 32 126 149 36 I[<FF0F80FF1FE01F33
F01F63F01F43F01F43F01FC1E01F80001F80001F80001F80001F80001F80001F80001F80001F80
001F80001F80001F80001F8000FFF800FFF800>20 22 126 149 27 114
D[<07F9801FFF80380780700380F00180F00180F80000FF0000FFF8007FFE003FFF001FFF8007
FF80003FC0C007C0C003C0E003C0E003C0F00380FC0F00EFFE00C3F800>18
22 126 149 26 I[<00C00000C00000C00000C00001C00001C00003C00007C0000FC0001FC000
FFFF00FFFF000FC0000FC0000FC0000FC0000FC0000FC0000FC0000FC0000FC0000FC0000FC000
0FC1800FC1800FC1800FC1800FC18007C18007E30003FE0000FC00>17 32
127 159 24 I[<FF81FF00FF81FF001F803F001F803F001F803F001F803F001F803F001F803F00
1F803F001F803F001F803F001F803F001F803F001F803F001F803F001F803F001F803F001F807F
001F80FF000FC1BF0007FF3FE001FC3FE0>27 22 125 149 36 I[<FFF01FE0FFF01FE00FC007
000FC006000FE00E0007E00C0007F01C0003F0180003F8180001F8300001F8300000FC600000FC
6000007EC000007EC000007FC000003F8000003F8000001F0000001F0000000E0000000E000000
0C0000000C00000018000078180000FC380000FC300000FC60000069E000007F8000001F000000
>27 32 127 149 33 121 D E /Fe 2 61 438 432 dfs[<78FCFCFEFE7A020202020404040810
102040>7 18 123 133 17 59 D[<00000000E000000003E00000000FC00000003F00000000FC
00000003F00000000FC00000003F00000000FC00000003F00000000FC00000003F00000000FC00
000003F00000000FC00000003F00000000FC00000000F000000000FC000000003F000000000FC0
00000003F000000000FC000000003F000000000FC000000003F000000000FC000000003F000000
000FC000000003F000000000FC000000003F000000000FC000000003E000000000E0>35
35 123 159 47 I E /Ff 65 126 438 432 dfs[<00F0000003F8000003FC000007FC0000071E
00000F0E00000E0E00000E0E00000E0E00000E0E00000E0E00000E1E7FC00E3CFFC00E7CFFC007
787FC007F0380007F0380007E0380007C070000F8070001F8070003FC0E0007DC0E00078E0E000
78E1C000F071C000F07B8000F03B8000F01F0000F01F01C0F00E01C0781F81C0787FC3C03FFBFF
803FF1FF801FE0FF0007803C00>26 37 126 164 31 38 D[<000F001F003E007C00F801F003E0
07C00F800F001E001E003C003C003C00780078007800F000F000F000F000F000F000F000F000F0
00F000F0007800780078003C003C003C001E001E000F000F8007C003E001F000F8007C003F001F
000F>16 47 119 169 31 40 D[<7000F8007C003E001F000F8007C003E001F000F00078007800
3C003C003C001E001E001E000F000F000F000F000F000F000F000F000F000F000F001E001E001E
003C003C003C0078007800F001F003E007C00F801F003E007C00F8007000>16
47 123 169 31 I[<000E0000001F0000001F0000001F0000001F0000001F0000001F0000001F
0000001F0000001F0000001F00007FFFFF80FFFFFFC0FFFFFFC0FFFFFFC07FFFFF80001F000000
1F0000001F0000001F0000001F0000001F0000001F0000001F0000001F0000001F0000000E0000
>26 27 126 159 31 43 D[<1C003F007F007F807F803F801F8007800F800F001F007E00FC00F8
006000>9 15 117 134 31 I[<7FFFFEFFFFFFFFFFFFFFFFFF7FFFFE>24
5 125 148 31 I[<387CFEFEFE7C38>7 7 116 134 31 I[<00000E00001F00001F00003F0000
3E00007E00007C00007C0000FC0000F80001F80001F00003F00003E00007E00007C0000FC0000F
80000F80001F80001F00003F00003E00007E00007C0000FC0000F80001F80001F00001F00003F0
0003E00007E00007C0000FC0000F80001F80001F00003F00003E00003E00007E00007C0000FC00
00F80000F80000700000>24 47 125 169 31 I[<007E0001FF8003FFC007FFE00FC3F01F00F8
1E00783E007C3C003C7C003E78001E78001E78001EF0000FF0000FF0000FF0000FF0000FF0000F
F0000FF0000FF0000FF0000FF8001F78001E78001E78001E7C003E3C003C3E007C1F00F81F81F8
0FC3F007FFE003FFC001FF80007E00>24 37 125 164 31 I[<00700000700000F00000F00001
F00003F00007F0007FF000FFF000FEF000F8F00000F00000F00000F00000F00000F00000F00000
F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000
F00000F00000F0007FFFE0FFFFF0FFFFF07FFFE0>20 37 122 164 31 I[<00FE0003FFC00FFF
E01FFFF83E03FC7C007C78003EF0001EF0000FF8000FF8000F70000F00000F00000F00001E0000
1E00001E00003C00007C0000F80001F00003E00007C0000F80001F00003E00007C0001F80003F0
0007C0000F800F1F000F3E000F7FFFFFFFFFFFFFFFFF7FFFFF>24 37 125
164 31 I[<1C3E7F7F7F3E1C0000000000000000000000001C3E7E7F7F3F1F0F1F1E3E7CF8F060
>8 34 117 153 31 59 D[<00000E00001F00007F0000FF0003FE0007FC001FF0003FE000FF80
01FF0007FC000FF8003FE0007FC000FF0000FE0000FF00007FC0003FE0000FF80007FC0001FF00
00FF80003FE0001FF00007FC0003FE0000FF00007F00001F00000E>24 31
125 161 31 I[<7FFFFF80FFFFFFC0FFFFFFC0FFFFFFC07FFFFF80000000000000000000000000
00000000000000007FFFFF80FFFFFFC0FFFFFFC0FFFFFFC07FFFFF80>26
15 126 153 31 I[<700000F80000FE0000FF00007FC0003FE0000FF80007FC0001FF0000FF80
003FE0001FF00007FC0003FE0000FF00007F0000FF0003FE0007FC001FF0003FE000FF8001FF00
07FC000FF8003FE0007FC000FF0000FE0000F80000700000>24 31 125
161 31 I[<001E0000003F0000003F0000003F0000007380000073800000738000007380000073
800000F3C00000F3C00000F3C00000E1C00001E1E00001E1E00001E1E00001E1E00001E1E00003
C0F00003C0F00003C0F00003C0F00007C0F80007FFF80007FFF80007FFF80007FFF8000F003C00
0F003C000F003C000F003C000F003C001E001E00FFC0FFC0FFE1FFC0FFE1FFC0FFC0FFC0>26
37 126 164 31 65 D[<FFFFC000FFFFF000FFFFF800FFFFFC000F007E000F003E000F001F000F
000F000F000F000F000F000F000F000F000F000F001F000F001E000F003E000F00FC000FFFF800
0FFFE0000FFFF8000FFFFC000F003E000F001F000F000F000F000F800F0007800F0007800F0007
800F0007800F0007800F000F800F000F000F001F000F007E00FFFFFE00FFFFFC00FFFFF800FFFF
E000>25 37 126 164 31 I[<001F81C0007FE1C001FFFBC003FFFFC007F03FC00FC01FC01F80
0FC01F0007C03E0007C03C0003C07C0003C0780003C0780003C078000000F0000000F0000000F0
000000F0000000F0000000F0000000F0000000F0000000F00000007800000078000000780003C0
7C0003C03C0003C03E0003C01F0007801F8007800FC00F0007F03F0003FFFE0001FFFC00007FF0
00001FC000>26 37 126 164 31 I[<7FFF8000FFFFE000FFFFF8007FFFFC000F00FE000F003E
000F001F000F000F800F000F800F0007800F0007C00F0003C00F0003C00F0003E00F0001E00F00
01E00F0001E00F0001E00F0001E00F0001E00F0001E00F0001E00F0001E00F0003E00F0003C00F
0003C00F0003C00F0007C00F000F800F000F800F001F000F003E000F00FE007FFFFC00FFFFF800
FFFFF0007FFF8000>27 37 127 164 31 I[<FFFFFF80FFFFFF80FFFFFF80FFFFFF800F000780
0F0007800F0007800F0007800F0007800F0007800F0000000F0000000F0000000F03C0000F03C0
000F03C0000FFFC0000FFFC0000FFFC0000FFFC0000F03C0000F03C0000F03C0000F0000000F00
00000F0000000F0001E00F0001E00F0001E00F0001E00F0001E00F0001E00F0001E0FFFFFFE0FF
FFFFE0FFFFFFE0FFFFFFE0>27 37 126 164 31 I[<FFFFFFC0FFFFFFC0FFFFFFC0FFFFFFC00F
0003C00F0003C00F0003C00F0003C00F0003C00F0003C00F0000000F0000000F0000000F01E000
0F01E0000F01E0000FFFE0000FFFE0000FFFE0000FFFE0000F01E0000F01E0000F01E0000F0000
000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F000000FFF8
0000FFFC0000FFFC0000FFF80000>26 37 126 164 31 I[<003F070000FFC70001FFEF0007FF
FF0007E0FF000F807F001F003F001E001F003E001F003C000F007C000F0078000F0078000F00F8
000000F0000000F0000000F0000000F0000000F0000000F0000000F001FFC0F001FFE0F001FFE0
F801FFC078000F0078000F0078001F003C001F003E001F001E001F001F003F000F807F0007E0FF
0007FFFF0001FFEF0000FFCF00003F0F00>27 37 126 164 31 I[<7FE07FE0FFF0FFF0FFF0FF
F07FE07FE00F000F000F000F000F000F000F000F000F000F000F000F000F000F000F000F000F00
0F000F000F000F000F000F000F000FFFFF000FFFFF000FFFFF000FFFFF000F000F000F000F000F
000F000F000F000F000F000F000F000F000F000F000F000F000F000F000F000F000F000F000F00
0F000F007FE07FE0FFF0FFF0FFF0FFF07FE07FE0>28 37 127 164 31 I[<7FFFF8FFFFFCFFFF
FC7FFFF80078000078000078000078000078000078000078000078000078000078000078000078
000078000078000078000078000078000078000078000078000078000078000078000078000078
000078000078000078000078007FFFF8FFFFFCFFFFFC7FFFF8>22 37 124
164 31 I[<7FC07FC0FFE0FFC0FFE0FFC07FC07FC00E001C000E0038000E0078000E00F0000E00
E0000E01C0000E03C0000E0780000E0700000E0E00000E1E00000E3C00000E3E00000E7F00000E
F700000EE780000FC380000FC1C0000F81C0000F00E0000E00E0000E0070000E0070000E003800
0E0038000E001C000E001C000E000E000E000E007FC01FE0FFE03FE0FFE03FE07FC01FE0>27
37 127 164 31 75 D[<7FFC0000FFFE0000FFFE00007FFC000007800000078000000780000007
800000078000000780000007800000078000000780000007800000078000000780000007800000
078000000780000007800000078000000780000007800000078000000780000007800000078001
80078003C0078003C0078003C0078003C0078003C0078003C07FFFFFC0FFFFFFC0FFFFFFC07FFF
FFC0>26 37 126 164 31 I[<FE0007F0FF000FF0FF000FF0FF801FF01D801B801D801B801DC0
3B801DC03B801CC033801CE073801CE073801CE073801C6063801C70E3801C70E3801C30C3801C
39C3801C39C3801C39C3801C1983801C1983801C1F83801C0F03801C0F03801C0603801C000380
1C0003801C0003801C0003801C0003801C0003801C0003801C000380FF801FF0FF801FF0FF801F
F0FF801FF0>28 37 127 164 31 I[<7F00FF80FF81FFC0FF81FFC07FC0FF800EC01C000EC01C
000EE01C000E601C000E601C000E701C000E701C000E301C000E381C000E381C000E381C000E18
1C000E1C1C000E1C1C000E0C1C000E0E1C000E0E1C000E061C000E071C000E071C000E071C000E
031C000E039C000E039C000E019C000E019C000E01DC000E00DC000E00DC007FC0FC00FFE07C00
FFE07C007FC03C00>26 37 126 164 31 I[<03FFC01FFFF83FFFFC3FFFFC7E007E7C003E7800
1E78001EF8001FF0000FF0000FF0000FF0000FF0000FF0000FF0000FF0000FF0000FF0000FF000
0FF0000FF0000FF0000FF0000FF0000FF0000FF0000FF0000FF8001F78001E78001E7C003E7F00
FE3FFFFC3FFFFC1FFFF803FFC0>24 37 125 164 31 I[<FFFFC000FFFFF000FFFFF800FFFFFC
000F007E000F003F000F001F000F000F000F0007800F0007800F0007800F0007800F0007800F00
07800F000F000F001F000F003F000F007E000FFFFC000FFFF8000FFFF0000FFFC0000F0000000F
0000000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F000000
FFF00000FFF00000FFF00000FFF00000>25 37 126 164 31 I[<7FFF0000FFFFE000FFFFF000
7FFFF8000F01FC000F007E000F001E000F001F000F000F000F000F000F000F000F000F000F001F
000F001E000F007E000F01FC000FFFF8000FFFF0000FFFE0000FFFF0000F01F8000F0078000F00
7C000F003C000F003C000F003C000F003C000F003C000F003C000F003C000F003C780F003C780F
003E787FE01FF0FFF01FF0FFF00FE07FE003C0>29 37 127 164 31 82
000FC000000FFFFFC0001FFFFFC000>39 41 121 168 44 I[<7FFFC00FFF80FFFFC01FFF8003
F00001F80003E00000E00003E00000400003E00000400003E00000400003E00000400007C00000
800007C00000800007C00000800007C00000800007C00000800007C0000080000F80000100000F
80000100000F80000100000F80000100000F80000100000F80000100001F00000200001F000002
00001F00000200001F00000200001F00000200001F00000200003E00000400003E00000400003E
00000400003E00000400003E00000800003E00000800003E00001800001E00001000001E000020
00000F00006000000F0000C0000007800180000003C00700000001F01C000000007FF800000000
1FC0000000>41 42 120 168 46 I[<FFFF0003FFC0FFFE0003FFC00FE00000FC0007C0000070
0007C00000200007E00000400003E00000400003E00000800003E00000800003E00001000001F0
0001000001F00002000001F00006000001F00004000001F80008000000F80008000000F8001000
0000F80010000000F80020000000FC00200000007C00400000007C00400000007C00800000007C
01800000003E01000000003E02000000003E02000000003E04000000003F04000000001F080000
00001F08000000001F10000000001F10000000001FA0000000000FC0000000000FC0000000000F
80000000000F80000000000700000000000700000000000600000000000600000000>42
42 120 168 46 I E /Fi 3 21 438 432 dfs[<FFFFFFFFE0FFFFFFFFE0FFFFFFFFE0>35
3 123 143 47 0 D[<007C0001FF0007FFC00FFFE01FFFF03FFFF83FFFF87FFFFC7FFFFCFFFFFE
FFFFFEFFFFFEFFFFFEFFFFFEFFFFFEFFFFFE7FFFFC7FFFFC3FFFF83FFFF81FFFF00FFFE007FFC0
01FF00007C00>23 25 125 154 30 15 D[<000000006000000001E000000007E00000001F8000
00007E00000001F800000007E00000001F800000007E00000001F800000007E00000001F800000
007E00000001F800000007E00000001F800000007E00000000F800000000F8000000007E000000
001F8000000007E000000001F8000000007E000000001F8000000007E000000001F8000000007E
000000001F8000000007E000000001F8000000007E000000001F8000000007C000000001E00000
0000E0000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000007FFFFFFFC0FFFFFFFFE0FFFFFFFFE0>35 48 123
165 47 20 D E /Fj 35 123 576 432 dfs[<1C003E007F00FF80FF80FF807F003E001C00>9
9 123 136 25 46 D[<000E00001E00007E0007FE00FFFE00FFFE00F8FE0000FE0000FE0000FE
0000FE0000FE0000FE0000FE0000FE0000FE0000FE0000FE0000FE0000FE0000FE0000FE0000FE
0000FE0000FE0000FE0000FE0000FE0000FE0000FE0000FE0000FE0000FE0000FE0000FE0000FE
007FFFFE7FFFFE7FFFFE>23 39 123 166 45 49 D[<00FF800003FFF0000FFFFC001F03FE0038
00FF007C007F80FE003FC0FF003FC0FF003FE0FF001FE0FF001FE07E001FE03C003FE000003FE0
00003FC000003FC000007F8000007F000000FE000000FC000001F8000003F0000003E000000780
00000F0000001E0000003C00E0007000E000E000E001C001C0038001C0070001C00FFFFFC01FFF
FFC03FFFFFC07FFFFFC0FFFFFF80FFFFFF80FFFFFF80>27 39 125 166
45 I[<007F800003FFF00007FFFC000F81FE001F00FF003F80FF003F807F803F807F803F807F80
1F807F800F007F800000FF000000FF000000FE000001FC000001F8000007F00000FFC00000FFF0
000001FC0000007E0000007F0000007F8000003FC000003FC000003FE000003FE03C003FE07E00
3FE0FF003FE0FF003FE0FF003FC0FF007FC07E007F807C007F003F01FE001FFFFC0007FFF00000
FF8000>27 39 125 166 45 I[<00000E0000001E0000003E0000007E000000FE000000FE0000
01FE000003FE0000077E00000E7E00000E7E00001C7E0000387E0000707E0000E07E0000E07E00
01C07E0003807E0007007E000E007E000E007E001C007E0038007E0070007E00E0007E00FFFFFF
F8FFFFFFF8FFFFFFF80000FE000000FE000000FE000000FE000000FE000000FE000000FE000000
FE00007FFFF8007FFFF8007FFFF8>29 39 126 166 45 I[<0C0003000F803F000FFFFE000FFF
FC000FFFF8000FFFF0000FFFE0000FFFC0000FFE00000E0000000E0000000E0000000E0000000E
0000000E0000000E7FC0000FFFF8000F80FC000E003E000C003F0000001F8000001FC000001FC0
00001FE000001FE018001FE07C001FE0FE001FE0FE001FE0FE001FE0FE001FC0FC001FC078003F
8078003F803C007F001F01FE000FFFF80003FFF00000FF8000>27 39 125
166 45 I[<0007F000003FFC0000FFFE0001FC0F0003F01F8007E03F800FC03F801FC03F801F80
3F803F801F003F8000007F0000007F0000007F000000FF000000FF0FC000FF3FF800FF707C00FF
C03E00FFC03F00FF801F80FF801FC0FF001FC0FF001FE0FF001FE0FF001FE07F001FE07F001FE0
7F001FE07F001FE03F001FE03F001FC01F801FC01F803F800FC03F0007E07E0003FFFC0000FFF0
00003FC000>27 39 125 166 45 I[<380000003E0000003FFFFFF03FFFFFF03FFFFFF07FFFFF
E07FFFFFC07FFFFF807FFFFF0070000E0070000E0070001C00E0003800E0007000E000E0000000
E0000001C000000380000007800000078000000F0000000F0000001F0000001F0000003F000000
3E0000003E0000007E0000007E0000007E0000007E000000FE000000FE000000FE000000FE0000
00FE000000FE000000FE000000FE0000007C000000380000>28 41 124
168 45 I[<00003FF001800003FFFE0380000FFFFF8780003FF007DF8000FF8001FF8001FE0000
7F8003FC00003F8007F000001F800FF000000F801FE0000007801FE0000007803FC0000007803F
C0000003807FC0000003807F80000003807F8000000000FF8000000000FF8000000000FF800000
0000FF8000000000FF8000000000FF8000000000FF8000000000FF8000000000FF80000000007F
80000000007F80000000007FC0000003803FC0000003803FC0000003801FE0000003801FE00000
07000FF00000070007F000000E0003FC00001E0001FE00003C0000FF8000F800003FF007E00000
0FFFFFC0000003FFFF000000003FF80000>41 41 124 168 67 67 D[<FFFFFFF80000FFFFFFFF
8000FFFFFFFFE00003FC001FF80003FC0007FC0003FC0001FE0003FC0000FF0003FC00007F8003
FC00003FC003FC00001FC003FC00001FE003FC00001FE003FC00000FF003FC00000FF003FC0000
0FF003FC00000FF003FC00000FF803FC00000FF803FC00000FF803FC00000FF803FC00000FF803
FC00000FF803FC00000FF803FC00000FF803FC00000FF803FC00000FF803FC00000FF003FC0000
0FF003FC00000FF003FC00001FE003FC00001FE003FC00001FC003FC00003FC003FC00007F8003
FC00007F0003FC0001FE0003FC0003FC0003FC001FF800FFFFFFFFE000FFFFFFFF8000FFFFFFFC
0000>45 41 125 168 71 I[<FFFFFFFFC0FFFFFFFFC0FFFFFFFFC003FC003FC003FC000FE003
FC0003E003FC0001E003FC0001E003FC0000E003FC0000E003FC0000E003FC0000F003FC038070
03FC03807003FC03807003FC03800003FC07800003FC07800003FC1F800003FFFF800003FFFF80
0003FFFF800003FC1F800003FC07800003FC07800003FC03800003FC03800003FC03800003FC03
800003FC00000003FC00000003FC00000003FC00000003FC00000003FC00000003FC00000003FC
00000003FC000000FFFFFC0000FFFFFC0000FFFFFC0000>36 41 125 168
57 70 D[<00007FE003000003FFFC0700001FFFFF0F00003FF00FFF0000FF8001FF0001FE0000
FF0003F800003F0007F000003F000FF000001F001FE000000F001FE000000F003FC000000F003F
C0000007007FC0000007007F80000007007F8000000000FF8000000000FF8000000000FF800000
0000FF8000000000FF8000000000FF8000000000FF8000000000FF8000000000FF8001FFFFF87F
8001FFFFF87F8001FFFFF87FC00000FF003FC00000FF003FC00000FF001FE00000FF001FE00000
FF000FF00000FF0007F00000FF0003F80000FF0001FE0000FF0000FF8001FF00003FF007BF0000
1FFFFF1F000003FFFE0F0000007FF00300>45 41 124 168 72 I[<FFFFFCFFFFFCFFFFFC01FE
0001FE0001FE0001FE0001FE0001FE0001FE0001FE0001FE0001FE0001FE0001FE0001FE0001FE
0001FE0001FE0001FE0001FE0001FE0001FE0001FE0001FE0001FE0001FE0001FE0001FE0001FE
0001FE0001FE0001FE0001FE0001FE0001FE0001FE0001FE00FFFFFCFFFFFCFFFFFC>22
41 126 168 35 73 D[<0000FFE000000007FFFC0000003FC07F8000007F001FC00001FC0007F0
0003F80003F80007F00001FC000FF00001FE001FE00000FF001FE00000FF003FC000007F803FC0
00007F807FC000007FC07F8000003FC07F8000003FC07F8000003FC0FF8000003FE0FF8000003F
E0FF8000003FE0FF8000003FE0FF8000003FE0FF8000003FE0FF8000003FE0FF8000003FE0FF80
00003FE0FF8000003FE07F8000003FC07FC000007FC07FC000007FC03FC000007F803FC000007F
801FE00000FF001FE00000FF000FF00001FE0007F00001FC0003F80003F80001FC0007F00000FF
001FE000003FC07F8000000FFFFE00000000FFE00000>43 41 124 168
69 79 D[<007F806003FFF0E007FFF9E00F807FE01F001FE03E0007E07C0003E07C0001E0FC00
01E0FC0001E0FC0000E0FE0000E0FE0000E0FF000000FFC000007FFE00007FFFE0003FFFFC001F
FFFE000FFFFF8007FFFFC003FFFFE000FFFFE00007FFF000007FF000000FF8000007F8000003F8
600001F8E00001F8E00001F8E00001F8F00001F0F00001F0F80003F0FC0003E0FF0007C0FFE01F
80F3FFFF00E0FFFE00C01FF000>29 41 124 168 51 83 D[<01FF800007FFF0000F81F8001FC0
7E001FC07E001FC03F000F803F8007003F8000003F8000003F8000003F80000FFF8000FFFF8007
FC3F800FE03F803F803F803F003F807F003F80FE003F80FE003F80FE003F80FE003F807E007F80
7F00DF803F839FFC0FFF0FFC01FC03FC>30 27 126 154 44 97 D[<FFE0000000FFE0000000FF
E00000000FE00000000FE00000000FE00000000FE00000000FE00000000FE00000000FE0000000
0FE00000000FE00000000FE00000000FE00000000FE00000000FE1FE00000FE7FF80000FFE07E0
000FF801F0000FF000F8000FE000FC000FE000FE000FE0007F000FE0007F000FE0007F000FE000
7F800FE0007F800FE0007F800FE0007F800FE0007F800FE0007F800FE0007F800FE0007F000FE0
007F000FE0007F000FE000FE000FE000FC000FF001F8000FF803F0000F9E07E0000F07FF80000E
01FC0000>33 42 126 169 51 I[<001FF80000FFFE0003F01F0007E03F800FC03F801F803F80
3F801F007F800E007F0000007F000000FF000000FF000000FF000000FF000000FF000000FF0000
00FF0000007F0000007F0000007F8000003F8001C01F8001C00FC0038007E0070003F01E0000FF
FC00001FE000>26 27 126 154 41 I[<00003FF80000003FF80000003FF800000003F8000000
03F800000003F800000003F800000003F800000003F800000003F800000003F800000003F80000
0003F800000003F800000003F800001FE3F80000FFFBF80003F03FF80007E00FF8000FC007F800
1F8003F8003F8003F8007F0003F8007F0003F8007F0003F800FF0003F800FF0003F800FF0003F8
00FF0003F800FF0003F800FF0003F800FF0003F8007F0003F8007F0003F8007F0003F8003F8003
F8001F8003F8000F8007F80007C00FF80003F03BFF8000FFF3FF80003FC3FF80>33
42 126 169 51 I[<003FE00001FFF80003F07E0007C01F000F801F801F800F803F800FC07F00
0FC07F0007C07F0007E0FF0007E0FF0007E0FFFFFFE0FFFFFFE0FF000000FF000000FF0000007F
0000007F0000007F0000003F8000E01F8000E00FC001C007E0038003F81F0000FFFE00001FF000
>27 27 126 154 43 I[<0007F0003FFC00FE3E01F87F03F87F03F07F07F07F07F03E07F00007
F00007F00007F00007F00007F00007F000FFFFC0FFFFC0FFFFC007F00007F00007F00007F00007
F00007F00007F00007F00007F00007F00007F00007F00007F00007F00007F00007F00007F00007
F00007F00007F00007F0007FFF807FFF807FFF80>24 42 126 169 28 I[<FFE0000000FFE000
0000FFE00000000FE00000000FE00000000FE00000000FE00000000FE00000000FE00000000FE0
0000000FE00000000FE00000000FE00000000FE00000000FE00000000FE07F00000FE1FFC0000F
E787E0000FEE03F0000FF803F0000FF803F8000FF003F8000FF003F8000FE003F8000FE003F800
0FE003F8000FE003F8000FE003F8000FE003F8000FE003F8000FE003F8000FE003F8000FE003F8
000FE003F8000FE003F8000FE003F8000FE003F8000FE003F8000FE003F800FFFE3FFF80FFFE3F
FF80FFFE3FFF80>33 42 125 169 51 104 D[<07000FC01FE03FE03FE03FE01FE00FC0070000
00000000000000000000000000FFE0FFE0FFE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE0
0FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE0FFFEFFFEFFFE>15
43 125 170 27 I[<FFE0FFE0FFE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE0
0FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00F
E00FE00FE00FE00FE0FFFEFFFEFFFE>15 42 125 169 27 108 D[<FFC07F800FF000FFC1FFE0
3FFC00FFC383F0707E000FC603F8C07F000FCC01F9803F000FD801FF003F800FF001FE003F800F
F001FE003F800FE001FC003F800FE001FC003F800FE001FC003F800FE001FC003F800FE001FC00
3F800FE001FC003F800FE001FC003F800FE001FC003F800FE001FC003F800FE001FC003F800FE0
01FC003F800FE001FC003F800FE001FC003F800FE001FC003F800FE001FC003F800FE001FC003F
80FFFE1FFFC3FFF8FFFE1FFFC3FFF8FFFE1FFFC3FFF8>53 27 125 154
80 I[<FFC07F0000FFC1FFC000FFC787E0000FCE03F0000FD803F0000FD803F8000FF003F8000F
F003F8000FE003F8000FE003F8000FE003F8000FE003F8000FE003F8000FE003F8000FE003F800
0FE003F8000FE003F8000FE003F8000FE003F8000FE003F8000FE003F8000FE003F8000FE003F8
000FE003F800FFFE3FFF80FFFE3FFF80FFFE3FFF80>33 27 125 154 51
I[<003FE00001FFFC0003F07E000FC01F801F800FC03F800FE03F0007E07F0007F07F0007F07F
0007F0FF0007F8FF0007F8FF0007F8FF0007F8FF0007F8FF0007F8FF0007F8FF0007F87F0007F0
7F0007F03F800FE03F800FE01F800FC00FC01F8007F07F0001FFFC00003FE000>29
27 126 154 45 I[<FFE1FE0000FFE7FF8000FFFE07E0000FF803F0000FF001F8000FE000FC00
0FE000FE000FE000FF000FE0007F000FE0007F000FE0007F800FE0007F800FE0007F800FE0007F
800FE0007F800FE0007F800FE0007F800FE0007F000FE000FF000FE000FF000FE000FE000FE001
FC000FF001F8000FF803F0000FFE0FE0000FE7FF80000FE1FC00000FE00000000FE00000000FE0
0000000FE00000000FE00000000FE00000000FE00000000FE00000000FE0000000FFFE000000FF
FE000000FFFE000000>33 39 126 154 51 I[<FFC1F0FFC7FCFFCE3E0FD87F0FD87F0FF07F0F
F03E0FF01C0FE0000FE0000FE0000FE0000FE0000FE0000FE0000FE0000FE0000FE0000FE0000F
E0000FE0000FE0000FE0000FE000FFFF00FFFF00FFFF00>24 27 126 154
37 114 D[<03FE300FFFF01E03F03800F0700070F00070F00070F80070FC0000FFE0007FFE007F
FF803FFFE01FFFF007FFF800FFF80003FC0000FC60007CE0003CF0003CF00038F80038FC0070FF
01E0F7FFC0C1FF00>22 27 126 154 36 I[<00700000700000700000700000F00000F00000F0
0001F00003F00003F00007F0001FFFF0FFFFF0FFFFF007F00007F00007F00007F00007F00007F0
0007F00007F00007F00007F00007F00007F00007F00007F03807F03807F03807F03807F03807F0
3803F03803F87001F86000FFC0001F80>21 38 127 165 36 I[<FFE03FF800FFE03FF800FFE0
3FF8000FE003F8000FE003F8000FE003F8000FE003F8000FE003F8000FE003F8000FE003F8000F
E003F8000FE003F8000FE003F8000FE003F8000FE003F8000FE003F8000FE003F8000FE003F800
0FE003F8000FE003F8000FE003F8000FE007F80007E007F80007E00FF80003F03BFF8001FFF3FF
80003FC3FF80>33 27 125 154 51 I[<FFFE03FF80FFFE03FF80FFFE03FF8007F000700007F0
00700007F800F00003F800E00003FC01E00001FC01C00001FC01C00000FE03800000FE03800000
7F070000007F070000007F8F0000003F8E0000003FDE0000001FDC0000001FDC0000000FF80000
000FF80000000FF800000007F000000007F000000003E000000003E000000001C00000>33
27 127 154 48 I[<FFFE03FF80FFFE03FF80FFFE03FF8007F000700007F000700007F800F000
03F800E00003FC01E00001FC01C00001FC01C00000FE03800000FE038000007F070000007F0700
00007F8F0000003F8E0000003FDE0000001FDC0000001FDC0000000FF80000000FF80000000FF8
00000007F000000007F000000003E000000003E000000001C000000001C0000000038000000003
80000038078000007C07000000FE0F000000FE0E000000FE1E000000FE3C0000007C780000003F
E00000000FC0000000>33 39 127 154 48 121 D[<3FFFFF803FFFFF803F007F003C00FE0038
01FE007803FC007803F8007007F800700FF000700FE000001FC000003FC000007F8000007F0000
00FF000001FE038001FC038003F8038007F803800FF007800FE007801FE007003FC00F003F801F
007F007F00FFFFFF00FFFFFF00>25 27 126 154 41 I E /Fk 70 124
438 432 dfs[<0007F81F80003C067060007003E0F000E007C1F001C00FC1F003C00F80E00780
078040078007800007800780000780078000078007800007800780000780078000078007800007
800780000780078000FFFFFFFF00FFFFFFFF000780078000078007800007800780000780078000
078007800007800780000780078000078007800007800780000780078000078007800007800780
000780078000078007800007800780000780078000078007800007800780000780078000078007
80000780078000078007C000FFF87FFE00FFF87FFE00>36 42 127 169
35 11 D[<0007F800003C06000070010000E0070001C00F8003C00F8007800F80078007000780
000007800000078000000780000007800000078000000780000007800000FFFFFF80FFFFFF8007
800F80078007800780078007800780078007800780078007800780078007800780078007800780
078007800780078007800780078007800780078007800780078007800780078007800780078007
800780078007800780FFF87FFCFFF87FFC>30 42 127 169 33 I[<0007F980003C0780007007
8000E00F8001C00F8003C00F800780078007800780078007800780078007800780078007800780
0780078007800780078007800780FFFFFF80FFFFFF800780078007800780078007800780078007
800780078007800780078007800780078007800780078007800780078007800780078007800780
0780078007800780078007800780078007800780078007800780078007800780FFFCFFFCFFFCFF
FC>30 42 127 169 33 I[<0003F803FC00001E061E030000700138008000E003F0038001C007
E007C003C007E007C0078007C007C0078003C00380078003C00000078003C00000078003C00000
078003C00000078003C00000078003C00000078003C00000078003C00000FFFFFFFFFFC0FFFFFF
FFFFC0078003C007C0078003C003C0078003C003C0078003C003C0078003C003C0078003C003C0
078003C003C0078003C003C0078003C003C0078003C003C0078003C003C0078003C003C0078003
C003C0078003C003C0078003C003C0078003C003C0078003C003C0078003C003C0078003C003C0
078003C003C0078003C003C0078003C003C0FFFC7FFE7FFEFFFC7FFE7FFE>47
42 127 169 51 I[<78FCFCFEFE7A020202020404040810102040>7 18
123 169 17 39 D[<0004000800100020004000C0018003000300060006000E000C001C001800
38003800380030007000700070007000F000F000E000E000E000E000E000E000E000E000E000E0
00E000F000F0007000700070007000300038003800380018001C000C000E000600060003000300
018000C000400020001000080004>14 61 123 172 23 I[<800040002000100008000C000600
030003000180018001C000C000E0006000700070007000300038003800380038003C003C001C00
1C001C001C001C001C001C001C001C001C001C003C003C00380038003800380030007000700070
006000E000C001C0018001800300030006000C0008001000200040008000>14
61 125 172 23 I[<000038000000003800000000380000000038000000003800000000380000
000038000000003800000000380000000038000000003800000000380000000038000000003800
0000003800000000380000000038000000003800000000380000FFFFFFFFFEFFFFFFFFFEFFFFFF
FFFE00003800000000380000000038000000003800000000380000000038000000003800000000
380000000038000000003800000000380000000038000000003800000000380000000038000000
00380000000038000000003800000000380000>39 41 125 162 47 43
D[<78FCFCFEFE7A020202020404040810102040>7 18 123 133 17 I[<FFFEFFFEFFFE>15
3 127 142 20 I[<78FCFCFCFC78>6 6 123 133 17 I[<00000600000E00000E00001C00001C
00001C0000380000380000380000700000700000E00000E00000E00001C00001C00001C0000380
000380000380000700000700000700000E00000E00000E00001C00001C00001C00003800003800
00700000700000700000E00000E00000E00001C00001C00001C000038000038000038000070000
0700000700000E00000E00000E00001C00001C0000380000380000380000700000700000700000
E00000E00000C00000>23 60 125 172 30 I[<007F000001C1C0000780F0000F0078000E0038
001C001C003C001E003C001E003C001E0078000F0078000F0078000F0078000F00F8000F80F800
0F80F8000F80F8000F80F8000F80F8000F80F8000F80F8000F80F8000F80F8000F80F8000F80F8
000F80F8000F80F8000F80F8000F8078000F0078000F0078000F0078000F003C001E003C001E00
3C001E001C001C000E0038000F0078000780F00001C1C000007F0000>25
41 126 167 30 I[<00100000700001F0000FF000FEF000F0F00000F00000F00000F00000F000
00F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F000
00F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F000
00F00001F8007FFFE07FFFE0>19 40 123 167 30 I[<00FE0007FF800E07E01803F02001F820
00F840007C40007CF8007EFC007EFC003EFC003EFC003E78007E00007E00007C00007C0000F800
00F80001F00001E00003C0000780000700000E00001C0000380000700000600000C00001800203
00020600040C000418000410000C3FFFFC7FFFF8FFFFF8FFFFF8>23 40
125 167 30 I[<007F000003FFC0000701F0000C00F80010007C001C007C003E007E003E003E00
3E003E001E003E000C007E0000007C0000007C00000078000000F0000000E0000001C000000700
0000FF00000001E0000000F0000000780000003C0000003E0000001F0000001F0000001F800000
1F8030001F8078001F80FC001F80FC001F80FC001F00F8001F0040003F0040003E0030007C0018
00F8000F01F00003FFC000007F0000>25 41 126 167 30 I[<00006000000060000000E00000
01E0000001E0000003E0000003E0000005E0000009E0000009E0000011E0000021E0000021E000
0041E0000081E0000081E0000101E0000201E0000201E0000401E0000801E0000801E0001001E0
003001E0002001E0004001E000C001E000FFFFFF80FFFFFF800001E0000001E0000001E0000001
E0000001E0000001E0000001E0000001E0000003F000007FFF80007FFF80>25
40 126 167 30 I[<1800181F00F01FFFE01FFFC01FFF801FFF0011F800100000100000100000
100000100000100000100000100000107E001183801600C01800E010007000007800003C00003C
00003C00003E00003E00003E70003EF8003EF8003EF8003EF8003C80003C40007C400078200078
3000F01801E00E07C007FF0001FC00>23 41 125 167 30 I[<000FE000003FF80000F81C0001
E00C0003801E0007803E000F003E000E001C001E0000001C0000003C0000003C0000007C000000
7800000078000000F83F0000F840E000F9807000F9003800FA001C00FC001E00FC001E00FC000F
00F8000F00F8000F80F8000F80F8000F80F8000F8078000F8078000F8078000F807C000F803C00
0F003C000F001C001E001E001E000E003C000700780003C0F00001FFC000007F0000>25
41 126 167 30 I[<20000000380000003FFFFF803FFFFF803FFFFF007FFFFF00600002004000
040040000400400008008000100080002000000020000000400000008000000080000001000000
030000000200000006000000060000000C0000000C0000001C0000001C0000001C000000380000
00380000003800000078000000780000007800000078000000F8000000F8000000F8000000F800
0000F8000000F8000000F8000000F8000000700000>25 42 125 168 30
I[<007F000001FFC0000381F000060078000C003C001C001C0018000E0038000E0038000E0038
000E003C000E003C000E003E001C001F8018001FC038000FF0600007F8C00003FF800001FF0000
007FC00000FFE000030FF8000603FC001C01FE0038007E0030003F0070000F0070000780E00007
80E0000380E0000380E0000380E0000380F0000300700007007800060038000C001E0038000F80
F00003FFE000007F0000>25 41 126 167 30 I[<007F000001FFC00007C1E0000F0070001E00
38001C003C003C001C0078001E0078001E00F8000F00F8000F00F8000F00F8000F00F8000F80F8
000F80F8000F80F8000F8078000F8078001F803C001F803C001F801C002F800E004F800700CF80
03810F80007E0F8000000F0000000F0000000F0000001E0000001E0000001E0000003C001C003C
003E0078003E0070003C00E0001801C0001C0780000FFE000003F80000>25
41 126 167 30 I[<78FCFCFCFC78000000000000000000000000000078FCFCFCFC78>6
26 123 153 17 I[<78FCFCFCFC78000000000000000000000000000070F8FCFCFC7C04040404
0808081010202040>6 38 123 153 17 I[<FFFFFFFFFEFFFFFFFFFE7FFFFFFFFE000000000000
000000000000000000000000000000000000000000000000000000000000000000000000000000
7FFFFFFFFEFFFFFFFFFEFFFFFFFFFE>39 15 125 149 47 61 D[<000018000000001800000000
18000000003C000000003C000000003C000000007E000000007E00000000FF000000009F000000
009F000000011F800000010F800000010F8000000207C000000207C000000207C000000403E000
000403E000000403E000000801F000000801F000001801F800001000F800001000F800002000FC
000020007C00003FFFFC00007FFFFE000040003E000040003E000080001F000080001F00008000
1F000100000F800100000F800100000F8002000007C007000007C01F80000FE0FFF000FFFFFFF0
00FFFF>40 42 126 169 46 65 D[<FFFFFF8000FFFFFFF00007E000FC0003E0007E0003E0003F
0003E0001F8003E0000F8003E0000F8003E0000FC003E0000FC003E0000FC003E0000FC003E000
0FC003E0000F8003E0001F8003E0001F0003E0003E0003E0007C0003E001F80003FFFFE00003E0
00F80003E0003E0003E0001F0003E0000F8003E00007C003E00007E003E00003E003E00003F003
E00003F003E00003F003E00003F003E00003F003E00003F003E00007E003E00007E003E0000FC0
03E0001F8003E0003F0007E000FE00FFFFFFF800FFFFFFE000>36 41 126
168 43 I[<0000FF00100007FFE030001FC07830003E000C7000F80006F001F00003F003E00001
F007C00000F00F800000700F800000701F000000303F000000303E000000303E000000107E0000
00107E000000107C00000000FC00000000FC00000000FC00000000FC00000000FC00000000FC00
000000FC00000000FC00000000FC000000007C000000007E000000007E000000103E000000103E
000000103F000000101F000000200F800000200F8000006007C000004003E000008001F0000180
00F8000300003E000E00001FC038000007FFE0000000FF8000>36 43 125
169 44 I[<FFFFFFFF80FFFFFFFF8007E0001F8003E000078003E00001C003E00000C003E00000
C003E000004003E000004003E000004003E000004003E000002003E001002003E001002003E001
000003E001000003E003000003E003000003E00F000003FFFF000003FFFF000003E00F000003E0
03000003E003000003E001000003E001001003E001001003E001001003E000001003E000002003
E000002003E000002003E000002003E000006003E000006003E00000E003E00001E003E00003C0
07E0001FC0FFFFFFFFC0FFFFFFFFC0>36 41 126 168 42 69 D[<FFFFFFFF00FFFFFFFF0007E0
003F0003E000070003E000038003E000018003E000018003E000008003E000008003E000008003
E000008003E000004003E002004003E002004003E002000003E002000003E002000003E0060000
03E00E000003FFFE000003FFFE000003E00E000003E006000003E002000003E002000003E00200
0003E002000003E002000003E000000003E000000003E000000003E000000003E000000003E000
000003E000000003E000000003E000000003E000000007F0000000FFFFE00000FFFFE00000>34
41 126 168 40 I[<0000FF00100007FFE030001FC07830003E000C7000F80006F001F00003F0
03E00001F007C00000F00F800000700F800000701F000000303F000000303E000000303E000000
107E000000107E000000107C00000000FC00000000FC00000000FC00000000FC00000000FC0000
0000FC00000000FC00000000FC00000000FC0000FFFF7C0000FFFF7E000003F07E000001F03E00
0001F03E000001F03F000001F01F000001F00F800001F00F800001F007C00001F003E00001F001
F00002F000F80002F0003E000C70001FC038300007FFE0100000FF8000>40
43 125 169 48 I[<FFFF81FFFFFFFF81FFFF07F0000FE003E00007C003E00007C003E00007C0
03E00007C003E00007C003E00007C003E00007C003E00007C003E00007C003E00007C003E00007
C003E00007C003E00007C003E00007C003E00007C003E00007C003FFFFFFC003FFFFFFC003E000
07C003E00007C003E00007C003E00007C003E00007C003E00007C003E00007C003E00007C003E0
0007C003E00007C003E00007C003E00007C003E00007C003E00007C003E00007C003E00007C003
E00007C007F0000FE0FFFF81FFFFFFFF81FFFF>40 41 126 168 46 I[<FFFF80FFFF8007F000
03E00003E00003E00003E00003E00003E00003E00003E00003E00003E00003E00003E00003E000
03E00003E00003E00003E00003E00003E00003E00003E00003E00003E00003E00003E00003E000
03E00003E00003E00003E00003E00003E00003E00003E00003E00007F000FFFF80FFFF80>17
41 126 168 22 I[<FFE0000003FFC0FFE0000003FFC007E0000003F80002F0000005F00002F0
000005F0000278000009F0000278000009F0000278000009F000023C000011F000023C000011F0
00021E000021F000021E000021F000021E000021F000020F000041F000020F000041F000020780
0081F0000207800081F0000207800081F0000203C00101F0000203C00101F0000203E00201F000
0201E00201F0000201E00201F0000200F00401F0000200F00401F0000200F00401F00002007808
01F0000200780801F00002003C1001F00002003C1001F00002003C1001F00002001E2001F00002
001E2001F00002000F4001F00002000F4001F00002000F4001F0000200078001F0000700078001
F0000F80030003F800FFF803007FFFC0FFF803007FFFC0>50 41 126 168
56 77 D[<FFE0001FFFFFF0001FFF03F80001F002F80000E0027C000040027E000040023E0000
40021F000040021F800040020F8000400207C000400207E000400203E000400201F000400201F8
00400200F8004002007C004002007E004002003E004002001F004002001F804002000F80400200
07C040020003E040020003E040020001F040020000F840020000F8400200007C400200003E4002
00003E400200001F400200000FC00200000FC002000007C002000003C002000003C007000001C0
0F800000C0FFF80000C0FFF8000040>40 41 126 168 46 I[<0001FF0000000F01E000003C00
78000078003C0000E0000E0001E0000F0003C000078007800003C00F800003E01F000001F01F00
0001F03E000000F83E000000F87E000000FC7E000000FC7C0000007C7C0000007CFC0000007EFC
0000007EFC0000007EFC0000007EFC0000007EFC0000007EFC0000007EFC0000007EFC0000007E
7C0000007C7E000000FC7E000000FC7E000000FC3E000000F83F000001F81F000001F01F000001
F00F800003E007800003C007C00007C003E0000F8000F0001E000078003C00003C007800000F01
E0000001FF0000>39 43 125 169 47 I[<FFFFFF8000FFFFFFF00007E000FC0003E0003E0003
E0001F0003E0000F8003E0000FC003E00007C003E00007E003E00007E003E00007E003E00007E0
03E00007E003E00007E003E00007C003E0000FC003E0000F8003E0001F0003E0003E0003E001F8
0003FFFFE00003E000000003E000000003E000000003E000000003E000000003E000000003E000
000003E000000003E000000003E000000003E000000003E000000003E000000003E000000003E0
00000003E000000003E000000007F0000000FFFF800000FFFF800000>35
41 126 168 42 I[<FFFFFE000000FFFFFFC0000007E003F0000003E000FC000003E0003E0000
03E0001F000003E0001F800003E0000F800003E0000FC00003E0000FC00003E0000FC00003E000
0FC00003E0000FC00003E0000FC00003E0000F800003E0001F000003E0001E000003E0003C0000
03E000F8000003E003E0000003FFFE00000003E00780000003E001E0000003E000F0000003E000
78000003E0007C000003E0003C000003E0003E000003E0003E000003E0003E000003E0003E0000
03E0003F000003E0003F000003E0003F000003E0003F000003E0003F008003E0003F808003E000
1F808007F0000F8100FFFF8007C100FFFF8003C20000000000FC00>41 42
126 168 45 82 D[<00FE010003FF83000F81E3001E0037003C001F0038000F00780007007000
0700F0000300F0000300F0000300F0000100F8000100F8000100FC0000007C0000007F0000003F
E000001FFF00000FFFE00007FFF80003FFFC00007FFE000007FF0000007F0000001F8000000F80
000007C0000007C0800003C0800003C0800003C0800003C0C00003C0C0000380C0000380E00007
80F0000700F8000E00EE001C00C3C07800C1FFF000803FC000>26 43 125
169 33 I[<7FFFFFFFF87FFFFFFFF87C007C00F870007C003860007C001840007C000840007C00
08C0007C000CC0007C000C80007C000480007C000480007C000480007C000480007C000400007C
000000007C000000007C000000007C000000007C000000007C000000007C000000007C00000000
7C000000007C000000007C000000007C000000007C000000007C000000007C000000007C000000
007C000000007C000000007C000000007C000000007C000000007C000000007C000000007C0000
0000FE000000FFFFFE0000FFFFFE00>38 41 126 168 44 I[<FFFE03FFF803FFC0FFFE03FFF8
03FFC00FE0003F80007E0007C0001F0000180003E0001F0000180003E0000F8000100003E0000F
8000100001F0000F8000200001F0000FC000200001F0000FC000200000F8000FC000400000F800
13E000400000F80013E000400000FC0013E000C000007C0033F0008000007C0021F0008000007E
0021F0008000003E0021F8010000003E0040F8010000003E0040F8010000001F0040F802000000
1F00807C020000001F00807C020000000F80807C040000000F81003E040000000F81003E040000
0007C1003E0800000007C2001F0800000007C2001F0800000003E2001F1000000003E4000F9000
000003E4000F9000000001F4000FA000000001F80007E000000001F80007E000000000F80007C0
00000000F00003C000000000F00003C00000000070000380000000006000018000000000600001
8000000000600001800000>58 42 127 168 62 87 D[<FF80FF80FF80E000E000E000E000E000
E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E0
00E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000
E000E000E000E000E000E000E000E000E000E000FF80FF80FF80>9 60 122
172 17 91 D[<FF80FF80FF800380038003800380038003800380038003800380038003800380
038003800380038003800380038003800380038003800380038003800380038003800380038003
0C0FC0000180F80000180FC0000180F80000180FC00001807C0000300FC00001807C0000300FC0
0001807C0000300FC00001803E0000600FC00001803E0000600FC00001803E0000600FC0000180
1F0000C00FC00001801F0000C00FC00001801F0000C00FC00001800F8001800FC00001800F8001
800FC000018007C003000FC000018007C003000FC000018007C003000FC000018003E006000FC0
00018003E006000FC000018003E006000FC000018001F00C000FC000018001F00C000FC0000180
01F00C000FC000018000F818000FC000018000F818000FC0000180007C30000FC0000180007C30
000FC0000180007C30000FC0000180003E60000FC0000180003E60000FC0000180003E60000FC0
000180001FC0000FC0000180001FC0000FC0000180001FC0000FC00003C0000F80000FC00007E0
000F80000FC0000FF0000700001FE000FFFF00070007FFFF80FFFF00070007FFFF80>65
59 124 186 81 77 D[<FFFFFFFF0000FFFFFFFFE00003FC0003F80001F800007E0001F800003F
0001F800000F8001F8000007C001F8000007E001F8000003F001F8000003F001F8000003F801F8
000001F801F8000001FC01F8000001FC01F8000001FC01F8000001FC01F8000001FC01F8000001
FC01F8000001FC01F8000001F801F8000003F801F8000003F001F8000003F001F8000007E001F8
000007C001F800000F8001F800003F0001F800007C0001F80003F00001FFFFFFC00001F8000000
0001F80000000001F80000000001F80000000001F80000000001F80000000001F80000000001F8
0000000001F80000000001F80000000001F80000000001F80000000001F80000000001F8000000
0001F80000000001F80000000001F80000000001F80000000001F80000000001F80000000001F8
0000000001F80000000001F80000000001F80000000001F80000000001F80000000003FC000000
00FFFFF0000000FFFFF0000000>46 59 124 186 60 80 D[<000FF00080007FFE018001F00F81
8003C001C380070000E3800E000037801C00003F803C00001F803800000F807800000F80700000
07807000000780F000000380F000000380F000000380F000000380F000000180F800000180F800
000180FC000001807C000000007E000000003F000000003FC00000001FF00000000FFF0000000F
FFF0000007FFFF000001FFFFC00000FFFFF000003FFFFC000003FFFE0000007FFF00000007FF00
0000007F800000001FC00000000FC000000007E000000003E000000003E000000001F0C0000001
F0C0000001F0C0000000F0C0000000F0C0000000F0E0000000F0E0000000F0E0000000E0F00000
00E0F0000001E0F8000001C0F8000001C0FC00000380FE00000780F700000700E1C0001E00E0F0
003C00C07E00F000C00FFFE0008001FF0000>36 61 124 187 49 83 D[<003F80000001C0F000
00030038000004001C00000C001E000018000F00001C000F80003E000780003F0007C0003F0007
C0003F0007C0001E0007C000000007C000000007C000000007C00000003FC000000FE7C000007E
07C00001F007C00007E007C0000F8007C0001F0007C0003F0007C0003E0007C0007E0007C0007C
0007C060FC0007C060FC0007C060FC0007C060FC000FC060FC000FC0607C000FC0607E0017C060
3E0023E0C01F0041F18007C180FF0000FE003E00>35 37 124 164 43 97
D[<0007F800003C0E0000F0018001E000C003C00060078000300F0000701F0000F81F0001F83E
0001F83E0001F87E0000F07C0000007C000000FC000000FC000000FC000000FC000000FC000000
FC000000FC000000FC000000FC0000007C0000007C0000007E0000003E0000003E00000C1F0000
0C1F0000180F8000180780003003C0006001E000C000F00180003C0E000007F800>30
37 125 164 39 99 D[<0000000700000000FF00000007FF00000007FF000000003F000000001F
000000001F000000001F000000001F000000001F000000001F000000001F000000001F00000000
1F000000001F000000001F000000001F000000001F000000001F000000001F000000001F000000
001F000000001F000003F81F00001E061F000070019F0001E000DF0003C0007F000780003F000F
80003F000F00001F001F00001F003E00001F003E00001F007E00001F007C00001F007C00001F00
FC00001F00FC00001F00FC00001F00FC00001F00FC00001F00FC00001F00FC00001F00FC00001F
00FC00001F007C00001F007C00001F007E00001F003E00001F003E00001F001E00001F001F0000
1F000F00003F000780007F0003C0005F0001E0009F0000F0031F80003C0E1FFC0007F01FFC>38
60 125 187 49 I[<000FF00000383C0000E00F0001C00780038003C0078001E00F0001F01F00
00F01E0000F83E0000F83E0000F87C00007C7C00007C7C00007CFC00007CFC00007CFFFFFFFCFC
000000FC000000FC000000FC000000FC000000FC0000007C0000007C0000007E0000003E000000
3E00000C1E00000C1F0000180F0000180780003003C0006001E000C000F00180003C0E000007F8
00>30 37 125 164 39 I[<0000FC000003830000070380000E07C0001E0FC0003C0FC0007C0F
C0007C07800078000000F8000000F8000000F8000000F8000000F8000000F8000000F8000000F8
000000F8000000F8000000F8000000F8000000F8000000F80000FFFFFC00FFFFFC0000F8000000
F8000000F8000000F8000000F8000000F8000000F8000000F8000000F8000000F8000000F80000
00F8000000F8000000F8000000F8000000F8000000F8000000F8000000F8000000F8000000F800
0000F8000000F8000000F8000000F8000000F8000000F8000000F8000000F8000000F8000000F8
000000F8000001FC00003FFFF0003FFFF000>26 60 127 187 27 I[<00000007C0000FE01860
003838207000F01E40F001E00F80F003C007806007C007C0000F8003E0000F8003E0000F8003E0
001F8003F0001F8003F0001F8003F0001F8003F0001F8003F0001F8003F0000F8003E0000F8003
E0000F8003E00007C007C00003C007800001E00F000003F01E00000238380000060FE000000400
00000004000000000C000000000C000000000E000000000E000000000700000000078000000003
FFFF000003FFFFF00001FFFFFC0000FFFFFE00078000FF000E00001F801C000007C038000003C0
78000003C070000001E0F0000001E0F0000001E0F0000001E0F0000001E0F0000001E078000003
C038000003803C000007801E00000F000700001C0003C0007800007803C000000FFE0000>36
56 126 165 43 I[<038007C00FE00FE00FE007C0038000000000000000000000000000000000
0000000000000000000000E01FE0FFE0FFE007E003E003E003E003E003E003E003E003E003E003
E003E003E003E003E003E003E003E003E003E003E003E003E003E003E003E003E003E003E003E0
07F0FFFFFFFF>16 57 126 184 23 105 D[<00E01FC0007F00001FE060780181E000FFE1803C
0600F000FFE2001E0800780007E4001F10007C0003E4001F10007C0003E8000F20003C0003F000
0FC0003E0003F0000FC0003E0003F0000FC0003E0003E0000F80003E0003E0000F80003E0003E0
000F80003E0003E0000F80003E0003E0000F80003E0003E0000F80003E0003E0000F80003E0003
E0000F80003E0003E0000F80003E0003E0000F80003E0003E0000F80003E0003E0000F80003E00
03E0000F80003E0003E0000F80003E0003E0000F80003E0003E0000F80003E0003E0000F80003E
0003E0000F80003E0003E0000F80003E0003E0000F80003E0003E0000F80003E0003E0000F8000
3E0003E0000F80003E0003E0000F80003E0007F0001FC0007F00FFFF83FFFE0FFFF8FFFF83FFFE
0FFFF8>61 37 125 164 74 109 D[<00E03FC0001FE0C0F000FFE1007800FFE2003C0007E400
3E0003E8001E0003E8001E0003F0001F0003F0001F0003F0001F0003E0001F0003E0001F0003E0
001F0003E0001F0003E0001F0003E0001F0003E0001F0003E0001F0003E0001F0003E0001F0003
E0001F0003E0001F0003E0001F0003E0001F0003E0001F0003E0001F0003E0001F0003E0001F00
03E0001F0003E0001F0003E0001F0003E0001F0003E0001F0003E0001F0007F0003F80FFFF87FF
FCFFFF87FFFC>38 37 125 164 49 I[<0007F00000003C1E000000F007800001C001C0000380
00E000078000F0000F000078001E00003C001E00003C003E00003E003E00003E007C00001F007C
00001F007C00001F00FC00001F80FC00001F80FC00001F80FC00001F80FC00001F80FC00001F80
FC00001F80FC00001F80FC00001F807C00001F007C00001F007C00001F003E00003E003E00003E
001E00003C001F00007C000F00007800078000F00003C001E00001C001C00000F0078000003C1E
00000007F00000>33 37 125 164 43 I[<00E0FC001FE10600FFE20F00FFE41F8007E81F8003
E81F8003F00F0003F0060003F0000003F0000003E0000003E0000003E0000003E0000003E00000
03E0000003E0000003E0000003E0000003E0000003E0000003E0000003E0000003E0000003E000
0003E0000003E0000003E0000003E0000003E0000003E0000003E0000003E0000003E0000007F0
0000FFFFC000FFFFC000>25 37 125 164 33 114 D[<00FF02000700C6000C002E0010001E00
30001E0060000E0060000E00E0000600E0000600E0000600F0000600F8000600FC0000007F0000
003FF000003FFF80001FFFE00007FFF00001FFFC00003FFE000001FE0000003F00C0001F00C000
0F80C0000780E0000380E0000380E0000380E0000380F0000300F0000300F8000700F8000600E4
000C00E2001800C1807000807F8000>25 37 125 164 34 I[<00180000001800000018000000
1800000018000000380000003800000038000000380000007800000078000000F8000000F80000
01F8000003F8000007F800001FFFFE00FFFFFE0000F8000000F8000000F8000000F8000000F800
0000F8000000F8000000F8000000F8000000F8000000F8000000F8000000F8000000F8000000F8
000000F8000000F8000000F8000000F8000000F8018000F8018000F8018000F8018000F8018000
F8018000F8018000F8018000F801800078018000780300007C0300003C0200001E0600000F0C00
0003F000>25 53 127 180 33 I[<00E00007001FE000FF00FFE007FF00FFE007FF0007E0003F
0003E0001F0003E0001F0003E0001F0003E0001F0003E0001F0003E0001F0003E0001F0003E000
1F0003E0001F0003E0001F0003E0001F0003E0001F0003E0001F0003E0001F0003E0001F0003E0
001F0003E0001F0003E0001F0003E0001F0003E0001F0003E0001F0003E0001F0003E0001F0003
E0003F0003E0003F0001E0003F0001E0005F0001F0009F0000F0009F000078011F80001C061FFC
0007F81FFC>38 37 125 164 49 I E /Fq 5 85 691 432 dfs[<0000001C000000001C000000
003C000000003C000000007C00000000FC00000000FC00000001FC00000001FC000000037C0000
00037C000000067C0000000E7C0000000C7C000000187C000000187C000000307C000000307E00
0000603E000000603E000000C03E000001C03E000001803E000003003E000003003E000006003E
000006003E00000FFFFE00000FFFFE000018003E000038003E000030003E000060003E00006000
3E0000C0003F0000C0001F000180001F000380001F000380001F000FC0003F00FFF007FFF8FFF0
07FFF0>37 42 124 169 70 65 D[<003FFFFF8000003FFFFFF0000001F801F8000001F0007C00
0001F0003E000001F0001F000001F0000F000003E0000F800003E0000F800003E0000F800003E0
0007800007C00007C00007C00007C00007C00007C00007C0000FC0000F80000FC0000F80000FC0
000F80000FC0000F80000FC0001F00000F80001F00001F80001F00001F80001F00001F80003E00
001F00003E00003F00003E00003E00003E00003E00007C00007C00007C00007C00007C0000F800
007C0000F80000F80001F00000F80003E00000F80003C00000F80007800001F0000F000001F000
3E000001F0007C000003F003F000007FFFFFC00000FFFFFE000000>42 41
124 168 72 68 D[<003FFFFFFF003FFFFFFF0001F8003F0001F0000F0001F0000E0001F0000E
0001F000060003E000060003E000060003E0000E0003E0000C0007C0000C0007C0180C0007C018
0C0007C01800000F803000000F803000000F807000000F80F000001FFFE000001FFFE000001F01
E000001F00E000003E00C000003E00C000003E00C000003E00C000007C018000007C000000007C
000000007C00000000F800000000F800000000F800000000F800000001F000000001F000000001
F000000003F00000007FFFC00000FFFFC00000>40 41 124 168 62 70
D[<003FFFFE00003FFFFFC00001F803F00001F001F80001F000F80001F0007C0001F0007C0003
E0007E0003E0007E0003E0007E0003E0007E0007C000FC0007C000FC0007C000F80007C001F800
0F8001F0000F8003E0000F8007C0000F800F00001F007C00001FFFE000001F00E000001F007800
003E003C00003E003C00003E003E00003E003E00007C003E00007C003E00007C003E00007C003E
0000F8007E0000F8007E0000F8007E0000F8007E0001F0007E0601F0007E0601F0007E0603F000
7E0C7FFF803E08FFFF801E1000000007E0>39 42 124 168 70 82 D[<0FFFFFFFF80FFFFFFFF8
1F803E01F81E007C00781C007C003038007C003038007C00303000F800303000F800306000F800
306000F800606001F00060C001F00060C001F000600001F000000003E000000003E000000003E0
00000003E000000007C000000007C000000007C000000007C00000000F800000000F800000000F
800000000F800000001F000000001F000000001F000000001F000000003E000000003E00000000
3E000000003E000000007C000000007C000000007C00000000FC000000FFFFF80000FFFFF80000
>37 41 117 168 69 84 D E end
%%EndProlog
%%BeginSetup
%%Feature: *Resolution 400
TeXDict begin 
%%EndSetup
%%Page: 0 1
bop 1060 928 a Fq(D)34 b(R)g(A)h(F)g(T)299 1049 y Fp(Do)r(cumen)n(t)30
b(for)g(a)g(Standard)h(Message-P)n(assi)q(ng)i(In)n(terface)913
1309 y Fo(Scott)23 b(Berryman,)c Fn(Y)-5 b(ale)25 b(Univ)933
1386 y Fo(James)c(Co)n(wnie,)g Fn(Meiko)j(Ltd)632 1464 y Fo(Jac)n(k)e
(Dongarra,)i Fn(Univ.)32 b(of)23 b(T)-5 b(ennesse)m(e)26 b(and)f(ORNL)1068
1541 y Fo(Al)c(Geist,)f Fn(ORNL)1060 1619 y Fo(Bill)e(Gropp,)k
Fn(ANL)1023 1696 y Fo(Rolf)f(Hemp)r(el,)c Fn(GMD)1016 1774
y Fo(Bob)22 b(Knigh)n(ten,)e Fn(Intel)1049 1851 y Fo(Rust)n(y)i(Lusk,)f
Fn(ANL)818 1929 y Fo(Stev)n(e)h(Otto,)f Fn(Or)m(e)m(gon)k(Gr)m(aduate)f(Inst)
771 2006 y Fo(T)-5 b(on)n(y)21 b(Skjellum,)c Fn(Mississippi)k(State)26
b(Univ)871 2083 y Fo(Marc)c(Snir,)e Fn(IBM)j(T.)g(J.)f(Watson)992
2161 y Fo(Da)n(vid)g(W)-5 b(alk)n(er,)19 b Fn(ORNL)835 2238
y Fo(Stev)n(e)i(Zenith,)f Fn(Kuck)26 b(&)e(Asso)m(ciates)1126
2404 y Fo(Ma)n(y)d(4,)g(1993)116 2481 y(This)g(w)n(ork)h(w)n(as)h(supp)r
(orted)h(b)n(y)d(ARP)-5 b(A)21 b(and)i(NSF)e(under)h(con)n(tract)i(n)n(um)n
(b)r(er)c(###,)f(b)n(y)i(the)256 2559 y(National)h(Science)e(F)-5
b(oundation)23 b(Science)d(and)j(T)-5 b(ec)n(hnology)21 b(Cen)n(ter)h(Co)r
(op)r(erativ)n(e)867 2636 y(Agreemen)n(t)f(No.)28 b(CCR-8809615.)p
eop
%%Page: 1 2
bop 100 477 a Fm(Chapter)44 b(1)100 756 y Fl(Colle)o(c)o(ti)n(v)l(e)k(Com)n
(m)-10 b(uni)n(c)o(ati)o(on)1189 1054 y Fk(Al)20 b(Geist)1168
1129 y(Marc)g(Snir)100 1331 y Fj(1.1)94 b(In)m(tro)s(duction)100
1469 y Fk(This)28 b(section)h(is)f(a)g(draft)j(of)e(the)g(curren)n(t)i(prop)r
(osal)g(for)e(collectiv)n(e)f(comm)n(unication.)51 b(Collectiv)n(e)100
1545 y(comm)n(unication)15 b(is)f(de\014ned)k(to)d(b)r(e)g(comm)n(unication)g
(that)i(in)n(v)n(olv)n(es)f(a)f(group)i(of)f(pro)r(cesses.)26
b(Examples)100 1620 y(are)20 b(broadcast)j(and)e(global)f(sum.)26
b(A)18 b(collectiv)n(e)i(op)r(eration)i(is)d(executed)i(b)n(y)f(ha)n(ving)h
(all)e(pro)r(cesses)h(in)100 1695 y(the)k(group)i(call)e(the)g(comm)n
(unication)g(routine,)i(with)e(matc)n(hing)h(parameters.)38
b(Routines)24 b(can)h(\(but)100 1770 y(are)f(not)g(required)i(to\))e(return)j
(as)c(so)r(on)h(as)f(their)h(participation)k(in)23 b(the)h(collectiv)n(e)g
(comm)n(unication)100 1846 y(is)i(complete.)43 b(The)27 b(completion)f(of)h
(a)f(call)g(indicates)i(that)g(the)f(caller)f(is)g(no)n(w)h(free)g(to)g
(access)f(the)100 1921 y(lo)r(cations)j(in)g(the)g(comm)n(unication)f
(bu\013er,)k(or)d(an)n(y)g(other)i(lo)r(cation)e(that)h(can)f(b)r(e)f
(referenced)j(b)n(y)100 1996 y(the)26 b(collectiv)n(e)e(op)r(eration.)43
b(Ho)n(w)n(ev)n(er,)26 b(it)f(do)r(es)g(not)h(indicate)g(that)h(other)f(pro)r
(cesses)g(in)f(the)g(group)100 2072 y(ha)n(v)n(e)c(started)i(the)d(op)r
(eration)j(\(unless)e(otherwise)g(indicated)i(in)d(the)h(description)h(of)f
(the)g(op)r(eration\).)100 2147 y(Ho)n(w)n(ev)n(er,)i(the)h(successful)f
(completion)g(of)g(a)f(collectiv)n(e)h(comm)n(unication)f(call)g(ma)n(y)f
(dep)r(end)k(on)e(the)100 2222 y(execution)f(of)f(a)g(matc)n(hing)g(call)f
(at)h(all)f(pro)r(cesses)h(in)g(the)g(group.)221 2299 y(The)g(syn)n(tax)h
(and)f(seman)n(tics)g(of)g(the)g(collectiv)n(e)f(op)r(erations)k(are)19
b(de\014ned)k(so)e(as)f(to)h(b)r(e)f(consisten)n(t)100 2374
y(with)h(the)g(syn)n(tax)h(and)g(seman)n(tics)e(of)h(the)h(p)r(oin)n(t)g(to)f
(p)r(oin)n(t)h(op)r(erations.)221 2452 y(The)e(reader)h(is)e(referred)j(to)e
(the)g(p)r(oin)n(t-to-p)r(oin)o(t)k(comm)n(unication)19 b(section)i(of)e(the)
i(curren)n(t)h(MPI)100 2527 y(draft)g(for)e(information)i(concerning)g(comm)n
(unication)d(bu\013ers)j(and)e(their)h(manipulations.)28 b(The)20
b(con-)100 2602 y(text)d(section)h(describ)r(es)f(the)h(formation,)g
(manipulation,)h(and)f(query)g(functions)h(\(suc)n(h)f(as)f(group)i(size\))
100 2677 y(that)j(are)f(a)n(v)m(ailable)h(for)f(groups)i(and)f(group)h(ob)s
(jects.)221 2754 y(The)31 b(collectiv)n(e)g(comm)n(unication)g(routines)j
(are)d(built)i(ab)r(o)n(v)n(e)f(the)g(p)r(oin)n(t-to-p)r(oin)n(t)k(routines.)
100 2830 y(While)17 b(v)n(endors)h(ma)n(y)e(optimize)f(certain)j(collectiv)n
(e)f(routines)i(for)e(their)h(arc)n(hitectures,)j(a)16 b(complete)g(li-)100
2905 y(brary)22 b(of)e(the)h(collectiv)n(e)e(comm)n(unication)h(routines)i
(can)e(b)r(e)g(written)i(en)n(tirely)f(using)g(p)r(oin)n(t-to-p)r(oin)n(t)100
2980 y(comm)n(unication)28 b(functions.)51 b(W)-5 b(e)27 b(are)h(using)h
(naiv)n(e)f(implemen)n(tations)g(of)g(the)h(collectiv)n(e)e(calls)g(in)100
3056 y(terms)f(of)i(p)r(oin)n(t)h(to)e(p)r(oin)n(t)i(op)r(erations)h(in)d
(order)i(to)f(pro)n(vide)h(an)f(op)r(erational)i(de\014nition)g(of)d(their)
100 3131 y(seman)n(tics.)221 3208 y(The)21 b(follo)n(wing)h(comm)n(unication)
e(functions)k(are)d(prop)r(osed.)191 3340 y Fi(\017)31 b Fk(Broadcast)22
b(from)f(one)g(mem)n(b)r(er)d(to)j(all)f(mem)n(b)r(ers)f(of)i(a)f(group.)191
3472 y Fi(\017)31 b Fk(Barrier)22 b(across)f(all)f(group)j(mem)n(b)r(ers)191
3605 y Fi(\017)31 b Fk(Gather)21 b(data)i(from)d(all)g(group)j(mem)n(b)r(ers)
18 b(to)j(one)g(mem)n(b)r(er.)1285 3771 y(1)p eop
%%Page: 2 3
bop 100 -134 a Fk(2)968 b Fh(CHAPTER)17 b(1.)48 b(COLLECTIVE)17
b(COMMUNICA)-5 b(TION)191 60 y Fi(\017)31 b Fk(Scatter)22 b(data)g(from)f
(one)g(mem)n(b)r(er)d(to)j(all)g(mem)n(b)r(ers)d(of)j(a)g(group.)191
183 y Fi(\017)31 b Fk(Global)20 b(op)r(erations)j(suc)n(h)e(as)f(sum,)f(max,)
g(min,)g(etc.,)h(where)g(the)i(result)f(is)f(kno)n(wn)h(b)n(y)g(all)f
(group)252 258 y(mem)n(b)r(ers)d(and)k(a)f(v)m(ariation)i(where)e(the)h
(result)g(is)f(kno)n(wn)h(b)n(y)f(only)h(one)f(mem)n(b)r(er.)25
b(The)20 b(abilit)n(y)252 333 y(to)h(ha)n(v)n(e)g(user)h(de\014ned)h(global)e
(op)r(erations.)191 456 y Fi(\017)31 b Fk(Sim)n(ultaneous)20
b(shift)f(of)f(data)i(around)h(the)e(group,)i(the)e(simplest)e(example)g(b)r
(eing)i(all)f(mem)n(b)r(ers)252 531 y(sending)k(their)g(data)g(to)f
(\(rank+1\))i(with)e(wrap)h(around.)191 653 y Fi(\017)31 b
Fk(Scan)21 b(across)h(all)e(mem)n(b)r(ers)e(of)j(a)g(group)i(\(also)e(called)
f(parallel)i(pre\014x\).)191 776 y Fi(\017)31 b Fk(Broadcast)22
b(from)f(all)f(mem)n(b)r(ers)e(to)j(all)f(mem)n(b)r(ers)f(of)i(a)f(group.)191
899 y Fi(\017)31 b Fk(Scatter)c(data)f(from)f(all)f(mem)n(b)r(ers)f(to)i(all)
f(mem)n(b)r(ers)f(of)i(a)g(group)i(\(also)f(called)f(complete)f(ex-)252
974 y(c)n(hange)e(or)f(index\).)221 1092 y(T)-5 b(o)21 b(simplify)f(the)h
(collectiv)n(e)g(comm)n(unication)g(in)n(terface)j(it)d(is)f(designed)j(with)
e(t)n(w)n(o)h(la)n(y)n(ers.)30 b(The)100 1168 y(lo)n(w)c(lev)n(el)f(routines)
k(ha)n(v)n(e)e(all)e(the)i(generalit)n(y)h(of,)g(and)f(mak)n(e)e(use)h(of,)i
(the)f(comm)n(unication)f(bu\013er)100 1243 y(routines)c(of)f(the)g(p)r(oin)n
(t-to-p)r(oin)n(t)k(section)20 b(whic)n(h)h(allo)n(ws)f(arbitrarily)j
(complex)c(messages)f(to)j(b)r(e)f(con-)100 1318 y(structed.)29
b(The)20 b(second)g(lev)n(el)f(routines)j(are)d(similar)f(to)i(the)g(upp)r
(er)i(lev)n(el)d(p)r(oin)n(t-to-p)r(oin)n(t)24 b(routines)d(in)100
1394 y(that)h(they)g(send)f(only)g(a)g(con)n(tiguous)i(bu\013er.)100
1583 y Fj(1.2)94 b(Group)32 b(F)-8 b(unctions)100 1719 y Fk(A)22
b(full)h(description)i(of)e(the)g(group)i(formation)f(and)f(manipulation)i
(functions)g(can)e(b)r(e)f(found)j(in)e(the)100 1794 y(con)n(text)j(c)n
(hapter)g(of)f(the)f(MPI)e(do)r(cumen)n(t.)38 b(Here)24 b(w)n(e)f(describ)r
(e)i(only)f(those)h(group)h(functions)h(that)100 1869 y(are)21
b(used)h(in)e(the)i(seman)n(tic)e(description)j(of)e(the)g(collectiv)n(e)g
(comm)n(unication)g(routines.)221 1944 y(An)h(initial)g(group)j(con)n
(taining)g(all)c(pro)r(cesses)i(is)e(supplied)j(b)n(y)f(default)h(in)e(MPI.)e
(MPI)h(pro)n(vides)100 2020 y(a)f(pro)r(cedure)j(that)f(returns)h(the)e
(handle)h(to)e(this)h(initial)f(group.)30 b(The)20 b(pro)r(cesses)g(in)h(the)
g(initial)f(group)100 2095 y(eac)n(h)30 b(ha)n(v)n(e)g(a)g(unique)g(rank)h(in)
e(the)h(group)i(represen)n(ted)h(b)n(y)d(in)n(tegers)h(\(0,)h(1,)f(2,)g(...,)
f(n)n(um)n(b)r(er-of-)100 2170 y(pro)r(cesses)21 b(-)g(1\).)221
2293 y Fg(MPI)p 365 2293 21 3 v 26 w(ALLGR)n(OUP\(group\))f
Fk(Return)k(the)f(descriptor)h(of)f(the)f(initial)h(group)h(con)n(taining)h(all)
(all)100 2415 y(pro)r(cesses.)100 2534 y Fg(OUT)d(group)32
b Fk(handle)22 b(to)f(descriptor)j(ob)s(ject)e(of)f(initial)g(group.)221
2700 y Fg(MPI)p 365 2700 V 26 w(RANK\(group,)i(rank\))e Fk(Return)h(the)g
(rank)f(of)g(the)g(calling)g(pro)r(cess)g(within)h(the)f(sp)r(eci-)100
2822 y(\014ed)g(group.)100 2941 y Fg(IN)i(group)32 b Fk(group)23
b(handle)100 3063 y Fg(OUT)f(rank)31 b Fk(in)n(teger)221 3229
y Fg(MPI)p 365 3229 V 26 w(GSIZE\(group,)25 b(size\))20 b Fk(Return)j(the)e
(n)n(um)n(b)r(er)h(of)g(pro)r(cesses)f(that)i(b)r(elong)f(to)f(the)h(sp)r
(ec-)100 3352 y(i\014ed)f(group.)100 3470 y Fg(IN)i(group)32
b Fk(group)23 b(handle)100 3593 y Fg(OUT)f(size)30 b Fk(in)n(teger)p
eop
%%Page: 3 4
bop 100 -134 a Fh(1.3.)47 b(COMMUNICA)-5 b(TION)17 b(FUNCTIONS)1283
b Fk(3)100 60 y Fj(1.3)94 b(Comm)-6 b(uni)n(cati)o(on)27 b(F)-8
b(unctions)100 195 y Fk(The)15 b(prop)r(osed)i(comm)n(unication)e(functions)j
(are)d(divided)h(in)n(to)g(t)n(w)n(o)g(la)n(y)n(ers.)26 b(The)15
b(lo)n(w)n(est)h(lev)n(el)e(uses)h(the)100 271 y(same)j(bu\013er)j
(descriptor)h(ob)s(jects)f(a)n(v)m(ailable)f(in)f(p)r(oin)n(t-to-p)r(oin)o(t)
24 b(to)19 b(create)i(noncon)n(tiguou)q(s,)i(m)n(ultiple)100
346 y(data)k(t)n(yp)r(e)g(messages.)40 b(The)26 b(second)g(lev)n(el)g(is)f
(similar)f(to)i(the)g(blo)r(c)n(k)g(send/receiv)n(e)i(p)r(oin)n(t-to-p)r(oin)
n(t)100 421 y(op)r(erations)h(in)e(that)h(it)f(supp)r(orts)i(only)e(con)n
(tiguous)j(bu\013ers)e(of)f(data.)47 b(F)-5 b(or)26 b(eac)n(h)i(comm)n
(unication)100 497 y(op)r(eration,)23 b(w)n(e)d(list)h(these)g(t)n(w)n(o)g
(lev)n(els)f(of)h(calls)f(together.)100 686 y Fj(1.4)94 b(Sync)m(hronization)
100 821 y Fg(Barrier)25 b(sync)n(hronization)100 983 y(MPI)p
244 983 21 3 v 25 w(BARRIER\()c(group)k(\))221 1105 y Fk(MPI)p
345 1105 19 3 v 21 w(BARRIER)18 b(blo)r(c)n(ks)k(the)h(calling)e(pro)r(cess)i
(un)n(til)g(all)e(group)j(mem)n(b)r(ers)19 b(ha)n(v)n(e)k(called)f(it;)g(the)
100 1180 y(call)e(returns)k(at)d(an)n(y)g(pro)r(cess)g(only)g(after)i(all)d
(group)j(mem)n(b)r(ers)18 b(ha)n(v)n(e)k(en)n(tered)h(the)e(call.)100
1296 y Fg(IN)i(group)32 b Fk(group)23 b(handle)221 1411 y Ff(MPI)p
318 1411 20 3 v 26 w(BARRIER)q(\()37 b(group)f(\))21 b Fk(is)100
1526 y Ff(MPI_CR)q(E)q(A)q(TE)q(_)q(B)q(U)q(FF)q(E)q(R)q(\()q(b)q(uf)q(f)q(e)
q(r)q(_h)q(a)q(n)q(d)q(l)q(e,)37 b(MPI_BU)q(F)q(F)q(ER)q(,)g(MPI_PE)q(R)q(SI)
q(S)q(T)q(E)q(NT)q(\))q(;)100 1601 y(MPI_GS)q(I)q(Z)q(E\()g(group,)g(&size)f
(\);)100 1677 y(MPI_RA)q(N)q(K)q(\()g(group,)h(&rank)f(\);)100
1752 y(if)d(\(rank)q(==)q(0)q(\))100 1827 y({)195 1902 y(for)i(\(i=1;)h(i)c
(<)h(size;)j(i++\))291 1978 y(MPI_RE)q(C)q(V\()q(b)q(u)q(f)q(f)q(er)q(_)q(h)q
(a)q(nd)q(l)q(e)q(,)h(i,)c(tag,)i(group,)i(return)q(_)q(ha)q(n)q(d)q(l)q(e\))
q(;)195 2053 y(for)e(\(i=1;)h(i)c(<)h(size;)j(i++\))291 2128
y(MPI_SE)q(N)q(D\()q(b)q(u)q(f)q(f)q(er)q(_)q(h)q(a)q(nd)q(l)q(e)q(,)h(i,)c
(tag,)i(group\))q(;)100 2204 y(})100 2279 y(else)100 2354 y({)195
2429 y(MPI_S)q(EN)q(D)q(\()q(b)q(uf)q(f)q(e)q(r)q(_)q(ha)q(n)q(d)q(l)q(e,)i
(0,)d(tag,)h(group\))q(;)195 2505 y(MPI_R)q(EC)q(V)q(\()q(b)q(uf)q(f)q(e)q(r)
q(_)q(ha)q(n)q(d)q(l)q(e,)i(0,)d(tag,)h(group,)h(return)q(_)q(h)q(a)q(n)q(dl)
q(e)q(\))q(;)100 2580 y(})100 2655 y(MPI_FR)q(E)q(E)q(\(b)q(u)q(f)q(f)q(er)q
(_)q(h)q(a)q(n)q(dl)q(e)q(\))q(;)100 2844 y Fj(1.5)94 b(Data)30
b(mo)m(v)m(e)e(functions)100 2980 y Fg(Broadcast)100 3141 y(MPI)p
244 3141 21 3 v 25 w(BCAST\()21 b(bu\013er)p 738 3141 V 25
w(handle,)i(group,)j(ro)r(ot)f(\))221 3264 y Ff(MPI)p 318 3264
20 3 v 26 w(BCAST)20 b Fk(broadcasts)g(a)d(message)f(from)h(the)g(pro)r(cess)
h(with)g(rank)g Ff(root)i Fk(to)e(all)e(other)j(pro)r(cesses)100
3339 y(of)25 b(the)h(group.)41 b(It)25 b(is)f(called)h(b)n(y)g(all)f(mem)n(b)
r(ers)f(of)i(group)i(using)f(the)f(same)e(argumen)n(ts)j(for)g
Ff(group,)100 3414 y(and)34 b(root)p Fk(.)c(On)21 b(return)j(the)d(con)n(ten)
n(ts)i(of)e(the)h(bu\013er)g(of)f(the)g(pro)r(cess)h(with)f(rank)g
Ff(root)j Fk(is)c(con)n(tained)100 3490 y(in)h(the)g(bu\013er)i(of)e(the)g
(calling)g(pro)r(cess.)100 3605 y Fg(INOUT)h(bu\013er)p 542
3605 21 3 v 25 w(handle)29 b Fk(Handle)21 b(for)h(bu\013er)h(from)e(whic)n(h)f
(the)f(message)f(is)h(sen)n(t)i(or)f(receiv)n(ed.)p eop
%%Page: 4 5
bop 100 -134 a Fk(4)968 b Fh(CHAPTER)17 b(1.)48 b(COLLECTIVE)17
b(COMMUNICA)-5 b(TION)100 60 y Fg(IN)23 b(group)32 b Fk(group)23
b(handle)100 183 y Fg(IN)g(ro)r(ot)33 b Fk(rank)22 b(of)f(broadcast)i(ro)r
(ot)f(\(in)n(teger\))221 349 y Fg(MPI)p 365 349 21 3 v 26 w(BCASTC\()e(buf,)j
(len,)g(t)n(yp)r(e,)g(group,)j(ro)r(ot)f(\))221 472 y Ff(MPI)p
318 472 20 3 v 26 w(BCASTC)33 b Fk(b)r(eha)n(v)n(es)c(lik)n(e)f(broadcast,)34
b(restricted)d(to)e(a)f(blo)r(c)n(k)h(bu\013er.)53 b(It)28
b(is)g(called)h(b)n(y)g(all)100 547 y(pro)r(cesses)21 b(with)g(the)h(same)c
(argumen)n(ts)23 b(for)e Ff(len,)35 b(group)25 b Fk(and)d Ff(root)p
Fk(.)100 666 y Fg(INOUT)g(bu\013er)30 b Fk(Starting)24 b(address)e(of)f
(bu\013er)h(\(c)n(hoice)g(t)n(yp)r(e\))100 789 y Fg(IN)h(len)30
b Fk(Num)n(b)r(er)20 b(of)h(en)n(tries)h(in)f(bu\013er)h(\(in)n(teger\))100
912 y Fg(IN)h(t)n(yp)r(e)30 b Fk(data)22 b(t)n(yp)r(e)f(of)g(bu\013er)100
1034 y Fg(IN)i(group)32 b Fk(group)23 b(handle)100 1157 y Fg(IN)f(ro)r(ot)33
b Fk(rank)22 b(of)f(broadcast)j(ro)r(ot)d(\(in)n(teger\))221
1276 y Ff(MPI)p 318 1276 V 26 w(BCAST\()36 b(buffer)p 753 1276
V 28 w(handle)q(,)h(group,)f(root)f(\))21 b Fk(is)100 1395
y Ff(MPI_GS)q(I)q(Z)q(E\()37 b(group,)g(&size)f(\);)100 1471
y(MPI_RA)q(N)q(K)q(\()g(group,)h(&rank)f(\);)100 1546 y(MPI_IR)q(E)q(C)q(V\()
q(h)q(a)q(n)q(dl)q(e)q(,)h(buffer)q(_h)q(a)q(n)q(d)q(l)q(e,)g(root,)f(tag,)f
(group,)i(return)q(_h)q(a)q(n)q(d)q(l)q(e\))q(;)100 1621 y(if)c(\(rank)q(==)q
(r)q(o)q(o)q(t\))195 1697 y(for)i(\(i=0;)h(i)c(<)h(size;)j(i++\))291
1772 y(MPI_SE)q(N)q(D\()q(b)q(u)q(f)q(f)q(er)q(_)q(h)q(a)q(nd)q(l)q(e)q(,)h
(i,)c(tag,)i(group\))q(;)100 1847 y(MPI_WA)q(I)q(T)q(\(h)q(a)q(n)q(d)q(le)q
(\);)100 2006 y Fg(Circular)23 b(shift)100 2168 y(MPI)p 244
2168 21 3 v 25 w(CSHIFT\()f(in)n(buf,)h(outbuf,)h(group,)h(shift\))221
2290 y Fk(Pro)r(cess)30 b(with)h(rank)h Ff(i)f Fk(sends)g(the)g(data)h(in)f
(its)f(input)j(bu\013er)f(to)f(pro)r(cess)h(with)f(rank)g(\()p
Ff(i)22 b Fk(+)100 2366 y Ff(shift)p Fk(\))f(mo)r(d)16 b Ff(group)p
592 2366 20 3 v 27 w(size)p Fk(,)22 b(who)c(receiv)n(es)h(the)g(data)g(in)g
(its)f(output)j(bu\013er.)28 b(All)17 b(pro)r(cesses)i(mak)n(e)e(the)100
2441 y(call)k(with)g(the)h(same)e(v)m(alues)h(for)h Ff(group)p
Fk(,)j(and)d Ff(shift)p Fk(.)33 b(The)21 b Ff(shift)k Fk(v)m(alue)c(can)g(b)r
(e)h(p)r(ositiv)n(e,)f(zero,)h(or)100 2516 y(negativ)n(e.)100
2649 y Fg(IN)h(in)n(buf)29 b Fk(handle)23 b(to)e(input)h(bu\013er)h
(descriptor)100 2772 y Fg(OUT)g(outbuf)31 b Fk(handle)22 b(to)f(output)j
(bu\013er)f(descriptor)100 2895 y Fg(IN)g(group)32 b Fk(handle)23
b(to)e(group)100 3018 y Fg(IN)i(shift)30 b Fk(in)n(teger)221
3198 y Fg(MPI)p 365 3198 21 3 v 26 w(CSHIFT1\()22 b(buf,)h(group,)j(shift\))
221 3321 y Fk(Pro)r(cess)19 b(with)i(rank)g Ff(i)f Fk(sends)h(the)g(data)g
(in)f(its)g(bu\013er)i(to)e(pro)r(cess)h(with)f(rank)i(\()p
Ff(i)13 b Fk(+)f Ff(shift)p Fk(\))22 b(mo)r(d)100 3396 y Ff(group)p
259 3396 20 3 v 27 w(size)p Fk(,)g(who)e(receiv)n(es)f(the)h(data)g(in)g(the)
f(same)f(bu\013er.)28 b(All)18 b(pro)r(cesses)i(mak)n(e)e(the)i(call)e(with)i
(the)100 3471 y(same)f(v)m(alues)h(for)i Ff(group)p Fk(,)i(and)e
Ff(shift)p Fk(.)31 b(The)20 b Ff(shift)25 b Fk(v)m(alue)20
b(can)h(b)r(e)g(p)r(ositiv)n(e,)g(zero,)g(or)g(negativ)n(e.)100
3605 y Fg(INOUT)h(buf)30 b Fk(handle)22 b(to)f(bu\013er)i(descriptor)p
eop
%%Page: 5 6
bop 100 -134 a Fh(1.5.)47 b(D)n(A)-5 b(T)g(A)19 b(MO)n(VE)g(FUNCTIONS)1463
b Fk(5)100 60 y Fg(IN)23 b(group)32 b Fk(handle)23 b(to)e(group)100
189 y Fg(IN)i(shift)30 b Fk(in)n(teger)221 382 y Fg(MPI)p 365
382 21 3 v 26 w(CSHIFTC\()21 b(in)n(buf,)h(outbuf,)i(len,)f(t)n(yp)r(e,)g
(group,)i(shift\))221 505 y Fk(Beha)n(v)n(es)j(lik)n(e)e Ff(MPI)p
675 505 20 3 v 26 w(CSHIFT)q Fk(,)k(with)d(bu\013ers)i(restricted)g(to)f(b)r
(e)e(blo)r(c)n(ks)h(of)h(n)n(umeric)f(units.)47 b(All)100 580
y(pro)r(cesses)21 b(mak)n(e)e(the)j(call)e(with)h(the)g(same)e(v)m(alues)i
(for)g Ff(len,)35 b(group)q Fk(,)24 b(and)d Ff(shift)q Fk(.)100
709 y Fg(IN)i(in)n(buf)29 b Fk(initial)21 b(lo)r(cation)h(of)f(input)i
(bu\013er)100 837 y Fg(OUT)f(outbuf)30 b Fk(initial)21 b(lo)r(cation)h(of)f
(output)j(bu\013er)100 966 y Fg(IN)f(len)30 b Fk(n)n(um)n(b)r(er)21
b(of)g(en)n(tries)h(in)f(input)i(\(and)f(output\))i(bu\013ers)100
1094 y Fg(IN)f(t)n(yp)r(e)30 b Fk(data)22 b(t)n(yp)r(e)f(of)g(bu\013er)100
1223 y Fg(IN)i(group)32 b Fk(handle)23 b(to)e(group)100 1351
y Fg(IN)i(shift)30 b Fk(in)n(teger)221 1527 y Fg(MPI)p 365
1527 21 3 v 26 w(CSHIFTC1\()21 b(buf,)j(len,)e(t)n(yp)r(e,)i(group,)h
(shift\))221 1651 y Fk(Beha)n(v)n(es)h(lik)n(e)e Ff(MPI)p 671
1651 20 3 v 25 w(CSHIFT)q(1)q Fk(,)29 b(with)c(bu\013ers)h(restricted)h(to)e
(b)r(e)f(blo)r(c)n(ks)h(of)g(n)n(umeric)g(units.)40 b(All)100
1726 y(pro)r(cesses)21 b(mak)n(e)e(the)j(call)e(with)h(the)g(same)e(v)m
(alues)i(for)g Ff(len,)35 b(group)q Fk(,)24 b(and)d Ff(shift)q
Fk(.)100 1854 y Fg(INOUT)h(buf)30 b Fk(initial)21 b(lo)r(cation)g(of)g
(bu\013er)100 1983 y Fg(IN)i(len)30 b Fk(n)n(um)n(b)r(er)21
b(of)g(en)n(tries)h(in)f(input)i(\(and)f(output\))i(bu\013ers)100
2111 y Fg(IN)f(t)n(yp)r(e)30 b Fk(data)22 b(t)n(yp)r(e)f(of)g(bu\013er)100
2240 y Fg(IN)i(group)32 b Fk(handle)23 b(to)e(group)100 2368
y Fg(IN)i(shift)30 b Fk(in)n(teger)221 2497 y Ff(MPI)p 318
2497 V 26 w(CSHIFT\()37 b(inbuf,)g(outbuf)q(,)f(group,)h(shift\))25
b Fk(is)100 2626 y Ff(MPI_GS)q(I)q(Z)q(E\()37 b(group,)g(&size)f(\);)100
2702 y(MPI_RA)q(N)q(K)q(\()g(group,)h(&rank)f(\);)100 2777
y(MPI_IS)q(E)q(N)q(D\()h(handle)q(,)g(inbuf,)f(mod\(ra)q(n)q(k)q(+)q(sh)q(i)q
(f)q(t)q(,)g(size\),)h(tag,)e(group\))q(;)100 2852 y(MPI_RE)q(C)q(V)q(\()h
(outbuf)q(,)h(mod\(ra)q(nk)q(-)q(s)q(h)q(i)q(ft)q(,)q(s)q(i)q(ze)q(\))q(,)g
(tag,)e(group,)i(return_)q(h)q(a)q(n)q(d)q(le)q(\);)100 2927
y(MPI_WA)q(I)q(T)q(\(h)q(a)q(n)q(d)q(le)q(\))q(;)100 3092 y
Fg(End-o\013)23 b(shift)100 3256 y(MPI)p 244 3256 21 3 v 25
w(EOSHIFT\()f(in)n(buf,)g(outbuf,)i(group,)i(shift\))221 3379
y Fk(Pro)r(cess)21 b(with)h(rank)h Ff(i)p Fk(,)f(max)o(\()p
Ff(0)p Fe(;)10 b Fi(\000)p Ff(shif)q(t)q Fk(\))23 b Fi(\024)18
b Ff(i)i Fe(<)e Ff(min)p Fk(\()p Ff(si)q(z)q(e)q Fe(;)10 b
Ff(s)q(iz)q(e)19 b Fi(\000)14 b Ff(shift)q Fk(\),)25 b(sends)e(the)f(data)100
3454 y(in)17 b(its)f(input)j(bu\013er)g(to)e(pro)r(cess)g(with)g(rank)h
Ff(i+)34 b(shift)p Fk(,)20 b(who)d(receiv)n(es)g(the)h(data)g(in)f(its)f
(output)k(bu\013er.)100 3530 y(The)j(output)j(bu\013er)f(of)e(pro)r(cesses)g
(whic)n(h)h(do)g(not)g(receiv)n(e)f(data)h(is)e(left)i(unc)n(hanged.)37
b(All)22 b(pro)r(cesses)100 3605 y(mak)n(e)d(the)j(call)e(with)h(the)g(same)e
(v)m(alues)h(for)i Ff(group)p Fk(,)i(and)e Ff(shift)p Fk(.)p
eop
%%Page: 6 7
bop 100 -134 a Fk(6)968 b Fh(CHAPTER)17 b(1.)48 b(COLLECTIVE)17
b(COMMUNICA)-5 b(TION)100 60 y Fg(IN)23 b(in)n(buf)29 b Fk(handle)23
b(to)e(input)h(bu\013er)h(descriptor)100 190 y Fg(OUT)g(outbuf)31
b Fk(handle)22 b(to)f(output)j(bu\013er)f(descriptor)100 319
y Fg(IN)g(group)32 b Fk(handle)23 b(to)e(group)100 449 y Fg(IN)i(shift)30
b Fk(in)n(teger)221 643 y Fg(MPI)p 365 643 21 3 v 26 w(EOSHIFT1\()22
b(buf,)h(group,)j(shift\))221 767 y Fk(Pro)r(cess)21 b(with)h(rank)h
Ff(i)p Fk(,)f(max)o(\()p Ff(0)p Fe(;)10 b Fi(\000)p Ff(shif)q(t)q
Fk(\))23 b Fi(\024)18 b Ff(i)i Fe(<)e Ff(min)p Fk(\()p Ff(si)q(z)q(e)q
Fe(;)10 b Ff(s)q(iz)q(e)19 b Fi(\000)14 b Ff(shift)q Fk(\),)25
b(sends)e(the)f(data)100 842 y(in)e(its)h(bu\013er)h(to)e(pro)r(cess)h(with)g
(rank)h Ff(i+)33 b(shift)p Fk(,)24 b(who)d(receiv)n(es)f(the)h(data)h(in)e
(the)h(same)e(bu\013er.)29 b(The)100 917 y(output)24 b(bu\013er)f(of)e(pro)r
(cesses)g(whic)n(h)h(do)f(not)h(receiv)n(e)f(data)h(is)e(left)i(unc)n
(hanged.)31 b(All)20 b(pro)r(cesses)h(mak)n(e)100 993 y(the)g(call)f(with)i
(the)f(same)e(v)m(alues)h(for)i Ff(group)p Fk(,)i(and)e Ff(shift)p
Fk(.)100 1140 y Fg(INOUT)g(buf)30 b Fk(handle)22 b(to)f(bu\013er)i
(descriptor)100 1269 y Fg(IN)g(group)32 b Fk(handle)23 b(to)e(group)100
1399 y Fg(IN)i(shift)30 b Fk(in)n(teger)221 1594 y Fg(MPI)p
365 1594 V 26 w(EOSHIFTC\()20 b(in)n(buf,)j(outbuf,)h(len,)e(t)n(yp)r(e,)i
(group,)h(shift\))221 1717 y Fk(Beha)n(v)n(es)h(lik)n(e)e Ff(MPI)p
671 1717 20 3 v 25 w(EOSHIF)q(T)q Fk(,)k(with)d(bu\013ers)h(restricted)h(to)e
(b)r(e)g(blo)r(c)n(ks)g(of)g(n)n(umeric)g(units.)40 b(All)100
1792 y(pro)r(cesses)21 b(mak)n(e)e(the)j(call)e(with)h(the)g(same)e(v)m
(alues)i(for)g Ff(len,)35 b(group)q Fk(,)24 b(and)d Ff(shift)q
Fk(.)100 1922 y Fg(IN)i(in)n(buf)29 b Fk(initial)21 b(lo)r(cation)h(of)f
(input)i(bu\013er)100 2051 y Fg(OUT)f(outbuf)30 b Fk(initial)21
b(lo)r(cation)h(of)f(output)j(bu\013er)100 2181 y Fg(IN)f(len)30
b Fk(n)n(um)n(b)r(er)21 b(of)g(en)n(tries)h(in)f(input)i(\(and)f(output\))i
(bu\013ers)100 2311 y Fg(IN)f(t)n(yp)r(e)30 b Fk(data)22 b(t)n(yp)r(e)f(of)g
(bu\013er)100 2440 y Fg(IN)i(group)32 b Fk(handle)23 b(to)e(group)100
2570 y Fg(IN)i(shift)30 b Fk(in)n(teger)221 2746 y Fg(MPI)p
365 2746 21 3 v 26 w(EOSHIFTC1\()21 b(buf,)j(len,)e(t)n(yp)r(e,)h(group,)j
(shift\))221 2870 y Fk(Beha)n(v)n(es)f(lik)n(e)f Ff(MPI)p 670
2870 20 3 v 25 w(EOSHIF)q(T)q(1)p Fk(,)29 b(with)24 b(bu\013er)i(restricted)g
(to)f(b)r(e)f(blo)r(c)n(ks)g(of)g(n)n(umeric)g(units.)39 b(All)100
2945 y(pro)r(cesses)21 b(mak)n(e)e(the)j(call)e(with)h(the)g(same)e(v)m
(alues)i(for)g Ff(len,)35 b(group)q Fk(,)24 b(and)d Ff(shift)q
Fk(.)100 3075 y Fg(INOUT)h(buf)30 b Fk(initial)21 b(lo)r(cation)g(of)g
(bu\013er)100 3204 y Fg(IN)i(len)30 b Fk(n)n(um)n(b)r(er)21
b(of)g(en)n(tries)h(in)f(bu\013er)100 3334 y Fg(IN)i(t)n(yp)r(e)30
b Fk(data)22 b(t)n(yp)r(e)f(of)g(bu\013er)100 3463 y Fg(IN)i(group)32
b Fk(handle)23 b(to)e(group)100 3593 y Fg(IN)i(shift)30 b Fk(in)n(teger)p
eop
Gather

MPI_GATHER( inbuf, list_of_outbufs, group, root, return_status)

Each process (including the root process) sends the content of its input buffer
to the root process. The root process places all the incoming messages in the
location specified by the output buffer handle corresponding to the sender's
rank. For example, the root places the data from process with rank 3 in the
location specified by the third buffer descriptor in the list of outbufs. The
list_of_outbufs argument is ignored for all non-root processes. The routine is
called by all members of group using the same arguments for group and root. The
input buffer of each process may have a different length.

IN inbuf handle to input buffer descriptor
IN list_of_outbufs list of buffer descriptor handles (root)
IN group group handle
IN root rank of receiving process (integer)
OUT return_status return status handle

Discussion: Do we want the collective routines to have return status handles?
And if so, what information do we want the handle to be able to return?
MPI_GATHERC( inbuf, outbuf, inlen, type, group, root)

MPI_GATHERC behaves like MPI_GATHER restricted to block buffers, and with the
additional restriction that all input buffers should have the same length. All
processes should provide the same values for inlen, group, and root.

IN inbuf first variable of input buffer (choice)
OUT outbuf first variable of output buffer -- significant only at root (matches type)
IN inlen number of (word) variables in input buffer (integer)
IN type data type of buffer
IN group group handle
IN root rank of receiving process (integer)

MPI_GATHERC( inbuf, outbuf, inlen, type, group, root) is

MPI_GSIZE( &size, group);
MPI_RANK( &rank, group);
MPI_ISENDC(handle, inbuf, inlen, root, tag, group);
if (rank == root)
    for (i=0; i < size; i++)
    {
        MPI_RECVC(outbuf, inlen, i, tag, group, return_status);
        outbuf += inlen;
    }
MPI_WAIT(handle);
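The gather-by-rank-order semantics can be modeled in a few lines of plain Python (an illustration only, not MPI code; the function name gatherc is invented for the example):

```python
def gatherc(inbufs, inlen):
    """Model of what the root holds after a block gather: one inlen-entry
    block from every process, concatenated in sender-rank order (this
    mirrors the root's receive loop advancing outbuf by inlen each time)."""
    out = []
    for rank, buf in enumerate(inbufs):
        assert len(buf) == inlen        # all input buffers have equal length
        out.extend(buf)                 # block from rank lands at offset rank*inlen
    return out

# 3 processes, 2 entries each: the root ends up with the blocks in rank order.
print(gatherc([[1, 2], [3, 4], [5, 6]], 2))   # [1, 2, 3, 4, 5, 6]
```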
Scatter

MPI_SCATTER( list_of_inbufs, outbuf, group, root, return_status)

The root process sends the content of its i-th input buffer to the process with
rank i; each process (including the root process) stores the incoming message in
its output buffer. The routine is called by all members of the group using the
same arguments for group and root.

IN list_of_inbufs list of buffer descriptor handles
IN outbuf buffer descriptor handle
IN group handle
IN root rank of sending process (integer)
OUT return_status return status handle

MPI_SCATTER( list_of_inbufs, outbuf, group, root, return_status) is

MPI_GSIZE( group, &size );
MPI_RANK( group, &rank );
MPI_IRECV(handle, outbuf, root, tag, group);
if (rank == root)
    for (i=0; i < size; i++)
        MPI_SEND(inbuf[i], i, tag, group);
MPI_WAIT(handle, return_status);
MPI_SCATTERC( inbuf, outbuf, len, type, group, root)

MPI_SCATTERC behaves like MPI_SCATTER restricted to block buffers, and with the
additional restriction that all output buffers have the same length. The input
buffer block of the root process is partitioned into n consecutive blocks, each
consisting of len words. The i-th block is sent to the i-th process in the group
and stored in its output buffer. The routine is called by all members of the
group using the same arguments for group, len, and root.

IN inbuf first entry in input buffer -- significant only at root (choice).
OUT outbuf first entry in output buffer (choice).
IN len number of entries to be stored in output buffer (integer)
IN type data type of buffer
IN group handle
IN root rank of sending process (integer)

MPI_SCATTERC( inbuf, outbuf, outlen, type, group, root) is

MPI_GSIZE( &size, group);
MPI_RANK( &rank, group);
MPI_IRECVC( handle, outbuf, outlen, type, root, tag, group, return_handle);
if (rank == root)
    for (i=0; i < size; i++)
    {
        MPI_SENDC(inbuf, outlen, type, i, tag, group);
        inbuf += outlen;
    }
MPI_WAIT(handle);
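The block partitioning at the root can be modeled in plain Python (an illustration only, not MPI code; the function name scatterc is invented for the example):

```python
def scatterc(root_inbuf, block_len, rank):
    """Model of the block scatter: the root's buffer is cut into consecutive
    blocks of block_len entries, and the i-th block goes to the process
    with rank i."""
    return root_inbuf[rank * block_len : (rank + 1) * block_len]

# Root buffer of 6 entries scattered over 3 ranks, 2 entries each.
root_buf = [0, 1, 2, 3, 4, 5]
print([scatterc(root_buf, 2, r) for r in range(3)])   # [[0, 1], [2, 3], [4, 5]]
```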
All-to-all scatter

MPI_ALLSCATTER( list_of_inbufs, list_of_outbufs, group, return_status)

Each process in the group sends its i-th buffer in its input buffer list to the
process with rank i (itself included); each process places the incoming messages
in the location specified by the output buffer handle corresponding to the rank
of the sender. For example, each process places the data from process with rank
3 in the location specified by the third buffer descriptor in the list of
outbufs. The routine is called by all members of the group using the same
arguments for group.

IN list_of_inbufs list of buffer descriptor handles
IN list_of_outbufs list of buffer descriptor handles
IN group handle
OUT return_status return status handle
MPI_ALLSCATTERC( inbuf, outbuf, len, type, group)

MPI_ALLSCATTERC behaves like MPI_ALLSCATTER restricted to block buffers, and
with the additional restriction that all blocks sent from one process to another
have the same length. The input buffer block of each process is partitioned into
n consecutive blocks, each consisting of len words. The i-th block is sent to
the i-th process in the group. Each process concatenates the incoming messages,
in the order of the senders' ranks, and stores them in its output buffer. The
routine is called by all members of the group using the same arguments for
group and len.

IN inbuf first entry in input buffer (matches type).
OUT outbuf first entry in output buffer (matches type).
IN len number of entries sent from each process to each other (integer).
IN type data type of buffer
IN group handle

MPI_ALLSCATTERC( inbuf, outbuf, len, type, group) is

MPI_GSIZE( group, &size );
MPI_RANK( group, &rank );
for (i=0; i < size; i++)
{
    MPI_IRECVC(recv_handles[i], outbuf, len, type, tag, group, return_handle);
    outbuf += len;
}
for (i=0; i < size; i++)
{
    MPI_ISENDC(send_handles[i], inbuf, len, type, i, tag, group);
    inbuf += len;
}
MPI_WAITALL(send_handles);
MPI_WAITALL(recv_handles);
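The all-to-all scatter of equal blocks amounts to a transpose of the block matrix, which a short plain-Python model makes concrete (an illustration only, not MPI code; the function name allscatterc is invented for the example):

```python
def allscatterc(inbufs, block_len):
    """Model of the block all-to-all scatter: each process cuts its input
    into blocks of block_len entries; block i goes to rank i; each receiver
    concatenates the incoming blocks in sender-rank order."""
    size = len(inbufs)
    return [
        [x for src in range(size)                      # senders in rank order
           for x in inbufs[src][r * block_len : (r + 1) * block_len]]
        for r in range(size)                           # one output per receiver
    ]

# 2 ranks, 1 entry per block: rank 0 holds [1, 2], rank 1 holds [3, 4].
print(allscatterc([[1, 2], [3, 4]], 1))   # [[1, 3], [2, 4]]
```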
All-to-all broadcast

MPI_ALLCAST( inbuf, list_of_outbufs, group, return_status)

Each process in the group broadcasts its input buffer to all processes
(including itself); each process places the incoming messages in the location
specified by the output buffer handle corresponding to the rank of the sender.
For example, each process places the data from process with rank 3 in the
location specified by the third buffer descriptor in the list of outbufs. The
routine is called by all members of the group using the same arguments for
group.

IN inbuf buffer descriptor handle for input buffer
IN list_of_outbufs list of buffer descriptor handles
IN group handle
OUT return_status return status handle
b Fk(return)23 b(status)f(handle)221 2800 y Fg(MPI)p 365 2800
V 26 w(ALLCASTC\()e(in)n(buf,)i(outbuf,)i(len,)f(t)n(yp)r(e,)g(group\))221
2923 y Ff(MPI)p 318 2923 20 3 v 26 w(ALLCAST)q(C)28 b Fk(b)r(eha)n(v)n(es)c
(lik)n(e)e Ff(MPI)p 1055 2923 V 25 w(ALLCAS)q(T)27 b Fk(restricted)f(to)d
(blo)r(c)n(k)g(bu\013ers,)i(and)f(with)f(the)h(ad-)100 2998
y(ditional)g(restriction)h(that)e(all)f(blo)r(c)n(ks)h(sen)n(t)g(from)f(one)g
(pro)r(cess)h(to)g(another)i(ha)n(v)n(e)e(the)g(same)d(length.)100
3073 y(Eac)n(h)j(pro)r(cess)g(concatenates)i(the)f(incoming)e(messages,)g(in)
g(the)i(order)h(of)e(the)g(senders')h(ranks,)g(and)100 3149
y(store)c(them)d(in)i(its)g(output)i(bu\013er.)29 b(The)18
b(routine)j(is)d(called)h(b)n(y)g(all)f(mem)n(b)r(ers)e(of)j(the)g(group)i
(using)f(the)100 3224 y(same)f(argumen)n(ts)j(for)g Ff(group)p
Fk(,)i(and)e Ff(len)p Fk(.)100 3351 y Fg(IN)h(in)n(buf)29 b
Fk(\014rst)22 b(en)n(try)h(in)e(input)h(bu\013er)h(\(c)n(hoice\).)29
b(ro)r(ot)21 b(\(in)n(teger\))100 3478 y Fg(OUT)h(outbuf)30
b Fk(\014rst)22 b(en)n(try)h(in)e(output)i(bu\013er)g(\(c)n(hoice\).)100
3605 y Fg(IN)g(len)30 b Fk(n)n(um)n(b)r(er)21 b(of)g(en)n(tries)h(sen)n(t)g
(from)e(eac)n(h)i(pro)r(cess)f(to)g(eac)n(h)g(other)i(\(including)g(itself)5
b(\).)p eop
%%Page: 11 12
bop 100 -134 a Fh(1.6.)47 b(GLOBAL)19 b(COMPUTE)f(OPERA)-5
b(TIONS)1171 b Fk(11)100 60 y Fg(IN)23 b(t)n(yp)r(e)30 b Fk(data)22
b(t)n(yp)r(e)f(of)g(bu\013er)100 185 y Fg(IN)i(group)32 b Fk(group)23
b(handle)221 310 y Ff(MPI)p 318 310 20 3 v 26 w(ALLCAST)q(C)q(\()37
b(inbuf,)f(outbu)q(f,)h(len,)e(type,)h(group\))25 b Fk(is)100
436 y Ff(MPI_GS)q(I)q(Z)q(E\()37 b(group,)g(&size)f(\);)100
511 y(MPI_RA)q(N)q(K)q(\()g(group,)h(&rank)f(\);)100 586 y(for)e(\(i=0;)i(i)d
(<)f(rank;)k(i++\))195 661 y({)227 737 y(MPI_IR)q(E)q(C)q(V)q(C\()q(r)q(e)q
(c)q(v)q(_h)q(a)q(n)q(d)q(le)q(s)q([)q(i)q(])q(,)g(outbuf)q(,)h(len,)e(type,)
h(tag,)f(group,)i(return_)q(h)q(a)q(n)q(d)q(le)q(\))q(;)227
812 y(outbuf)g(+=)c(len;)195 887 y(})100 963 y(for)h(\(i=0;)i(i)d(<)f(size;)k
(i++\))195 1038 y({)227 1113 y(MPI_IS)q(E)q(N)q(D)q(C\()q(s)q(e)q(n)q(d)q(_h)
q(a)q(n)q(d)q(le)q([)q(i)q(])q(,)g(inbuf,)h(len,)e(type,)h(i,)e(tag,)h
(group\))q(;)195 1188 y(})100 1264 y(MPI_WA)q(I)q(T)q(AL)q(L)q(\()q(s)q(en)q
(d)q(_)q(h)q(a)q(nd)q(l)q(e)q(\))q(;)100 1339 y(MPI_WA)q(I)q(T)q(AL)q(L)q(\()
q(r)q(ec)q(v)q(_)q(h)q(a)q(nd)q(l)q(e)q(\))q(;)100 1530 y Fj(1.6)94
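Since every process contributes one block and every process receives all of them in rank order, the output buffers all end up identical; a plain-Python model shows this (an illustration only, not MPI code; the function name allcastc is invented for the example):

```python
def allcastc(inbufs, block_len):
    """Model of the block all-to-all broadcast: every process broadcasts its
    block_len-entry input block; each process concatenates what it receives
    in sender-rank order, so all outputs are equal."""
    out = [x for buf in inbufs for x in buf[:block_len]]
    return [list(out) for _ in inbufs]   # one identical copy per process

print(allcastc([[1], [2], [3]], 1))      # [[1, 2, 3], [1, 2, 3], [1, 2, 3]]
```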
1.6 Global Compute Operations

Reduce
MPI_REDUCE( inbuf, outbuf, group, root, op)

Combines the values provided in the input buffer of each process in the group,
using the operation op, and returns the combined value in the output buffer of
the process with rank root. Each process can provide one value, or a sequence of
values, in which case the combine operation is executed pointwise on each entry
of the sequence. For example, if the operation is max and the input buffers
contain two floating point numbers, then outbuf(1) = global max(inbuf(1)) and
outbuf(2) = global max(inbuf(2)). All input buffers should define sequences of
equal length of entries of types that match the type of the operands of op. The
output buffer should define a sequence of the same length of entries of types
that match the type of the result of op. (Note that, here as for all other
communication operations, the type of entries inserted in a message depends on
the information provided by the input buffer descriptor, and not on the
declarations of these variables in the calling program. The types of the
variables in the calling program need not match the types defined by the buffer
descriptor, but in such a case the outcome of a reduce operation may be
implementation dependent.)

The operation defined by op is associative and commutative, and the
implementation can take advantage of associativity and commutativity in order to
change the order of evaluation. The routine is called by all group members using
the same arguments for group, root and op.

IN inbuf handle to input buffer
IN outbuf handle to output buffer -- significant only at root
IN group handle to group
IN root rank of root process (integer)
IN op operation

The buffer descriptor contains data type information so that the correct form of
the operation can be performed. We list below the operations which are
supported.

MPI_MAX maximum
MPI_MIN minimum
MPI_SUM sum
MPI_PROD product
MPI_AND and (logical or bit-wise integer)
MPI_OR or (logical or bit-wise integer)
MPI_XOR xor (logical or bit-wise integer)
MPI_MAXLOC rank of process with maximum value
MPI_MINLOC rank of process with minimum value
MPI_REDUCEC( inbuf, outbuf, len, type, group, root, op)

Is the same as MPI_REDUCE, restricted to a block buffer.

IN inbuf first location in input buffer
OUT outbuf first location in output buffer -- significant only at root
IN len number of entries in input and output buffer (integer)
IN type data type of buffer
IN group handle to group
IN root rank of root process (integer)
IN op operation
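The pointwise combine across equal-length buffers can be modeled in plain Python (an illustration only, not MPI code; the function name reducec is invented for the example):

```python
from functools import reduce as fold

def reducec(inbufs, op):
    """Model of what the root holds after a block reduce: the combine
    operation op is applied across the j-th entry of every process's
    input buffer, for each j."""
    return [fold(op, column) for column in zip(*inbufs)]

# 3 processes, 2 entries each: each output entry combines one "column".
bufs = [[1.0, 9.0], [5.0, 2.0], [3.0, 4.0]]
print(reducec(bufs, max))                    # [5.0, 9.0]
print(reducec(bufs, lambda a, b: a + b))     # [9.0, 15.0]
```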
MPI_USER_REDUCE( inbuf, outbuf, group, root, function)

Same as the reduce operation function above except that a user supplied function
is used. function is an associative and commutative function with two arguments.
The types of the two arguments and of the returned value of the function, and
the types of all entries in the input and output buffers all agree. The output
buffer has the same length as the input buffer.

IN inbuf handle to input buffer
IN outbuf handle to output buffer -- significant only at root
IN group handle to group
IN root rank of root process (integer)

MPI_USER_REDUCEC( inbuf, outbuf, len, type, group, root, function)

Is the same as MPI_USER_REDUCE, restricted to a block buffer.

IN inbuf first location in input buffer
OUT outbuf first location in output buffer -- significant only at root
IN len number of entries in input and output buffer (integer)
IN type data type of buffer
IN group handle to group
IN root rank of root process (integer)
IN function user provided function
Discussion: Do we also want a version of reduce that broadcasts the result to
all processes in the group? (This can be achieved by a reduce followed by a
broadcast, but a combined function may be somewhat more efficient.) These would
be, respectively:

MPI_GOP( inbuf, outbuf, group, op)

MPI_GOPC( inbuf, outbuf, len, type, group, op)

MPI_USER_GOP( inbuf, outbuf, group, function)

MPI_USER_GOPC( inbuf, outbuf, len, type, group, function)

Do we want a user provided function (two IN parameters, one OUT value), or a
user provided procedure that overwrites the second input (i.e. one IN param, one
INOUT param, the equivalent of a C "a op= b" type assignment)? The second choice
may allow a more efficient implementation, without changing the semantics of the
MPI call.

Scan
g(of)f(the)h(MPI)f(call.)100 3016 y Fg(Scan)100 3180 y(MPI)p
244 3180 21 3 v 25 w(SCAN\()k(in)n(buf,)g(outbuf,)i(group,)i(op)e(\))221
3304 y Fk(MPI)p 345 3304 19 3 v 21 w(SCAN)16 b(is)g(used)i(to)g(p)r(erform)g
(a)f(parallel)h(pre\014x)h(with)f(resp)r(ect)g(to)f(an)h(asso)r(ciativ)n(e)g
(reduction)100 3379 y(op)r(eration)k(on)e(data)h(distributed)i(across)d(the)g
(group.)29 b(The)19 b(op)r(eration)j(returns)h(in)c(the)h(output)j(bu\013er)
100 3454 y(of)d(the)f(pro)r(cess)h(with)g(rank)g Ff(i)g Fk(the)f(reduction)k
(of)c(the)h(v)m(alues)f(in)g(the)h(input)h(bu\013ers)g(of)f(pro)r(cesses)f
(with)100 3530 y(ranks)k Ff(0,...,)q(i)q Fk(.)34 b(The)22 b(t)n(yp)r(e)h(of)f
(op)r(erations)i(supp)r(orted)i(and)d(their)g(seman)n(tic,)f(and)h(the)f
(constrain)n(ts)100 3605 y(on)f(input)i(and)f(output)h(bu\013ers)g(are)e(as)f
(for)i Ff(MPI)p 1225 3605 20 3 v 25 w(REDUC)q(E)p Fk(.)p eop
%%Page: 14 15
bop 100 -134 a Fk(14)938 b Fh(CHAPTER)17 b(1.)48 b(COLLECTIVE)17
b(COMMUNICA)-5 b(TION)100 60 y Fg(IN)23 b(in)n(buf)29 b Fk(handle)23
b(to)e(input)h(bu\013er)100 190 y Fg(IN)h(outbuf)31 b Fk(handle)22
b(to)f(output)j(bu\013er)100 319 y Fg(IN)f(group)32 b Fk(handle)23
b(to)e(group)100 448 y Fg(IN)i(op)31 b Fk(op)r(eration)221
625 y Fg(MPI)p 365 625 21 3 v 26 w(SCANC\()14 b(in)n(buf,)j(outbuf,)h(len,)g
(t)n(yp)r(e,)f(group,)k(op)c(\))d Fk(Same)g(as)g Ff(MPI)p 2070
625 20 3 v 26 w(SCAN)p Fk(,)j(restricted)100 748 y(to)k(blo)r(c)n(k)g
(bu\013ers.)100 895 y Fg(IN)i(in)n(buf)29 b Fk(\014rst)22 b(input)h(bu\013er)
g(elemen)n(t)d(\(c)n(hoice\))100 1024 y Fg(OUT)i(outbuf)30
b Fk(\014rst)22 b(output)i(bu\013er)f(elemen)n(t)d(\(c)n(hoice\))100
1154 y Fg(IN)j(len)30 b Fk(n)n(um)n(b)r(er)21 b(of)g(en)n(tries)h(in)f(input)
i(and)f(output)h(bu\013er)g(\(in)n(teger\))100 1283 y Fg(IN)g(t)n(yp)r(e)30
b Fk(data)22 b(t)n(yp)r(e)f(of)g(bu\013er)100 1412 y Fg(IN)i(group)32
b Fk(handle)23 b(to)e(group)100 1542 y Fg(IN)i(op)31 b Fk(op)r(eration)221
1736 y Fg(MPI)p 365 1736 21 3 v 26 w(USER)p 583 1736 V 23 w(SCAN\()22
b(in)n(buf,)g(outbuf,)i(group,)i(function)d(\))221 1860 y Fk(Same)16
b(as)h(the)h(scan)f(op)r(eration)j(function)g(ab)r(o)n(v)n(e)e(except)g(that)
g(a)f(user)h(supplied)h(function)h(is)c(used.)100 1935 y Ff(functi)q(o)q(n)29
b Fk(is)24 b(an)h(asso)r(ciativ)n(e)g(and)h(comm)n(utativ)n(e)e(function)j
(with)e(t)n(w)n(o)h(argumen)n(ts.)40 b(The)25 b(t)n(yp)r(es)g(of)100
2010 y(the)c(t)n(w)n(o)h(argumen)n(ts)g(and)g(of)f(the)g(returned)j(v)m
(alues)d(all)f(agree.)100 2140 y Fg(IN)j(in)n(buf)29 b Fk(handle)23
b(to)e(input)h(bu\013er)100 2269 y Fg(IN)h(outbuf)31 b Fk(handle)22
b(to)f(output)j(bu\013er)100 2399 y Fg(IN)f(group)32 b Fk(handle)23
b(to)e(group)100 2528 y Fg(IN)i(function)30 b Fk(user)22 b(pro)n(vided)h
(function)221 2705 y Fg(MPI)p 365 2705 V 26 w(USER)p 583 2705
V 23 w(SCANC\()e(in)n(buf,)h(outbuf,)i(len,)f(t)n(yp)r(e,)g(group,)j
(function\))221 2828 y Fk(Is)20 b(same)f(as)h Ff(MPI)p 610
2828 20 3 v 26 w(USER)p 760 2828 V 26 w(SCAN)p Fk(,)j(restricted)g(to)e(a)g
(blo)r(c)n(k)g(bu\013er.)100 2958 y Fg(IN)i(in)n(buf)29 b Fk(\014rst)22
b(lo)r(cation)g(in)f(input)h(bu\013er)100 3087 y Fg(OUT)g(outbuf)30
b Fk(\014rst)22 b(lo)r(cation)g(in)f(output)i(bu\013er)100
3217 y Fg(IN)g(len)30 b Fk(n)n(um)n(b)r(er)21 b(of)g(en)n(tries)h(in)f(input)
i(and)f(output)h(bu\013er)g(\(in)n(teger\))100 3346 y Fg(IN)g(t)n(yp)r(e)30
b Fk(data)22 b(t)n(yp)r(e)f(of)g(bu\013er)100 3475 y Fg(IN)i(group)32
b Fk(handle)23 b(to)e(group)100 3605 y Fg(IN)i(function)30
b Fk(user)22 b(pro)n(vided)h(function)p eop
Discussion: Do we want scan operations executed by segments? (The HPF definition
of prefix and suffix operations might be handy -- in addition to the scanned
vector of values there is a mask that tells where segments start and end.)

1.7 Correctness

Discussion: This is still very preliminary.
The semantics of the collective communication operations can be derived from
their operational definition in terms of point-to-point communication. It is
assumed that messages pertaining to one operation cannot be confused with
messages pertaining to another operation. Also, messages pertaining to two
distinct occurrences of the same operation cannot be confused if the two
occurrences have distinct parameters. The relevant parameters for this purpose
are group, root and op. The implementer can, of course, use another, more
efficient implementation, as long as it has the same effect.
y Fc(This)c(statemen)n(t)i(do)r(es)d(not)i(y)n(et)h(apply)d(to)i(the)g
(curren)n(t,)g(incomplete)f(and)g(somewhat)f(careless)i(de\014nitions)100
1854 y(I)i(pro)n(vided)g(in)g(this)g(draft.)221 1920 y(The)j(de\014nition)e
(ab)r(o)n(v)n(e)i(means)e(that)i(messages)f(p)r(ertaining)e(to)j(a)f
(collectiv)n(e)i(comm)n(unication)c(carry)i(in-)100 1987 y(formation)c(iden)n
(tifying)h(the)i(op)r(eration)d(itself,)h(and)h(the)h(v)m(alues)f(of)f(the)j
Fa(group)e Fc(and,)f(where)i(relev)m(an)n(t,)f Fa(root)h Fc(or)100
2053 y Fa(op)g Fc(parameters.)k(Is)18 b(this)g(acceptable?)221
2239 y Fk(A)i(few)g(examples:)100 2377 y Ff(MPI_BC)q(A)q(S)q(TC)q(\()q(b)q(u)
q(f,)37 b(len,)e(type,)h(group,)h(0\);)100 2452 y(MPI_BC)q(A)q(S)q(TC)q(\()q
(b)q(u)q(f,)g(len,)e(type,)h(group,)h(1\);)221 2589 y Fk(Tw)n(o)18
b(consecutiv)n(e)i(broadcasts,)i(in)d(the)g(same)d(group,)21
b(with)e(the)h(same)c(tag,)k(but)g(di\013eren)n(t)h(ro)r(ots.)100
2665 y(Since)h(the)h(op)r(erations)h(are)e(distinguishable,)k(messages)20
b(from)h(one)h(broadcast)j(cannot)f(b)r(e)d(confused)100 2740
y(with)g(messages)e(from)h(the)i(other)g(broadcast;)h(the)f(program)g(is)e
(safe)g(and)i(will)e(execute)h(as)f(exp)r(ected.)100 2878 y
Ff(MPI_BC)q(A)q(S)q(TC)q(\()q(b)q(u)q(f,)37 b(len,)e(type,)h(group,)h(0\);)
100 2953 y(MPI_BC)q(A)q(S)q(TC)q(\()q(b)q(u)q(f,)g(len,)e(type,)h(group,)h
(0\);)221 3091 y Fk(Tw)n(o)26 b(consecutiv)n(e)h(broadcasts,)j(in)c(the)h
(same)d(group,)29 b(with)d(the)h(same)d(tag)j(and)g(ro)r(ot.)44
b(Since)100 3166 y(p)r(oin)n(t-to-p)r(oin)o(t)22 b(comm)n(unication)c
(preserv)n(es)i(the)g(order)g(of)e(messages)f(here,)j(to)r(o,)f(messages)d
(from)i(one)100 3241 y(broadcast)j(will)c(not)j(b)r(e)e(confused)j(with)d
(messages)f(from)h(the)h(other)h(broadcast;)i(the)d(program)h(is)d(safe)100
3317 y(and)22 b(will)d(execute)i(as)g(in)n(tended.)100 3454
y Ff(MPI_RA)q(N)q(K)q(\(&)q(r)q(a)q(n)q(k,)37 b(group\))100
3530 y(if)c(\(rank)q(==)q(0)q(\))164 3605 y({)p eop
%%Page: 16 17
bop 100 -134 a Fk(16)938 b Fh(CHAPTER)17 b(1.)48 b(COLLECTIVE)17
b(COMMUNICA)-5 b(TION)195 60 y Ff(MPI_B)q(CA)q(S)q(T)q(C)q(\(b)q(u)q(f)q(,)37
b(len,)e(type,)h(group,)g(0\);)195 135 y(MPI_S)q(EN)q(D)q(C)q(\()q(bu)q(f)q
(,)h(len,)e(type,)h(2,)d(tag,)i(group\))q(;)164 211 y(})100
286 y(elseif)i(\(rank=)q(=1)q(\))164 361 y({)195 437 y(MPI_R)q(EC)q(V)q(C)q
(\()q(bu)q(f)q(,)g(len,)e(type,)h(MPI_DO)q(NT)q(C)q(A)q(R)q(E)q(,)g(tag,)f
(group\))q(;)195 512 y(MPI_B)q(CA)q(S)q(T)q(C)q(\(b)q(u)q(f)q(,)i(len,)e
(type,)h(group,)g(0\);)195 587 y(MPI_R)q(EC)q(V)q(C)q(\()q(bu)q(f)q(,)h(len,)
e(type,)h(MPI_DO)q(NT)q(C)q(A)q(R)q(E)q(,)g(tag,)f(group\))q(;)164
662 y(})100 738 y(else)164 813 y({)195 888 y(MPI_S)q(EN)q(D)q(C)q(\()q(bu)q
(f)q(,)i(len,)e(type,)h(2,)d(tag,)i(group\))q(;)195 963 y(MPI_B)q(CA)q(S)q(T)
q(C)q(\(b)q(u)q(f)q(,)i(len,)e(type,)h(group,)g(0\);)164 1039
y(})221 1180 y Fk(Pro)r(cess)23 b(zero)g(executes)h(a)f(broadcast)k(follo)n
(w)n(ed)d(b)n(y)g(a)f(send)h(to)g(pro)r(cess)g(one;)h(pro)r(cess)f(t)n(w)n(o)
g(ex-)100 1256 y(ecutes)f(a)f(send)h(to)g(pro)r(cess)g(one,)g(follo)n(w)n(ed)
g(b)n(y)g(a)f(broadcast;)k(and)e(pro)r(cess)f(one)g(executes)f(a)h(receiv)n
(e,)100 1331 y(a)29 b(broadcast)i(and)f(a)f(receiv)n(e.)52
b(A)27 b(p)r(ossible)j(outcome)e(is)g(for)i(the)g(op)r(erations)h(to)e(b)r(e)
g(matc)n(hed)g(as)100 1406 y(illustrated)23 b(b)n(y)e(the)h(diagram)e(b)r
(elo)n(w.)227 1698 y Ff(0)733 b(1)701 b(2)609 1849 y(/)33 b(-)f(>)65
b(receiv)q(e)386 b(/)33 b(-)g(send)545 1924 y(/)797 b(/)100
2000 y(broadc)q(a)q(s)q(t)100 b(/)287 b(broadc)q(a)q(s)q(t)227
b(/)96 b(broadc)q(a)q(s)q(t)450 2075 y(/)764 b(/)164 2150 y(send)98
b(-)415 b(receiv)q(e)68 b(<)33 b(-)221 2442 y Fk(The)20 b(reason)g(is)f(that)
i(broadcast)i(is)18 b(not)j(a)e(sync)n(hronous)k(op)r(eration;)g(the)d(call)f
(at)h(a)f(pro)r(cess)h(ma)n(y)100 2518 y(return)27 b(b)r(efore)e(the)g(other)
h(pro)r(cesses)e(ha)n(v)n(e)h(en)n(tered)i(the)d(broadcast.)40
b(Th)n(us,)26 b(the)f(message)d(sen)n(t)j(b)n(y)100 2593 y(pro)r(cess)e(zero)
g(can)g(arriv)n(e)h(to)f(pro)r(cess)g(one)g(b)r(efore)h(the)f(message)e(sen)n
(t)j(b)n(y)f(pro)r(cess)g(t)n(w)n(o,)g(and)h(b)r(efore)100
2668 y(the)d(call)f(to)i(broadcast)h(on)e(pro)r(cess)h(one.)p
eop
From owner-mpi-collcomm@CS.UTK.EDU  Thu May  6 05:15:20 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA11384; Thu, 6 May 93 05:15:20 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA01443; Thu, 6 May 93 05:10:47 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Thu, 6 May 1993 05:10:45 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from hub.meiko.co.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA01415; Thu, 6 May 93 05:10:38 -0400
Received: from float.co.uk (float.meiko.co.uk) by hub.meiko.co.uk with SMTP id AA24928
  (5.65c/IDA-1.4.4 for mpi-collcomm@cs.utk.edu); Thu, 6 May 1993 10:10:32 +0100
Date: Thu, 6 May 1993 10:10:32 +0100
From: James Cownie <jim@meiko.co.uk>
Message-Id: <199305060910.AA24928@hub.meiko.co.uk>
Received: by float.co.uk (5.0/SMI-SVR4)
	id AA02565; Thu, 6 May 93 10:10:05 BST
To: mpi-collcomm@cs.utk.edu
Subject: Comments on draft
Content-Length: 2380

Sorry for the abruptness, I have no time...

1) The definition of GATHER has changed so that it now uses an array
of buffer descriptors. This is fine, but means we don't have the
functionality we had before of being able to gather arbitrarily sized
chunks into a single buffer. This is a useful function (e.g. gather a
general block distributed array [one with different sized chunks on
each processor] into a contiguous area for output). However it is easy
to construct using a SCAN followed by a GATHER (which is how the
library would implement it anyway). Therefore I'm happy with this
change.

2) On page 12 MPI_MIN appears twice. (This is less than minimal !)

3) The USER_REDUCE still does not have any capability for
vectorisation. I would like this to be possible. Therefore I propose

a) The USER_REDUCE function take an argument which is the CHUNKSIZE
   i.e. the number of items to be passed to the user function will
   always be a multiple of CHUNKSIZE. (1 <= CHUNKSIZE <= the number of
   items in INBUF and OUTBUF, and CHUNKSIZE is a factor of the number
   of items in INBUF.)
b) The user function is passed a count of the number of items AND
   pointers to the two (or three ?) buffers, and should operate on all of
   the elements in the buffers.

By using this interface, 
a) We can reduce the number of calls to the user function.
   (This is significant if all it does is a simple comparison
   operation)
b) The user can ensure that she can get whole structures contiguously.
   (By specifying the CHUNKSIZE to ensure a whole structure
   is present)
c) The interface almost reduces to the previous case when CHUNKSIZE is
   one.
d) The amount of inhibition of pipelining caused by having to chunk
   things is controlled by the user. If CHUNKSIZE == 1, then the effect
   is exactly as before. If CHUNKSIZE == NELEMS, then pipelining is
   completely inhibited. The USER HAS CONTROL.

4) The scan with breaks may be useful. There's no need for a backward
   scan (parallel suffix), this can be achieved by constructing a
   reversed group of processes and doing the forward scan.

-- Jim
James Cownie 
Meiko Limited			Meiko Inc.
650 Aztec West			Reservoir Place
Bristol BS12 4SD		1601 Trapelo Road
England				Waltham
				MA 02154

Phone : +44 454 616171		+1 617 890 7676
FAX   : +44 454 618188		+1 617 890 5042
E-Mail: jim@meiko.co.uk   or    jim@meiko.com


From owner-mpi-collcomm@CS.UTK.EDU Fri May  7 14:27:49 1993
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-UTK)
	id AA01078; Fri, 7 May 93 14:27:49 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA21908; Fri, 7 May 93 14:27:11 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Fri, 7 May 1993 14:27:11 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA21894; Fri, 7 May 93 14:27:08 -0400
Received:  by Aurora.CS.MsState.Edu (4.1/6.0s-FWP);
	   id AA27019; Fri, 7 May 93 13:27:41 CDT
Date: Fri, 7 May 93 13:27:41 CDT
From: Tony Skjellum <tony@Aurora.CS.MsState.Edu>
Message-Id: <9305071827.AA27019@Aurora.CS.MsState.Edu>
To: mpi-context@cs.utk.edu
Subject: Re: mpi-context: context and group (longer)
Cc: mpi-collcomm@cs.utk.edu

For your information, the following was sent (by me) to the IAC subcommittee.
- Tony

----- Begin Included Message -----

From tony Fri May  7 13:25:17 1993
Date: Fri, 7 May 93 13:24:37 CDT
From: Tony Skjellum <tony@Aurora.CS.MsState.Edu>
To: mpi-iac@CS.UTK.EDU, stoessel@irsun21.ifp.fr
Subject: Re: Subset comments
Cc: tony
Content-Length: 6858

IACers,

I think that a subset is essential, because of the number of features
in MPI that are so remote from current practice; furthermore, if one were
to look at the liberal vs. conservative nature of the committee (as others
have observed), it is not equal over all features/proposed features.  
Hence, I offer my thoughts.  This is not meant to be a "flame."

For instance, I have argued for the addition of a two-word match against
tags, in order to allow easier layerability.  A tag would be matched as
follows (following Jim Cownie):

		(received_tag xor (not dont_care_bits)) and care_bits

This would allow the user, not only complete freedom in use of tags,
but also the ability to develop further layers on top of MPI that partition
the use of tag.  I will bring this idea up again at the next meeting,
at the second reading of pt-2-pt.  It is necessary to have this, and it
is a small step away from usual practice.  However, it is hard to convince
people to add this, despite its negligible impact on performance (two
logical operations, instead of one, assuming user passes the one's complement
of the dont_care_bits.)  However, its impact on MPI flexibility is immense.
Hence, I view this feature as essential to the subset and full MPI likewise.

Another instance.  Contexts.  We are arguing for/not for contexts that
are independent of groups.  Contexts as an extended, system-registered part
of the tag field help us to build libraries that can co-exist, register
at runtime, and do not interfere with the message-passing of other parts of
the system.  I want an "open system"; hence, I want to see the tag partitioning.
Contexts work very well in Zipcode (my message passing system, developed at
Caltech, LLNL), and are helpful with the libraries we develop on Zipcode.
Because vendor systems do not have contexts, Zipcode, when it layers on
vendor systems must requeue messages.  This is undesirable from a performance
standpoint.  Hence, it is highly desirable for MPI to provide contexts of
the type I describe, as a simple tag registration/partitioning mechanism
that is understandable as an extension of existing practice.  If contexts
are limited, and there is a mechanism to find this out (environment), then
messaging systems like Zipcode could do requeueing of messages as necessary,
or manage contexts themselves at times, and use the precious "fast" contexts
on user-specified communications, leaving others to be requeued and slower.
Hence, I view contexts (whether plentiful or scarce in a given implementation)
as essential to the subset, and the full standard.  As Don Heller of Shell
has noted...
"contexts allow the development of a software industry [for multicomputers]."

Groups.  Yes, we need them too.  They are important for managing who is
communicating with others.  So, they have to stay in the subset as well.
Rik Littlefield, Lyndon Clarke, and I have argued (and will continue to do
so) for attributes based on group/context-scope.  This would allow the
methods implementing communication to be changed in MPI for each group/context
scope, permitting optimizations.  This is not current practice, except
in our Zipcode 1.0 release, which has this useful capability, but it is
justifiably useful.  I think these ideas can/should remain in the standard
and in the subset.  

There are multitudinous types of send/receive that we are currently
proposing, but not using in practice, which have been
proposed and accepted with relative ease by MPI.  Practically, send,
receive, receive unblocked, is enough, provided the kernel is smart
enough to do overlapping of communication and computation.  Actually,
if the semantics of the Reactive Kernel were taken, which allows the
system to handle all memory management, then receive would provide the
pointer to the data, and send would be like free, with an allocate
mechanism like malloc.  These reduce the number of copies of data,
except when extremely regular data structures are in use (less and less
likely).  The RK semantics thought out by Seitz et al are remarkably
simple, but highly optimizable, and can even work very fast in shared
memory.  These semantics do not appear as options in MPI; we only have
multitudinous buffer-oriented operations.  When memory management units
are involved, binding control of the memory and messaging operations
gives even more opportunity for the system to optimize.  Allowing the
user to receive messages without having first to know their size is
elegant, and simplifies error issues.  

As we all know, there are faster implementation strategies than RK
semantics for message-passing that are low level, such as channels, active
messages (unchampioned in this standard), and shared-memory (e.g., Cray
T3D).  These need not be part of this standard, but it would be
helpful if the standard were unhostile to such possibly efficient
implementation mechanisms.  The "buffer descriptor" approach in MPI
is the best match (being the highest level interface) to a runtime
system that exploits channels, and/or active messages, and/or remote
memory writes, etc.  The optimizability of the highest level is complemented
by the fact that the user no longer knows if a buffer is ever formed
on the local or remote end [well it should be written to make that so].
Furthermore, heterogeneity can be encapsulated in transfers at this level.
Hence, I am convinced that "buffer descriptor" stuff should remain in the 
subset.

The committee has shied away from defining the process model, and this
has led not only to a very static model (arguably OK), but a predilection
to the SPMD (handling of groups, definition of subgroups, need for 
dynamic contexts diminished, etc).  All of these factors make the standard
backward-looking if so adopted, and make it really difficult to justify
in the distributed environment.  I am not sure why this has happened, but
it is unfortunate.  It means that MPI codes will be partially portable,
but not totally, as each system will have different process management.
SPMD programs will be reasonably portable, as the process management is
simple, and therefore localized.  The handling of the host/node model
is not well established in MPI, and may not be suitably supported.  That
would be a big problem to my mind.

To summarize, it is my view that the enabling mechanisms: group, context,
tag selection, and buffer descriptors described above are essential aspects
of a standard and subset, and should not be sacrificed.   MPMD programming,
the host/node model, should be supported.

- Tony 

.       .       .       .       .       .       .       .       .      .
"There is no lifeguard at the gene pool." - C. H. Baldwin
"In the end ... there can be only one." - Ramirez (Sean Connery) in <Highlander>

Anthony Skjellum, MSU/ERC, (601)325-8435; FAX: 325-8997; tony@cs.msstate.edu





----- End Included Message -----


From owner-mpi-collcomm@CS.UTK.EDU Sun May  9 22:30:55 1993
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-UTK)
	id AA03775; Sun, 9 May 93 22:30:55 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA00830; Sun, 9 May 93 22:30:41 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Sun, 9 May 1993 22:30:40 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA00810; Sun, 9 May 93 22:30:38 -0400
Received:  by Aurora.CS.MsState.Edu (4.1/6.0s-FWP);
	   id AA29012; Sun, 9 May 93 21:31:10 CDT
Date: Sun, 9 May 93 21:31:10 CDT
From: Tony Skjellum <tony@Aurora.CS.MsState.Edu>
Message-Id: <9305100231.AA29012@Aurora.CS.MsState.Edu>
To: mpi-context@cs.utk.edu, mpi-collcomm@cs.utk.edu
Subject: Mea culpa on previous letter to Ho/Pierce; Question about proposal VIII (Sears)
Cc: tony@CS.UTK.EDU

Dear friends/colleagues:

1) I think I misunderstood who was saying what in my last message, re Ho/Pierce.
   But, I hope my point was clear.  I disagree with Howard on the concept that
   the context is convenient, but not important/essential.  I thank Paul for his
   examples.  If I was sounding inflamed, ignore that, I had low blood sugar.
 
   I think that the context should be a separate, logical extension to the tag
   field, the latter should be 32 bits long, for sufficient flexibility in layering,
   and the tag matching should permit layering by selective bit inclusion/exclusion.
   Please see that my tag matching and context/tag (as a logical partitioning) are
   correlated.  The MPI layer partitions the context field from the "logical tag,"
   and controls it.  The user has control over the format of the rest of the tag,
   and may build layers (or use other layers), that subsequently partition the rest
   of the tag to continue to build safe service layers.  That is why I keep pounding 
   the receipt selectivity mechanism based on the dont_care bits, and the care_bits.    
   
   I do not view the context as a large quantity in some implementations.  In certain
   cases, there might be as few as 16, but the implementer (like a compiler writer
   using registers), allows the code to work if the hard limit is exceeded, by
   reserving a context that supports a higher-cost protocol (e.g., requeueing of
   messages).  

-------------------------------------------

2) Lyndon suggested some time ago that I revive my two-dimensional grid example, 
vis a vis Rik Littlefield's collections of examples, and ask how that applies to
Mark Sears's proposal VIII...

Basically, in this case, there is a two-dimensional logical array of processes
(a virtual topology), of shape PxQ.  In a given position (p,q), there are three
possible contexts for a given global operation G (eg, combine):

	1) G over the whole collective, PxQ topology
	2) G over the row that includes process (p,q) [the pth row]
	3) G over the column that includes process (p,q) [the qth column]

How many contexts are needed to provide for safe intermixing of operation G
over the three possible combinations?  Assume that the operations of 1, 2, 3
may operate in sequence correctly, for now (ie, two G's over the
whole collection work correctly).   This is true of multiple combines, deterministic
broadcasts, etc.  It is not true if there are non-deterministic G operations
included.

If there are three contexts of communication, then everything is fine, because
row G, col G, and whole G cannot interfere.  By extension, a total of three
unique contexts is needed for the whole PxQ topology: one reused across all
rows, one reused across all columns, and an additional one for the "whole."
I pointed up this example at an earlier MPI meeting.

In the Proposal VIII model, contexts and groups are disjoint, so no property of
group will provide the safety needed.  Hence, I assert that either three contexts
are needed here, or tag values would be needed to disambiguate the G operations.

Now, in an earlier mail message, Ho/Pierce bantered about "how many contexts are
needed for a given group."  I argued, one per unique library, with the discussion
of non-deterministic broadcast, because I wanted my non-deterministic broadcast
to work safely when other communication was going on.  At least, there must be
a point-to-point context for a group, and a global operations context for a
group [which is what Zipcode 1.0 has].  Further non-deterministic operations
would need more contexts.  In the light of this issue, I would suggest that my
PxQ topology above reasonably needs 6 contexts of communication to be implemented
safely [and assuming that all global operations are deterministic, as asserted].

In short, six contexts are needed for this group.  In the PxQxR three-dimensional
case, the number is larger:

	1) G over the whole collective, PxQxR topology
	2) G over PxQ plane
	3) G over PxR plane
	4) G over QxR plane

By simple study, this requires 1 + 3 * 6 = 19 contexts to be implemented safely,
and sets a limit on the number of contexts that one would want to have, to support
a single 3D topology safely.

Remember, I am assuming that we are talking about Proposal VIII, which does not
offer group-based safety for messaging.  Proposal I, by Snir, does offer this
safety inherently [which masks the need for explicit contexts when dealing with
SPMD-type calculations].  I am arguing that the assumption of small numbers of
contexts (where small is about 8 or 16, and where the code breaks if the number
is exceeded) is not reasonable.  The code must continue to work, if slower, if
many contexts are needed, beyond that which is available as fast, hardware
contexts.  Proposal VIII must accommodate that to be reasonable.

In an application at Livermore, one of my colleagues uses multiple two-dimensional
topologies to implement stages of a calculation.  They overlap.  So, he wants
at least two unique topologies, completely supported, or at least 12 contexts.
If he goes to three dimensions, he wants 38 minimum, for his code to work.

Summary: Proposal VIII cannot cope with practical situations illustrated above,
because it breaks when there are "too many contexts," and contexts are assumed
rare (eg, 8 or 16).  To be reasonable, it would have to have a fall-back to 
support many contexts in some way.  Lyndon and others have made this request,
in the past weeks.

Comments?  
- Tony









From owner-mpi-collcomm@CS.UTK.EDU Mon Jun 14 17:45:55 1993
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib)
	id AA28928; Mon, 14 Jun 93 17:45:55 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA06771; Mon, 14 Jun 93 17:45:46 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Mon, 14 Jun 1993 17:45:45 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from cs.sandia.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA06762; Mon, 14 Jun 93 17:45:43 -0400
Received: by cs.sandia.gov (4.1/SMI-4.1)
	id AA18417; Mon, 14 Jun 93 15:45:57 MDT
Date: Mon, 14 Jun 93 15:45:57 MDT
From: mccurley@cs.sandia.gov (Kevin S. McCurley)
Message-Id: <9306142145.AA18417@cs.sandia.gov>
To: mpi-collcomm@cs.utk.edu
Subject: non-blocking calls


I spent a little bit of time reading the collective communication draft that I last
saw come across the net, and was frankly startled by the lack of non-blocking 
collective calls.  According to the first paragraph: 

  Routines can (but are not required to) return as soon as their participation in
  the collective communication is complete.  The completion of the call indicates
  that the caller is now free to access the locations in the communication buffer,
  or any other location that can be referenced by the collective operation.

The major reason for having a non-blocking communication call is the
large latency associated with the action, during which a great deal of
useful work can often be done on the processor while it is waiting.
Given the fact that collective communication calls are going to have
huge latencies (on every architecture I know of), it appears to be
even more important to give the programmer freedom to overlap
communication and computation in these routines.  I would like to
suggest that non-blocking routines be included, perhaps along the
lines of the isend/irecv calls that are found on the Intel line.  A
vendor can always choose to implement them with the blocking routines.
Without this addition, I fear that the MPI collective communication
calls will be essentially useless for many applications requiring high
performance.

Kevin McCurley
Sandia National Laboratories

From owner-mpi-collcomm@CS.UTK.EDU Tue Jun 15 10:21:35 1993
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib)
	id AA05137; Tue, 15 Jun 93 10:21:35 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA11107; Tue, 15 Jun 93 10:21:23 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Tue, 15 Jun 1993 10:21:22 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from msr.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA11099; Tue, 15 Jun 93 10:21:21 -0400
Received: by msr.EPM.ORNL.GOV (4.1/1.34)
	id AA16442; Tue, 15 Jun 93 10:21:40 EDT
Date: Tue, 15 Jun 93 10:21:40 EDT
From: geist@msr.EPM.ORNL.GOV (Al Geist)
Message-Id: <9306151421.AA16442@msr.EPM.ORNL.GOV>
To: mpi-collcomm@cs.utk.edu
Subject: Re: non-blocking calls


Kevin writes:
>and was frankly startled by the lack of non-blocking
> collective calls.
>I would like to suggest that non-blocking routines be included,

Several people have made this same suggestion, and at least one
person raised the same arguments for them.
In every case, when they were asked to submit such a proposal to
the collective committee for review, nothing ever happened.

I am not inclined to write such a proposal because I don't
believe the gains are worth
a. the complexity of writing applications around such routines
b. the increase in the number of routines in the collective section
c. the laughter I get explaining why MPI has a non-blocking barrier call.

You are welcome to submit such a proposal for review.

Al (keep it simple) Geist

From owner-mpi-collcomm@CS.UTK.EDU Tue Jun 15 10:41:34 1993
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib)
	id AA05247; Tue, 15 Jun 93 10:41:34 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA12374; Tue, 15 Jun 93 10:41:47 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Tue, 15 Jun 1993 10:41:46 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from hub.meiko.co.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA12307; Tue, 15 Jun 93 10:41:39 -0400
Received: from tycho.co.uk (tycho.meiko.co.uk) by hub.meiko.co.uk with SMTP id AA29909
  (5.65c/IDA-1.4.4 for mpi-collcomm@cs.utk.edu); Tue, 15 Jun 1993 15:41:41 +0100
Date: Tue, 15 Jun 1993 15:41:41 +0100
From: James Cownie <jim@meiko.co.uk>
Message-Id: <199306151441.AA29909@hub.meiko.co.uk>
Received: by tycho.co.uk (5.0/SMI-SVR4)
	id AA01193; Tue, 15 Jun 93 15:40:46 BST
To: geist@msr.EPM.ORNL.GOV
Cc: mpi-collcomm@cs.utk.edu
Subject: Re: non-blocking calls
Content-Length: 1589

Kevin writes:
>and was frankly startled by the lack of non-blocking
> collective calls.
>I would like to suggest that non-blocking routines be included,

Understanding the semantics, and achieving an implementation of
non-blocking collective operations is non-trivial. (Read HARD).

While I am in favour of these operations at an abstract level I (now)
have difficulty in coming to the conclusion that they should be in
MPI-1.  In particular it is worth noting that understanding (and
implementing) the correct behaviour of collective operations on
overlapping groups is itself non-trivial, without the added complexity
of non-blocking versions. It also appears to be the case that
non-blocking collective operations are well beyond current
parctice. (I was going to say that no-one does them, but not having
perfect knowledge this is not a safe statement to make !)

There is perhaps one point which we should consider, which is that to
disambiguate non-blocking collective communications we would require a
tag. However the blocking collective communications do not currently
have such an argument. If we seriously expect that MPI-2 will adopt
non-blocking collective operations, and want to preserve the symmetry
of argument lists for blocking and non-blocking calls, maybe we should
put that tag back in...

-- Jim
James Cownie 
Meiko Limited			Meiko Inc.
650 Aztec West			Reservoir Place
Bristol BS12 4SD		1601 Trapelo Road
England				Waltham
				MA 02154

Phone : +44 454 616171		+1 617 890 7676
FAX   : +44 454 618188		+1 617 890 5042
E-Mail: jim@meiko.co.uk   or    jim@meiko.com

From owner-mpi-collcomm@CS.UTK.EDU Mon Jun 28 15:39:43 1993
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib)
	id AA14740; Mon, 28 Jun 93 15:39:43 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA06955; Mon, 28 Jun 93 15:38:24 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Mon, 28 Jun 1993 15:38:22 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from iliamna.cse.ogi.edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA06945; Mon, 28 Jun 93 15:38:20 -0400
Received: by iliamna.cse.ogi.edu (/\==/\ Smail3.1.25.1 #25.17)
	id <m0oAP2q-0002vvC@iliamna.cse.ogi.edu>; Mon, 28 Jun 93 12:39 PDT
Message-Id: <m0oAP2q-0002vvC@iliamna.cse.ogi.edu>
Date: Mon, 28 Jun 93 12:39 PDT
From: otto@cse.ogi.edu (Steve Otto)
To: mpi-collcomm@cs.utk.edu
Subject: C-C Discussion Results / Minutes


Dear Collective-Communications Member,

	Marc Snir, Jim Cownie, Dan Nessett, and I have 
spent a little time putting together the results of our coll-comm
discussions at the just-finished MPI meeting.  We did this so
as to assist Bob Knighten in reconstructing his lost file, but
I think the result is sufficiently interesting to send out.  Of course,
we are also interested in any feedback from you.

	Comments, corrections, vote results from Marc, Jim, and Dan
appear by ">>>Name".

	Al Geist: please comment.  Also, I was planning on writing
some of this into the current collective-communications chapter. OK
with you, Al?

--Steve Otto

Collective Communication discussion:
------------------------------------

Sec 3.1
-------
The intro (Sec 3.1) now makes some statements about implied synchronization
and correct usage of collective routines.  We discussed this for a few
minutes; it was agreed that more needs to be written about this, with
examples, and including a discussion of interaction with pt-pt calls
that may be outstanding at time of collective call.

Briefly, the semantics revolves around the following questions:

	1) what synchronization side-effects can we assume from the coll
	   comm (c-c) routines?

	   Ans: none.  c-c routines are not necessarily synchronizing.
	   Completion on one process does not mean that the c-c routine
	   has even been called on other processes.  (except for barrier,
	   of course)

	2) what are the conditions for achieving a correct matching of
	   a c-c call across a set of processors?

	    i) same context?

		Ans: yes, must have same context.

	    ii) same group?

		Ans: yes, must have same group.

	    iii) same root?

		Ans: yes, must have same root.  This may be modified
		however, if we adopt a "rootless" version of bcast.
		Adam Greenberg made this point.  It is often convenient,
		more efficient, to be able to match a bcast(root) on
		one process with bcast-recv() on other processes, which
		don't know who the root is.

	>>>Jim: Note that as currently specified (i.e. with the possibility of a
	>>> barrier), if this criterion comes into effect it is solely to 
	>>> determine when a deadlock may occur. 


	    iv) does the c-c call contain a barrier?

		Ans: It may.  Even though they may not be synchronizing
		on some implementations, other implementations may use
		a rendezvous of some type to implement the c-c routines.
		Therefore: one cannot count on the synchronization being
		there, but must program in such a way as to allow for it.

	    v) interaction of c-c calls and pt-pt calls : can they be
	       mixed freely?  Rik Littlefield brought up the point: does
	       a posted wild-card recv (same context) mess up a subsequent
	       bcast?

		Ans: it is the current feeling of the sub-committee that
		we can think of the c-c messages as being completely
		separated from pt-pt messages.  Concretely speaking,
		if the user calls a c-c routine with context A, which
		may be used for some pt-pt calls, MPI actually shifts
		the context to A' which is one of a set of "hidden"
		contexts used for c-c routines only.  In this way,
		one could make c-c calls even though there may be
		outstanding pt-pt calls. There is a one-one correspondence
		set up between A and A'.

	>>>Jim: This may NOT fit trivially with what the context people 
	>>> are proposing. They have a
	>>> concept of "safety" (I prefer "quiescence") of a context which may
	>>> be required by a library of the context which it is passed. 
	>>> As a user I certainly want the property we give here.

	3) are the c-c routines defined operationally by the pt-pt
	   implementations given in the chapter?

		Ans: No, and the pt-pt implementations appearing in
		the chapter will be removed.  They may appear in an Annex.

It is agreed that we need more examples making these points clear.  Marc
suggests also an example that uses more than one communicator, partially
overlaps them across processes, and then sets up a cycle of dependences
among them.

Matching of buffers:  what receiving buffer can match what sending buffer
isn't made clear in the current chapter (I think).  We will follow
pt-pt on this, but we need a clear statement of this in the coll chapter.
Once we are clear about which buffers are being sent and which are being
received, then we apply the point to point rules.


Sec 3.2

Will be taken out, and we will instead refer to the actual contexts and groups
chapter.

Sec 3.3

Just said that there are two levels of calls to c-c.  Note that:  even if
the proposal for buff-descriptors as datatypes for pt-pt is adopted, we
will still have a "simple" level of c-c routines that make the additional
restriction that the amount of data coming from each process (in a gather)
is the same.  I think, however, that at that point we won't consider it
as another "level" of c-c call -- rather, it will just be a more restrictive
version of gather() and we will think of it as just another routine.
>>>Jim: section accepted 18:0:0

Sec 3.4

barrier().  remove pt-pt implementation.  The synch() routine is there
so as to provide synchronization for a sub-group that doesn't quite
exist yet.  Ie, Marc put it in so that we can use it for implementing
group/subgroup/context operations.  I'm not sure if we're going to
keep it visible at the user level.

Sec 3.5

bcast().

	"Terms" chapter needs to explain what we mean by "INOUT" arg.
There are some subtleties here.  We usually think of this as "IN" on
root, "OUT" on others.  Actually though, what if the root overwrites
its own buffer with the same data?  Is this now an "INOUT"?  No big
deal here...I guess it just needs to be said somewhere.

	Arg order may change to achieve consistency with pt-pt.

	pt-pt implementation of bcast is removed.

	IF the buff-desc-as-datatypes proposal goes through, I think
we can remove bcastC() -- there won't be a reason for it.

	Adam (Moose): want a version of bcast so that all processes
need not know the root of the broadcast (the process that IS the root
of course, knows that it is the root, and makes the appropriate call).
One way of saying this is that we may want a normal bcast(root) on the
root, to be matchable by a routine called bcast-recv() on other
processes, and where bcast-recv() does not take root as an arg.

	So: this is a suggested alternative or addition to bcast.  Did
we take a vote on this?

>>>Marc: NO vote, as far as I recall. [Ed: Marc means that no vote was taken]
>>>Jim: I don't think so.  If so, it was a straw vote asking Alan[Ed: Adam]
>>> for a concrete proposal. 
>>> 3.5 as is 18:1:0

gather().

	We will re-name "inhandles" and "outhandles" as "send-buffer"
and "receive-buffer" since this is clearer.

	gatherC() will still exist even if the buff-desc-as-datatypes
proposal goes through, for efficiency reasons (it will have an equal-length
restriction...which must be carefully spelled out for general buffer-types).

	list-of-handles are really array-of-handles.  We are thinking
of a simple array data structure instead of the more general list, since
the length of the array is deducible from the group.

	We need statements about what constraints on the sizes of the
buffers are necessary (ie, receiving buffers have to be large enough
to hold the results).  Also, we need to make clear that it is the
user's responsibility to make the recv buffer on a gather large enough...
the gather won't do it for you.  This means, for instance, that the
user may need to first do another gather() (on ints) before the "real" one
so as to get the required sizes.

>>>Jim: 16:0:2

scatter().

	Time reversal of gather.
>>>Jim: 16:0:2

allscatter().

	Name will be changed to scatter_gather() or all_to_all()...did
	we vote?

>>>Marc: NO vote, just agreement for name change [Ed: Marc means no vote was taken]
>>>Jim: Don't remember one. I think we took the view that all the names
>>> are still up for grabs
>>> 17:0:1

allcast().

	These routines should just follow bcast,gather..etc
>>>Jim: 16:0:2

Sec 3.6

reduce().

	As currently written, reduce(MPI_SUM) infers the actual operation
(eg, int add or float add or double add) from the type info in the buff-desc.
It is ERRONEOUS to call reduce(MPI_SUM) on buff-descriptors containing
mixed types.

	We need to spell out completely which ops are allowed for which
datatypes.  Ie, no XOR for doubles.

	We had several discussions about whether or not the implementation
of reduce() could exploit commutativity of its operators.  The best
example I heard in the discussion was: the hypercube algorithm for
an all-reduce() (the all version is where all processes get the answer).
In this algorithm, processes pair up along successive dimensions of the
hypercube, summing their partial results and exchanging them.  If one
writes this down, it is clear that we are exploiting commutativity of
+ along with associativity of +.  I guess this is an example of one
of the algorithms that Rik Littlefield was talking about, where he said
that exploiting commutativity was important for avoiding network congestion.

	OK.  So it became pretty clear that we would have to allow
implementations of reduce() to exploit commutativity of the operators.
Did we have votes on this?

>>>Marc: Voted to allow use of commutativity and allow user defined non
>>> commutative operands.
>>>Steve: See the votes below.


	One more point relating to this.  Stability of MIN_LOC, MAX_LOC.
The standard will state that an implementation of reduce(MIN_LOC),
reduce(MAX_LOC) must satisfy a stability requirement:  In the event of
ties, the location of the *first* entry (in group rank order) will be
returned.  This causes a trickiness with commutativity: if an implementation
exploits commutativity for its MAX_LOC, MIN_LOC, it still needs to
satisfy the stability requirement.

	Another comment: floating point arithmetic is not associative,
yet we are allowing an implementation of reduce() to assume it and
exploit it (or else we don't get much parallelism).  We are ignoring
rounding when it comes to demanding that reduce(buff=MPI_REAL, MPI_SUM)
get the "right" answer.

	Did I get this stuff right?
>>>Jim: Seems about right to me
	Comments?
	Vote results?
>>> Have a reduce      				 17:0:1
>>> Have a reduce which exploits commutativity   16:0:4  (Straw)
>>> Have a reduce which allows non-commutative    7:2:9  (Straw)
>>> I think the conclusion was to have one version of the reduce (since
>>> all of the pre-defined functions are treated as commutative (with
>>> the stability criterion in place), but to provide two versions of
>>> the user functions , one for comm, one for non-comm. (Of course
>>> subject to the cross product effect of contig, non-contig etc).

>>>Steve: OK, this is consistent with what Marc said in the above.

user-reduce().

	We had some discussion to try and understand what this
routine meant.  There was some confusion about the meaning of the
two size or length parameters.  We eventually arrived at a good
example.  Suppose each process has a vector of complexes, and we
wish to do a reduce( op = complex mult) of them.  A vector of
complexes means that what we see in memory is:  (real part, imag part,
real part, imag part, ...).

	Then, we call user-reduce with "unitsize" set to 2 reals. This
tells the routine that we are interested in combining 2 reals at
a time.

	Now, we want to allow vectorization or pipelining of the function
(ie, if we have 1000 complexes on each process we don't want to call
the user function for complex op 1000 times).  Therefore, the
mpi routine will call the user-function with several args: pointers
to the input vectors and the output vector, and len, which says
how many items (how many complexes in the example) are to be
combined.  The user writes the user-func something like this:

/* a C version of a user-func for complex multiply */

struct complex { double real, imag; };

void user_func( struct complex *invec, struct complex *inoutvec, int *len )
{
    int i;
    struct complex temp;

    for (i = 0; i < *len; ++i) {
        temp.real = invec[i].real*inoutvec[i].real -
                    invec[i].imag*inoutvec[i].imag;
        temp.imag = invec[i].real*inoutvec[i].imag +
                    invec[i].imag*inoutvec[i].real;
        inoutvec[i] = temp;
    }
}


	Comment:  In C, complexes are viewed as a non-trivial structure
	or record.  So, we were forced to use the general user-reduce
	capability to do this example.  In Fortran of course, complex is
	a data type and we could have done the above example simply
	by calling the simple: reduce(MPI_COMPLEX) on a vector of type
	complex.
>>>Jim: Do we have MPI_COMPLEX ?

	Comment:  Suppose each process has 1000 complexes to combine
	using the user-func() defined above.  The system MAY call user-func
	with *len = 1000, or it may not...it may decide to do the user-op
	in smaller chunks...so as to allow some possible pipelining
	across processes.  So as I understand it, the choices for *len
	are under the control of mpi_user_reduce().
>>>Jim: Exactly so.

Commutativity again: do we wish to define a user-reduce() that is
guaranteed not to exploit commutativity of user-func()?  Example:
write a user-func() for matrix multiply, so that we can multiply
many matrices across multiple processes.  We are happy to have user_reduce()
exploit associativity in order to achieve parallelism, but not
so happy to have it exploit commutativity -- it will give the wrong
answer!

	We could either restrict implementations of user_reduce(), or
	provide a separate, additional, user_reduce_commutative().

	Did we have a vote on this?

>>>Marc: Yes, voted to have both
>>> (first vote to allow use of commutativity; 2nd to allow
>>> also noncommutative user-defined operators)

>>>Jim: Not this explicit issue, I think we voted earlier (see above).
>>> There was a vote on having a user reduce 16:0:2

all-reduce(), etc

	Follow reduce, user-reduce().

scan() functions...

	Marc pointed out that the semantics of scan can be derived from
	that of reduce().

	Example:

		Suppose we do a scan(op=MPI_SUM).

		Then the results are as if:

		on process 0, we did a reduce(op=MPI_SUM) on process 0.
		on process 1, we did a reduce(op=MPI_SUM) on processes 0,1.
		on process 2, we did a reduce(op=MPI_SUM) on processes 0,1,2.
		...
>>>Jim: 17:0:0


>>>Dan Nessett:
3.1 - no vote, needs revision
3.2 - will be removed from chapter
3.3 - no vote, no substance
3.4 - (only MPI_BARRIER, not MPI_SYNCH) 18 for; 0 against; 0 abstains
3.5 - 16 for; 0 against; 2 abstains
3.6 - 16 for; 0 against; 2 abstains

From owner-mpi-collcomm@CS.UTK.EDU Sun Jul 18 11:24:50 1993
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib)
	id AA08270; Sun, 18 Jul 93 11:24:50 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA03351; Sun, 18 Jul 93 11:24:12 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Sun, 18 Jul 1993 11:24:11 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA03185; Sun, 18 Jul 93 11:21:08 -0400
Received:  by Aurora.CS.MsState.Edu (4.1/6.0s-FWP);
	   id AA19755; Sun, 18 Jul 93 10:21:07 CDT
Date: Sun, 18 Jul 93 10:21:07 CDT
From: Tony Skjellum <tony@Aurora.CS.MsState.Edu>
Message-Id: <9307181521.AA19755@Aurora.CS.MsState.Edu>
To: mpi-collcomm@cs.utk.edu, mpi-context@cs.utk.edu, mpi-core@cs.utk.edu,
        mpi-pt2pt@cs.utk.edu
Subject: All about threads

Topic: Tacit Thread Safety Requirement of MPI1, Context Chapter, etc.

Dear colleagues:

    In reviewing comments about the latest context draft, I have
been repeatedly told that we are at a crucial stage in MPI,
because we have to agree on the context model, etc, as soon as
possible.  I concur with that assessment.  In trying to find
a consistent way to acquire safe communication space for groups,
the issue of thread safety arises, because overlapping concurrent
threads would have to work correctly.  I am currently confident about the
single-threaded case, and I am NOT CONFIDENT about the multi-threaded case.
Does anyone have real experience with multi-threaded message passing
(has it been done in an application setting, like we assume for MPI)?

    I need immediate guidance (specific guidance) about what multi-threaded
programming MEANS in MPI, if this was in fact a reasonable requirement
for MPI in the first place, and how multi-threading impacts point-to-point
and collective communication (that is, real programs with examples).
For instance, do we assume named threads or unnamed threads (and would
this help)?  Is there an exemplar threads package?

    Here is one problem in a nutshell.  We discussed "statically 
initialized libraries" from time to time.  Well, if there are multiple
overlapping threads, then one would need to have separate contexts
statically initialized for each concurrent thread.  Such threads have
group scope.  Hence, groups would have to cache contexts for each
concurrent thread (notice: groups caching contexts). 

    I propose that we have a serious discussion on what thread safety
really means for MPI1.  I need for there to be well-formulated guidelines and
in-depth debate immediately, so that the context committee can work
effectively within these requirements, or give feedback as to why they
are unreasonable.  Otherwise, I/we can't really make the context chapter
bullet-proof in time for the next meeting (except for the single-thread
case).

    We have discussed how contexts provide group safety, but not
temporal safety from multiple invocations of operations on a context
(for which a programming paradigm must be described; e.g., synchronizing
or implicitly synchronizing ... also could be called quiescent-at-exit).
Now we need to have a notion of how to provide safety with multiple
threads, or how to program the multi-threaded environment consistently,
with interspersed MPI calls.  


				-	-	-

    To summarize, I seriously propose that in the absence of an in-depth
debate and specification of what thread safety means in MPI1, we
abandon this requirement altogether (analogous to the abandonment of
non-blocking collective operations).  If thread safety were to remain
a de jure requirement of MPI1, then I ask that there be examples
(analogous to or supersets of our contexts examples, pt2pt examples,
and collective examples) illustrating same.  If this is to be an added
task of my subcommittee [which makes reasonable sense to me] then I
am eager for assistance nonetheless.  I would want to see what people think
existing thread practice is, what the design choices are, and which we
choose to support, as well.  It is not obvious to me that we really
know what we mean (formally, practically) by "thread safety" for
SPMD/MPMD message passing applications.  Recall that there are at
least three kinds of threads: O/S threads, compiler threads, user threads
(we seem to really mean the latter in our discussions).

    Thanks + please advise soonest.

				Tony Skjellum

PS References to accessible texts or papers or software (eg, portable
thread packages) are acceptable forms of advice.
    
PPS I would like to have a new draft of the context chapter out by
August 1 (with possible revisions by August 5).  I am getting one
extremely negative set of feedback from a single vendor
representative, and one more balanced feedback (ie, only two people
are communicating with me on the context chapter).  I am not seeing
widespread debate over the context chapter.  This MUST happen now,
between the meetings, since we have our best current draft available.
We will not be successful if we are debating it all again at the next
meeting without careful thought now (eg, on the threads issue).

From owner-mpi-collcomm@CS.UTK.EDU Sun Jul 18 11:31:34 1993
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib)
	id AA08296; Sun, 18 Jul 93 11:31:34 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA03770; Sun, 18 Jul 93 11:30:59 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Sun, 18 Jul 1993 11:30:58 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA03600; Sun, 18 Jul 93 11:28:50 -0400
Received:  by Aurora.CS.MsState.Edu (4.1/6.0s-FWP);
	   id AA19772; Sun, 18 Jul 93 10:28:48 CDT
Date: Sun, 18 Jul 93 10:28:48 CDT
From: Tony Skjellum <tony@Aurora.CS.MsState.Edu>
Message-Id: <9307181528.AA19772@Aurora.CS.MsState.Edu>
To: mpi-collcomm@cs.utk.edu, mpi-context@cs.utk.edu, mpi-core@cs.utk.edu,
        mpi-pt2pt@cs.utk.edu
Subject: Heterogeneous communication proposal

Dear colleagues:

In order to make inter-vendor MPI implementations and cluster
computing with MPI even a reasonable possibility, I suggest that we
need to adopt the requirement that data formats follow IEEE Std
1596.5-1993 Data Transfer Formats Optimized for SCI.  I propose that
debate be started on this topic, and that a presentation be made at
MPI in which the features of 1596.5-1993 are discussed and elaborated.
Currently, there is little hope for standardization between vendor
(or home-brew heterogeneous MPI) implementations.  We recognize that
XDR is inefficient, so this IEEE standard seems the logical
alternative.  If we say nothing, implementations will surely become
incompatible.

I volunteer to champion this effort, but only after the context chapter
issues are resolved (so for September meeting or later). It is very
important, to my mind, that we embrace other reasonable standards in
creating MPI, such as this data standard.

- Tony Skjellum

Enclosure:

From dbg@SLAC.Stanford.EDU Thu Jul 15 15:37:11 1993
Date: Thu, 15 Jul 1993 12:56:25 -0800
From: dbg@SLAC.Stanford.EDU (Dave Gustavson)
Subject: SCI Data Transfer Formats standard approved
To: sci_announce@hplsci.hpl.hp.com
X-Envelope-To: sci_announce@hplsci.hpl.hp.com
Content-Transfer-Encoding: 7BIT
X-Sender: dbg@scs.slac.stanford.edu
Content-Length: 4329
X-Lines: 86
Status: RO

In its June 1993 meeting, the IEEE Standards Board approved:

IEEE Std 1596.5-1993 Data Transfer Formats Optimized for SCI. 
(The approved document was Draft 1.0 8Dec92, but with significant edits to
clarify the vendor-dependent formats listed in the appendix.)


Congratulations to the working group, and especially to working group
chairman David James!

This new standard defines a set of data types and formats that will work
efficiently on SCI for transferring data among heterogeneous processors in
a multiprocessor SCI system.

This work has attracted much interest, even beyond the SCI community. It
solves a difficult problem that must be faced in heterogeneous systems.

Over the years a great amount of effort has been invested in translating
data among dissimilar computers. Computer-bus bridges have incorporated
byte swappers to try to handle the big-endian/little-endian conversion.
Software and hardware have been used to convert floating point formats. 

It was always tempting to have the hardware swap byte addresses to preserve
full-bus-width integers, which seem to look the same on big- and
little-endian machines, and then not swap bytes when passing character
strings. 

But finally we understood that this problem cannot be solved by the
hardware (at least until some far-future day when we all use standardized
fully tagged self-describing data structures!). 

The magnitude of the problem became clearer during work on Futurebus+,
where we had to deal with multiple bus widths and their interfaces with
other standards like VME and SCI. When you observe data flowing along paths
of various widths through a connected system, you see how hardware
byte-swappers can arbitrarily scramble the data bytes of various number
formats such as long integer or floating point. Furthermore, the scrambling
may depend on the particular path used and on the state of the bridge
hardware at the time the data passed through!

Finally the solution became clear: first, keep the relative byte address of
each component of a data item fixed as it flows through the complex system.
(This is now referred to as the "address invariance" principle.) Thus,
character strings arrive unchanged, but other data items may have been
created with their bytes in inconvenient (but well-defined) places. 

Then provide the descriptive tools needed to tell the compiler what the
original format of the data was. (That is what this standard does.) 

The compiler knows the properties of the machine for which it is compiling,
and thus now has enough information to allow it to generate code to perform
the needed conversions before trying to do arithmetic on the foreign data.
For example, when the compiler loads a long integer into a register it may
swap bytes to convert from little-endian to big-endian significance, so
that the register will contain the correct arithmetic value for use in
calculations. Similarly, when an arithmetic result is stored back into a
structure that is declared with foreign data types the compiler ensures
that the conversions are done appropriately before the data are stored.

This capability is critical for work in heterogeneous multiprocessors, but
it is also useful for interpreting data tapes or disk files that were
written on a different machine.

The IEEE Std 1596.5-defined descriptors include type (character, integer,
floating), sizes, alignment, endian-ness, and atomic properties (can I be
certain this long integer is always changed as a unit, never by a series of
narrower loads and stores that might allow inconsistent data to be
momentarily visible to a sharing machine).

The standard also includes a C-code test suite that can be used to check
the degree of compliance of a given implementation.

The chairman is Dr. David V. James, MS 301-4G, Apple Computer, 20525
Mariani Avenue, Cupertino, CA  95014, 408-974-1321, fax 408-974-9793,
dvj@apple.com.


Again, my hearty congratulations on a job well done!

David Gustavson, SCI (IEEE Std 1596-1992 Scalable Coherent Interface) chair

David B. Gustavson                                      phone 415/961-3539
SCI (ANSI/IEEE Std 1596 Scalable Coherent Interface) chairman
SLAC Computation Research Group, Stanford University      fax 415/961-3530
POB 4349, MS 88, Stanford, CA 94309                  dbg@slac.stanford.edu


From owner-mpi-collcomm@CS.UTK.EDU Tue Jul 20 07:32:21 1993
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib)
	id AA23610; Tue, 20 Jul 93 07:32:21 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA14626; Tue, 20 Jul 93 07:32:56 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Tue, 20 Jul 1993 07:32:55 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from sun2.nsfnet-relay.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA14618; Tue, 20 Jul 93 07:32:52 -0400
Via: uk.ac.southampton.ecs; Tue, 20 Jul 1993 12:32:28 +0100
Via: brewery.ecs.soton.ac.uk; Tue, 20 Jul 93 12:23:58 BST
From: Ian Glendinning <igl@ecs.soton.ac.uk>
Received: from holt.ecs.soton.ac.uk by brewery.ecs.soton.ac.uk;
          Tue, 20 Jul 93 12:34:13 BST
Date: Tue, 20 Jul 93 12:34:15 BST
Message-Id: <11655.9307201134@holt.ecs.soton.ac.uk>
To: mpi-context@cs.utk.edu, mpi-collcomm@cs.utk.edu
Subject: Quiescent contexts, threads, and collective communications

Hi,
   it seems to me that the recent discussion in mpi-context about thread
safety is also relevant to collective communications, because the routine
mpi_contexts_alloc(), which we've been mainly discussing, is a collective
operation.  When used in conjunction with threads, it would seem elegant
to allow a context to be passed as an argument to the routine (as part of a
communicator) rather than just a group.  Unfortunately this would add the
requirement that the context be quiescent at the time of the call, which
people do not seem to like.  However, as presently defined, the collective
communication routines *do* take communicators as arguments, and so surely
there is also a requirement for quiescence here, which no one seems to have
objected to.  Am I missing something here?
   Ian
From owner-mpi-collcomm@CS.UTK.EDU Tue Jul 20 10:56:54 1993
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib)
	id AA25451; Tue, 20 Jul 93 10:56:54 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA27923; Tue, 20 Jul 93 10:57:28 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Tue, 20 Jul 1993 10:57:27 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA27899; Tue, 20 Jul 93 10:57:07 -0400
Received:  by Aurora.CS.MsState.Edu (4.1/6.0s-FWP);
	   id AA17392; Tue, 20 Jul 93 09:56:57 CDT
Date: Tue, 20 Jul 93 09:56:57 CDT
From: Tony Skjellum <tony@Aurora.CS.MsState.Edu>
Message-Id: <9307201456.AA17392@Aurora.CS.MsState.Edu>
To: igl@ecs.soton.ac.uk, mpi-collcomm@cs.utk.edu, mpi-context@cs.utk.edu
Subject: Re:  Quiescent contexts, threads, and collective communications

>Hi,
>   it seems to me that the recent discussion in mpi-context about thread
>safety is also relevant to collective communications, because the routine
>mpi_contexts_alloc(), which we've been mainly discussing, is a collective
>operation.  When used in conjunction with threads, it would seem elegant
>to allow a context to be passed as an argument to the routine (as part of a
>communicator) rather than just a group.  Unfortunately this would add the
>requirement that the context be quiescent at the time of the call, which
>people do not seem to like.  However, as presently defined, the collective
>communication routines *do* take communicators as arguments, and so surely
>there is also a requirement for quiescence here, which no one seems to have
>objected to.  Am I missing something here?
>   Ian
>
I agree with Ian.  If we move back to having comm as the first argument
to mpi_make_comm() and mpi_contexts_alloc(), then these are just
collective calls, and all our worries about multi-threadedness appear
to apply equally well to collcomm chapter.

-Tony

From owner-mpi-collcomm@CS.UTK.EDU Mon Aug  2 16:07:56 1993
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib)
	id AA08511; Mon, 2 Aug 93 16:07:56 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA08277; Mon, 2 Aug 93 16:07:07 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Mon, 2 Aug 1993 16:07:03 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from gstws.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA08248; Mon, 2 Aug 93 16:06:57 -0400
Received: by gstws.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03)
          id AA17955; Mon, 2 Aug 1993 16:06:48 -0400
Date: Mon, 2 Aug 1993 16:06:48 -0400
From: geist@gstws.epm.ornl.gov (Al Geist)
Message-Id: <9308022006.AA17955@gstws.epm.ornl.gov>
To: mpi-collcomm@cs.utk.edu
Subject: New collective draft incorporating general datatypes from pt2pt.


[PostScript attachment omitted: cc.ps, dvips output of cc.dvi, 14 pages, dated 1993.08.02 -- the new collective chapter draft.]
/CLIP 2 N}B /@hoffset{/ho X}B /@voffset{/vo X}B /@angle{/ang X}B /@rwi{
10 div /rwi X /rwiSeen true N}B /@rhi{10 div /rhi X /rhiSeen true N}B
/@llx{/llx X}B /@lly{/lly X}B /@urx{/urx X}B /@ury{/ury X}B /magscale
true def end /@MacSetUp{userdict /md known{userdict /md get type
/dicttype eq{userdict begin md length 10 add md maxlength ge{/md md dup
length 20 add dict copy def}if end md begin /letter{}N /note{}N /legal{}
N /od{txpose 1 0 mtx defaultmatrix dtransform S atan/pa X newpath
clippath mark{transform{itransform moveto}}{transform{itransform lineto}
}{6 -2 roll transform 6 -2 roll transform 6 -2 roll transform{
itransform 6 2 roll itransform 6 2 roll itransform 6 2 roll curveto}}{{
closepath}}pathforall newpath counttomark array astore /gc xdf pop ct 39
0 put 10 fz 0 fs 2 F/|______Courier fnt invertflag{PaintBlack}if}N
/txpose{pxs pys scale ppr aload pop por{noflips{pop S neg S TR pop 1 -1
scale}if xflip yflip and{pop S neg S TR 180 rotate 1 -1 scale ppr 3 get
ppr 1 get neg sub neg ppr 2 get ppr 0 get neg sub neg TR}if xflip yflip
not and{pop S neg S TR pop 180 rotate ppr 3 get ppr 1 get neg sub neg 0
TR}if yflip xflip not and{ppr 1 get neg ppr 0 get neg TR}if}{noflips{TR
pop pop 270 rotate 1 -1 scale}if xflip yflip and{TR pop pop 90 rotate 1
-1 scale ppr 3 get ppr 1 get neg sub neg ppr 2 get ppr 0 get neg sub neg
TR}if xflip yflip not and{TR pop pop 90 rotate ppr 3 get ppr 1 get neg
sub neg 0 TR}if yflip xflip not and{TR pop pop 270 rotate ppr 2 get ppr
0 get neg sub neg 0 S TR}if}ifelse scaleby96{ppr aload pop 4 -1 roll add
2 div 3 1 roll add 2 div 2 copy TR .96 dup scale neg S neg S TR}if}N /cp
{pop pop showpage pm restore}N end}if}if}N /normalscale{Resolution 72
div VResolution 72 div neg scale magscale{DVImag dup scale}if 0 setgray}
N /psfts{S 65781.76 div N}N /startTexFig{/psf$SavedState save N userdict
maxlength dict begin /magscale false def normalscale currentpoint TR
/psf$ury psfts /psf$urx psfts /psf$lly psfts /psf$llx psfts /psf$y psfts
/psf$x psfts currentpoint /psf$cy X /psf$cx X /psf$sx psf$x psf$urx
psf$llx sub div N /psf$sy psf$y psf$ury psf$lly sub div N psf$sx psf$sy
scale psf$cx psf$sx div psf$llx sub psf$cy psf$sy div psf$ury sub TR
/showpage{}N /erasepage{}N /copypage{}N /p 3 def @MacSetUp}N /doclip{
psf$llx psf$lly psf$urx psf$ury currentpoint 6 2 roll newpath 4 copy 4 2
roll moveto 6 -1 roll S lineto S lineto S lineto closepath clip newpath
moveto}N /endTexFig{end psf$SavedState restore}N /@beginspecial{SDict
begin /SpecialSave save N gsave normalscale currentpoint TR
@SpecialDefaults count /ocount X /dcount countdictstack N}N /@setspecial
{CLIP 1 eq{newpath 0 0 moveto hs 0 rlineto 0 vs rlineto hs neg 0 rlineto
closepath clip}if ho vo TR hsc vsc scale ang rotate rwiSeen{rwi urx llx
sub div rhiSeen{rhi ury lly sub div}{dup}ifelse scale llx neg lly neg TR
}{rhiSeen{rhi ury lly sub div dup scale llx neg lly neg TR}if}ifelse
CLIP 2 eq{newpath llx lly moveto urx lly lineto urx ury lineto llx ury
lineto closepath clip}if /showpage{}N /erasepage{}N /copypage{}N newpath
}N /@endspecial{count ocount sub{pop}repeat countdictstack dcount sub{
end}repeat grestore SpecialSave restore end}N /@defspecial{SDict begin}
N /@fedspecial{end}B /li{lineto}B /rl{rlineto}B /rc{rcurveto}B /np{
/SaveX currentpoint /SaveY X N 1 setlinecap newpath}N /st{stroke SaveX
SaveY moveto}N /fil{fill SaveX SaveY moveto}N /ellipse{/endangle X
/startangle X /yrad X /xrad X /savematrix matrix currentmatrix N TR xrad
yrad scale 0 0 1 startangle endangle arc savematrix setmatrix}N end
%%EndProcSet
TeXDict begin 40258431 52099146 1000 300 300
(/home/sun4/u0/geist/PAPERS/MPI/cc.dvi) @start /Fa 6
63 df<0000180000300000600000E00000C0000180000380000700000600000E00000C00
001C0000380000380000700000700000E00000E00001E00001C00001C000038000038000
0380000780000700000700000F00000E00000E00001E00001E00001E00001C00001C0000
3C00003C00003C00003C0000380000780000780000780000780000780000780000780000
780000700000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000
F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000
F00000F00000F00000F00000700000780000780000780000780000780000780000780000
7800003800003C00003C00003C00003C00001C00001C00001E00001E00001E00000E0000
0E00000F000007000007000007800003800003800003800001C00001C00001E00000E000
00E000007000007000003800003800001C00000C00000E00000600000700000380000180
0000C00000E0000060000030000018157C768121>32 D<C0000060000030000038000018
00000C00000E000007000003000003800001800001C00000E00000E00000700000700000
3800003800003C00001C00001C00000E00000E00000E00000F0000070000070000078000
03800003800003C00003C00003C00001C00001C00001E00001E00001E00001E00000E000
00F00000F00000F00000F00000F00000F00000F00000F000007000007800007800007800
007800007800007800007800007800007800007800007800007800007800007800007800
007800007800007800007800007800007800007800007800007800007800007800007000
00F00000F00000F00000F00000F00000F00000F00000F00000E00001E00001E00001E000
01E00001C00001C00003C00003C00003C0000380000380000780000700000700000F0000
0E00000E00000E00001C00001C00003C0000380000380000700000700000E00000E00001
C0000180000380000300000700000E00000C0000180000380000300000600000C0000015
7C7F8121>I<0018007800F001E003C007800F001F001E003E003C007C007C007800F800
F800F800F800F800F800F800F800F800F800F800F800F800F800F800F800F800F800F800
F800F800F800F8000D25707E25>56 D<F800F800F800F800F800F800F800F800F800F800
F800F800F800F800F800F800F800F800F800F800F800F800F80078007C007C003C003E00
1E001F000F00078003C001E000F0007800180D25708025>58 D<007C007C007C007C007C
007C007C007C007C007C007C007C007C007C007C007C007C007C007C007C007C007C007C
007C00F800F800F800F001F001E003E003C0078007000E001C003800F000C000F0003800
1C000E000700078003C003E001E001F000F000F800F800F8007C007C007C007C007C007C
007C007C007C007C007C007C007C007C007C007C007C007C007C007C007C007C007C007C
0E4D798025>60 D<F8F8F8F8F8F8F8F8F8F8F8F8F8F8050E708025>62
D E /Fb 38 123 df<00E001E0038007000E001C001C0038003800700070007000E000E0
00E000E000E000E000E000E000E000700070007000380038001C001C000E000700038001
E000E00B217A9C16>40 D<C000E000700038001C000E000E000700070003800380038001
C001C001C001C001C001C001C001C001C0038003800380070007000E000E001C00380070
00E000C0000A217B9C16>I<01C00001C00001C00001C00071C700F9CF807FFF001FFC00
07F00007F0001FFC007FFF00F9CF8071C70001C00001C00001C00001C00011127E9516>
I<387C7E7E3E0E1E1C78F060070B798416>44 D<00E00001F00001F00001B00001B00003
B80003B80003B800031800071C00071C00071C00071C00071C000E0E000E0E000FFE000F
FE001FFF001C07001C07001C07007F1FC0FF1FE07F1FC013197F9816>65
D<7FF800FFFE007FFF001C0F001C07801C03801C03801C03801C07801C07001FFF001FFE
001FFE001C1F001C03801C03C01C01C01C01C01C01C01C01C01C03C01C07807FFF80FFFF
007FFC0012197F9816>I<01F18007FB800FFF801F0F803C0780380380700380700380F0
0000E00000E00000E00000E00000E00000E00000E00000F000007003807003803803803C
07001F0F000FFE0007FC0001F00011197E9816>I<7FF800FFFE007FFF001C0F001C0780
1C03C01C01C01C01C01C01E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E0
1C01C01C01C01C03C01C07801C0F807FFF00FFFE007FF8001319809816>I<7FFFC0FFFF
C07FFFC01C01C01C01C01C01C01C01C01C00001C00001C1C001C1C001FFC001FFC001FFC
001C1C001C1C001C00001C00E01C00E01C00E01C00E01C00E07FFFE0FFFFE07FFFE01319
7F9816>I<FFFEFFFEFFFE03800380038003800380038003800380038003800380038003
80038003800380038003800380FFFEFFFEFFFE0F197D9816>73 D<FFC000FFC000FFC000
1C00001C00001C00001C00001C00001C00001C00001C00001C00001C00001C00001C0000
1C00001C00001C00401C00E01C00E01C00E01C00E0FFFFE0FFFFE0FFFFE013197F9816>
76 D<FC07E0FE0FE0FE0FE03A0B803B1B803B1B803B1B803B1B803B1B803BBB8039B380
39B38039B38039B38039F38038E38038E380380380380380380380380380380380FE0FE0
FE0FE0FE0FE013197F9816>I<7E1FC0FF3FE07F1FC01D07001D87001D87001D87001DC7
001DC7001CC7001CC7001CE7001CE7001CE7001C67001C67001C77001C77001C37001C37
001C37001C17007F1F00FF9F007F0F0013197F9816>I<1FFC003FFE007FFF00780F00F0
0780E00380E00380E00380E00380E00380E00380E00380E00380E00380E00380E00380E0
0380E00380E00380F00780F00780780F007FFF003FFE001FFC0011197E9816>I<7FF800
FFFE007FFF001C0F801C03801C03C01C01C01C01C01C01C01C03C01C03801C0F801FFF00
1FFE001FF8001C00001C00001C00001C00001C00001C00001C00007F0000FF80007F0000
12197F9816>I<7FE000FFF8007FFC001C1E001C0F001C07001C07001C07001C07001C0F
001C1E001FFC001FF8001FFC001C1C001C0E001C0E001C0E001C0E001C0E201C0E701C0E
707F07E0FF87E07F03C014197F9816>82 D<07E3001FFF003FFF00781F00F00700E00700
E00700E00000F000007800003F80001FF00007FC0000FE00000F00000700000380000380
600380E00380E00700F80F00FFFE00FFFC00C7F00011197E9816>I<7FFFE0FFFFE0FFFF
E0E0E0E0E0E0E0E0E0E0E0E0E000E00000E00000E00000E00000E00000E00000E00000E0
0000E00000E00000E00000E00000E00000E00000E00007FC000FFE0007FC0013197F9816
>I<7F07F0FF8FF87F07F01C01C01C01C01C01C01C01C01C01C01C01C01C01C01C01C01C
01C01C01C01C01C01C01C01C01C01C01C01C01C01C01C00E03800E038007070007FF0003
FE0000F8001519809816>I<FE0FE0FF1FE0FE0FE01C07001C07000E0E000E0E00071C00
071C00071C0003B80003B80001F00001F00000E00000E00000E00000E00000E00000E000
00E00000E00003F80007FC0003F80013197F9816>89 D<1FE0003FF0007FF800783C0030
0E00000E00000E0003FE001FFE003E0E00700E00E00E00E00E00E00E00783E007FFFE03F
E7E00F83E013127E9116>97 D<03F80FFC1FFE3C1E780C7000E000E000E000E000E000F0
00700778073E0E1FFC0FF803F010127D9116>99 D<003F00007F00003F00000700000700
00070000070003C7000FF7001FFF003C1F00780F00700700E00700E00700E00700E00700
E00700E00700700F00700F003C1F001FFFE00FE7F007C7E014197F9816>I<03E00FF81F
FC3C1E780E7007E007FFFFFFFFFFFFE000E000700778073C0F1FFE0FFC03F010127D9116
>I<001F00007F8000FF8001E78001C30001C00001C0007FFF00FFFF00FFFF0001C00001
C00001C00001C00001C00001C00001C00001C00001C00001C00001C00001C0003FFE007F
FF003FFE0011197F9816>I<018003C003C0018000000000000000007FC07FC07FC001C0
01C001C001C001C001C001C001C001C001C001C001C07FFFFFFF7FFF101A7D9916>105
D<FFC000FFC000FFC00001C00001C00001C00001C00001C00001C00001C00001C00001C0
0001C00001C00001C00001C00001C00001C00001C00001C00001C00001C000FFFF80FFFF
80FFFF8011197E9816>108 D<F9C380FFEFC0FFFFE03C78E03C78E03870E03870E03870
E03870E03870E03870E03870E03870E03870E03870E0FE7CF8FE7CF8FE3C781512809116
>I<7E3C00FEFE007FFF000F87800F03800E03800E03800E03800E03800E03800E03800E
03800E03800E03800E03807FC7F0FFE7F87FC7F01512809116>I<03E0000FF8001FFC00
3C1E00780F00700700E00380E00380E00380E00380E00380F00780700700780F003C1E00
1FFC000FF80003E00011127E9116>I<7E3E00FEFF007FFF800F83C00F00E00E00E00E00
700E00700E00700E00700E00700E00700E00E00F01E00F83C00FFF800EFF000E3C000E00
000E00000E00000E00000E00000E00007FC000FFE0007FC000141B809116>I<FF0FC0FF
3FE0FF7FE007F04007C00007800007800007000007000007000007000007000007000007
0000070000FFFC00FFFC00FFFC0013127F9116>114 D<0FEC3FFC7FFCF03CE01CE01C70
007F801FF007F8003C600EE00EF00EF81EFFFCFFF8C7E00F127D9116>I<030000070000
0700000700000700007FFF00FFFF00FFFF00070000070000070000070000070000070000
07000007010007038007038007038007870003FE0001FC0000F80011177F9616>I<7E1F
80FE3F807E1F800E03800E03800E03800E03800E03800E03800E03800E03800E03800E03
800E03800E0F800FFFF007FBF803E3F01512809116>I<7F1FC0FF1FE07F1FC01C07001E
0F000E0E000E0E000E0E00071C00071C00071C00071C0003B80003B80003B80001F00001
F00000E00013127F9116>I<7F1FC0FF9FE07F1FC01C07000E07000E0E000E0E00070E00
071C00071C00039C00039C0003980001B80001B80000F00000F00000F00000E00000E000
00E00001C00079C0007BC0007F80003F00003C0000131B7F9116>121
D<3FFFC07FFFC07FFFC0700780700F00701E00003C0000780001F00003E0000780000F00
001E01C03C01C07801C0FFFFC0FFFFC0FFFFC012127F9116>I E
/Fc 26 118 df<FFE0FFE0FFE00B037F8C10>45 D<F0F0F0F004047B830E>I<00C001C0
07C0FFC0FFC0FBC003C003C003C003C003C003C003C003C003C003C003C003C003C003C0
03C003C003C003C003C003C003C003C003C003C003C0FFFFFFFFFFFF10227CA118>49
D<03F0000FFC001FFE003C1F003007807007C06003C0E003E0C001E04001E04001E00001
E00001E00001E00003C00003C0000780000780000F00001E00003C0000780000F00001E0
0001C0000380000700000E00001C0000380000700000FFFFE0FFFFE0FFFFE013227EA118
>I<01F00007FC001FFF003E0F003807807003C02003C02003C00003C00003C00003C000
0780000780000F00001E0003FC0003F80003FE00000F000007800003C00003C00001E000
01E00001E00001E00001E08001E0C003C0E003C07007803C0F801FFF000FFC0003F00013
237EA118>I<001F00001F00002F00002F00006F0000EF0000CF0001CF0001CF00038F00
038F00078F00070F000F0F000E0F001E0F003C0F003C0F00780F00780F00F00F00FFFFF8
FFFFF8FFFFF8000F00000F00000F00000F00000F00000F00000F00000F00000F0015217F
A018>I<3FFF803FFF803FFF803C00003C00003C00003C00003C00003C00003C00003C00
003C00003CF8003FFE003FFF003F0F803E07803C03C03803C00001E00001E00001E00001
E00001E00001E00001E04003C04003C0E003C07007807C1F003FFE000FFC0003F0001322
7EA018>I<001F0000001F0000003F8000003F8000003B8000007BC0000073C0000071C0
0000F1E00000F1E00000E0E00001E0F00001E0F00001C0F00003C0780003C07800038078
0007803C0007803C0007003C000F001E000F001E000FFFFE001FFFFF001FFFFF001C000F
003C0007803C00078038000780780003C0780003C0700003C0F00001E0F00001E0E00001
E01B237EA220>65 D<FFFC00FFFF80FFFFC0F007F0F001F0F00078F0003CF0003CF0003C
F0003CF0003CF00038F00078F000F0F003E0FFFFC0FFFF00FFFFC0F00FE0F001F8F00078
F0003CF0001CF0001EF0001EF0001EF0001EF0001EF0003CF0007CF000F8F003F0FFFFE0
FFFFC0FFFE0017237BA220>I<000FF000003FFE0000FFFF8001F80F8003E00380078000
000F0000001E0000001E0000003C0000003C000000780000007800000078000000F00000
00F0000000F0000000F0000000F0000000F0000000F000FFC0F000FFC0F000FFC0780003
C0780003C0780003C03C0003C03C0003C01E0003C01E0003C00F0003C0078003C003E003
C001F807C000FFFFC0003FFF00000FF8001A257DA321>71 D<FFFC00FFFF80FFFFC0F003
E0F000F0F00078F00038F0003CF0003CF0003CF0003CF0003CF00038F00078F000F0F003
E0FFFFC0FFFF80FFFE00F01E00F00F00F00700F00780F00380F003C0F001E0F001E0F000
F0F000F0F00078F00038F0003CF0001EF0001EF0000F18237BA21F>82
D<00FE0003FFC007FFE00F81E01E00603C00003C00007800007800007800007800007800
007C00003C00003F00001FC0000FFC0007FF0001FF80003FC00007E00001F00000F00000
F8000078000078000078000078000078000078C000F0E000F0F801E07E07C03FFF800FFF
0001FC0015257EA31B>I<07E01FF83FFC381E201E000F000F000F000F00FF07FF1FFF3E
0F780FF00FF00FF00FF00FF83F7FFF3FEF1F8F10167E9517>97 D<F00000F00000F00000
F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F1F000F7FC00
FFFE00FC1F00F80F00F00780F00780F003C0F003C0F003C0F003C0F003C0F003C0F003C0
F003C0F00780F00780F80F00FC3E00FFFE00F7F800F1F00012237CA219>I<01FC0007FF
000FFF801F03803C0180780000780000700000F00000F00000F00000F00000F00000F000
007800007800007800003C00401F03C00FFFC007FF8001FC0012167E9516>I<0003C000
03C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003C003
E3C00FFBC01FFFC03F0FC03C07C07803C07803C0F003C0F003C0F003C0F003C0F003C0F0
03C0F003C0F003C07803C07803C03C07C03E0FC01FFFC00FFBC003E3C012237EA219>I<
03F00007FC001FFE003E0F003C0780780380780380F001C0FFFFC0FFFFC0FFFFC0F00000
F00000F000007000007800007800003C00801F07800FFF8007FF0001F80012167E9516>
I<01F07807FFF80FFFF81F1F001E0F003C07803C07803C07803C07803C07801E0F001F1F
000FFE001FFC0019F0003800003800003C00001FFE001FFFC01FFFE03FFFF07801F07800
F8F00078F00078F00078F000787800F03E03E01FFFC00FFF8001FC0015217F9518>103
D<F000F000F000F000F000F000F000F000F000F000F000F000F000F1F8F3FCF7FEFE1EF8
0FF80FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00F10
237CA219>I<F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0
F0F0F0F0F004237DA20B>108 D<F1F8F3FCF7FEFE1EF80FF80FF00FF00FF00FF00FF00F
F00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00F10167C9519>110
D<01FC0007FF000FFF801F07C03C01E07800F07800F0700070F00078F00078F00078F000
78F00078F000787800F07800F07C01F03E03E01F07C00FFF8007FF0001FC0015167F9518
>I<F0E0F3E0F7E0FF00FE00FC00F800F800F000F000F000F000F000F000F000F000F000
F000F000F000F000F0000B167C9511>114 D<07F01FFC3FFE3C0E7806780078007C003F
003FF01FF80FFC01FE001F000F000F000FC00FF81EFFFE3FFC0FF010167F9513>I<0F00
0F000F000F000F000F00FFF8FFF8FFF80F000F000F000F000F000F000F000F000F000F00
0F000F000F000F000F080F1C07FC07F803E00E1C7F9B12>I<F00FF00FF00FF00FF00FF0
0FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF01FF83F7FFF7FCF1F0F10167C
9519>I E /Fd 1 49 df<07C018303018701C600C600CE00EE00EE00EE00EE00EE00EE0
0EE00EE00E600C600C701C30181C7007C00F157F9412>48 D E /Fe
12 120 df<70F8FCFC74040404080810102040060E7C840D>59 D<000001C00000078000
001E00000078000001E00000078000000E00000038000000F0000003C000000F0000003C
000000F0000000F00000003C0000000F00000003C0000000F0000000380000000E000000
0780000001E0000000780000001E0000000780000001C01A1A7C9723>I<E00000007800
00001E0000000780000001E0000000780000001C0000000700000003C0000000F0000000
3C0000000F00000003C0000003C000000F0000003C000000F0000003C00000070000001C
00000078000001E00000078000001E00000078000000E00000001A1A7C9723>62
D<000002000000060000000E0000000E0000001E0000001F0000002F0000002F0000004F
0000008F0000008F0000010F0000010F0000020F0000040F0000040F0000080F80000807
80001007800020078000200780007FFF8000400780008007800180078001000780020007
80020007C0040003C00C0003C01E0007C0FF807FFC1E207E9F22>65
D<00E001E001E000C000000000000000000000000000000E001300238043804380438087
00070007000E000E001C001C001C20384038403840388019000E000B1F7E9E10>105
D<0000C00001E00001E00001C0000000000000000000000000000000000000000000001E
00006300004380008380010380010380020700000700000700000700000E00000E00000E
00000E00001C00001C00001C00001C000038000038000038000038000070000070003070
0078E000F1C0006380003E00001328819E13>I<01E0000FE00001C00001C00001C00001
C0000380000380000380000380000700000700000701E00706100E08700E10F00E20F00E
40601C80001D00001E00001FC000387000383800383800381C2070384070384070384070
1880E01880600F0014207E9F18>I<1E07C07C00231861860023A032030043C034030043
80380380438038038087007007000700700700070070070007007007000E00E00E000E00
E00E000E00E00E000E00E01C101C01C01C201C01C038201C01C038401C01C01840380380
18801801800F0024147E9328>109 D<1E07802318C023A06043C0704380704380708700
E00700E00700E00700E00E01C00E01C00E01C00E03821C03841C07041C07081C03083803
101801E017147E931B>I<0F00601180702180E021C0E041C0E04380E08381C00701C007
01C00701C00E03800E03800E03800E03840E07080C07080C07080E0F1006131003E1E016
147E931A>117 D<0F01801183C02183E021C1E041C0E043806083804007004007004007
00400E00800E00800E00800E01000E01000C02000E04000E040006180001E00013147E93
16>I<0F006060118070F02180E0F821C0E07841C0E0384380E0188381C0100701C01007
01C0100701C0100E0380200E0380200E0380200E0380400E0380400E0380800E07808006
0781000709860001F078001D147E9321>I E /Ff 66 126 df<007000F001E003C00780
0F001E001C00380038007000700070007000E000E000E000E000E000E000E000E0007000
700070007000380038001C001E000F00078003C001F000F000700C24799F18>40
D<6000F00078003C001E000F000780038001C001C000E000E000E000E000700070007000
70007000700070007000E000E000E000E001C001C0038007800F001E003C007800F00060
000C247C9F18>I<01C00001C00001C00001C000C1C180F1C780F9CF807FFF001FFC0007
F00007F0001FFC007FFF00F9CF80F1C780C1C18001C00001C00001C00001C00011147D97
18>I<00600000F00000F00000F00000F00000F00000F00000F0007FFFC0FFFFE0FFFFE0
7FFFC000F00000F00000F00000F00000F00000F00000F00000600013147E9718>I<1C3E
7E7F3F1F070E1E7CF860080C788518>I<7FFF00FFFF80FFFF807FFF0011047D8F18>I<30
78FCFC78300606778518>I<000300000780000780000F80000F00001F00001E00001E00
003E00003C00007C0000780000780000F80000F00001F00001E00003E00003C00003C000
07C0000780000F80000F00000F00001F00001E00003E00003C00003C00007C0000780000
F80000F00000F0000060000011247D9F18>I<01F00007FC000FFE001F1F001C07003803
807803C07001C07001C0E000E0E000E0E000E0E000E0E000E0E000E0E000E0E000E0E000
E0F001E07001C07001C07803C03803801C07001F1F000FFE0007FC0001F000131C7E9B18
>I<01800380038007800F803F80FF80FB80438003800380038003800380038003800380
038003800380038003800380038003807FFCFFFE7FFC0F1C7B9B18>I<03F0000FFE003F
FF007C0F807003C0E001C0F000E0F000E06000E00000E00000E00001C00001C00003C000
0780000F00001E00003C0000780000F00001E00007C0000F80001E00E03C00E07FFFE0FF
FFE07FFFE0131C7E9B18>I<3078FCFC783000000000000000003078FCFC783006147793
18>58 D<183C7E7E3C180000000000000000183C7E7E3E1E0E1C3C78F060071A789318>
I<000300000780001F80003F00007E0001FC0003F00007E0001FC0003F00007E0000FC00
00FC00007E00003F00001FC00007E00003F00001FC00007E00003F00001F800007800003
0011187D9918>I<7FFFC0FFFFE0FFFFE0FFFFE0000000000000000000000000FFFFE0FF
FFE0FFFFE07FFFC0130C7E9318>I<600000F00000FC00007E00003F00001FC00007E000
03F00001FC00007E00003F00001F80001F80003F00007E0001FC0003F00007E0001FC000
3F00007E0000FC0000F0000060000011187D9918>I<00700000F80000F80000D80000D8
0001DC0001DC0001DC00018C00038E00038E00038E00038E000306000707000707000707
000707000FFF800FFF800FFF800E03800E03801C01C01C01C07F07F0FF8FF87F07F0151C
7F9B18>65 D<FFFC00FFFF00FFFF801C03C01C01C01C00E01C00E01C00E01C00E01C01E0
1C01C01C07C01FFF801FFF001FFFC01C03C01C00E01C00F01C00701C00701C00701C0070
1C00F01C00E01C03E0FFFFC0FFFF80FFFE00141C7F9B18>I<00F8E003FEE007FFE00F07
E01E03E03C01E03800E07000E07000E0700000E00000E00000E00000E00000E00000E000
00E00000E000007000007000E07000E03800E03C00E01E01C00F07C007FF8003FE0000F8
00131C7E9B18>I<7FF800FFFE007FFF001C0F801C03C01C03C01C01E01C00E01C00E01C
00F01C00701C00701C00701C00701C00701C00701C00701C00701C00F01C00E01C00E01C
01E01C01C01C03C01C0F807FFF00FFFE007FF800141C7F9B18>I<FFFFF0FFFFF0FFFFF0
1C00701C00701C00701C00701C00001C00001C0E001C0E001C0E001FFE001FFE001FFE00
1C0E001C0E001C0E001C00001C00001C00381C00381C00381C00381C0038FFFFF8FFFFF8
FFFFF8151C7F9B18>I<FFFFE0FFFFE0FFFFE01C00E01C00E01C00E01C00E01C00001C00
001C1C001C1C001C1C001FFC001FFC001FFC001C1C001C1C001C1C001C00001C00001C00
001C00001C00001C00001C0000FFC000FFC000FFC000131C7E9B18>I<01F1C003FDC00F
FFC01F0FC01C03C03803C03801C07001C07001C0700000E00000E00000E00000E00000E0
0000E00FF0E01FF0E00FF07001C07001C07003C03803C03803C01C07C01F0FC00FFFC003
FDC001F1C0141C7E9B18>I<7F07F0FF8FF87F07F01C01C01C01C01C01C01C01C01C01C0
1C01C01C01C01C01C01C01C01FFFC01FFFC01FFFC01C01C01C01C01C01C01C01C01C01C0
1C01C01C01C01C01C01C01C01C01C07F07F0FF8FF87F07F0151C7F9B18>I<7FFF00FFFF
807FFF0001C00001C00001C00001C00001C00001C00001C00001C00001C00001C00001C0
0001C00001C00001C00001C00001C00001C00001C00001C00001C00001C00001C0007FFF
00FFFF807FFF00111C7D9B18>I<7FE000FFE0007FE0000E00000E00000E00000E00000E
00000E00000E00000E00000E00000E00000E00000E00000E00000E00000E00000E00000E
00000E00700E00700E00700E00700E00707FFFF0FFFFF07FFFF0141C7F9B18>76
D<FC01F8FE03F8FE03F83B06E03B06E03B06E03B06E03B8EE03B8EE0398CE0398CE039DC
E039DCE039DCE038D8E038D8E038F8E03870E03870E03800E03800E03800E03800E03800
E03800E0FE03F8FE03F8FE03F8151C7F9B18>I<7E07F0FF0FF87F07F01D81C01D81C01D
81C01DC1C01CC1C01CC1C01CE1C01CE1C01CE1C01C61C01C71C01C71C01C31C01C39C01C
39C01C39C01C19C01C19C01C1DC01C0DC01C0DC01C0DC07F07C0FF87C07F03C0151C7F9B
18>I<0FF8003FFE007FFF00780F00700700F00780E00380E00380E00380E00380E00380
E00380E00380E00380E00380E00380E00380E00380E00380E00380E00380E00380F00780
700700780F007FFF003FFE000FF800111C7D9B18>I<FFFE00FFFF80FFFFC01C03C01C01
E01C00E01C00701C00701C00701C00701C00701C00E01C01E01C03C01FFFC01FFF801FFE
001C00001C00001C00001C00001C00001C00001C00001C0000FF8000FF8000FF8000141C
7F9B18>I<7FF800FFFE007FFF001C0F801C03801C03C01C01C01C01C01C01C01C03C01C
03801C0F801FFF001FFE001FFE001C0F001C07001C03801C03801C03801C03801C03801C
039C1C039C1C039C7F01F8FF81F87F00F0161C7F9B18>82 D<03F3801FFF803FFF807C0F
80700780E00380E00380E00380E000007000007800003F00001FF00007FE0000FF00000F
800003C00001C00000E00000E06000E0E000E0E001E0F001C0F80780FFFF80FFFE00E7F8
00131C7E9B18>I<7FFFF8FFFFF8FFFFF8E07038E07038E07038E0703800700000700000
700000700000700000700000700000700000700000700000700000700000700000700000
700000700000700000700007FF0007FF0007FF00151C7F9B18>I<FF83FEFF83FEFF83FE
1C00701C00701C00701C00701C00701C00701C00701C00701C00701C00701C00701C0070
1C00701C00701C00701C00701C00701C00701C00700E00E00F01E00783C003FF8001FF00
007C00171C809B18>I<7F8FE07F9FE07F8FE00E07000F0700070E00078E00039C0003DC
0001F80001F80000F00000F00000700000F00000F80001F80001DC00039E00038E00070F
000707000E07800E03801E03C07F07F0FF8FF87F07F0151C7F9B18>88
D<FF07F8FF07F8FF07F81C01C01E03C00E03800F0780070700070700038E00038E0001DC
0001DC0001DC0000F80000F8000070000070000070000070000070000070000070000070
0000700001FC0003FE0001FC00151C7F9B18>I<3FFFE07FFFE07FFFE07001C07003C070
0780700700000F00001E00001C00003C0000780000700000F00001E00001C00003C00007
80000700000F00001E00E01C00E03C00E07800E07000E0FFFFE0FFFFE0FFFFE0131C7E9B
18>I<FFF8FFF8FFF8E000E000E000E000E000E000E000E000E000E000E000E000E000E0
00E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000FFF8FF
F8FFF80D24779F18>I<600000F00000F00000F800007800007C00003C00003C00003E00
001E00001F00000F00000F00000F800007800007C00003C00003C00003E00001E00001F0
0000F00000F800007800007800007C00003C00003E00001E00001E00001F00000F00000F
8000078000078000030011247D9F18>I<FFF8FFF8FFF800380038003800380038003800
380038003800380038003800380038003800380038003800380038003800380038003800
3800380038003800380038FFF8FFF8FFF80D247F9F18>I<7FFF00FFFF80FFFF807FFF00
11047D7F18>95 D<1FE0003FF8007FFC00781E00300E0000070000070000FF0007FF001F
FF007F0700780700E00700E00700E00700F00F00781F003FFFF01FFBF007E1F014147D93
18>97 D<7E0000FE00007E00000E00000E00000E00000E00000E00000E3E000EFF800FFF
C00FC1E00F80E00F00700E00700E00380E00380E00380E00380E00380E00380F00700F00
700F80E00FC1E00FFFC00EFF80063E00151C809B18>I<01FE0007FF001FFF803E078038
0300700000700000E00000E00000E00000E00000E00000E000007000007001C03801C03E
03C01FFF8007FF0001FC0012147D9318>I<001F80003F80001F80000380000380000380
00038000038003E3800FFB801FFF803C1F80380F80700780700380E00380E00380E00380
E00380E00380E00380700780700780380F803C1F801FFFF00FFBF803E3F0151C7E9B18>
I<01F00007FC001FFE003E0F00380780700380700380E001C0E001C0FFFFC0FFFFC0FFFF
C0E000007000007001C03801C03E03C01FFF8007FF0001FC0012147D9318>I<001F8000
7FC000FFE000E1E001C0C001C00001C00001C0007FFFC0FFFFC0FFFFC001C00001C00001
C00001C00001C00001C00001C00001C00001C00001C00001C00001C00001C00001C0007F
FF007FFF007FFF00131C7F9B18>I<01E1F007FFF80FFFF81E1E301C0E00380700380700
3807003807003807001C0E001E1E001FFC001FF80039E0003800001C00001FFE001FFFC0
3FFFE07801F0700070E00038E00038E00038E000387800F07E03F01FFFC00FFF8001FC00
151F7F9318>I<7E0000FE00007E00000E00000E00000E00000E00000E00000E3E000EFF
800FFFC00FC1C00F80E00F00E00E00E00E00E00E00E00E00E00E00E00E00E00E00E00E00
E00E00E00E00E00E00E07FC3FCFFE7FE7FC3FC171C809B18>I<03800007C00007C00007
C0000380000000000000000000000000007FC000FFC0007FC00001C00001C00001C00001
C00001C00001C00001C00001C00001C00001C00001C00001C00001C00001C000FFFF00FF
FF80FFFF00111D7C9C18>I<FE0000FE0000FE00000E00000E00000E00000E00000E0000
0E3FF00E7FF00E3FF00E07800E0F000E1E000E3C000E78000EF0000FF8000FFC000F9C00
0F0E000E0F000E07000E03800E03C0FFC7F8FFC7F8FFC7F8151C7F9B18>107
D<7FE000FFE0007FE00000E00000E00000E00000E00000E00000E00000E00000E00000E0
0000E00000E00000E00000E00000E00000E00000E00000E00000E00000E00000E00000E0
0000E0007FFFC0FFFFE07FFFC0131C7E9B18>I<7CE0E000FFFBF8007FFFF8001F1F1C00
1E1E1C001E1E1C001C1C1C001C1C1C001C1C1C001C1C1C001C1C1C001C1C1C001C1C1C00
1C1C1C001C1C1C001C1C1C001C1C1C007F1F1F00FFBFBF807F1F1F001914819318>I<7E
3E00FEFF807FFFC00FC1C00F80E00F00E00E00E00E00E00E00E00E00E00E00E00E00E00E
00E00E00E00E00E00E00E00E00E07FC3FCFFE7FE7FC3FC1714809318>I<01F0000FFE00
1FFF003E0F803803807001C07001C0E000E0E000E0E000E0E000E0E000E0F001E07001C0
7803C03C07803E0F801FFF000FFE0001F00013147E9318>I<7E3E00FEFF807FFFC00FC1
E00F80E00F00700E00700E00380E00380E00380E00380E00380E00380F00700F00700F80
E00FC1E00FFFC00EFF800E3E000E00000E00000E00000E00000E00000E00000E00007FC0
00FFE0007FC000151E809318>I<7F87E0FF9FF07FBFF803F87803F03003E00003C00003
C0000380000380000380000380000380000380000380000380000380007FFE00FFFF007F
FE0015147F9318>114 D<07F7003FFF007FFF00780F00E00700E00700E007007C00007F
E0001FFC0003FE00001F00600780E00380E00380F00380F80F00FFFF00FFFC00E7F00011
147D9318>I<0180000380000380000380000380007FFFC0FFFFC0FFFFC0038000038000
0380000380000380000380000380000380000380000380400380E00380E00380E001C1C0
01FFC000FF80003E0013197F9818>I<7E07E0FE0FE07E07E00E00E00E00E00E00E00E00
E00E00E00E00E00E00E00E00E00E00E00E00E00E00E00E00E00E01E00F03E007FFFC03FF
FE01FCFC1714809318>I<7F8FF0FF8FF87F8FF01E03C00E03800E03800E038007070007
0700070700038E00038E00038E00038E0001DC0001DC0001DC0000F80000F80000700015
147F9318>I<FF8FF8FF8FF8FF8FF83800E03800E03800E01C01C01C01C01C71C01CF9C0
1CF9C01CD9C01CD9C00DDD800DDD800DDD800D8D800F8F800F8F8007070015147F9318>
0000F000007000007000003800801800800C010007060001F80011147F9314>I<007C00
C6018F038F07060700070007000700070007000700FFF007000700070007000700070007
00070007000700070007000700070007000700070007007FF01020809F0E>I<0000E003
E3300E3C301C1C30380E00780F00780F00780F00780F00780F00380E001C1C001E380033
E0002000002000003000003000003FFE001FFF800FFFC03001E0600070C00030C00030C0
0030C000306000603000C01C038003FC00141F7F9417>I<0E0000FE00000E00000E0000
0E00000E00000E00000E00000E00000E00000E00000E00000E3E000E43000E81800F01C0
0F01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C0
0E01C00E01C00E01C0FFE7FC16207F9F19>I<1C001E003E001E001C0000000000000000
00000000000E007E000E000E000E000E000E000E000E000E000E000E000E000E000E000E
000E000E000E00FFC00A1F809E0C>I<00E001F001F001F000E000000000000000000000
0000007007F000F000700070007000700070007000700070007000700070007000700070
00700070007000700070007000706070F060F0C061803F000C28829E0E>I<0E0000FE00
000E00000E00000E00000E00000E00000E00000E00000E00000E00000E00000E0FF00E03
C00E03000E02000E04000E08000E10000E30000E70000EF8000F38000E1C000E1E000E0E
000E07000E07800E03800E03C00E03E0FFCFF815207F9F18>I<0E00FE000E000E000E00
0E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E00
0E000E000E000E000E000E000E000E00FFE00B20809F0C>I<0E1F01F000FE618618000E
81C81C000F00F00E000F00F00E000E00E00E000E00E00E000E00E00E000E00E00E000E00
E00E000E00E00E000E00E00E000E00E00E000E00E00E000E00E00E000E00E00E000E00E0
0E000E00E00E000E00E00E00FFE7FE7FE023147F9326>I<0E3E00FE43000E81800F01C0
0F01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C0
0E01C00E01C00E01C0FFE7FC16147F9319>I<01F800070E001C03803801C03801C07000
E07000E0F000F0F000F0F000F0F000F0F000F0F000F07000E07000E03801C03801C01C03
80070E0001F80014147F9317>I<0E3E00FEC3800F01C00F00E00E00E00E00F00E00700E
00780E00780E00780E00780E00780E00780E00700E00F00E00E00F01E00F01C00EC3000E
3E000E00000E00000E00000E00000E00000E00000E00000E0000FFE000151D7F9319>I<
03E0800619801C05803C0780380380780380700380F00380F00380F00380F00380F00380
F003807003807803803803803807801C0B800E138003E380000380000380000380000380
000380000380000380000380003FF8151D7E9318>I<0E78FE8C0F1E0F1E0F0C0E000E00
0E000E000E000E000E000E000E000E000E000E000E000E00FFE00F147F9312>I<1F9030
704030C010C010C010E00078007F803FE00FF00070803880188018C018C018E030D0608F
800D147E9312>I<020002000200060006000E000E003E00FFF80E000E000E000E000E00
0E000E000E000E000E000E000E080E080E080E080E080610031001E00D1C7F9B12>I<0E
01C0FE1FC00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E
01C00E01C00E01C00E01C00E03C00603C0030DC001F1FC16147F9319>I<FF83F81E01E0
1C00C00E00800E00800E008007010007010003820003820003820001C40001C40001EC00
00E80000E80000700000700000700000200015147F9318>I<FF9FE1FC3C0780701C0300
601C0380200E0380400E0380400E03C0400707C0800704C0800704E08003886100038871
0003C8730001D0320001D03A0000F03C0000E01C0000E01C0000601800004008001E147F
9321>I<7FC3FC0F01E00701C007018003810001C20000E40000EC00007800003800003C
00007C00004E000087000107000303800201C00601E01E01E0FF07FE1714809318>I<FF
83F81E01E01C00C00E00800E00800E008007010007010003820003820003820001C40001
C40001EC0000E80000E800007000007000007000002000002000004000004000004000F0
8000F08000F100006200003C0000151D7F9318>I<3FFF380E200E201C40384078407000
E001E001C00380078007010E011E011C0338027006700EFFFE10147F9314>I
E /Fn 12 119 df<000000003FFE00000E0000000FFFFFC0001E0000007FFFFFF8003E00
0003FFFFFFFE00FE00000FFFFFFFFF81FE00003FFFF800FFC3FE0000FFFF80000FF7FE00
01FFFC000003FFFE0007FFF0000001FFFE000FFFC00000007FFE001FFF800000003FFE00
3FFF000000001FFE007FFE000000000FFE00FFFC0000000007FE01FFF80000000007FE03
FFF00000000003FE03FFF00000000001FE07FFE00000000001FE07FFE00000000000FE0F
FFC00000000000FE0FFFC000000000007E1FFFC000000000007E1FFF8000000000007E3F
FF8000000000007E3FFF8000000000003E3FFF8000000000003E7FFF8000000000003E7F
FF0000000000003E7FFF000000000000007FFF00000000000000FFFF00000000000000FF
FF00000000000000FFFF00000000000000FFFF00000000000000FFFF00000000000000FF
FF00000000000000FFFF00000000000000FFFF00000000000000FFFF00000000000000FF
FF00000000000000FFFF00000000000000FFFF00000000000000FFFF000000000000007F
FF000000000000007FFF000000000000007FFF000000000000007FFF8000000000003E3F
FF8000000000003E3FFF8000000000003E3FFF8000000000003E1FFF8000000000003E1F
FFC000000000003E0FFFC000000000007C0FFFC000000000007C07FFE000000000007C07
FFE00000000000F803FFF00000000000F803FFF00000000001F801FFF80000000001F000
FFFC0000000003E0007FFE0000000007E0003FFF000000000FC0001FFF800000001F8000
0FFFC00000003F000007FFF0000000FE000001FFFC000001FC000000FFFF80000FF80000
003FFFF8007FF00000000FFFFFFFFFC000000003FFFFFFFF00000000007FFFFFFC000000
00000FFFFFE00000000000003FFE000000474979C756>67 D<0007FFFC000000007FFFFF
C0000001FFFFFFF8000003FFFFFFFE000007FE001FFF000007FF0003FFC0000FFF8001FF
E0000FFF8000FFF0000FFF80007FF0000FFF80007FF8000FFF80007FF80007FF00003FFC
0007FF00003FFC0003FE00003FFC0000F800003FFC00000000003FFC00000000003FFC00
000000003FFC00000000003FFC00000007FFFFFC000000FFFFFFFC000007FFFFFFFC0000
3FFFE03FFC0000FFFE003FFC0003FFF0003FFC0007FFC0003FFC000FFF00003FFC001FFE
00003FFC003FFC00003FFC007FF800003FFC007FF800003FFC00FFF000003FFC00FFF000
003FFC00FFF000003FFC00FFF000003FFC00FFF000003FFC00FFF000007FFC007FF80000
FFFC007FF80001EFFC003FFC0003EFFC003FFF0007CFFF000FFFC03F8FFFF807FFFFFF07
FFFC01FFFFFC03FFFC007FFFF001FFFC0003FF80007FF8362E7DAD3A>97
D<00001FFFC0000000FFFFF8000007FFFFFE00001FFFFFFF80007FFC00FFC000FFE001FF
C001FFC003FFE003FF8003FFE007FF0003FFE00FFE0003FFE00FFE0003FFE01FFC0001FF
C01FFC0001FFC03FFC0000FF803FFC00003E007FF8000000007FF8000000007FF8000000
00FFF800000000FFF800000000FFF800000000FFF800000000FFF800000000FFF8000000
00FFF800000000FFF800000000FFF800000000FFF8000000007FF8000000007FF8000000
007FFC000000003FFC000000003FFC000000001FFC000000F81FFE000000F80FFE000000
F80FFF000001F007FF800003F003FFC00007E001FFE0000FC000FFF0001F80007FFE00FF
00001FFFFFFE000007FFFFF8000000FFFFE00000001FFE00002D2E7CAD35>99
D<00001FFE00000001FFFFE0000007FFFFF800001FFFFFFE00007FFC07FF0000FFE001FF
8001FFC0007FC003FF80003FE007FF00003FF00FFE00001FF01FFE00000FF81FFC00000F
F83FFC00000FFC3FFC000007FC7FFC000007FC7FF8000007FC7FF8000007FE7FF8000007
FEFFF8000007FEFFF8000007FEFFFFFFFFFFFEFFFFFFFFFFFEFFFFFFFFFFFEFFFFFFFFFF
FCFFF800000000FFF800000000FFF800000000FFF8000000007FF8000000007FF8000000
007FFC000000003FFC000000003FFC000000003FFC0000001C1FFE0000003E0FFE000000
3E07FF0000007E07FF000000FC03FF800001F801FFC00003F0007FF0001FE0003FFE00FF
C0001FFFFFFF800007FFFFFE000000FFFFF80000000FFF80002F2E7DAD36>101
D<00FC0001FF0003FF8007FFC00FFFC01FFFE01FFFE01FFFE01FFFE01FFFE01FFFE00FFF
C007FFC003FF8001FF0000FC000000000000000000000000000000000000000000000000
00000000000000000000007FC0FFFFC0FFFFC0FFFFC0FFFFC0FFFFC003FFC001FFC001FF
C001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FF
C001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FF
C001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC0FFFFFFFFFFFFFFFFFFFFFF
FFFFFFFF18497CC820>105 D<007FC000FFFFC000FFFFC000FFFFC000FFFFC000FFFFC0
0003FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC0
0001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC0
0001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC0
0001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC0
0001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC0
0001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC0
0001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC000FFFFFF80FFFFFF
80FFFFFF80FFFFFF80FFFFFF8019487CC720>108 D<007FC001FFC00000FFE00000FFFF
C00FFFF80007FFFC0000FFFFC03FFFFE001FFFFF0000FFFFC0FFFFFF007FFFFF8000FFFF
C1FC07FF80FE03FFC000FFFFC3E003FFC1F001FFE00003FFC7C001FFC3E000FFE00001FF
CF0001FFE78000FFF00001FFDE0000FFEF00007FF00001FFDC0000FFEE00007FF00001FF
FC0000FFFE00007FF80001FFF80000FFFC00007FF80001FFF00000FFF800007FF80001FF
F00000FFF800007FF80001FFF00000FFF800007FF80001FFE00000FFF000007FF80001FF
E00000FFF000007FF80001FFE00000FFF000007FF80001FFE00000FFF000007FF80001FF
E00000FFF000007FF80001FFE00000FFF000007FF80001FFE00000FFF000007FF80001FF
E00000FFF000007FF80001FFE00000FFF000007FF80001FFE00000FFF000007FF80001FF
E00000FFF000007FF80001FFE00000FFF000007FF80001FFE00000FFF000007FF80001FF
E00000FFF000007FF80001FFE00000FFF000007FF80001FFE00000FFF000007FF80001FF
E00000FFF000007FF80001FFE00000FFF000007FF80001FFE00000FFF000007FF80001FF
E00000FFF000007FF80001FFE00000FFF000007FF80001FFE00000FFF000007FF80001FF
E00000FFF000007FF80001FFE00000FFF000007FF80001FFE00000FFF000007FF80001FF
E00000FFF000007FF800FFFFFFC07FFFFFE03FFFFFF0FFFFFFC07FFFFFE03FFFFFF0FFFF
FFC07FFFFFE03FFFFFF0FFFFFFC07FFFFFE03FFFFFF0FFFFFFC07FFFFFE03FFFFFF05C2E
7CAD65>I<007FC001FFC00000FFFFC00FFFF80000FFFFC03FFFFE0000FFFFC0FFFFFF00
00FFFFC1FC07FF8000FFFFC3E003FFC00003FFC7C001FFC00001FFCF0001FFE00001FFDE
0000FFE00001FFDC0000FFE00001FFFC0000FFF00001FFF80000FFF00001FFF00000FFF0
0001FFF00000FFF00001FFF00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE0
0000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF0
0001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE0
0000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF0
0001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE0
0000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF0
0001FFE00000FFF000FFFFFFC07FFFFFE0FFFFFFC07FFFFFE0FFFFFFC07FFFFFE0FFFFFF
C07FFFFFE0FFFFFFC07FFFFFE03B2E7CAD42>I<00000FFF0000000000FFFFF000000007
FFFFFE0000001FFFFFFF8000003FFC03FFC00000FFE0007FF00001FF80001FF80003FF00
000FFC0007FE000007FE000FFE000007FF000FFC000003FF001FFC000003FF803FFC0000
03FFC03FF8000001FFC03FF8000001FFC07FF8000001FFE07FF8000001FFE07FF8000001
FFE0FFF8000001FFF0FFF8000001FFF0FFF8000001FFF0FFF8000001FFF0FFF8000001FF
F0FFF8000001FFF0FFF8000001FFF0FFF8000001FFF0FFF8000001FFF0FFF8000001FFF0
7FF8000001FFE07FF8000001FFE07FF8000001FFE07FF8000001FFE03FFC000003FFC03F
FC000003FFC01FFC000003FF801FFE000007FF800FFE000007FF0007FF00000FFE0003FF
80001FFC0001FFC0003FF80000FFE0007FF000007FFC03FFE000001FFFFFFF80000007FF
FFFE00000000FFFFF0000000000FFF000000342E7DAD3B>I<0001F000000001F0000000
01F000000001F000000001F000000001F000000003F000000003F000000003F000000007
F000000007F000000007F00000000FF00000000FF00000001FF00000003FF00000003FF0
0000007FF0000001FFF0000003FFF000000FFFFFFFC0FFFFFFFFC0FFFFFFFFC0FFFFFFFF
C0FFFFFFFFC000FFF0000000FFF0000000FFF0000000FFF0000000FFF0000000FFF00000
00FFF0000000FFF0000000FFF0000000FFF0000000FFF0000000FFF0000000FFF0000000
FFF0000000FFF0000000FFF0000000FFF0000000FFF0000000FFF0000000FFF0000000FF
F0000000FFF0000000FFF0000000FFF001F000FFF001F000FFF001F000FFF001F000FFF0
01F000FFF001F000FFF001F000FFF001F000FFF001F0007FF001E0007FF803E0003FF803
E0003FFC07C0001FFE0F80000FFFFF800007FFFE000001FFFC0000001FF00024427EC12E
>116 D<007FE000003FF000FFFFE0007FFFF000FFFFE0007FFFF000FFFFE0007FFFF000
FFFFE0007FFFF000FFFFE0007FFFF00003FFE00001FFF00001FFE00000FFF00001FFE000
00FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF000
01FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE000
00FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF000
01FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE000
00FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF000
01FFE00000FFF00001FFE00000FFF00001FFE00001FFF00001FFE00001FFF00001FFE000
01FFF00001FFE00003FFF00000FFE00007FFF00000FFE0000F7FF000007FE0001F7FF000
007FF0003E7FF800003FFC00FC7FFFE0001FFFFFF87FFFE00007FFFFE07FFFE00001FFFF
807FFFE000003FFE007FFFE03B2E7CAD42>I<FFFFFF8001FFFFFFFFFF8001FFFFFFFFFF
8001FFFFFFFFFF8001FFFFFFFFFF8001FFFF01FFE000001FC001FFF000001F8001FFF000
001F8000FFF800001F0000FFF800003F00007FF800003E00007FFC00007E00003FFC0000
7C00003FFE0000FC00001FFE0000F800001FFF0001F800000FFF0001F000000FFF8003F0
000007FF8003E0000007FFC007E0000007FFC007E0000003FFE007C0000003FFE00FC000
0001FFE00F80000001FFF01F80000000FFF01F00000000FFF83F000000007FF83E000000
007FFC7E000000003FFC7C000000003FFEFC000000001FFEF8000000001FFFF800000000
1FFFF8000000000FFFF0000000000FFFF00000000007FFE00000000007FFE00000000003
FFC00000000003FFC00000000001FF800000000001FF800000000000FF000000000000FF
0000000000007E0000000000003C000000382E7DAD3F>I E /Fo
8 117 df<00001E000000003E00000000FE00000003FE0000003FFE0000FFFFFE0000FF
FFFE0000FFFFFE0000FFCFFE0000000FFE0000000FFE0000000FFE0000000FFE0000000F
FE0000000FFE0000000FFE0000000FFE0000000FFE0000000FFE0000000FFE0000000FFE
0000000FFE0000000FFE0000000FFE0000000FFE0000000FFE0000000FFE0000000FFE00
00000FFE0000000FFE0000000FFE0000000FFE0000000FFE0000000FFE0000000FFE0000
000FFE0000000FFE0000000FFE0000000FFE0000000FFE0000000FFE0000000FFE000000
0FFE0000000FFE0000000FFE0000000FFE0000000FFE0000000FFE0000000FFE0000000F
FE0000000FFE0000000FFE00007FFFFFFFC07FFFFFFFC07FFFFFFFC07FFFFFFFC0223879
B731>49 D<0003FF800180001FFFF00380007FFFFC078001FFFFFF0F8003FE00FF9F8007
F0000FFF800FE00003FF801FC00001FF803F8000007F803F8000007F807F0000003F807F
0000001F807F0000001F80FF0000000F80FF0000000F80FF0000000F80FF8000000780FF
8000000780FFC000000780FFE000000780FFF8000000007FFE000000007FFFF00000007F
FFFF0000003FFFFFF800003FFFFFFF00001FFFFFFFC0000FFFFFFFF00007FFFFFFF80003
FFFFFFFC0001FFFFFFFE00007FFFFFFF00003FFFFFFF800007FFFFFF8000007FFFFFC000
0007FFFFC00000003FFFE000000003FFE000000000FFF0000000007FF0000000003FF070
0000001FF0F00000001FF0F00000001FF0F00000000FF0F00000000FF0F80000000FF0F8
0000000FE0F80000000FE0FC0000000FE0FC0000001FC0FE0000001FC0FF0000001F80FF
C000003F80FFF000007F00FFFC0001FE00FCFFC007FC00F87FFFFFF800F01FFFFFE000E0
03FFFF8000C0003FFC00002C3D7BBB37>83 D<0000FFF000000FFFFF00003FFFFF8000FF
C01FC001FF003FE003FC007FF007FC007FF00FF8007FF01FF0007FF01FF0003FE03FF000
3FE03FF0001FC07FE00007007FE00000007FE0000000FFE0000000FFE0000000FFE00000
00FFE0000000FFE0000000FFE0000000FFE0000000FFE00000007FE00000007FE0000000
7FF00000003FF00000003FF00000001FF00000781FF80000780FF80000F007FC0000F003
FE0001E001FF8007C000FFE01F80003FFFFF00000FFFFC000000FFC00025267DA52C>99
D<0001FFC000000FFFF800003FFFFE0000FF80FF0001FE003F8007FC001FC00FF8000FE0
0FF8000FF01FF00007F03FF00007F83FF00007F87FE00007F87FE00003FC7FE00003FC7F
E00003FCFFE00003FCFFFFFFFFFCFFFFFFFFFCFFFFFFFFFCFFE0000000FFE0000000FFE0
000000FFE00000007FE00000007FE00000007FE00000003FE00000003FF000003C1FF000
003C1FF000003C0FF800007807FC0000F803FE0001F001FF0007E000FFC03FC0003FFFFF
000007FFFC000000FFE00026267DA52D>101 D<00F00003FC0007FE000FFE000FFF001F
FF001FFF001FFF000FFF000FFE0007FE0003FC0000F00000000000000000000000000000
000000000000000000000000000000000000FF00FFFF00FFFF00FFFF00FFFF0007FF0003
FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003
FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003
FF0003FF0003FF0003FF00FFFFF8FFFFF8FFFFF8FFFFF8153D7DBC1B>105
D<00FE007FC000FFFE01FFF800FFFE07FFFC00FFFE0F03FE00FFFE1C01FF0007FE3001FF
8003FE6000FF8003FEE000FFC003FEC000FFC003FF8000FFC003FF8000FFC003FF8000FF
C003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FF
C003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FF
C003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FF
C003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FFC0FFFFFC3FFFFFFFFFFC3FFF
FFFFFFFC3FFFFFFFFFFC3FFFFF30267CA537>110 D<0000FFC00000000FFFFC0000003F
FFFF000000FFC0FFC00001FE001FE00007FC000FF80007F80007F8000FF00003FC001FF0
0003FE003FF00003FF003FE00001FF007FE00001FF807FE00001FF807FE00001FF807FE0
0001FF80FFE00001FFC0FFE00001FFC0FFE00001FFC0FFE00001FFC0FFE00001FFC0FFE0
0001FFC0FFE00001FFC0FFE00001FFC0FFE00001FFC07FE00001FF807FE00001FF807FE0
0001FF803FF00003FF003FF00003FF001FF00003FE000FF80007FC000FF80007FC0007FC
000FF80003FE001FF00000FFC0FFC000003FFFFF0000000FFFFC00000001FFE000002A26
7DA531>I<0007800000078000000780000007800000078000000F8000000F8000000F80
00000F8000001F8000001F8000003F8000003F8000007F800000FF800001FF800007FF80
001FFFFFF0FFFFFFF0FFFFFFF0FFFFFFF001FF800001FF800001FF800001FF800001FF80
0001FF800001FF800001FF800001FF800001FF800001FF800001FF800001FF800001FF80
0001FF800001FF800001FF800001FF800001FF800001FF803C01FF803C01FF803C01FF80
3C01FF803C01FF803C01FF803C01FF803C00FF807800FFC078007FC070003FE0E0001FFF
C00007FF800001FF001E377EB626>116 D E end
%%EndProlog
%%BeginSetup
%%Feature: *Resolution 300dpi
TeXDict begin

%%EndSetup
%%Page: 1 1
Section 1

Collective Communication

Al Geist
Marc Snir

1.1 Introduction

Collective communication is defined to be communication that involves a group of
processes. The functions provided by the MPI collective communication include:

• Broadcast from one member to all members of a group.

• Barrier across all group members.

• Gather data from all group members to one member.

• Scatter data from one member to all members of a group.

• Global operations such as sum, max, min, etc., where the result is known by all
  group members, and a variation where the result is known by only one member.
  Also included is the ability to have user-defined global operations.

• Scan across all members of a group (also called parallel prefix).

• Broadcast from all members to all members of a group.

• Scatter (or Gather) data from all members to all members of a group (also called
  complete exchange or all-to-all).

While vendors may optimize certain collective routines for their architectures, a
complete library of the collective communication routines can be written entirely
using point-to-point communication functions.

The syntax and semantics of the collective operations are defined so as to be
consistent with the syntax and semantics of the point-to-point operations. A
collective operation is executed by having all processes in the group call the
communication routine with matching parameters. One of the key parameters is a
communicator that defines the group of participating processes and provides a
context for the operation. The reader is referred to chapter ?? for information
concerning communication buffers, their manipulation, and type matching rules; and
to chapter ?? for information on how to define groups and create communicators.
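The data-movement operations listed above can be sketched as pure functions over a
list of per-process buffers. This is only an illustrative simulation of the
semantics, not MPI code; the function and variable names here are hypothetical:

```python
# Hypothetical simulation: a group of n processes is a list of per-process
# buffers.  None marks a location whose contents are undefined after the
# operation for that process.

def bcast(items, root):
    # one-all broadcast: every process ends up with the root's item
    return [items[root] for _ in items]

def scatter(bufs, root):
    # one-all scatter: item i of the root's buffer goes to process i
    return [bufs[root][i] for i in range(len(bufs))]

def gather(items, root):
    # gather: the root receives one item from each group member
    return [list(items) if r == root else None for r in range(len(items))]

def allgather(items):
    # all-all broadcast: every process receives every process's item
    return [list(items) for _ in items]

def alltoall(bufs):
    # all-all scatter (complete exchange): process r receives item r of
    # every process's buffer -- a transpose of the data grid
    n = len(bufs)
    return [[bufs[s][r] for s in range(n)] for r in range(n)]
```

For a group of six processes whose buffers hold rows A through F, `alltoall`
transposes the grid of data, which is exactly the complete-exchange picture.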
Collective routines can (but are not required to) return as soon as their
participation in the collective communication is complete. The completion of a
call indicates that the caller is now free to access the locations in the
communication buffer, or any other location that can be referenced by the
collective operation. It does not indicate that other processes in the group have
started the operation (unless otherwise indicated in the description of the
operation). The successful completion of a collective communication call may
depend on the execution of a matching call at all processes in the group. Thus, a
collective communication call may, or may not, have the effect of synchronizing
all calling processes. A more detailed discussion of the correct use of the
collective routines can be found at the end of this chapter.
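The point that completion at one process says nothing about the others can be
sketched with buffered point-to-point sends. This is a hypothetical simulation
(the `send`, `recv`, and `bcast` names are illustrative, not MPI's): the root's
participation in a broadcast completes while no receiver has yet entered the
operation.

```python
from collections import deque

# Hypothetical simulation: send() buffers the message and returns at once,
# so the root finishes its part of the broadcast before any other group
# member has started the operation.
SIZE = 4
inbox = {rank: deque() for rank in range(SIZE)}
events = []

def send(dst, data):
    inbox[dst].append(data)            # buffered: returns immediately

def recv(rank):
    return inbox[rank].popleft()

def bcast(rank, root, data=None):
    if rank == root:
        for dst in range(SIZE):
            if dst != root:
                send(dst, data)
        events.append(("completed", root))   # root's participation done
        return data
    events.append(("entered", rank))
    return recv(rank)

result = [bcast(0, 0, "x")]                  # root completes first ...
result += [bcast(r, 0) for r in range(1, SIZE)]  # ... receivers enter later
```

Here the root has completed before any receiver entered, yet every process still
ends up with the broadcast value.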
Discussion: The collective operations do not accept a message tag parameter. The
rationale for not using tags is that the need for distinguishing collective
operations with the same context seldom arises (since the operations are
blocking); the tag field can be used by the point-to-point messages that implement
the collective communication.

1.2 Communication Functions
The key concept of the collective functions is to have a "group" of participating
processes. The routines do not have a group identifier as an explicit parameter.
Instead, there is a communicator parameter. In this chapter a communicator can be
thought of as a group identifier merged with a context.

Discussion: The last proposal contained two layers of functions: one (contiguous)
buffer for the "contiguous" functions, and an array of buffers for the
noncontiguous ones. The latest pt2pt chapter no longer contains a buffer
descriptor handle; instead there is a general datatype parameter which can
describe arbitrary structures. I have incorporated this feature into the
collective functions, since we have agreed that the collective routines must be
buildable from the pt2pt routines.

1.3 Barrier synchronization
MPI_BARRIER( comm )

    IN  comm          communicator handle

MPI_BARRIER blocks the caller until all group members have called it; the call
returns at any process only after all group members have entered the call.

1.4 Data move functions

Figure 1.1 illustrates the different collective move functions supported by MPI.
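Consistent with the earlier remark that the collective routines must be buildable
from the pt2pt routines, MPI_BARRIER itself can be sketched over point-to-point
style message queues. This is an illustrative simulation, not the MPI
implementation: every member sends a token to one distinguished member, which
replies to all members once the whole group has checked in, so nobody returns from
the barrier until everyone has entered it.

```python
import threading
import queue

# Hypothetical barrier built from pt2pt-style queues: rank 0 collects one
# token from every member, then sends each member a release token.
N = 5
to_root = queue.Queue()
release = [queue.Queue() for _ in range(N)]
arrived = []
after = []
lock = threading.Lock()

def barrier(rank):
    to_root.put(rank)                  # "send" a token to rank 0
    if rank == 0:
        for _ in range(N):             # rank 0 "receives" all N tokens
            to_root.get()
        for r in range(N):             # ... then releases every member
            release[r].put("go")
    release[rank].get()                # block until released

def member(rank):
    with lock:
        arrived.append(rank)           # work done before the barrier
    barrier(rank)
    with lock:
        after.append(len(arrived))     # all N must have arrived by now

threads = [threading.Thread(target=member, args=(r,)) for r in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because each member sends its token only after finishing its pre-barrier work, a
member that has returned from the barrier is guaranteed that all N members have
entered it.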
[Figure 1.1 appears here: four pairs of 6 x 6 grids of data locations, showing the
contents before and after each operation. The arrows between the grids are labeled
"one-all broadcast", "all-all broadcast", "one-all scatter" / "one-all gather",
and "all-all scatter"; the horizontal axis of each grid is labeled "data" and the
vertical axis "processes".]
Figure 1.1: Collective move functions illustrated for a group of six processes. In each case, each row of boxes represents data locations in one process. Thus, in the one-all broadcast, initially just the first process contains the data A0, but after the broadcast all processes contain it.
SECTION 1. COLLECTIVE COMMUNICATION

1.4.1 Broadcast

MPI_BCAST( buffer, cnt, type, root, comm )

IN/OUT  buffer     starting address of buffer
IN      cnt        number of entries in buffer
IN      type       data type of buffer (possibly general)
IN      root       rank of broadcast root
IN      comm       communicator handle

MPI_BCAST broadcasts a message from the process with rank root to all other processes of the group. It is called by all members of the group using the same arguments for cnt, type, comm, and root. On return, the contents of the buffer of the process with rank root are contained in the buffer of the calling process.
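The effect described above can be pictured with plain C arrays. The sketch below simulates only the semantics; no MPI library is called, and the group size, buffer length, and the simulate_bcast helper are invented for illustration:

```c
#include <string.h>

#define NPROC 4   /* illustrative group size   */
#define CNT   3   /* entries per buffer        */

/* Simulate MPI_BCAST semantics: buf[p] stands for the buffer of the
   process with rank p; after the call every process holds a copy of
   the root's buffer. */
void simulate_bcast(int buf[NPROC][CNT], int root)
{
    for (int p = 0; p < NPROC; p++)
        if (p != root)
            memcpy(buf[p], buf[root], sizeof buf[root]);
}
```

After simulate_bcast(buf, 1), every row of buf equals row 1, mirroring the one-all broadcast panel of Figure 1.1.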
1.4.2 Gather

MPI_GATHER( sendbuf, sendcnt, sendtype, recvbuf, recvcnt, recvtype, root, comm )

IN   sendbuf     starting address of send buffer
IN   sendcnt     number of elements in send buffer (integer)
IN   sendtype    data type of send buffer elements
OUT  recvbuf     address of receive buffer (significant only at root)
IN   recvcnt     number of elements in receive buffer (integer)
IN   recvtype    data type of receive buffer elements (significant only at root)
IN   root        rank of receiving process (integer)
IN   comm        communicator handle

Each process (including the root process) sends the contents of its send buffer to the root process. The root process places all the incoming messages in the locations specified by recvbuf and recvtype. The receive buffer is ignored for all non-root processes. The receive buffer of the root process is assumed contiguous and partitioned into MPI_GSIZE consecutive blocks, each consisting of sendcnt elements. The data sent from the process with rank i is stored in the i-th block. The function is called with the same values for sendcnt, sendtype, root, and comm at all participating processes.
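The block layout at the root can be checked with a small C simulation (a sketch only: NPROC stands in for MPI_GSIZE, and the sizes and helper name are invented for illustration):

```c
#include <string.h>

#define NPROC   4   /* stands in for MPI_GSIZE, the group size */
#define SENDCNT 2   /* elements contributed by each process    */

/* Simulate MPI_GATHER semantics: the root's receive buffer is
   partitioned into NPROC consecutive blocks of SENDCNT elements,
   and the data sent from the process with rank i lands in block i. */
void simulate_gather(int sendbuf[NPROC][SENDCNT],
                     int recvbuf[NPROC * SENDCNT])
{
    for (int i = 0; i < NPROC; i++)
        memcpy(&recvbuf[i * SENDCNT], sendbuf[i], sizeof sendbuf[i]);
}
```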
1.4.3 Scatter

MPI_SCATTER( sendbuf, sendcnt, sendtype, recvbuf, recvcnt, recvtype, root, comm )

IN   sendbuf     address of send buffer (significant only at root)
IN   sendcnt     number of elements in send buffer (integer)
IN   sendtype    data type of send buffer elements
OUT  recvbuf     address of receive buffer
IN   recvcnt     number of elements in receive buffer (integer)
IN   recvtype    data type of receive buffer elements
IN   root        rank of sending process (integer)
IN   comm        communicator handle
The root process sends the i-th portion of its send buffer to the process with rank i; each process (including the root process) stores the incoming message in its receive buffer. The send buffer of the root process is assumed contiguous and partitioned into MPI_GSIZE consecutive blocks, each consisting of recvcnt elements. The i-th block is sent to the process with rank i in the group and stored in its receive buffer. The routine is called by all members of the group using the same arguments for recvcnt, recvtype, root, and comm.

Note that scatter is the reverse operation to gather.
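The matching C sketch for the scatter layout (again a simulation of the semantics with invented sizes, not an MPI call; delivering block i to rank i is exactly the inverse of the gather layout):

```c
#include <string.h>

#define NPROC   4   /* stands in for MPI_GSIZE, the group size */
#define RECVCNT 2   /* elements delivered to each process      */

/* Simulate MPI_SCATTER semantics: the root's send buffer is
   partitioned into NPROC consecutive blocks of RECVCNT elements,
   and block i is stored into the receive buffer of rank i. */
void simulate_scatter(int sendbuf[NPROC * RECVCNT],
                      int recvbuf[NPROC][RECVCNT])
{
    for (int i = 0; i < NPROC; i++)
        memcpy(recvbuf[i], &sendbuf[i * RECVCNT], sizeof recvbuf[i]);
}
```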
1.4.4 All-to-all broadcast

MPI_ALLCAST( sendbuf, sendcnt, sendtype, recvbuf, recvcnt, recvtype, comm )

IN   sendbuf     starting address of send buffer
IN   sendcnt     number of elements in send buffer (integer)
IN   sendtype    data type of send buffer elements
OUT  recvbuf     address of receive buffer
IN   recvcnt     number of elements in receive buffer (integer)
IN   recvtype    data type of receive buffer elements
IN   comm        communicator handle

Each process in the group broadcasts its entire send buffer to all processes (including itself); all send buffers have the same number of elements. Each process concatenates the incoming messages, in the order of the senders' ranks, and stores them in its receive buffer. The routine is called by all members of the group using the same arguments for sendcnt, sendtype, and comm.

MPI_ALLCAST is equivalent to n executions of MPI_BCAST, with each process serving once as the root.
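The concatenation rule can be simulated in C (a sketch with invented sizes and helper name; every receive buffer ends up identical, which is also what n broadcasts would produce):

```c
#include <string.h>

#define NPROC   3   /* illustrative group size      */
#define SENDCNT 2   /* elements in each send buffer */

/* Simulate MPI_ALLCAST semantics: every process receives the
   concatenation, in rank order, of all processes' send buffers. */
void simulate_allcast(int sendbuf[NPROC][SENDCNT],
                      int recvbuf[NPROC][NPROC * SENDCNT])
{
    for (int p = 0; p < NPROC; p++)        /* receiving rank */
        for (int i = 0; i < NPROC; i++)    /* sending rank   */
            memcpy(&recvbuf[p][i * SENDCNT], sendbuf[i],
                   sizeof sendbuf[i]);
}
```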
1.4.5 All-to-all scatter-gather

MPI_ALLTOALL( sendbuf, sendcnt, sendtype, recvbuf, recvcnt, recvtype, comm )

IN   sendbuf     starting address of send buffer
IN   sendcnt     number of elements in send buffer (integer)
IN   sendtype    data type of send buffer elements
OUT  recvbuf     address of receive buffer
IN   recvcnt     number of elements in receive buffer (integer)
IN   recvtype    data type of receive buffer elements
IN   comm        communicator handle

Each process in the group sends the i-th portion of its send buffer to the process with rank i (itself included). All messages sent from one process to another have the same length. The send buffer of each process is partitioned into MPI_GSIZE consecutive blocks, each consisting of sendcnt elements. The i-th block is sent to the i-th process in the group. Each process concatenates the incoming messages, in the order of the senders' ranks, and stores them in its receive buffer. The routine is called by all members of the group using the same arguments for sendcnt, sendtype, and comm.

An all-to-all scatter-gather is the equivalent of n scatters (or n gathers) executed with each process serving once as the root.
Discussion: Do we want to have completely general versions of the above data move routines? Without buffer descriptors this is nearly impossible in Fortran, because of the variable-length argument list or array_of_addresses requirements. In C it is merely ugly. For example, the general Alltoall would be: MPI_ALLTOALL(array_of_sendadr, array_of_sendcnt, array_of_sendtype, array_of_recvadr, array_of_recvcnt, array_of_recvtype, comm). It is not clear if we would want to use an array_of_types or a more complex datatype with end_of_buffer markers inserted.

1.5 Global Compute Operations

The functions in this section perform one of the following operations across all the members of a group:

    global max on integer and floating point data types
    global min on integer and floating point data types
    global sum on integer and floating point data types
    global product on integer and floating point data types
    global AND on logical and integer data types
    global OR on logical and integer data types
    global XOR on logical and integer data types
    global max and who (rank) has it
    global min and who (rank) has it
    user defined (associative) operation
    user defined (associative and commutative) operation

1.5.1 Reduce
MPI_REDUCE( sendbuf, recvbuf, cnt, type, op, root, comm )

IN   sendbuf     address of send buffer
OUT  recvbuf     address of receive buffer (significant only at root)
IN   cnt         number of elements in input buffer (integer)
IN   type        data type of elements of input buffer
IN   op          operation
IN   root        rank of root process (integer)
IN   comm        communicator handle

Combines the values provided in the send buffer of each process in the group, using the operation op, and returns the combined value in the receive buffer of the process with rank root. Each process can provide one value, or a sequence of values, in which case the combine operation is executed pointwise on each entry of the sequence. For example, if the operation is max and the send buffer contains two floating point numbers, then recvbuf(1) = global max(sendbuf(1)) and recvbuf(2) = global max(sendbuf(2)). All send buffers should define sequences of entries of equal length, all of the same data type, where the type is one of those allowed for operands of op. For all operations except MINLOC and MAXLOC, the number and type of elements in the send buffer are the same as for the receive buffers. For MINLOC and MAXLOC, the receive buffer will contain cnt elements of the same type as the elements in the input buffer, followed by cnt integers (ranks).

The operation defined by op is associative and commutative, and the implementation can take advantage of associativity and commutativity in order to change the order of evaluation. The routine is called by all group members using the same arguments for cnt, type, op, root, and comm.
We list below the supported options for op.

MPI_MAX      maximum
MPI_MIN      minimum
MPI_SUM      sum
MPI_PROD     product
MPI_AND      and (logical or bit-wise integer)
MPI_OR       or (logical or bit-wise integer)
MPI_XOR      xor (logical or bit-wise integer)
MPI_MAXLOC   maximum value and rank of process with maximum value (rank of first process with maximum value, in case of ties)
MPI_MINLOC   minimum value and rank of process with minimum value (rank of first process with minimum value, in case of ties)

All operations, with the exception of MAXLOC and MINLOC, return a value which has the same datatype as the operands. Each operand of MAXLOC and MINLOC can be thought of as a pair (v, i): i is the rank of the calling process, which is passed implicitly, and v is the value that is explicitly passed to the call. MAXLOC and MINLOC return (explicitly) a pair (value, rank).

When MINLOC or MAXLOC is invoked, the input buffer should contain m elements of the same type, to which the operation MIN or MAX can be applied. The operation returns at the root m elements of the same type as the inputs, followed by m integers (ranks). The output buffer should be defined accordingly.
The operation that defines MAXLOC is

    (u, i) ∘ (v, j) = (w, k)

where

    w = max(u, v)

and

    k = i           if u > v
        min(i, j)   if u = v
        j           if u < v

Note that this operation is associative and commutative. A similar definition can be given for MINLOC.
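The combining rule can be written out directly in C; the struct and function name below are invented for illustration:

```c
/* One application of the operation that defines MAXLOC: combine two
   (value, rank) operands, keeping the larger value and, on a tie,
   the smaller rank. */
struct vr { double v; int rank; };

struct vr maxloc_combine(struct vr a, struct vr b)
{
    struct vr r;
    if (a.v > b.v) {
        r = a;
    } else if (b.v > a.v) {
        r = b;
    } else {                      /* u == v: take the smaller rank */
        r.v = a.v;
        r.rank = (a.rank < b.rank) ? a.rank : b.rank;
    }
    return r;
}
```

Because ties resolve to min(i, j), the result does not depend on the order in which operands are combined, which is the associativity and commutativity claimed above.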
Discussion: We define MINLOC to return a vector of values, followed by a vector of ranks. The alternative is for MINLOC to return a vector of (value, rank) pairs, i.e., a vector of structures. This second choice is less convenient for Fortran. Another alternative is to have MINLOC return two output buffers, but then it would need to be invoked differently than the other operations.

The computation can still be pipelined, provided that the location of the first rank entry in the output buffer can be computed upfront.

Implementation note: The operations can be applied to operands of different types in different calls: e.g., MPI_SUM may require an integer sum in one call, and a complex sum in another. Since we require that all elements be of the same datatype, it is not necessary to store a full signature with each buffer: it is only necessary to store the datatype of the elements when all elements are of the same type, and to store a flag indicating that the buffer is not homogeneous, otherwise.

Missing: Need to define the types compatible with each operation. This includes MPI_BYTE for the logical operations, and whatever Fortran/C allow for all operations.
MPI_USER_REDUCE( sendbuf, recvbuf, cnt, type, function, root, comm )

IN   sendbuf     starting address of send buffer
OUT  recvbuf     starting address of receive buffer (significant only at root)
IN   cnt         number of elements in input buffer (integer)
IN   type        data type of elements of input buffer
IN   function    user defined function
IN   root        rank of root process (integer)
IN   comm        communicator handle

Similar to the reduce operation function above, except that a user-supplied function is used. function is a function with three arguments. A C prototype for such a function is f( invec, inoutvec, *len). Both invec and inoutvec are arrays with *len entries. The function computes pointwise a commutative and associative operation on each pair of entries and returns the result in inoutvec. Pseudocode for function is given below, where ∘ is the commutative and associative operation defined by function.

    for(i=0; i < *len; i++)
    {
        inoutvec[i] ∘= invec[i]
    }

The type of the elements of invec and of inoutvec matches the type of the elements of the send buffers and the receive buffer.
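As a concrete instance of this prototype, here is a user function whose operation ∘ is addition over doubles. This is a hedged sketch: the draft leaves the exact C binding open, so the argument types and the name vec_sum are assumptions:

```c
/* A candidate user function for MPI_USER_REDUCE: the operation is
   addition, applied pointwise, so inoutvec[i] += invec[i] for each
   of the *len entries.  Passing the length by pointer lets the
   system hand the function whole chunks of the buffer at once. */
void vec_sum(double *invec, double *inoutvec, int *len)
{
    for (int i = 0; i < *len; i++)
        inoutvec[i] += invec[i];
}
```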
MPI_USER_REDUCEA( sendbuf, recvbuf, cnt, type, function, root, comm )

IN   sendbuf     starting address of send buffer
OUT  recvbuf     starting address of receive buffer (significant only at root)
IN   cnt         number of elements in input buffer (integer)
IN   type        data type of elements of input buffer
IN   function    user defined function
IN   root        rank of root process (integer)
IN   comm        communicator handle

Identical to MPI_USER_REDUCE, except that the operation defined by function is not required to be commutative, but only associative. Thus, the order of computation can be modified only using associativity.

Implementation note: The code for MPI_USER_REDUCEA can be used to provide an identical implementation for MPI_USER_REDUCE.

Discussion: The addition of the third parameter, *len, in function allows the system to avoid calling function for each element in the input buffer; rather, the system can choose to apply function to chunks of inputs, where the size of the chunk is chosen by the system so as to optimize communication and computation pipelining. E.g., *len could be set to the typical packet size in the communication subsystem.

Missing: The last approved draft has an additional unitsize parameter in MPI_USER_REDUCE: each element of invec or inoutvec corresponds to unitsize elements of the input (output) buffer. This allows, for example, passing to MPI_USER_REDUCE a vector of real numbers and having function treat each pair of real numbers as one complex number. I am not sure we need this if we approve the proposal to use buffer types: if we do so, it will be easy to define the buffer to consist of pairs of real numbers, say.
MPI also includes variants of each of the reduce operations where the result is known to all processes in the group on return.

MPI_ALLREDUCE( sendbuf, recvbuf, cnt, type, op, comm )

IN   sendbuf     starting address of send buffer
OUT  recvbuf     starting address of receive buffer
IN   cnt         number of elements in input buffer (integer)
IN   type        data type of elements of input buffer
IN   op          operation
IN   comm        communicator handle

Same as the MPI_REDUCE operation function, except that the result appears in the receive buffer of all the group members.

MPI_USER_ALLREDUCE( sendbuf, recvbuf, cnt, type, function, comm )

IN   sendbuf     starting address of send buffer
OUT  recvbuf     starting address of receive buffer
IN   cnt         number of elements in input buffer (integer)
IN   type        data type of elements of input buffer
IN   function    user defined function
IN   comm        communicator handle

Same as the MPI_USER_REDUCE operation function, except that the result appears in the receive buffer of all the group members.
MPI_USER_ALLREDUCEA( sendbuf, recvbuf, cnt, type, function, comm )

IN   sendbuf     starting address of send buffer
OUT  recvbuf     starting address of receive buffer
IN   cnt         number of elements in input buffer (integer)
IN   type        data type of elements of input buffer
IN   function    user defined function
IN   comm        communicator handle

Same as MPI_USER_REDUCEA, except that the result appears in the receive buffer of all the group members.

Implementation note: The allreduce operations can be implemented as a reduce, followed by a broadcast. However, a direct implementation can lead to better performance.

1.5.2 Scan
MPI_SCAN( sendbuf, recvbuf, cnt, type, op, comm )

IN   sendbuf     starting address of send buffer
OUT  recvbuf     starting address of receive buffer
IN   cnt         number of elements in input buffer (integer)
IN   type        data type of elements of input buffer
IN   op          operation
IN   comm        communicator handle

MPI_SCAN is used to perform a parallel prefix with respect to an associative and commutative reduction operation on data distributed across the group. The operation returns, in the receive buffer of the process with rank i, the reduction of the values in the send buffers of the processes with ranks 0,...,i. The type of operations supported, their semantics, and the constraints on send and receive buffers are as for MPI_REDUCE.
150 2304 V 17 w(USER)p 263 2304 V 16 w(SCAN\()24 b(sendbuf,)e(recvbuf,)
h(cnt,)g(type,)h(function,)e(comm\))117 2360 y Fg(IN)39
b Ff(sendbuf)577 b Fg(address)15 b(of)e(input)h(bu\013er)117
2418 y(OUT)39 b Ff(recvbuf)530 b Fg(address)15 b(of)e(output)h
(bu\013er)117 2475 y(IN)39 b Ff(cnt)673 b Fg(n)o(um)o(b)q(er)13
b(of)g(elemen)o(ts)h(in)g(input)f(and)h(output)g(bu\013er)h(\(in-)905
2531 y(teger\))117 2589 y(IN)39 b Ff(type)649 b Fg(data)13
b(t)o(yp)q(e)i(of)e(bu\013er)117 2646 y(IN)39 b Ff(function)553
b Fg(user)15 b(pro)o(vided)e(function)117 2704 y(IN)39
b Ff(comm)649 b Fg(comm)o(unicator)11 b(handle)p eop
%%Page: 11 11
11 10 bop 75 -100 a Fi(1.6.)29 b(CORRECTNESS)1308 b Fm(11)166
45 y(Same)15 b(as)f(the)h Ff(MPI)p 495 45 15 2 v 16 w(SCAN)f
Fm(op)q(eration)h(function)h(except)f(that)f(a)g(user)h(supplied)i
(function)f(is)f(used.)75 102 y Ff(function)d Fm(is)h(an)f(asso)q
(ciativ)o(e)h(and)g(comm)o(utativ)o(e)f(function)i(with)f(an)f(input)i
(v)o(ector,)e(an)h(inout)g(v)o(ector,)75 158 y(and)j(a)g(length)g
(argumen)o(t.)21 b(The)16 b(t)o(yp)q(es)g(of)g(the)g(t)o(w)o(o)e(v)o
(ectors)h(and)h(of)g(the)g(returned)g(v)m(alues)h(all)g(agree.)75
214 y(See)f Ff(MPI)p 231 214 V 17 w(USER)p 344 214 V
16 w(REDUCE)f Fm(for)f(more)h(details.)75 318 y Ff(MPI)p
150 318 V 17 w(USER)p 263 318 V 16 w(SCANA\()23 b(sendbuf,)g(recvbuf,)g
(cnt,)g(type,)g(function,)g(comm\))117 375 y Fg(IN)39
b Ff(sendbuf)577 b Fg(address)15 b(of)e(input)h(bu\013er)117
430 y(OUT)39 b Ff(recvbuf)530 b Fg(address)15 b(of)e(output)h(bu\013er)
117 485 y(IN)39 b Ff(cnt)673 b Fg(n)o(um)o(b)q(er)13
b(of)g(elemen)o(ts)h(in)g(input)f(and)h(output)g(bu\013er)h(\(in-)905
541 y(teger\))117 596 y(IN)39 b Ff(type)649 b Fg(data)13
b(t)o(yp)q(e)i(of)e(bu\013er)117 651 y(IN)39 b Ff(function)553
b Fg(user)15 b(de\014ned)g(function)117 706 y(IN)39 b
Ff(comm)649 b Fg(comm)o(unicator)11 b(handle)166 789
y Fm(Same)j(as)f Ff(MPI)p 415 789 V 17 w(USER)p 528 789
V 17 w(SCAN)p Fm(,)f(except)j(that)e(the)h(user-de\014ned)h(op)q
(eration)f(need)h(not)e(b)q(e)i(comm)o(uta-)75 845 y(tiv)o(e.)166
978 y Fh(Implemen)o(tati)o(on)d(note:)166 1035 y Fb(MPI)p
235 1035 14 2 v 15 w(USER)p 338 1035 V 15 w(SCAN)h Fg(can)h(b)q(e)g
(implemen)o(ted)e(as)i Fb(MPI)p 949 1035 V 15 w(USER)p
1052 1035 V 15 w(SCANA)p Fg(.)75 1262 y Fl(1.6)59 b(Co)n(rrectness)75
1364 y Fm(A)13 b(correct)f(program)g(should)i(in)o(v)o(ok)o(e)f
(collectiv)o(e)h(comm)o(unications)g(so)e(that)g(deadlo)q(c)o(k)i(will)
g(not)f(o)q(ccur,)75 1421 y(whether)22 b(collectiv)o(e)h(comm)o
(unication)f(is)g(sync)o(hronizing)h(or)d(not.)38 b(The)22
b(follo)o(wing)g(t)o(w)o(o)e(examples)75 1477 y(illustrate)c(dangerous)
f(use)h(of)f(collectiv)o(e)i(routines.)j(The)15 b(\014rst)g(example)h
(is)g(erroneous.)75 1585 y Ff(/*)24 b(Example)e(A)i(*/)75
1641 y(switch\(MPI_rank\(comm,rank\))o(;)d(rank\))147
1698 y({)147 1754 y(case)i(0:)g({)h(MPI_bcast\(var1,)e(cnt,)h(type,)h
(0,)f(comm\);)385 1811 y(MPI_send\(var2,)f(cnt,)h(type,)h(1,)f(tag,)h
(comm\);)385 1867 y(break;)337 1924 y(})147 1980 y(case)f(1:)g({)h
(MPI_recv\(var2,)e(cnt,)h(type,)h(0,)f(tag,)h(comm\);)385
2036 y(MPI_bcast\(var1,)e(cnt,)h(type,)h(0,)f(comm\);)385
2093 y(break;)337 2149 y(})147 2206 y(})166 2313 y Fm(Pro)q(cess)16
b(zero)f(executes)h(a)g(broadcast,)f(follo)o(w)o(ed)h(b)o(y)f(a)h(blo)q
(c)o(king)h(send)f(op)q(eration;)g(pro)q(cess)g(one)75
2370 y(\014rst)h(executes)h(a)g(matc)o(hing)f(blo)q(c)o(king)i(receiv)o
(e,)g(follo)o(w)o(ed)f(b)o(y)g(the)f(matc)o(hing)h(broadcast)f(call.)28
b(This)75 2426 y(program)18 b(ma)o(y)g(deadlo)q(c)o(k.)33
b(The)20 b(broadcast)e(call)i(on)f(pro)q(cess)h(zero)f(ma)o(y)f(blo)q
(c)o(k)i(un)o(til)g(pro)q(cess)g(one)75 2483 y(executes)c(the)f(matc)o
(hing)g(broadcast)g(call,)h(so)f(that)f(the)i(send)f(is)h(not)f
(executed.)21 b(Pro)q(cess)15 b(one)g(blo)q(c)o(ks)75
2539 y(on)g(the)g(receiv)o(e)i(and)e(nev)o(er)g(executes)h(the)f
(broadcast.)166 2596 y(The)g(follo)o(wing)h(example)g(is)g(correct,)e
(but)i(nondeterministic:)75 2704 y Ff(/*)24 b(Example)e(B)i(*/)p
eop
%%Page: 12 12
12 11 bop 75 -100 a Fm(12)724 b Fi(SECTION)16 b(1.)35
b(COLLECTIVE)16 b(COMMUNICA)l(TION)75 45 y Ff
(switch\(MPI_rank\(comm,rank\))o(;)21 b(rank\))147 102
y({)170 158 y(case)j(0:)f({)h(MPI_bcast\(var1,)e(cnt,)h(type,)g(0,)h
(comm\);)409 214 y(MPI_send\(var2,)e(cnt,)h(type,)h(1,)f(tag,)g
(comm\);)409 271 y(break;)361 327 y(})170 384 y(case)h(1:)f({)h
(MPI_recv\(var2,)e(cnt,)h(type,)h(MPI_SOURCE_ANY,)d(tag,)j(comm\);)409
440 y(MPI_bcast\(var1,)e(cnt,)h(type,)g(0,)h(comm\);)409
497 y(MPI_recv\(var2,)e(cnt,)h(type,)h(MPI_SOURCE_ANY,)d(tag,)j
(comm\);)409 553 y(break;)361 610 y(})170 666 y(case)g(2:)f({)h
(MPI_send\(var2,)e(cnt,)h(type,)h(1,)f(tag,)g(comm\);)409
723 y(MPI_bcast\(var1,)f(cnt,)h(type,)g(0,)h(comm\);)409
779 y(break;)361 835 y(})170 892 y(})166 1003 y Fm(All)17
b(three)e(pro)q(cesses)h(participate)g(in)g(a)f(broadcast.)20
b(Pro)q(cess)15 b(0)g(sends)h(a)f(message)g(to)g(pro)q(cess)g(1)75
1060 y(after)d(the)h(broadcast,)f(and)g(pro)q(cess)h(2)g(sends)g(a)f
(message)g(to)g(pro)q(cess)h(1)f(after)g(the)h(broadcast.)18
b(Pro)q(cess)75 1116 y(1)d(receiv)o(es)h(b)q(efore)f(and)h(after)e(the)
i(broadcast,)e(with)h(a)g(wildcard)h(source)g(parameter.)166
1174 y(Tw)o(o)e(p)q(ossible)i(executions,)g(with)f(di\013eren)o(t)g
(matc)o(hings)g(of)f(sends)h(and)g(receiv)o(es)h(are)f(illustrated)75
1230 y(b)q(elo)o(w.)337 1343 y Ff(First)24 b(Execution)170
1456 y(0)311 b(1)357 b(2)648 1512 y(/-----)47 b(send)457
1569 y(recv)23 b(<-/)75 1625 y(broadcast)118 b(broadcast)166
b(broadcast)123 1682 y(send)23 b(---\\)337 1738 y(\\-->)h(recv)337
1850 y(Second)g(Execution)147 1963 y(0)334 b(1)357 b(2)75
2019 y(broadcast)123 2076 y(send)23 b(---\\)337 2132
y(\\-->)48 b(recv)433 2188 y(broadcast)166 b(broadcast)719
2245 y(/---)47 b(send)481 2301 y(recv)23 b(<---/)166
2413 y Fm(Note)16 b(that)g(the)h(second)g(execution)h(has)f(the)f(p)q
(eculiar)j(e\013ect)e(that)f(a)g(send)h(executed)h(after)e(the)75
2469 y(broadcast)e(is)i(receiv)o(ed)h(at)d(another)h(no)q(de)h(b)q
(efore)f(the)g(broadcast.)166 2603 y Fh(Discussion:)166
2654 y Fg(An)d(alternativ)o(e)g(design)h(is)f(to)g(require)h(that)g
(all)e(collectiv)o(e)h(comm)o(unication)d(calls)j(are)g(sync)o
(hronizing.)18 b(In)75 2704 y(this)11 b(case,)h(the)g(second)g(program)
d(is)i(determinisitc)g(and)f(only)h(the)g(\014rst)h(execution)g(ma)o(y)
d(o)q(ccur.)18 b(This)11 b(will)e(mak)o(e)p eop
%%Page: 13 13
13 12 bop 75 -100 a Fi(1.6.)34 b(CORRECTNESS)1303 b Fm(13)75
45 y Fg(a)14 b(di\013erence)h(only)e(for)h(collectiv)o(e)g(op)q
(erations)g(where)h(not)e(all)g(pro)q(cesses)j(b)q(oth)e(send)h(and)f
(receiv)o(e)h(\(broadcast,)75 95 y(reduce,)g(scatter,)g(gather\).)166
236 y Fm(It)21 b(is)g(the)g(user's)f(resp)q(onsibilit)o(y)j(to)d(mak)o
(e)g(sure)h(that)f(there)h(are)f(no)h(t)o(w)o(o)e(concurren)o(tly)i
(exe-)75 292 y(cuting)g(collectiv)o(e)i(calls)e(that)f(use)h(the)g
(same)f(comm)o(unicator)g(on)h(the)g(same)f(pro)q(cess.)36
b(\(Since)22 b(all)75 348 y(collectiv)o(e)d(comm)o(unication)f(calls)h
(are)e(blo)q(c)o(king)i(this)e(restriction)h(only)g(a\013ects)f(m)o
(ultithreaded)h(im-)75 405 y(plemen)o(tations.\))i(On)14
b(the)g(other)g(hand,)g(it)g(is)g(legitimate)h(for)e(one)h(pro)q(cess)h
(to)e(start)g(a)g(new)h(collectiv)o(e)75 461 y(comm)o(unication)j(call)
g(ev)o(en)f(though)g(a)f(previous)i(call)g(that)e(uses)h(the)g(same)g
(comm)o(unicator)g(has)f(not)75 518 y(y)o(et)g(terminated)g(on)g
(another)g(pro)q(cess.)20 b(As)15 b(illustrated)i(in)f(the)f(follo)o
(wing)h(example:)75 633 y Ff(/*)24 b(Example)e(C)i(*/)99
689 y(MPI_bcast\(var1,)e(cnt,)h(type,)g(0,)h(comm\);)99
746 y(MPI_bcast\(var2,)e(cnt,)h(type,)g(1,)h(comm\);)166
859 y Fm(In)17 b(a)e(nonsync)o(hronizing)j(implemen)o(tation)g(of)d
(broadcast,)g(pro)q(cess)i(zero)f(ma)o(y)f(start)g(executing)75
916 y(the)h(second)g(broadcast)f(b)q(efore)h(pro)q(cess)g(one)g
(terminated)g(the)g(\014rst)f(broadcast.)21 b(Both)15
b(pro)q(cess)h(zero)75 972 y(and)h(one)f(ma)o(y)g(terminate)g(their)h
(t)o(w)o(o)e(broadcast)g(calls)j(b)q(efore)e(other)g(pro)q(cesses)h(ha)
o(v)o(e)f(started)f(their)75 1029 y(calls.)21 b(It)15
b(is)h(the)f(implemen)o(ter's)h(resp)q(onsibilit)o(y)i(to)c(ensure)i
(this)g(will)h(not)e(cause)g(an)o(y)g(error.)166 1163
y Fh(Implemen)o(tati)o(on)d(note:)166 1214 y Fg(Assume)e(that)g
(broadcast)h(is)f(implemen)o(ted)e(using)i(p)q(oin)o(t-to-p)q(oin)o(t)e
(MPI)j(comm)o(unicati)o(on.)j(The)d(follo)o(wing)75 1264
y(t)o(w)o(o)i(rules)i(are)f(satis\014ed:)134 1343 y(1.)22
b(All)13 b(receiv)o(es)i(sp)q(ecify)g(their)f(source)h(explicitly)e
(\(no)h(wildcards\).)134 1424 y(2.)22 b(Eac)o(h)12 b(pro)q(cess)i
(sends)g(all)d(messages)h(that)h(p)q(ertain)f(to)h(one)f(collectiv)o(e)
g(call)g(b)q(efore)h(sending)f(an)o(y)g(message)189 1474
y(that)i(p)q(ertain)g(to)g(a)f(subsequen)o(t)j(collectiv)o(e)e(call.)
166 1553 y(Then)i(messages)g(b)q(elonging)f(to)h(successiv)o(e)i
(broadcasts)f(cannot)f(b)q(e)h(confused,)g(as)f(the)h(order)f(of)g(p)q
(oin)o(t-)75 1602 y(to-p)q(oin)o(t)d(messages)h(is)g(preserv)o(ed.)20
b(This)14 b(is)f(true,)i(in)e(general,)h(for)f(an)o(y)h(collectiv)o(e)g
(library)m(.)166 1743 y Fm(A)k(collectiv)o(e)h(comm)o(unication)f(ma)o
(y)f(execute)i(in)f(a)g(con)o(text)f(while)i(p)q(oin)o(t-to-p)q(oin)o
(t)f(comm)o(uni-)75 1800 y(cations)f(that)f(use)h(the)g(same)g(con)o
(text)f(are)h(p)q(ending,)i(or)d(o)q(ccur)i(concurren)o(tly)l(.)25
b(This)18 b(is)f(illustracted)75 1856 y(in)f(example)g(B)g(ab)q(o)o(v)o
(e,)f(the)g(\014rst)g(pro)q(cess)h(ma)o(y)f(receiv)o(e)h(a)f(message)g
(sen)o(t)g(with)h(the)f(con)o(text)g(of)g(com-)75 1912
y(m)o(unicator)j Ff(comm)f Fm(while)j(it)e(is)h(executing)g(a)f
(broadcast)f(with)i(the)f(same)g(comm)o(unicator.)28
b(It)18 b(is)h(the)75 1969 y(implemen)o(ter)d(resp)q(onsibili)q(t)o(y)h
(to)e(ensure)h(this)f(will)i(not)e(cause)g(an)o(y)g(confusion.)166
2103 y Fh(Implemen)o(tati)o(on)21 b(note:)65 b Fg(Assume)22
b(that)f(collectiv)o(e)g(comm)o(unications)d(are)k(implemen)o(ted)d
(using)75 2153 y(p)q(oin)o(t-to-p)q(oin)o(t)e(MPI)i(comm)o(unication.)
29 b(Then,)20 b(in)e(order)i(to)e(a)o(v)o(oid)g(confusion,)h(whenev)o
(er)h(a)e(comm)o(unica-)75 2203 y(tor)g(is)f(created,)i(a)e(\\hidden)h
(comm)o(unicator")c(need)19 b(b)q(e)f(created)h(for)e(collectiv)o(e)g
(comm)o(unication.)26 b(A)17 b(direct)75 2252 y(implemen)o(tatio)o(n)11
b(of)i(MPI)h(collectiv)o(e)g(comm)o(unicatio)o(n)d(can)j(ac)o(hiev)o(e)
g(a)g(similar)d(e\013ect)k(more)e(c)o(heaply)m(,)g(e.g.,)f(b)o(y)75
2302 y(using)j(a)g(hidden)h(tag)f(or)g(con)o(text)h(bit)g(to)f
(indicate)g(whether)i(the)f(comm)o(unicator)d(is)i(used)h(for)f(p)q
(oin)o(t-to-p)q(oin)o(t)75 2352 y(or)f(collectiv)o(e)g(comm)o(unicatio)
o(n.)166 2403 y(An)g(alternativ)o(e)h(c)o(hoice)g(is)f(to)g(require)h
(that)g(a)f(comm)o(unicator)e(is)i(quiescen)o(t)i(when)f(used)g(in)f(a)
g(collectiv)o(e)75 2453 y(comm)o(unication:)h(No)f(messages)h(using)f
(a)f(con)o(text)i(can)g(b)q(e)g(p)q(ending)f(at)g(a)g(pro)q(cess)i
(when)f(this)f(pro)q(cess)i(starts)75 2503 y(executing)e(a)f(collectiv)
o(e)h(comm)o(unicati)o(on)d(with)i(this)g(con)o(text,)h(nor)f(can)h(an)
o(y)f(new)h(message)f(with)g(this)g(con)o(text)75 2553
y(arriv)o(e)j(during)h(the)g(execution)g(of)f(this)g(collectiv)o(e)h
(comm)o(unicatio)o(n,)d(unless)j(they)g(w)o(ere)h(sen)o(t)f(as)g(part)f
(of)g(the)75 2602 y(execution)f(of)e(the)h(collectiv)o(e)g(call)f
(itself.)166 2654 y(This)j(approac)o(h)h(has)f(the)h(adv)n(an)o(tage)f
(of)f(simplifying)e(the)k(la)o(y)o(ering)f(of)f(collectiv)o(e)i(comm)o
(unicatio)o(ns)d(on)75 2704 y(top)d(of)f(p)q(oin)o(t-to-p)q(oin)o(t)f
(comm)o(unicatio)o(n)f(\(no)j(need)g(for)g(hidden)f(con)o(texts\).)19
b(Also,)10 b(it)g(imp)q(oses)g(on)h(the)g(collectiv)o(e)p
eop
%%Page: 14 14
14 13 bop 75 -100 a Fm(14)724 b Fi(SECTION)16 b(1.)35
b(COLLECTIVE)16 b(COMMUNICA)l(TION)75 45 y Fg(collectiv)o(e)f(comm)o
(unicatio)o(n)d(library)i(the)i(same)d(restrictions)j(that)f(hold)f
(for)h(an)o(y)f(other)i(collectiv)o(e)e(library)m(.)20
b(It)75 95 y(has)14 b(the)h(disadv)n(an)o(tage)e(of)g(restricting)i
(the)f(use)h(of)e(collectiv)o(e)h(comm)o(unicatio)o(ns.)166
145 y(The)i(question)h(is)f(whether)h(w)o(e)g(w)o(an)o(t)e(to)h(view)g
(the)h(collectiv)o(e)f(comm)o(unication)d(op)q(erations)j(as)g(part)h
(of)75 195 y(the)f(basic)g(comm)o(unicati)o(on)d(services)k(of)e(MPI,)g
(or)g(whether)i(w)o(e)f(w)o(an)o(t)f(to)g(see)i(them)d(as)i(a)f
(library)g(la)o(y)o(ered)g(on)75 244 y(top)f(of)f(these)i(basic)f
(services.)p eop
From owner-mpi-collcomm@CS.UTK.EDU Tue Sep  7 19:47:07 1993
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib)
	id AA05476; Tue, 7 Sep 93 19:47:07 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA23318; Tue, 7 Sep 93 19:46:32 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Tue, 7 Sep 1993 19:46:31 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA23310; Tue, 7 Sep 93 19:46:27 -0400
Received: from snacker.pnl.gov (130.20.186.18) by pnlg.pnl.gov; Tue, 7 Sep 93
 16:39 PDT
Received: by snacker.pnl.gov (4.1/SMI-4.1) id AA28435; Tue, 7 Sep 93 16:36:57
 PDT
Date: Tue, 7 Sep 93 16:36:57 PDT
From: rj_littlefield@pnlg.pnl.gov
Subject: proposed changes to collective chapter
To: gst@ornl.gov, mpi-collcomm@cs.utk.edu, snir@watson.ibm.com
Cc: rj_littlefield@pnlg.pnl.gov
Message-Id: <9309072336.AA28435@snacker.pnl.gov>
X-Envelope-To: mpi-collcomm@cs.utk.edu

This note proposes two changes to the collective communication chapter.

-------------------------------------
In summary, the proposed changes are:

1. Extending the functionality for gather, scatter, allcast, and
   all-to-all, so as to permit each process to have different
   amounts of data.

2. Adding a new operation, "reduce_scatter", which behaves like
   reduce except that the result vector is distributed among the
   participating nodes.

Regarding the first change:

   The calling sequences are tweaked as follows:

     . change the input length from scalar to vector (if scattering), and
     . make the actual result length be a vector output parameter.  

   I also propose adding a maximum result length for safety.

   Discussion: The existing restriction that all data lengths be the
   same was apparently a side effect of switching from descriptors to
   derived data types.  This proposal is intended to restore the
   functionality implied by earlier versions.  

   The proposal does involve tradeoffs.  In general, this proposal
   emphasizes high functionality, at the expense of some potential for
   optimization.  In particular, the interface proposed here does NOT
   inform every participating node about the contributions that will
   be made by every other participating node.  Thus, there is no way
   for the collective comm routines to adaptively choose an optimal
   algorithm immediately upon entry, and this prevents some possible
   optimizations.  The penalty can be minimized by clever coding, and
   my personal guess is that the all-same-length case would average
   maybe 10-20% slower with this interface than with the best
   specialized interface.  Other possible alternatives include: 1)
   having different routine names to handle the all-same-length case;
   2) supplying all nodes' contributions in every node's argument
   list; and 3) not handling the different-lengths case.  Alternative
   (1) causes unwanted growth in the number of routines, (2) is not
   scalable (especially for all-to-all), and (3) is low functionality.
   Thus the interface proposed here seems to be the best compromise
   within the MPI framework.
   
----------------------------------------
In detail, the proposed changes are:

1. The calling sequences and semantics of MPI_GATHER, MPI_SCATTER, 
   MPI_ALLCAST, and MPI_ALLTOALL become as follows:

  MPI_GATHER
    (sendbuf,sendcnt,sendtype,recvbuf,maxrecvcnt,recvcnts,recvtype,root,comm)
       IN      IN       IN      OUT      IN      OUT[vec]   IN      IN   IN

    As before, gather concatenates all processes' contributions in
    rank order, on the root.  What is new is that each process can
    contribute a different sendcnt, and that all of the sendcnt's
    are also returned on the root, in the recvcnts argument.

    The root process gets recvcnts[i] = sendcnt on the process with rank i.

  MPI_SCATTER
    (sendbuf,sendcnts,sendtype,recvbuf,maxrecvcnt,recvcnt,recvtype,root,comm)
       IN    IN[vec]    IN       OUT      IN        OUT     IN      IN   IN

    As before, scatter distributes the root process's data to all
    processes, in rank order.  What is new is that the root can
    specify a different amount of data to be sent to each process.

    The process with rank i gets recvcnt = sendcnts[i] on root.

  MPI_ALLCAST
    (sendbuf,sendcnt,sendtype,recvbuf,maxrecvcnt,recvcnts,recvtype,comm)
       IN      IN       IN      OUT       IN     OUT[vec]    IN     IN

    As before, allcast results in each process receiving the
    concatenation of all processes' contributions.  What is new is
    that each process can contribute a different sendcnt, and that
    all of the sendcnt's are returned to each process.  

    Every node gets recvcnts[i] = sendcnt on process with rank i.

  MPI_ALLTOALL
    (sendbuf,sendcnts,sendtype,recvbuf,maxrecvcnt,recvcnts,recvtype,comm)
       IN    IN[vec]     IN      OUT       IN     OUT[vec]    IN     IN

    As before, alltoall results in each process receiving the
    concatenation of personalized contributions from other processes.
    What is new is that each process can send a different amount of
    data to each other process, and that each process receives the
    sendcnt's along with the data.

    Node with rank i gets recvcnts[k] = sendcnts[i] on proc with rank k.


  The following routines are unchanged:

    MPI_REDUCE
    MPI_USER_REDUCE[,A]
    MPI_ALLREDUCE
    MPI_USER_ALLREDUCE[,A]
    MPI_SCAN
    MPI_USER_SCAN[,A]

2. The following new routines are proposed:

New: MPI_REDUCE_SCATTER (sendbuf,recvbuf,distcnts,type,op,comm)
                            IN     OUT   IN[vec]   IN  IN  IN

    IN   sendbuf       address of send buffer

    OUT  recvbuf       address of receive buffer.

                       The number of results that will appear in recvbuf
                       on the process with rank i is exactly distcnts[i].

    IN   distcnts      integer vector of counts indicating how to distribute
                       the results.  

                       This vector must be identical on all members of
                       the group.  Process with rank i receives
                       distcnts[i] elements, resulting from reducing
                       the elements starting at sum[k=0..i-1](distcnts[k]).

    IN   type          data type of elements of input buffer

    IN   op            operation

    IN   comm          communicator handle

    This routine is functionally equivalent to:
  
      MPI_REDUCE (sendbuf,scratch,sum(distcnts),type,op,0,comm)
      MPI_SCATTER (scratch,distcnts,type,recvbuf,distcnts[myrank],junk,type,0,comm)

    However, it can be implemented so as to run substantially faster.

New: MPI_USER_REDUCE_SCATTER[,A] (sendbuf,recvbuf,distcnts,type,function,comm)
                                    IN      OUT   IN[vec]   IN  CALLBACK  IN

    with the obvious arguments.

----------------------------------------------------------------------

Comments?  And who is holding the source LaTeX these days?

--Rik
----------------------------------------------------------------------
rj_littlefield@pnl.gov (alias 'd39135')   Rik Littlefield
Tel: 509-375-3927                         Pacific Northwest Lab, MS K1-87
Fax: 509-375-6631                         P.O.Box 999, Richland, WA  99352
From owner-mpi-collcomm@CS.UTK.EDU Wed Sep  8 08:29:07 1993
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib)
	id AA08407; Wed, 8 Sep 93 08:29:07 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA18524; Wed, 8 Sep 93 08:28:34 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Wed, 8 Sep 1993 08:28:33 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from msr.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA18516; Wed, 8 Sep 93 08:28:32 -0400
Received: by msr.EPM.ORNL.GOV (4.1/1.34)
	id AA20655; Wed, 8 Sep 93 08:28:31 EDT
Date: Wed, 8 Sep 93 08:28:31 EDT
From: geist@msr.EPM.ORNL.GOV (Al Geist)
Message-Id: <9309081228.AA20655@msr.EPM.ORNL.GOV>
To: mpi-collcomm@cs.utk.edu, rj_littlefield@pnlg.pnl.gov
Subject: Re: proposed changes to collective chapter


>Comments?  And who is holding the source LaTeX these days?

Hi Rik, I am holding the source LaTeX for the collective chapter.
As for the proposed changes I have no problems with them.

If I don't get negative comments on the proposed changes, then
I will modify the collcomm chapter before the next meeting
and we can formally vote on the amendments.

Reasonable?
Al Geist

PS. So if anyone has problems with the proposed amendment,
    now is the time to speak. Uh, email I mean. (-:
From owner-mpi-collcomm@CS.UTK.EDU Wed Sep  8 19:25:25 1993
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib)
	id AA14982; Wed, 8 Sep 93 19:25:25 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA06514; Wed, 8 Sep 93 19:24:15 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Wed, 8 Sep 1993 19:24:14 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from watson.ibm.com by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA06506; Wed, 8 Sep 93 19:24:12 -0400
Received: from WATSON by watson.ibm.com (IBM VM SMTP V2R3) with BSMTP id 7735;
   Wed, 08 Sep 93 19:24:14 EDT
Received: from YKTVMV by watson.vnet.ibm.com with "VAGENT.V1.0"
          id 7767; Wed, 8 Sep 1993 19:24:13 EDT
Received: from snir.watson.ibm.com by yktvmv.watson.ibm.com (IBM VM SMTP V2R3)
   with TCP; Wed, 08 Sep 93 19:24:11 EDT
Received: by snir.watson.ibm.com (AIX 3.2/UCB 5.64/930311)
          id AA26929; Wed, 8 Sep 1993 19:24:07 -0400
From: snir@watson.ibm.com (Marc Snir)
Message-Id: <9309082324.AA26929@snir.watson.ibm.com>
To: gst@ornl.gov
Cc: rj_littlefiled@pnlg.pnl.gov, mpi-collcomm@cs.utk.edu
Reply-To: snir@watson.ibm.com
Date: Wed, 08 Sep 93 19:24:06 -0500


-:) -:) -:) -:) -:)

1. I like the first proposal of Rick.  One comment:
At all places where an argument is not significant the call
should not need to provide the argument (this is the rule I have
been using in the current draft).  One should specify for each call
which arguments are significant.

2.  The definition of REDUCE_SCATTER is somewhat obscure and needs
to be expanded.  If I understand it correctly, this function first does
a componentwise reduction on vectors provided by the processes.
Next, the resulting vector of results is split into disjoint
segments, where segment i has length distcnts[i]; the i-th segment is
sent to process i.
An alternative is that each process receives the same number of
values.
Another alternative is that each process also provides a bit vector
indicating which results it needs.  This alternative allows the same
result to be sent to more than one process.
Q: how frequent is the
pattern of usage that Rick assumes?  (Namely, that each process
wants a different number of results, but each result is needed by a
unique process.)  I would like some justification.
From owner-mpi-collcomm@CS.UTK.EDU Wed Sep  8 20:47:02 1993
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib)
	id AA15096; Wed, 8 Sep 93 20:47:02 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA11551; Wed, 8 Sep 93 20:46:05 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Wed, 8 Sep 1993 20:46:04 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA11543; Wed, 8 Sep 93 20:46:02 -0400
Received: from snacker.pnl.gov (130.20.186.18) by pnlg.pnl.gov; Wed, 8 Sep 93
 17:41 PDT
Received: by snacker.pnl.gov (4.1/SMI-4.1) id AA17697; Wed, 8 Sep 93 17:38:21
 PDT
Date: Wed, 8 Sep 93 17:38:21 PDT
From: rj_littlefield@pnlg.pnl.gov
Subject: proposed collective changes
To: gst@ornl.gov, mpi-collcomm@cs.utk.edu, snir@watson.ibm.com
Cc: rj_littlefield@pnlg.pnl.gov
Message-Id: <9309090038.AA17697@snacker.pnl.gov>
X-Envelope-To: mpi-collcomm@cs.utk.edu

Marc Snir writes:

> 1. I like the first proposal of Rick.  One comment:
> At all places where an argument is not significant the call
> should not need to provide the argument (this is the rule I have
> been using in the current draft).  One should specify for each call
> which arguments are significant.

Sure.  

I presume this refers to recvcnts on non-root nodes for the gather,
and to sendcnts on non-root nodes for the scatter.  Are there others?

> 2.  The definition of REDUCE_SCATTER is somewhat obscure and needs
> to be expanded.  If I understand it correctly, this function first does
> a componentwise reduction on vectors provided by the processes.
> Next, the resulting vector of results is split into disjoint
> segments, where segment i has length distcnts[i]; the i-th segment is
> sent to process i.

This correctly describes the result.  An efficient implementation
would actually keep the data split up at all times.

> An alternative is that each process receives the same number of
> values.
> Another alternative is that each process also provides a bit vector
> indicating which results it needs.  This alternative allows the same
> result to be sent to more than one process.
> Q: how frequent is the
> pattern of usage that Rick assumes?  (Namely, that each process
> wants a different number of results, but each result is needed by a
> unique process.)  I would like some justification.

For my users, the most common case is that the results just need
to be distributed so as to balance the load.  (This optimizes an
overall computation that is structured as: compute per-process
contributions; sum contributions; perform local computations on
each global sum; allcast results.  A specific example is in
molecular dynamics using the replicated data model: the per-process
contributions are forces; the summed values are total force on
each particle; the local computation is to integrate forces to
give velocities and positions; and the allcast is to distribute
the new positions.)

This case would be supported by Marc's first alternative.  But
that alternative does not support another case I have, which is
that each result is needed by only one process, but some of the
results have to end up co-resident.  (E.g., in order to permit
the "shake" algorithm for molecular dynamics.)

Marc's second alternative is both more than I need and less than
I could use.  It matches a model in which all processes
contribute to all results, and results are needed on multiple but
not all processes.  I do not have any cases matching that model,
and I'm having trouble imagining one.  I do have many
circumstances in which only some processes contribute to each
result, and all contributors need that result, but those cases
would not be efficiently handled by the proposed bit-vector
scheme.  (Aside: see the PARTI package of Joel Saltz et al. for
routines to handle that case.)

The REDUCE_SCATTER that I proposed has the advantage of being
conceptually similar to the other MPI routines, and of handling a
useful set of situations.  

Marc's first alternative does not meet some needs, and his second
alternative is based on a unique interface model (the bit vector)
for which we would have to introduce new support routines in
order to retain portability.  This seems too high a cost, unless
of course some justification is offered.

(Sorry -- I just had to slip that in. ;-)

Further comment?

--Rik
----------------------------------------------------------------------
rj_littlefield@pnl.gov (alias 'd39135')   Rik Littlefield
Tel: 509-375-3927                         Pacific Northwest Lab, MS K1-87
Fax: 509-375-6631                         P.O.Box 999, Richland, WA  99352
From owner-mpi-collcomm@CS.UTK.EDU Tue Sep 14 10:05:11 1993
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib)
	id AA11681; Tue, 14 Sep 93 10:05:11 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA18468; Tue, 14 Sep 93 10:02:07 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Tue, 14 Sep 1993 10:02:05 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from sun4.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA18457; Tue, 14 Sep 93 10:01:58 -0400
Received: by sun4.epm.ornl.gov (4.1/1.34)
	id AA03324; Tue, 14 Sep 93 10:01:56 EDT
Date: Tue, 14 Sep 93 10:01:56 EDT
From: geist@sun4.epm.ornl.gov (Al Geist)
Message-Id: <9309141401.AA03324@sun4.epm.ornl.gov>
To: mpi-collcomm@cs.utk.edu
Subject: Latest Revision to Collective Chapter (postscript)

Changes: fixed figure so it prints on (all?) printers
         added Rik Littlefield's amendments,
         i.e., group members are allowed to send different amounts;
             added 3 new routines based on reduce followed by scatter.
         Fixed several typos, probably added a few more. (-:

See you in Dallas,
Al Geist
-------------------------
[PostScript attachment omitted: cc.ps, dvips 5.516 output of cc.dvi, 18 pages.]
N /od{txpose 1 0 mtx defaultmatrix dtransform S atan/pa X newpath
clippath mark{transform{itransform moveto}}{transform{itransform lineto}
}{6 -2 roll transform 6 -2 roll transform 6 -2 roll transform{
itransform 6 2 roll itransform 6 2 roll itransform 6 2 roll curveto}}{{
closepath}}pathforall newpath counttomark array astore /gc xdf pop ct 39
0 put 10 fz 0 fs 2 F/|______Courier fnt invertflag{PaintBlack}if}N
/txpose{pxs pys scale ppr aload pop por{noflips{pop S neg S TR pop 1 -1
scale}if xflip yflip and{pop S neg S TR 180 rotate 1 -1 scale ppr 3 get
ppr 1 get neg sub neg ppr 2 get ppr 0 get neg sub neg TR}if xflip yflip
not and{pop S neg S TR pop 180 rotate ppr 3 get ppr 1 get neg sub neg 0
TR}if yflip xflip not and{ppr 1 get neg ppr 0 get neg TR}if}{noflips{TR
pop pop 270 rotate 1 -1 scale}if xflip yflip and{TR pop pop 90 rotate 1
-1 scale ppr 3 get ppr 1 get neg sub neg ppr 2 get ppr 0 get neg sub neg
TR}if xflip yflip not and{TR pop pop 90 rotate ppr 3 get ppr 1 get neg
sub neg 0 TR}if yflip xflip not and{TR pop pop 270 rotate ppr 2 get ppr
0 get neg sub neg 0 S TR}if}ifelse scaleby96{ppr aload pop 4 -1 roll add
2 div 3 1 roll add 2 div 2 copy TR .96 dup scale neg S neg S TR}if}N /cp
{pop pop showpage pm restore}N end}if}if}N /normalscale{Resolution 72
div VResolution 72 div neg scale magscale{DVImag dup scale}if 0 setgray}
N /psfts{S 65781.76 div N}N /startTexFig{/psf$SavedState save N userdict
maxlength dict begin /magscale false def normalscale currentpoint TR
/psf$ury psfts /psf$urx psfts /psf$lly psfts /psf$llx psfts /psf$y psfts
/psf$x psfts currentpoint /psf$cy X /psf$cx X /psf$sx psf$x psf$urx
psf$llx sub div N /psf$sy psf$y psf$ury psf$lly sub div N psf$sx psf$sy
scale psf$cx psf$sx div psf$llx sub psf$cy psf$sy div psf$ury sub TR
/showpage{}N /erasepage{}N /copypage{}N /p 3 def @MacSetUp}N /doclip{
psf$llx psf$lly psf$urx psf$ury currentpoint 6 2 roll newpath 4 copy 4 2
roll moveto 6 -1 roll S lineto S lineto S lineto closepath clip newpath
moveto}N /endTexFig{end psf$SavedState restore}N /@beginspecial{SDict
begin /SpecialSave save N gsave normalscale currentpoint TR
@SpecialDefaults count /ocount X /dcount countdictstack N}N /@setspecial
{CLIP 1 eq{newpath 0 0 moveto hs 0 rlineto 0 vs rlineto hs neg 0 rlineto
closepath clip}if ho vo TR hsc vsc scale ang rotate rwiSeen{rwi urx llx
sub div rhiSeen{rhi ury lly sub div}{dup}ifelse scale llx neg lly neg TR
}{rhiSeen{rhi ury lly sub div dup scale llx neg lly neg TR}if}ifelse
CLIP 2 eq{newpath llx lly moveto urx lly lineto urx ury lineto llx ury
lineto closepath clip}if /showpage{}N /erasepage{}N /copypage{}N newpath
}N /@endspecial{count ocount sub{pop}repeat countdictstack dcount sub{
end}repeat grestore SpecialSave restore end}N /@defspecial{SDict begin}
N /@fedspecial{end}B /li{lineto}B /rl{rlineto}B /rc{rcurveto}B /np{
/SaveX currentpoint /SaveY X N 1 setlinecap newpath}N /st{stroke SaveX
SaveY moveto}N /fil{fill SaveX SaveY moveto}N /ellipse{/endangle X
/startangle X /yrad X /xrad X /savematrix matrix currentmatrix N TR xrad
yrad scale 0 0 1 startangle endangle arc savematrix setmatrix}N end
%%EndProcSet
TeXDict begin 40258431 52099146 1000 300 300
(/home/sun4/u0/geist/PAPERS/MPI/cc.dvi) @start /Fa 6
63 df<0000180000300000600000E00000C0000180000380000700000600000E00000C00
001C0000380000380000700000700000E00000E00001E00001C00001C000038000038000
0380000780000700000700000F00000E00000E00001E00001E00001E00001C00001C0000
3C00003C00003C00003C0000380000780000780000780000780000780000780000780000
780000700000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000
F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000
F00000F00000F00000F00000700000780000780000780000780000780000780000780000
7800003800003C00003C00003C00003C00001C00001C00001E00001E00001E00000E0000
0E00000F000007000007000007800003800003800003800001C00001C00001E00000E000
00E000007000007000003800003800001C00000C00000E00000600000700000380000180
0000C00000E0000060000030000018157C768121>32 D<C0000060000030000038000018
00000C00000E000007000003000003800001800001C00000E00000E00000700000700000
3800003800003C00001C00001C00000E00000E00000E00000F0000070000070000078000
03800003800003C00003C00003C00001C00001C00001E00001E00001E00001E00000E000
00F00000F00000F00000F00000F00000F00000F00000F000007000007800007800007800
007800007800007800007800007800007800007800007800007800007800007800007800
007800007800007800007800007800007800007800007800007800007800007800007000
00F00000F00000F00000F00000F00000F00000F00000F00000E00001E00001E00001E000
01E00001C00001C00003C00003C00003C0000380000380000780000700000700000F0000
0E00000E00000E00001C00001C00003C0000380000380000700000700000E00000E00001
C0000180000380000300000700000E00000C0000180000380000300000600000C0000015
7C7F8121>I<0018007800F001E003C007800F001F001E003E003C007C007C007800F800
F800F800F800F800F800F800F800F800F800F800F800F800F800F800F800F800F800F800
F800F800F800F8000D25707E25>56 D<F800F800F800F800F800F800F800F800F800F800
F800F800F800F800F800F800F800F800F800F800F800F800F80078007C007C003C003E00
1E001F000F00078003C001E000F0007800180D25708025>58 D<007C007C007C007C007C
007C007C007C007C007C007C007C007C007C007C007C007C007C007C007C007C007C007C
007C00F800F800F800F001F001E003E003C0078007000E001C003800F000C000F0003800
1C000E000700078003C003E001E001F000F000F800F800F8007C007C007C007C007C007C
007C007C007C007C007C007C007C007C007C007C007C007C007C007C007C007C007C007C
0E4D798025>60 D<F8F8F8F8F8F8F8F8F8F8F8F8F8F8050E708025>62
D E /Fb 59 126 df<007000F001E003C007800F001E001C003800380070007000700070
00E000E000E000E000E000E000E000E0007000700070007000380038001C001E000F0007
8003C001F000F000700C24799F18>40 D<6000F00078003C001E000F000780038001C001
C000E000E000E000E00070007000700070007000700070007000E000E000E000E001C001
C0038007800F001E003C007800F00060000C247C9F18>I<01C00001C00001C00001C000
C1C180F1C780F9CF807FFF001FFC0007F00007F0001FFC007FFF00F9CF80F1C780C1C180
01C00001C00001C00001C00011147D9718>I<00600000F00000F00000F00000F00000F0
0000F00000F0007FFFC0FFFFE0FFFFE07FFFC000F00000F00000F00000F00000F00000F0
0000F00000600013147E9718>I<1C3E7E7F3F1F070E1E7CF860080C788518>I<7FFF00FF
FF80FFFF807FFF0011047D8F18>I<3078FCFC78300606778518>I<000300000780000780
000F80000F00001F00001E00001E00003E00003C00007C0000780000780000F80000F000
01F00001E00003E00003C00003C00007C0000780000F80000F00000F00001F00001E0000
3E00003C00003C00007C0000780000F80000F00000F0000060000011247D9F18>I<01F0
0007FC000FFE001F1F001C07003803807803C07001C07001C0E000E0E000E0E000E0E000
E0E000E0E000E0E000E0E000E0E000E0F001E07001C07001C07803C03803801C07001F1F
000FFE0007FC0001F000131C7E9B18>I<01800380038007800F803F80FF80FB80438003
800380038003800380038003800380038003800380038003800380038003807FFCFFFE7F
FC0F1C7B9B18>I<03F0000FFE003FFF007C0F807003C0E001C0F000E0F000E06000E000
00E00000E00001C00001C00003C0000780000F00001E00003C0000780000F00001E00007
C0000F80001E00E03C00E07FFFE0FFFFE07FFFE0131C7E9B18>I<3078FCFC7830000000
00000000003078FCFC78300614779318>58 D<183C7E7E3C180000000000000000183C7E
7E3E1E0E1C3C78F060071A789318>I<000300000780001F80003F00007E0001FC0003F0
0007E0001FC0003F00007E0000FC0000FC00007E00003F00001FC00007E00003F00001FC
00007E00003F00001F8000078000030011187D9918>I<7FFFC0FFFFE0FFFFE0FFFFE000
0000000000000000000000FFFFE0FFFFE0FFFFE07FFFC0130C7E9318>I<600000F00000
FC00007E00003F00001FC00007E00003F00001FC00007E00003F00001F80001F80003F00
007E0001FC0003F00007E0001FC0003F00007E0000FC0000F0000060000011187D9918>
I<00700000F80000F80000D80000D80001DC0001DC0001DC00018C00038E00038E00038E
00038E000306000707000707000707000707000FFF800FFF800FFF800E03800E03801C01
C01C01C07F07F0FF8FF87F07F0151C7F9B18>65 D<FFFC00FFFF00FFFF801C03C01C01C0
1C00E01C00E01C00E01C00E01C01E01C01C01C07C01FFF801FFF001FFFC01C03C01C00E0
1C00F01C00701C00701C00701C00701C00F01C00E01C03E0FFFFC0FFFF80FFFE00141C7F
9B18>I<00F8E003FEE007FFE00F07E01E03E03C01E03800E07000E07000E0700000E000
00E00000E00000E00000E00000E00000E00000E000007000007000E07000E03800E03C00
E01E01C00F07C007FF8003FE0000F800131C7E9B18>I<FFFFF0FFFFF0FFFFF01C00701C
00701C00701C00701C00001C00001C0E001C0E001C0E001FFE001FFE001FFE001C0E001C
0E001C0E001C00001C00001C00381C00381C00381C00381C0038FFFFF8FFFFF8FFFFF815
1C7F9B18>69 D<FFFFE0FFFFE0FFFFE01C00E01C00E01C00E01C00E01C00001C00001C1C
001C1C001C1C001FFC001FFC001FFC001C1C001C1C001C1C001C00001C00001C00001C00
001C00001C00001C0000FFC000FFC000FFC000131C7E9B18>I<7FFF00FFFF807FFF0001
C00001C00001C00001C00001C00001C00001C00001C00001C00001C00001C00001C00001
C00001C00001C00001C00001C00001C00001C00001C00001C00001C0007FFF00FFFF807F
FF00111C7D9B18>73 D<FC01F8FE03F8FE03F83B06E03B06E03B06E03B06E03B8EE03B8E
E0398CE0398CE039DCE039DCE039DCE038D8E038D8E038F8E03870E03870E03800E03800
E03800E03800E03800E03800E0FE03F8FE03F8FE03F8151C7F9B18>77
D<7E07F0FF0FF87F07F01D81C01D81C01D81C01DC1C01CC1C01CC1C01CE1C01CE1C01CE1
C01C61C01C71C01C71C01C31C01C39C01C39C01C39C01C19C01C19C01C1DC01C0DC01C0D
C01C0DC07F07C0FF87C07F03C0151C7F9B18>I<0FF8003FFE007FFF00780F00700700F0
0780E00380E00380E00380E00380E00380E00380E00380E00380E00380E00380E00380E0
0380E00380E00380E00380E00380F00780700700780F007FFF003FFE000FF800111C7D9B
18>I<FFFE00FFFF80FFFFC01C03C01C01E01C00E01C00701C00701C00701C00701C0070
1C00E01C01E01C03C01FFFC01FFF801FFE001C00001C00001C00001C00001C00001C0000
1C00001C0000FF8000FF8000FF8000141C7F9B18>I<7FF800FFFE007FFF001C0F801C03
801C03C01C01C01C01C01C01C01C03C01C03801C0F801FFF001FFE001FFE001C0F001C07
001C03801C03801C03801C03801C03801C039C1C039C1C039C7F01F8FF81F87F00F0161C
7F9B18>82 D<03F3801FFF803FFF807C0F80700780E00380E00380E00380E00000700000
7800003F00001FF00007FE0000FF00000F800003C00001C00000E00000E06000E0E000E0
E001E0F001C0F80780FFFF80FFFE00E7F800131C7E9B18>I<FF83FEFF83FEFF83FE1C00
701C00701C00701C00701C00701C00701C00701C00701C00701C00701C00701C00701C00
701C00701C00701C00701C00701C00701C00700E00E00F01E00783C003FF8001FF00007C
00171C809B18>85 D<FF07F8FF07F8FF07F81C01C01E03C00E03800F0780070700070700
038E00038E0001DC0001DC0001DC0000F80000F800007000007000007000007000007000
00700000700000700000700001FC0003FE0001FC00151C7F9B18>89
D<FFF8FFF8FFF8E000E000E000E000E000E000E000E000E000E000E000E000E000E000E0
00E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000FFF8FFF8FF
F80D24779F18>91 D<600000F00000F00000F800007800007C00003C00003C00003E0000
1E00001F00000F00000F00000F800007800007C00003C00003C00003E00001E00001F000
00F00000F800007800007800007C00003C00003E00001E00001E00001F00000F00000F80
00078000078000030011247D9F18>I<FFF8FFF8FFF80038003800380038003800380038
003800380038003800380038003800380038003800380038003800380038003800380038
00380038003800380038FFF8FFF8FFF80D247F9F18>I<7FFF00FFFF80FFFF807FFF0011
047D7F18>95 D<1FE0003FF8007FFC00781E00300E0000070000070000FF0007FF001FFF
007F0700780700E00700E00700E00700F00F00781F003FFFF01FFBF007E1F014147D9318
>97 D<7E0000FE00007E00000E00000E00000E00000E00000E00000E3E000EFF800FFFC0
0FC1E00F80E00F00700E00700E00380E00380E00380E00380E00380E00380F00700F0070
0F80E00FC1E00FFFC00EFF80063E00151C809B18>I<01FE0007FF001FFF803E07803803
00700000700000E00000E00000E00000E00000E00000E000007000007001C03801C03E03
C01FFF8007FF0001FC0012147D9318>I<001F80003F80001F8000038000038000038000
038000038003E3800FFB801FFF803C1F80380F80700780700380E00380E00380E00380E0
0380E00380E00380700780700780380F803C1F801FFFF00FFBF803E3F0151C7E9B18>I<
01F00007FC001FFE003E0F00380780700380700380E001C0E001C0FFFFC0FFFFC0FFFFC0
E000007000007001C03801C03E03C01FFF8007FF0001FC0012147D9318>I<001F80007F
C000FFE000E1E001C0C001C00001C00001C0007FFFC0FFFFC0FFFFC001C00001C00001C0
0001C00001C00001C00001C00001C00001C00001C00001C00001C00001C00001C0007FFF
007FFF007FFF00131C7F9B18>I<01E1F007FFF80FFFF81E1E301C0E0038070038070038
07003807003807001C0E001E1E001FFC001FF80039E0003800001C00001FFE001FFFC03F
FFE07801F0700070E00038E00038E00038E000387800F07E03F01FFFC00FFF8001FC0015
1F7F9318>I<7E0000FE00007E00000E00000E00000E00000E00000E00000E3E000EFF80
0FFFC00FC1C00F80E00F00E00E00E00E00E00E00E00E00E00E00E00E00E00E00E00E00E0
0E00E00E00E00E00E07FC3FCFFE7FE7FC3FC171C809B18>I<03800007C00007C00007C0
000380000000000000000000000000007FC000FFC0007FC00001C00001C00001C00001C0
0001C00001C00001C00001C00001C00001C00001C00001C00001C00001C000FFFF00FFFF
80FFFF00111D7C9C18>I<FE0000FE0000FE00000E00000E00000E00000E00000E00000E
3FF00E7FF00E3FF00E07800E0F000E1E000E3C000E78000EF0000FF8000FFC000F9C000F
0E000E0F000E07000E03800E03C0FFC7F8FFC7F8FFC7F8151C7F9B18>107
D<7FE000FFE0007FE00000E00000E00000E00000E00000E00000E00000E00000E00000E0
0000E00000E00000E00000E00000E00000E00000E00000E00000E00000E00000E00000E0
0000E0007FFFC0FFFFE07FFFC0131C7E9B18>I<7CE0E000FFFBF8007FFFF8001F1F1C00
1E1E1C001E1E1C001C1C1C001C1C1C001C1C1C001C1C1C001C1C1C001C1C1C001C1C1C00
1C1C1C001C1C1C001C1C1C001C1C1C007F1F1F00FFBFBF807F1F1F001914819318>I<7E
3E00FEFF807FFFC00FC1C00F80E00F00E00E00E00E00E00E00E00E00E00E00E00E00E00E
00E00E00E00E00E00E00E00E00E07FC3FCFFE7FE7FC3FC1714809318>I<01F0000FFE00
1FFF003E0F803803807001C07001C0E000E0E000E0E000E0E000E0E000E0F001E07001C0
7803C03C07803E0F801FFF000FFE0001F00013147E9318>I<7E3E00FEFF807FFFC00FC1
E00F80E00F00700E00700E00380E00380E00380E00380E00380E00380F00700F00700F80
E00FC1E00FFFC00EFF800E3E000E00000E00000E00000E00000E00000E00000E00007FC0
00FFE0007FC000151E809318>I<7F87E0FF9FF07FBFF803F87803F03003E00003C00003
C0000380000380000380000380000380000380000380000380000380007FFE00FFFF007F
FE0015147F9318>114 D<07F7003FFF007FFF00780F00E00700E00700E007007C00007F
E0001FFC0003FE00001F00600780E00380E00380F00380F80F00FFFF00FFFC00E7F00011
147D9318>I<0180000380000380000380000380007FFFC0FFFFC0FFFFC0038000038000
0380000380000380000380000380000380000380000380400380E00380E00380E001C1C0
01FFC000FF80003E0013197F9818>I<7E07E0FE0FE07E07E00E00E00E00E00E00E00E00
E00E00E00E00E00E00E00E00E00E00E00E00E00E00E00E00E00E01E00F03E007FFFC03FF
FE01FCFC1714809318>I<7F8FF0FF8FF87F8FF01E03C00E03800E03800E038007070007
0700070700038E00038E00038E00038E0001DC0001DC0001DC0000F80000F80000700015
147F9318>I<FF8FF8FF8FF8FF8FF83800E03800E03800E01C01C01C01C01C71C01CF9C0
1CF9C01CD9C01CD9C00DDD800DDD800DDD800D8D800F8F800F8F8007070015147F9318>
I<7F8FF07F9FF07F8FF0070700078E00039E0001DC0001F80000F80000700000F00000F8
0001DC00039E00038E000707000F07807F8FF0FF8FF87F8FF015147F9318>I<7F8FF0FF
8FF87F8FF00E01C00E03800E0380070380070700070700038700038600038E0001CE0001
CE0000CC0000CC0000DC0000780000780000780000700000700000700000F00000E00079
E0007BC0007F80003F00001E0000151E7F9318>I<0007E0001FE0007FE000780000E000
00E00000E00000E00000E00000E00000E00000E00000E00000E00000E00001E0007FC000
FF8000FF80007FC00001E00000E00000E00000E00000E00000E00000E00000E00000E000
00E00000E00000E000007800007FE0001FE00007E013247E9F18>123
D<7C0000FF0000FFC00003C00000E00000E00000E00000E00000E00000E00000E00000E0
0000E00000E00000E00000F000007FC0003FE0003FE0007FC000F00000E00000E00000E0
0000E00000E00000E00000E00000E00000E00000E00000E00003C000FFC000FF00007C00
0013247E9F18>125 D E /Fc 29 118 df<018001C0018001806186F99F7DBE1FF807E0
07E01FF87DBEF99F61860180018001C0018010127E9E15>42 D<001C0000003E0000003E
0000002E0000006700000067000000E7800000C7800000C3800001C3C0000183C0000181
C0000381E0000381E0000700F0000700F0000600F0000E0078000FFFF8000FFFF8001C00
3C001C003C0018003C0038001E0038001E0070001F0070000F0070000F00E0000780191D
7F9C1C>65 D<FFF800FFFF00F00F80F003C0F001E0F000F0F000F0F000F0F000F0F000F0
F001E0F007C0FFFF80FFFE00FFFF80F03FC0F003E0F001F0F000F0F00078F00078F00078
F00078F00078F000F0F001E0F007C0FFFF80FFFC00151D7C9C1C>I<003FC000FFF003C0
F00780300F00001E00003C00003C0000780000780000780000F00000F00000F00000F000
00F00000F00000F00000F00000F000007800007800007800003C00003C00001E00000F00
0807801803C07800FFF0003F80151F7D9D1B>I<FFFC00FFFF00F00F80F003E0F001F0F0
00F0F00078F00038F0003CF0003CF0001CF0001EF0001EF0001EF0001EF0001EF0001EF0
001EF0001EF0003CF0003CF0003CF00078F000F0F000F0F003E0F00FC0FFFF00FFFC0017
1D7C9C1E>I<FFFFC0FFFFC0F00000F00000F00000F00000F00000F00000F00000F00000
F00000F00000F00000FFFF80FFFF80F00000F00000F00000F00000F00000F00000F00000
F00000F00000F00000F00000F00000FFFFC0FFFFC0121D7C9C19>I<003F8001FFF003C0
F80780380F00181E00003C00003C0000780000780000780000F00000F00000F00000F000
00F00000F00000F007F8F007F8F000387800387800387800383C00383C00381E00380F00
3807803803C0F801FFF0003F80151F7D9D1C>71 D<F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0
F0F0F0F0F0F0F0F0F0F0F0F0F0F0041D7C9C0C>73 D<F000F000F000F000F000F000F000
F000F000F000F000F000F000F000F000F000F000F000F000F000F000F000F000F000F000
F000F000FFFEFFFE0F1D7C9C16>76 D<FC0007E0FC0007E0FC0007E0EE000DE0EE000DE0
EE000DE0E70019E0E70019E0E70019E0E78039E0E38031E0E3C071E0E3C071E0E1C061E0
E1C061E0E1E0E1E0E1E0E1E0E0E0C1E0E0F1C1E0E07181E0E07181E0E07181E0E03B01E0
E03B01E0E03B01E0E01E01E0E01E01E0E01E01E0E00001E01B1D7C9C24>I<FC0070FC00
70FE0070EE0070EF0070E70070E70070E78070E38070E3C070E3C070E1E070E1E070E0E0
70E0F070E07070E07870E07870E03C70E03C70E01C70E01E70E00E70E00E70E00F70E007
70E007F0E003F0E003F0141D7C9C1D>I<003F000001FFE00003FFF00007C0F8000F807C
001E001E003E001F003C000F00780007807800078078000780F00003C0F00003C0F00003
C0F00003C0F00003C0F00003C0F00003C0F00003C0F80007C078000780780007807C000F
803C000F003E001F001F003E000F807C0007C0F80003FFF00001FFE000003F00001A1F7E
9D1F>I<FFFC00FFFF00F00F80F003C0F001E0F000F0F000F0F000F0F000F0F000F0F000
F0F001E0F003E0F00FC0FFFF80FFFE00F00000F00000F00000F00000F00000F00000F000
00F00000F00000F00000F00000F00000F00000141D7C9C1B>I<FFF800FFFF00F00F80F0
03C0F001E0F000F0F000F0F000F0F000F0F000F0F001E0F003E0F00FC0FFFF80FFFF00FF
F800F03C00F01C00F01E00F00F00F00F00F00780F00780F003C0F003C0F001E0F000F0F0
00F0F00078151D7C9C1B>82 D<03F8000FFE001C0F00380700700300600000E00000E000
00E00000E00000F000007800007F00003FE0001FFC0007FE0001FF00001F800007800003
C00003C00001C00001C00001C00001C0C00180E00380F007007C0E001FFC0007F000121F
7E9D17>I<FFFFFF80FFFFFF80001E0000001E0000001E0000001E0000001E0000001E00
00001E0000001E0000001E0000001E0000001E0000001E0000001E0000001E0000001E00
00001E0000001E0000001E0000001E0000001E0000001E0000001E0000001E0000001E00
00001E0000001E0000001E0000191D7F9C1C>I<F00070F00070F00070F00070F00070F0
0070F00070F00070F00070F00070F00070F00070F00070F00070F00070F00070F00070F0
0070F00070F00070F00070F00070F000707800E07800E03C01C01E03800F078007FE0001
F800141E7C9C1D>I<78000E007C001E003C003C001E0038000F0070000F00F0000781E0
0003C1C00001C3C00001E7800000F70000007E0000003E0000003C0000003C0000007E00
000077000000E7800001E3800003C1C0000381E0000700F0000F00F8000E0078001C003C
003C003E0078001F0070000F00F0000F80191D7F9C1C>88 D<F80001E07C0001C03E0003
801E0007801F0007000F800E0007801E0007C01C0003E03C0001E0380001F0700000F0F0
000078E000007DC000003FC000001F8000001F0000000F0000000F0000000F0000000F00
00000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F00001B1D80
9C1C>I<7FFFF07FFFF00001E00003E00003C00007C0000780000F00001F00001E00003E
00003C0000780000F80000F00001F00001E00003C00007C0000780000F80000F00001E00
003E00003C00007C0000780000FFFFF0FFFFF0141D7E9C19>I<07E00FF81FFC3C1C7004
7000E000E000E000E000E000E000700070043C1C1FFC0FF807E00E127E9112>99
D<07C01FE03FF078787018601CFFFCFFFCFFFCE000E000E000700070043C1C3FFC1FF807
E00E127E9112>101 D<00FC01FC03FC07000E000E000E000E000E000E000E00FFE0FFE0
0E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E1D809C
0D>I<F0F0F0F000000000000000707070707070707070707070707070707070041D7E9C
0A>105 D<E0E0E0E0E0E0E0E0E0E0E0E0E0E0E0E0E0E0E0E0E0E0E0E0E0E0E0E0E0031D
7D9C0A>108 D<E3E0EFF0FFF8F83CF01CE01CE01CE01CE01CE01CE01CE01CE01CE01CE0
1CE01CE01CE01C0E127D9115>110 D<03F0000FFC001FFE003C0F00780780700380E001
C0E001C0E001C0E001C0E001C0F003C07003807807803C0F001FFE000FFC0003F0001212
7F9115>I<1C001C001C001C001C001C00FFE0FFE01C001C001C001C001C001C001C001C
001C001C001C001C001C201FF00FF007C00C187F970F>116 D<E01CE01CE01CE01CE01C
E01CE01CE01CE01CE01CE01CE01CE01CE01CE07CFFFC7FDC3F1C0E127D9115>I
E /Fd 26 118 df<FFE0FFE0FFE00B037F8C10>45 D<F0F0F0F004047B830E>I<00C001
C007C0FFC0FFC0FBC003C003C003C003C003C003C003C003C003C003C003C003C003C003
C003C003C003C003C003C003C003C003C003C003C003C0FFFFFFFFFFFF10227CA118>49
D<03F0000FFC001FFE003C1F003007807007C06003C0E003E0C001E04001E04001E00001
E00001E00001E00003C00003C0000780000780000F00001E00003C0000780000F00001E0
0001C0000380000700000E00001C0000380000700000FFFFE0FFFFE0FFFFE013227EA118
>I<01F00007FC001FFF003E0F003807807003C02003C02003C00003C00003C00003C000
0780000780000F00001E0003FC0003F80003FE00000F000007800003C00003C00001E000
01E00001E00001E00001E08001E0C003C0E003C07007803C0F801FFF000FFC0003F00013
237EA118>I<001F00001F00002F00002F00006F0000EF0000CF0001CF0001CF00038F00
038F00078F00070F000F0F000E0F001E0F003C0F003C0F00780F00780F00F00F00FFFFF8
FFFFF8FFFFF8000F00000F00000F00000F00000F00000F00000F00000F00000F0015217F
A018>I<3FFF803FFF803FFF803C00003C00003C00003C00003C00003C00003C00003C00
003C00003CF8003FFE003FFF003F0F803E07803C03C03803C00001E00001E00001E00001
E00001E00001E00001E04003C04003C0E003C07007807C1F003FFE000FFC0003F0001322
7EA018>I<001F0000001F0000003F8000003F8000003B8000007BC0000073C0000071C0
0000F1E00000F1E00000E0E00001E0F00001E0F00001C0F00003C0780003C07800038078
0007803C0007803C0007003C000F001E000F001E000FFFFE001FFFFF001FFFFF001C000F
003C0007803C00078038000780780003C0780003C0700003C0F00001E0F00001E0E00001
E01B237EA220>65 D<FFFC00FFFF80FFFFC0F007F0F001F0F00078F0003CF0003CF0003C
F0003CF0003CF00038F00078F000F0F003E0FFFFC0FFFF00FFFFC0F00FE0F001F8F00078
F0003CF0001CF0001EF0001EF0001EF0001EF0001EF0003CF0007CF000F8F003F0FFFFE0
FFFFC0FFFE0017237BA220>I<000FF000003FFE0000FFFF8001F80F8003E00380078000
000F0000001E0000001E0000003C0000003C000000780000007800000078000000F00000
00F0000000F0000000F0000000F0000000F0000000F000FFC0F000FFC0F000FFC0780003
C0780003C0780003C03C0003C03C0003C01E0003C01E0003C00F0003C0078003C003E003
C001F807C000FFFFC0003FFF00000FF8001A257DA321>71 D<FFFC00FFFF80FFFFC0F003
E0F000F0F00078F00038F0003CF0003CF0003CF0003CF0003CF00038F00078F000F0F003
E0FFFFC0FFFF80FFFE00F01E00F00F00F00700F00780F00380F003C0F001E0F001E0F000
F0F000F0F00078F00038F0003CF0001EF0001EF0000F18237BA21F>82
D<00FE0003FFC007FFE00F81E01E00603C00003C00007800007800007800007800007800
007C00003C00003F00001FC0000FFC0007FF0001FF80003FC00007E00001F00000F00000
F8000078000078000078000078000078000078C000F0E000F0F801E07E07C03FFF800FFF
0001FC0015257EA31B>I<07E01FF83FFC381E201E000F000F000F000F00FF07FF1FFF3E
0F780FF00FF00FF00FF00FF83F7FFF3FEF1F8F10167E9517>97 D<F00000F00000F00000
F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F1F000F7FC00
FFFE00FC1F00F80F00F00780F00780F003C0F003C0F003C0F003C0F003C0F003C0F003C0
F003C0F00780F00780F80F00FC3E00FFFE00F7F800F1F00012237CA219>I<01FC0007FF
000FFF801F03803C0180780000780000700000F00000F00000F00000F00000F00000F000
007800007800007800003C00401F03C00FFFC007FF8001FC0012167E9516>I<0003C000
03C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003C003
E3C00FFBC01FFFC03F0FC03C07C07803C07803C0F003C0F003C0F003C0F003C0F003C0F0
03C0F003C0F003C07803C07803C03C07C03E0FC01FFFC00FFBC003E3C012237EA219>I<
03F00007FC001FFE003E0F003C0780780380780380F001C0FFFFC0FFFFC0FFFFC0F00000
F00000F000007000007800007800003C00801F07800FFF8007FF0001F80012167E9516>
I<01F07807FFF80FFFF81F1F001E0F003C07803C07803C07803C07803C07801E0F001F1F
000FFE001FFC0019F0003800003800003C00001FFE001FFFC01FFFE03FFFF07801F07800
F8F00078F00078F00078F000787800F03E03E01FFFC00FFF8001FC0015217F9518>103
D<F000F000F000F000F000F000F000F000F000F000F000F000F000F1F8F3FCF7FEFE1EF8
0FF80FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00F10
237CA219>I<F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0
F0F0F0F0F004237DA20B>108 D<F1F8F3FCF7FEFE1EF80FF80FF00FF00FF00FF00FF00F
F00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00F10167C9519>110
D<01FC0007FF000FFF801F07C03C01E07800F07800F0700070F00078F00078F00078F000
78F00078F000787800F07800F07C01F03E03E01F07C00FFF8007FF0001FC0015167F9518
>I<F0E0F3E0F7E0FF00FE00FC00F800F800F000F000F000F000F000F000F000F000F000
F000F000F000F000F0000B167C9511>114 D<07F01FFC3FFE3C0E7806780078007C003F
003FF01FF80FFC01FE001F000F000F000FC00FF81EFFFE3FFC0FF010167F9513>I<0F00
0F000F000F000F000F00FFF8FFF8FFF80F000F000F000F000F000F000F000F000F000F00
0F000F000F000F000F080F1C07FC07F803E00E1C7F9B12>I<F00FF00FF00FF00FF00FF0
0FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF01FF83F7FFF7FCF1F0F10167C
9519>I E /Fe 1 49 df<07C018303018701C600C600CE00EE00EE00EE00EE00EE00EE0
0EE00EE00E600C600C701C30181C7007C00F157F9412>48 D E /Ff
12 120 df<70F8FCFC74040404080810102040060E7C840D>59 D<000001C00000078000
001E00000078000001E00000078000000E00000038000000F0000003C000000F0000003C
000000F0000000F00000003C0000000F00000003C0000000F0000000380000000E000000
0780000001E0000000780000001E0000000780000001C01A1A7C9723>I<E00000007800
00001E0000000780000001E0000000780000001C0000000700000003C0000000F0000000
3C0000000F00000003C0000003C000000F0000003C000000F0000003C00000070000001C
00000078000001E00000078000001E00000078000000E00000001A1A7C9723>62
D<000002000000060000000E0000000E0000001E0000001F0000002F0000002F0000004F
0000008F0000008F0000010F0000010F0000020F0000040F0000040F0000080F80000807
80001007800020078000200780007FFF8000400780008007800180078001000780020007
80020007C0040003C00C0003C01E0007C0FF807FFC1E207E9F22>65
D<00E001E001E000C000000000000000000000000000000E001300238043804380438087
00070007000E000E001C001C001C20384038403840388019000E000B1F7E9E10>105
D<0000C00001E00001E00001C0000000000000000000000000000000000000000000001E
00006300004380008380010380010380020700000700000700000700000E00000E00000E
00000E00001C00001C00001C00001C000038000038000038000038000070000070003070
0078E000F1C0006380003E00001328819E13>I<01E0000FE00001C00001C00001C00001
C0000380000380000380000380000700000700000701E00706100E08700E10F00E20F00E
40601C80001D00001E00001FC000387000383800383800381C2070384070384070384070
1880E01880600F0014207E9F18>I<1E07C07C00231861860023A032030043C034030043
80380380438038038087007007000700700700070070070007007007000E00E00E000E00
E00E000E00E00E000E00E01C101C01C01C201C01C038201C01C038401C01C01840380380
18801801800F0024147E9328>109 D<1E07802318C023A06043C0704380704380708700
E00700E00700E00700E00E01C00E01C00E01C00E03821C03841C07041C07081C03083803
101801E017147E931B>I<0F00601180702180E021C0E041C0E04380E08381C00701C007
01C00701C00E03800E03800E03800E03840E07080C07080C07080E0F1006131003E1E016
147E931A>117 D<0F01801183C02183E021C1E041C0E043806083804007004007004007
00400E00800E00800E00800E01000E01000C02000E04000E040006180001E00013147E93
16>I<0F006060118070F02180E0F821C0E07841C0E0384380E0188381C0100701C01007
01C0100701C0100E0380200E0380200E0380200E0380400E0380400E0380800E07808006
0781000709860001F078001D147E9321>I E /Fg 47 122 df<007F07F001FF1FF003FF
3FF007807800070070000F00F0000F00F0000F00F0000F00F0000F00F0000F00F0000F00
F000FFF8FF80FFF8FF80FFF8FF800F00F0000F00F0000F00F0000F00F0000F00F0000F00
F0000F00F0000F00F0000F00F0000F00F0000F00F0000F00F0000F00F0000F00F0000F00
F0000F00F0000F00F0001C20809F1B>11 D<007000E001C00380078007000E001E001E00
3C003C003C0078007800780078007000F000F000F000F000F000F000F000F000F000F000
F000F000700078007800780078003C003C003C001E001E000E0007000780038001C000E0
00700C2E7EA112>40 D<E000700038001C001E000E0007000780078003C003C003C001E0
01E001E001E000E000F000F000F000F000F000F000F000F000F000F000F000F000E001E0
01E001E001E003C003C003C00780078007000E001E001C0038007000E0000C2E7DA112>
I<018001C001800180C183E187F99F7DBE1FF807E007E01FF87DBEF99FE187C183018001
8001C0018010147DA117>I<787878781830306060E0050A7D830D>44
D<000100030003000600060006000C000C000C0018001800180030003000300060006000
6000C000C000C00180018001800300030003000600060006000C000C000C001800180018
00300030003000600060006000C000C000C000102D7DA117>47 D<001F0000001F000000
3F8000003B8000003B8000007BC0000073C0000071C00000F1E00000E1E00000E0E00001
E0F00001E0F00001C0F00003C0780003C078000380780007803C0007803C0007003C000F
FFFE000FFFFE000FFFFE001E000F001E000F003C000F803C0007803C000780780007C078
0003C0780003C0F00003E01B207F9F1E>65 D<FFF800FFFF00FFFF80F00FC0F003E0F001
E0F000F0F000F0F000F0F000F0F000F0F001E0F003C0F01F80FFFF00FFFF00FFFF80F007
E0F001E0F000F0F00078F00078F00078F00078F00078F00078F000F0F001F0F007E0FFFF
C0FFFF80FFFC0015207B9F1E>I<001FC000FFF801FFFC03E03C07800C0F00001E00003E
>99 D<00001FFE00000001FFFFE0000007FFFFF800001FFFFFFE00007FFC07FF0000FFE0
01FF8001FFC0007FC003FF80003FE007FF00003FF00FFE00001FF01FFE00000FF81FFC00
000FF83FFC00000FFC3FFC000007FC7FFC000007FC7FF8000007FC7FF8000007FE7FF800
0007FEFFF8000007FEFFF8000007FEFFFFFFFFFFFEFFFFFFFFFFFEFFFFFFFFFFFEFFFFFF
FFFFFCFFF800000000FFF800000000FFF800000000FFF8000000007FF8000000007FF800
0000007FFC000000003FFC000000003FFC000000003FFC0000001C1FFE0000003E0FFE00
00003E07FF0000007E07FF000000FC03FF800001F801FFC00003F0007FF0001FE0003FFE
00FFC0001FFFFFFF800007FFFFFE000000FFFFF80000000FFF80002F2E7DAD36>101
D<00FC0001FF0003FF8007FFC00FFFC01FFFE01FFFE01FFFE01FFFE01FFFE01FFFE00FFF
C007FFC003FF8001FF0000FC000000000000000000000000000000000000000000000000
00000000000000000000007FC0FFFFC0FFFFC0FFFFC0FFFFC0FFFFC003FFC001FFC001FF
C001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FF
C001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FF
C001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC0FFFFFFFFFFFFFFFFFFFFFF
FFFFFFFF18497CC820>105 D<007FC000FFFFC000FFFFC000FFFFC000FFFFC000FFFFC0
0003FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC0
0001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC0
0001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC0
0001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC0
0001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC0
0001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC0
0001FFC00001FFC00001FFC00001FFC00001FFC00001FFC00001FFC000FFFFFF80FFFFFF
80FFFFFF80FFFFFF80FFFFFF8019487CC720>108 D<007FC001FFC00000FFE00000FFFF
C00FFFF80007FFFC0000FFFFC03FFFFE001FFFFF0000FFFFC0FFFFFF007FFFFF8000FFFF
C1FC07FF80FE03FFC000FFFFC3E003FFC1F001FFE00003FFC7C001FFC3E000FFE00001FF
CF0001FFE78000FFF00001FFDE0000FFEF00007FF00001FFDC0000FFEE00007FF00001FF
FC0000FFFE00007FF80001FFF80000FFFC00007FF80001FFF00000FFF800007FF80001FF
F00000FFF800007FF80001FFF00000FFF800007FF80001FFE00000FFF000007FF80001FF
E00000FFF000007FF80001FFE00000FFF000007FF80001FFE00000FFF000007FF80001FF
E00000FFF000007FF80001FFE00000FFF000007FF80001FFE00000FFF000007FF80001FF
E00000FFF000007FF80001FFE00000FFF000007FF80001FFE00000FFF000007FF80001FF
E00000FFF000007FF80001FFE00000FFF000007FF80001FFE00000FFF000007FF80001FF
E00000FFF000007FF80001FFE00000FFF000007FF80001FFE00000FFF000007FF80001FF
E00000FFF000007FF80001FFE00000FFF000007FF80001FFE00000FFF000007FF80001FF
E00000FFF000007FF80001FFE00000FFF000007FF80001FFE00000FFF000007FF80001FF
E00000FFF000007FF80001FFE00000FFF000007FF80001FFE00000FFF000007FF80001FF
E00000FFF000007FF800FFFFFFC07FFFFFE03FFFFFF0FFFFFFC07FFFFFE03FFFFFF0FFFF
FFC07FFFFFE03FFFFFF0FFFFFFC07FFFFFE03FFFFFF0FFFFFFC07FFFFFE03FFFFFF05C2E
7CAD65>I<007FC001FFC00000FFFFC00FFFF80000FFFFC03FFFFE0000FFFFC0FFFFFF00
00FFFFC1FC07FF8000FFFFC3E003FFC00003FFC7C001FFC00001FFCF0001FFE00001FFDE
0000FFE00001FFDC0000FFE00001FFFC0000FFF00001FFF80000FFF00001FFF00000FFF0
0001FFF00000FFF00001FFF00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE0
0000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF0
0001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE0
0000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF0
0001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE0
0000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF0
0001FFE00000FFF000FFFFFFC07FFFFFE0FFFFFFC07FFFFFE0FFFFFFC07FFFFFE0FFFFFF
C07FFFFFE0FFFFFFC07FFFFFE03B2E7CAD42>I<00000FFF0000000000FFFFF000000007
FFFFFE0000001FFFFFFF8000003FFC03FFC00000FFE0007FF00001FF80001FF80003FF00
000FFC0007FE000007FE000FFE000007FF000FFC000003FF001FFC000003FF803FFC0000
03FFC03FF8000001FFC03FF8000001FFC07FF8000001FFE07FF8000001FFE07FF8000001
FFE0FFF8000001FFF0FFF8000001FFF0FFF8000001FFF0FFF8000001FFF0FFF8000001FF
F0FFF8000001FFF0FFF8000001FFF0FFF8000001FFF0FFF8000001FFF0FFF8000001FFF0
7FF8000001FFE07FF8000001FFE07FF8000001FFE07FF8000001FFE03FFC000003FFC03F
FC000003FFC01FFC000003FF801FFE000007FF800FFE000007FF0007FF00000FFE0003FF
80001FFC0001FFC0003FF80000FFE0007FF000007FFC03FFE000001FFFFFFF80000007FF
FFFE00000000FFFFF0000000000FFF000000342E7DAD3B>I<0001F000000001F0000000
01F000000001F000000001F000000001F000000003F000000003F000000003F000000007
F000000007F000000007F00000000FF00000000FF00000001FF00000003FF00000003FF0
0000007FF0000001FFF0000003FFF000000FFFFFFFC0FFFFFFFFC0FFFFFFFFC0FFFFFFFF
C0FFFFFFFFC000FFF0000000FFF0000000FFF0000000FFF0000000FFF0000000FFF00000
00FFF0000000FFF0000000FFF0000000FFF0000000FFF0000000FFF0000000FFF0000000
FFF0000000FFF0000000FFF0000000FFF0000000FFF0000000FFF0000000FFF0000000FF
F0000000FFF0000000FFF0000000FFF001F000FFF001F000FFF001F000FFF001F000FFF0
01F000FFF001F000FFF001F000FFF001F000FFF001F0007FF001E0007FF803E0003FF803
E0003FFC07C0001FFE0F80000FFFFF800007FFFE000001FFFC0000001FF00024427EC12E
>116 D<007FE000003FF000FFFFE0007FFFF000FFFFE0007FFFF000FFFFE0007FFFF000
FFFFE0007FFFF000FFFFE0007FFFF00003FFE00001FFF00001FFE00000FFF00001FFE000
00FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF000
01FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE000
00FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF000
01FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE000
00FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF000
01FFE00000FFF00001FFE00000FFF00001FFE00001FFF00001FFE00001FFF00001FFE000
01FFF00001FFE00003FFF00000FFE00007FFF00000FFE0000F7FF000007FE0001F7FF000
007FF0003E7FF800003FFC00FC7FFFE0001FFFFFF87FFFE00007FFFFE07FFFE00001FFFF
807FFFE000003FFE007FFFE03B2E7CAD42>I<FFFFFF8001FFFFFFFFFF8001FFFFFFFFFF
8001FFFFFFFFFF8001FFFFFFFFFF8001FFFF01FFE000001FC001FFF000001F8001FFF000
001F8000FFF800001F0000FFF800003F00007FF800003E00007FFC00007E00003FFC0000
7C00003FFE0000FC00001FFE0000F800001FFF0001F800000FFF0001F000000FFF8003F0
000007FF8003E0000007FFC007E0000007FFC007E0000003FFE007C0000003FFE00FC000
0001FFE00F80000001FFF01F80000000FFF01F00000000FFF83F000000007FF83E000000
007FFC7E000000003FFC7C000000003FFEFC000000001FFEF8000000001FFFF800000000
1FFFF8000000000FFFF0000000000FFFF00000000007FFE00000000007FFE00000000003
FFC00000000003FFC00000000001FF800000000001FF800000000000FF000000000000FF
0000000000007E0000000000003C000000382E7DAD3F>I E /Fp
8 117 df<00001E000000003E00000000FE00000003FE0000003FFE0000FFFFFE0000FF
FFFE0000FFFFFE0000FFCFFE0000000FFE0000000FFE0000000FFE0000000FFE0000000F
FE0000000FFE0000000FFE0000000FFE0000000FFE0000000FFE0000000FFE0000000FFE
0000000FFE0000000FFE0000000FFE0000000FFE0000000FFE0000000FFE0000000FFE00
00000FFE0000000FFE0000000FFE0000000FFE0000000FFE0000000FFE0000000FFE0000
000FFE0000000FFE0000000FFE0000000FFE0000000FFE0000000FFE0000000FFE000000
0FFE0000000FFE0000000FFE0000000FFE0000000FFE0000000FFE0000000FFE0000000F
FE0000000FFE0000000FFE00007FFFFFFFC07FFFFFFFC07FFFFFFFC07FFFFFFFC0223879
B731>49 D<0003FF800180001FFFF00380007FFFFC078001FFFFFF0F8003FE00FF9F8007
F0000FFF800FE00003FF801FC00001FF803F8000007F803F8000007F807F0000003F807F
0000001F807F0000001F80FF0000000F80FF0000000F80FF0000000F80FF8000000780FF
8000000780FFC000000780FFE000000780FFF8000000007FFE000000007FFFF00000007F
FFFF0000003FFFFFF800003FFFFFFF00001FFFFFFFC0000FFFFFFFF00007FFFFFFF80003
FFFFFFFC0001FFFFFFFE00007FFFFFFF00003FFFFFFF800007FFFFFF8000007FFFFFC000
0007FFFFC00000003FFFE000000003FFE000000000FFF0000000007FF0000000003FF070
0000001FF0F00000001FF0F00000001FF0F00000000FF0F00000000FF0F80000000FF0F8
0000000FE0F80000000FE0FC0000000FE0FC0000001FC0FE0000001FC0FF0000001F80FF
C000003F80FFF000007F00FFFC0001FE00FCFFC007FC00F87FFFFFF800F01FFFFFE000E0
03FFFF8000C0003FFC00002C3D7BBB37>83 D<0000FFF000000FFFFF00003FFFFF8000FF
C01FC001FF003FE003FC007FF007FC007FF00FF8007FF01FF0007FF01FF0003FE03FF000
3FE03FF0001FC07FE00007007FE00000007FE0000000FFE0000000FFE0000000FFE00000
00FFE0000000FFE0000000FFE0000000FFE0000000FFE00000007FE00000007FE0000000
7FF00000003FF00000003FF00000001FF00000781FF80000780FF80000F007FC0000F003
FE0001E001FF8007C000FFE01F80003FFFFF00000FFFFC000000FFC00025267DA52C>99
D<0001FFC000000FFFF800003FFFFE0000FF80FF0001FE003F8007FC001FC00FF8000FE0
0FF8000FF01FF00007F03FF00007F83FF00007F87FE00007F87FE00003FC7FE00003FC7F
E00003FCFFE00003FCFFFFFFFFFCFFFFFFFFFCFFFFFFFFFCFFE0000000FFE0000000FFE0
000000FFE00000007FE00000007FE00000007FE00000003FE00000003FF000003C1FF000
003C1FF000003C0FF800007807FC0000F803FE0001F001FF0007E000FFC03FC0003FFFFF
000007FFFC000000FFE00026267DA52D>101 D<00F00003FC0007FE000FFE000FFF001F
FF001FFF001FFF000FFF000FFE0007FE0003FC0000F00000000000000000000000000000
000000000000000000000000000000000000FF00FFFF00FFFF00FFFF00FFFF0007FF0003
FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003
FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003
FF0003FF0003FF0003FF00FFFFF8FFFFF8FFFFF8FFFFF8153D7DBC1B>105
D<00FE007FC000FFFE01FFF800FFFE07FFFC00FFFE0F03FE00FFFE1C01FF0007FE3001FF
8003FE6000FF8003FEE000FFC003FEC000FFC003FF8000FFC003FF8000FFC003FF8000FF
C003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FF
C003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FF
C003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FF
C003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FFC0FFFFFC3FFFFFFFFFFC3FFF
FFFFFFFC3FFFFFFFFFFC3FFFFF30267CA537>110 D<0000FFC00000000FFFFC0000003F
FFFF000000FFC0FFC00001FE001FE00007FC000FF80007F80007F8000FF00003FC001FF0
0003FE003FF00003FF003FE00001FF007FE00001FF807FE00001FF807FE00001FF807FE0
0001FF80FFE00001FFC0FFE00001FFC0FFE00001FFC0FFE00001FFC0FFE00001FFC0FFE0
0001FFC0FFE00001FFC0FFE00001FFC0FFE00001FFC07FE00001FF807FE00001FF807FE0
0001FF803FF00003FF003FF00003FF001FF00003FE000FF80007FC000FF80007FC0007FC
000FF80003FE001FF00000FFC0FFC000003FFFFF0000000FFFFC00000001FFE000002A26
7DA531>I<0007800000078000000780000007800000078000000F8000000F8000000F80
00000F8000001F8000001F8000003F8000003F8000007F800000FF800001FF800007FF80
001FFFFFF0FFFFFFF0FFFFFFF0FFFFFFF001FF800001FF800001FF800001FF800001FF80
0001FF800001FF800001FF800001FF800001FF800001FF800001FF800001FF800001FF80
0001FF800001FF800001FF800001FF800001FF800001FF803C01FF803C01FF803C01FF80
3C01FF803C01FF803C01FF803C01FF803C00FF807800FFC078007FC070003FE0E0001FFF
C00007FF800001FF001E377EB626>116 D E /Fq 71 123 df<001F83E000F06E3001C0
78780380F8780300F0300700700007007000070070000700700007007000070070000700
7000FFFFFF80070070000700700007007000070070000700700007007000070070000700
700007007000070070000700700007007000070070000700700007007000070070000700
7000070070007FE3FF001D20809F1B>11 D<003F0000E0C001C0C00381E00701E00701E0
070000070000070000070000070000070000FFFFE00700E00700E00700E00700E00700E0
0700E00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E0
0700E07FC3FE1720809F19>I<003FE000E0E001C1E00381E00700E00700E00700E00700
E00700E00700E00700E00700E0FFFFE00700E00700E00700E00700E00700E00700E00700
E00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E07FE7
FE1720809F19>I<7038F87CFC7EFC7E743A040204020402080408041008100820104020
0F0E7E9F17>34 D<70F8FCFC74040404080810102040060E7C9F0D>39
D<0020004000800100020006000C000C00180018003000300030007000600060006000E0
00E000E000E000E000E000E000E000E000E000E000E00060006000600070003000300030
00180018000C000C000600020001000080004000200B2E7DA112>I<8000400020001000
08000C00060006000300030001800180018001C000C000C000C000E000E000E000E000E0
00E000E000E000E000E000E000E000C000C000C001C00180018001800300030006000600
0C00080010002000400080000B2E7DA112>I<70F8FCFC74040404080810102040060E7C
840D>44 D<FFC0FFC00A027F8A0F>I<70F8F8F87005057C840D>I<03F0000E1C001C0E00
180600380700700380700380700380700380F003C0F003C0F003C0F003C0F003C0F003C0
F003C0F003C0F003C0F003C0F003C0F003C0F003C0700380700380700380780780380700
1806001C0E000E1C0003F000121F7E9D17>48 D<018003800F80F3800380038003800380
038003800380038003800380038003800380038003800380038003800380038003800380
0380038007C0FFFE0F1E7C9D17>I<03F0000C1C00100E00200700400780800780F007C0
F803C0F803C0F803C02007C00007C0000780000780000F00000E00001C00003800007000
00600000C0000180000300000600400C00401800401000803FFF807FFF80FFFF80121E7E
9D17>I<03F0000C1C00100E00200F00780F80780780780780380F80000F80000F00000F
00000E00001C0000380003F000003C00000E00000F000007800007800007C02007C0F807
C0F807C0F807C0F00780400780400F00200E001C3C0003F000121F7E9D17>I<00060000
0600000E00000E00001E00002E00002E00004E00008E00008E00010E00020E00020E0004
0E00080E00080E00100E00200E00200E00400E00C00E00FFFFF0000E00000E00000E0000
0E00000E00000E00000E0000FFE0141E7F9D17>I<1803001FFE001FFC001FF8001FE000
10000010000010000010000010000010000011F000161C00180E00100700100780000380
0003800003C00003C00003C07003C0F003C0F003C0E00380400380400700200600100E00
0C380003E000121F7E9D17>I<007C000182000701000E03800C07801C07803803003800
00780000700000700000F1F000F21C00F40600F80700F80380F80380F003C0F003C0F003
C0F003C0F003C07003C07003C07003803803803807001807000C0E00061C0001F000121F
7E9D17>I<4000007FFFC07FFF807FFF8040010080020080020080040000080000080000
100000200000200000400000400000C00000C00001C00001800003800003800003800003
8000078000078000078000078000078000078000078000030000121F7D9D17>I<03F000
0C0C001006003003002001806001806001806001807001807803003E03003F06001FC800
0FF00003F80007FC000C7E00103F00300F806003804001C0C001C0C000C0C000C0C000C0
C000806001802001001002000C0C0003F000121F7E9D17>I<03F0000E18001C0C003806
00380700700700700380F00380F00380F003C0F003C0F003C0F003C0F003C07007C07007
C03807C0180BC00E13C003E3C0000380000380000380000700300700780600780E00700C
002018001070000FC000121F7E9D17>I<70F8F8F8700000000000000000000070F8F8F8
7005147C930D>I<70F8F8F8700000000000000000000070F0F8F8780808081010102020
40051D7C930D>I<7FFFFFE0FFFFFFF00000000000000000000000000000000000000000
000000000000000000000000FFFFFFF07FFFFFE01C0C7D9023>61
D<000100000003800000038000000380000007C0000007C0000007C0000009E0000009E0
000009E0000010F0000010F0000010F00000207800002078000020780000403C0000403C
0000403C0000801E0000801E0000FFFE0001000F0001000F0001000F0002000780020007
8002000780040003C00E0003C01F0007E0FFC03FFE1F207F9F22>65
D<FFFFE0000F80380007801E0007801F0007800F0007800F8007800F8007800F8007800F
8007800F8007800F0007801F0007801E0007803C0007FFF00007803C0007801E0007800F
0007800F8007800780078007C0078007C0078007C0078007C0078007C00780078007800F
8007800F0007801F000F803C00FFFFF0001A1F7E9E20>I<000FC040007030C001C009C0
038005C0070003C00E0001C01E0000C01C0000C03C0000C07C0000407C00004078000040
F8000000F8000000F8000000F8000000F8000000F8000000F8000000F8000000F8000000
780000007C0000407C0000403C0000401C0000401E0000800E0000800700010003800200
01C0040000703800000FC0001A217D9F21>I<FFFFE0000F803C0007801E000780070007
800380078003C0078001E0078001E0078001F0078000F0078000F0078000F8078000F807
8000F8078000F8078000F8078000F8078000F8078000F8078000F8078000F0078000F007
8000F0078001E0078001E0078003C0078003800780070007800E000F803C00FFFFE0001D
1F7E9E23>I<FFFFFF000F800F0007800300078003000780010007800180078000800780
008007800080078080800780800007808000078080000781800007FF8000078180000780
800007808000078080000780800007800020078000200780002007800040078000400780
0040078000C0078000C0078001800F800F80FFFFFF801B1F7E9E1F>I<FFFFFF000F800F
000780030007800300078001000780018007800080078000800780008007800080078080
000780800007808000078080000781800007FF8000078180000780800007808000078080
000780800007800000078000000780000007800000078000000780000007800000078000
000FC00000FFFE0000191F7E9E1E>I<000FE0200078186000E004E0038002E0070001E0
0F0000E01E0000601E0000603C0000603C0000207C00002078000020F8000000F8000000
F8000000F8000000F8000000F8000000F8000000F8007FFCF80003E0780001E07C0001E0
3C0001E03C0001E01E0001E01E0001E00F0001E0070001E0038002E000E0046000781820
000FE0001E217D9F24>I<FFF8FFF80F800F8007800F0007800F0007800F0007800F0007
800F0007800F0007800F0007800F0007800F0007800F0007800F0007800F0007FFFF0007
800F0007800F0007800F0007800F0007800F0007800F0007800F0007800F0007800F0007
800F0007800F0007800F0007800F0007800F000F800F80FFF8FFF81D1F7E9E22>I<FFFC
0FC007800780078007800780078007800780078007800780078007800780078007800780
07800780078007800780078007800780078007800FC0FFFC0E1F7F9E10>I<FFFE000FC0
000780000780000780000780000780000780000780000780000780000780000780000780
000780000780000780000780000780000780000780020780020780020780020780060780
0407800407800C07801C0F807CFFFFFC171F7E9E1C>76 D<FF80001FF80F80001F800780
001F0005C0002F0005C0002F0005C0002F0004E0004F0004E0004F000470008F00047000
8F000470008F000438010F000438010F000438010F00041C020F00041C020F00041C020F
00040E040F00040E040F00040E040F000407080F000407080F000407080F000403900F00
0403900F000401E00F000401E00F000401E00F000E00C00F001F00C01F80FFE0C1FFF825
1F7E9E2A>I<FF803FF807C007C007C0038005E0010005E0010004F00100047801000478
0100043C0100043C0100041E0100040F0100040F010004078100040781000403C1000401
E1000401E1000400F1000400F1000400790004003D0004003D0004001F0004001F000400
0F0004000700040007000E0003001F000300FFE001001D1F7E9E22>I<001F800000F0F0
0001C0380007801E000F000F000E0007001E0007803C0003C03C0003C07C0003E0780001
E0780001E0F80001F0F80001F0F80001F0F80001F0F80001F0F80001F0F80001F0F80001
F0F80001F0780001E07C0003E07C0003E03C0003C03C0003C01E0007800E0007000F000F
0007801E0001C0380000F0F000001F80001C217D9F23>I<FFFFE0000F80780007801C00
07801E0007800F0007800F8007800F8007800F8007800F8007800F8007800F8007800F00
07801E0007801C000780780007FFE0000780000007800000078000000780000007800000
07800000078000000780000007800000078000000780000007800000078000000FC00000
FFFC0000191F7E9E1F>I<FFFF80000F80F0000780780007803C0007801E0007801E0007
801F0007801F0007801F0007801F0007801E0007801E0007803C00078078000780F00007
FF80000781C0000780E0000780F0000780700007807800078078000780780007807C0007
807C0007807C0007807C0407807E0407803E040FC01E08FFFC0F10000003E01E207E9E21
>82 D<07E0800C1980100780300380600180600180E00180E00080E00080E00080F00000
F000007800007F00003FF0001FFC000FFE0003FF00001F800007800003C00003C00001C0
8001C08001C08001C08001C0C00180C00380E00300F00600CE0C0081F80012217D9F19>
I<7FFFFFE0780F01E0600F0060400F0020400F0020C00F0030800F0010800F0010800F00
10800F0010000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F00
00000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F00
00000F0000000F0000001F800007FFFE001C1F7E9E21>I<FFF07FF81FF01F800FC007C0
0F00078003800F00078001000F0007C00100078007C00200078007C00200078007C00200
03C009E0040003C009E0040003C009E0040003E010F00C0001E010F0080001E010F00800
01F02078080000F02078100000F02078100000F0403C10000078403C20000078403C2000
0078C03E2000003C801E4000003C801E4000003C801E4000001F000F8000001F000F8000
001F000F8000001E00078000000E00070000000E00070000000C00030000000400020000
2C207F9E2F>87 D<7FF83FF80FE00FC007C0070003C0020001E0040001F00C0000F00800
00781000007C1000003C2000003E4000001E4000000F8000000F8000000780000003C000
0007E0000005E0000009F0000018F8000010780000207C0000603C0000401E0000801F00
01800F0001000780020007C0070003C01F8007E0FFE01FFE1F1F7F9E22>I<FEFEC0C0C0
C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0C0
C0C0FEFE072D7CA10D>91 D<080410082010201040204020804080408040B85CFC7EFC7E
7C3E381C0F0E7B9F17>I<FEFE0606060606060606060606060606060606060606060606
060606060606060606060606060606060606FEFE072D7FA10D>I<1FE000303000781800
781C00300E00000E00000E00000E0000FE00078E001E0E00380E00780E00F00E10F00E10
F00E10F01E10781E103867200F83C014147E9317>97 D<0E0000FE00000E00000E00000E
00000E00000E00000E00000E00000E00000E00000E00000E3E000EC3800F01C00F00E00E
00E00E00700E00700E00780E00780E00780E00780E00780E00780E00700E00700E00E00F
00E00D01C00CC300083E0015207F9F19>I<03F80E0C1C1E381E380C70007000F000F000
F000F000F000F00070007000380138011C020E0C03F010147E9314>I<000380003F8000
038000038000038000038000038000038000038000038000038000038003E380061B801C
0780380380380380700380700380F00380F00380F00380F00380F00380F0038070038070
03803803803807801C07800E1B8003E3F815207E9F19>I<03F0000E1C001C0E00380700
380700700700700380F00380F00380FFFF80F00000F00000F00000700000700000380080
1800800C010007060001F80011147F9314>I<007C00C6018F038F070607000700070007
00070007000700FFF0070007000700070007000700070007000700070007000700070007
0007000700070007007FF01020809F0E>I<0000E003E3300E3C301C1C30380E00780F00
780F00780F00780F00780F00380E001C1C001E380033E000200000200000300000300000
3FFE001FFF800FFFC03001E0600070C00030C00030C00030C000306000603000C01C0380
03FC00141F7F9417>I<0E0000FE00000E00000E00000E00000E00000E00000E00000E00
000E00000E00000E00000E3E000E43000E81800F01C00F01C00E01C00E01C00E01C00E01
C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C0FFE7FC1620
7F9F19>I<1C001E003E001E001C000000000000000000000000000E007E000E000E000E
000E000E000E000E000E000E000E000E000E000E000E000E000E000E00FFC00A1F809E0C
>I<00E001F001F001F000E0000000000000000000000000007007F000F0007000700070
007000700070007000700070007000700070007000700070007000700070007000700070
6070F060F0C061803F000C28829E0E>I<0E0000FE00000E00000E00000E00000E00000E
00000E00000E00000E00000E00000E00000E0FF00E03C00E03000E02000E04000E08000E
10000E30000E70000EF8000F38000E1C000E1E000E0E000E07000E07800E03800E03C00E
03E0FFCFF815207F9F18>I<0E00FE000E000E000E000E000E000E000E000E000E000E00
0E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E00
0E00FFE00B20809F0C>I<0E1F01F000FE618618000E81C81C000F00F00E000F00F00E00
0E00E00E000E00E00E000E00E00E000E00E00E000E00E00E000E00E00E000E00E00E000E
00E00E000E00E00E000E00E00E000E00E00E000E00E00E000E00E00E000E00E00E00FFE7
FE7FE023147F9326>I<0E3E00FE43000E81800F01C00F01C00E01C00E01C00E01C00E01
C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C0FFE7FC1614
7F9319>I<01F800070E001C03803801C03801C07000E07000E0F000F0F000F0F000F0F0
00F0F000F0F000F07000E07000E03801C03801C01C0380070E0001F80014147F9317>I<
0E3E00FEC3800F01C00F00E00E00E00E00F00E00700E00780E00780E00780E00780E0078
0E00780E00700E00F00E00E00F01E00F01C00EC3000E3E000E00000E00000E00000E0000
0E00000E00000E00000E0000FFE000151D7F9319>I<03E0800619801C05803C07803803
80780380700380F00380F00380F00380F00380F00380F003807003807803803803803807
801C0B800E138003E380000380000380000380000380000380000380000380000380003F
F8151D7E9318>I<0E78FE8C0F1E0F1E0F0C0E000E000E000E000E000E000E000E000E00
0E000E000E000E000E00FFE00F147F9312>I<1F9030704030C010C010C010E00078007F
803FE00FF00070803880188018C018C018E030D0608F800D147E9312>I<020002000200
060006000E000E003E00FFF80E000E000E000E000E000E000E000E000E000E000E000E08
0E080E080E080E080610031001E00D1C7F9B12>I<0E01C0FE1FC00E01C00E01C00E01C0
0E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E03C0
0603C0030DC001F1FC16147F9319>I<FF83F81E01E01C00C00E00800E00800E00800701
0007010003820003820003820001C40001C40001EC0000E80000E8000070000070000070
0000200015147F9318>I<FF9FE1FC3C0780701C0300601C0380200E0380400E0380400E
03C0400707C0800704C0800704E080038861000388710003C8730001D0320001D03A0000
F03C0000E01C0000E01C0000601800004008001E147F9321>I<7FC3FC0F01E00701C007
018003810001C20000E40000EC00007800003800003C00007C00004E0000870001070003
03800201C00601E01E01E0FF07FE1714809318>I<FF83F81E01E01C00C00E00800E0080
0E008007010007010003820003820003820001C40001C40001EC0000E80000E800007000
007000007000002000002000004000004000004000F08000F08000F100006200003C0000
151D7F9318>I<3FFF380E200E201C40384078407000E001E001C00380078007010E011E
011C0338027006700EFFFE10147F9314>I E /Fr 46 122 df<0000C018000000C01800
0000C0180000018030000001803000000180300000018030000003006000000300600000
0300600000030060000003006000000600C000000600C000000600C000000600C000000C
018000FFFFFFFFC0FFFFFFFFC00018030000001803000000180300000018030000003006
0000003006000000300600000030060000FFFFFFFFC0FFFFFFFFC000600C000000C01800
0000C018000000C018000000C01800000180300000018030000001803000000180300000
03006000000300600000030060000003006000000600C000000600C000000600C0000022
2D7DA229>35 D<70F8FCFC7404040404080810102040060F7C840E>44
D<FFE0FFE00B027F8B10>I<70F8F8F87005057C840E>I<01F000071C000C060018030038
03803803807001C07001C07001C07001C0F001E0F001E0F001E0F001E0F001E0F001E0F0
01E0F001E0F001E0F001E0F001E0F001E0F001E0F001E07001C07001C07001C07803C038
03803803801C07000C0600071C0001F00013227EA018>48 D<008003800F80F380038003
800380038003800380038003800380038003800380038003800380038003800380038003
80038003800380038003800380038007C0FFFE0F217CA018>I<03F8000C1E0010070020
07804007C07807C07803C07807C03807C0000780000780000700000F00000E0000380003
F000001C00000F000007800007800003C00003C00003E02003E07003E0F803E0F803E0F0
03C04003C0400780200780100F000C1C0003F00013227EA018>51
D<000200000600000E00000E00001E00001E00002E00004E00004E00008E00008E00010E
00020E00020E00040E00040E00080E00100E00100E00200E00200E00400E00800E00FFFF
F8000E00000E00000E00000E00000E00000E00000E00001F0001FFF015217FA018>I<10
00801E07001FFF001FFE001FF80013E00010000010000010000010000010000010000010
F800130E001407001803801003800001C00001C00001E00001E00001E00001E07001E0F0
01E0F001E0E001C08001C04003C04003802007001006000C1C0003F00013227EA018>I<
007E0001C1000300800601C00E03C01C03C0180180380000380000780000700000700000
F0F800F30C00F40600F40300F80380F801C0F001C0F001E0F001E0F001E0F001E0F001E0
7001E07001E07001E03801C03801C01803801C03000C0600070C0001F00013227EA018>
I<01F800060E000803001001802001802000C06000C06000C06000C07000C07801803E01
003F02001FC4000FF80003F80003FC00067F00083F80100F803007C06001C06000E0C000
E0C00060C00060C00060C000606000406000C03000801803000E0E0003F00013227EA018
>56 D<01F000060C000C0600180700380380700380700380F001C0F001C0F001C0F001E0
F001E0F001E0F001E0F001E07001E07003E03803E01805E00C05E00619E003E1E00001C0
0001C00001C0000380000380300300780700780600700C002018001030000FC00013227E
A018>I<0001800000018000000180000003C0000003C0000003C0000005E0000005E000
000DF0000008F0000008F0000010F800001078000010780000203C0000203C0000203C00
00401E0000401E0000401E0000800F0000800F0000FFFF000100078001000780030007C0
020003C0020003C0040003E0040001E0040001E00C0000F00C0000F03E0001F8FF800FFF
20237EA225>65 D<0007E0100038183000E0063001C00170038000F0070000F00E000070
1E0000701C0000303C0000303C0000307C0000107800001078000010F8000000F8000000
F8000000F8000000F8000000F8000000F8000000F800000078000000780000107C000010
3C0000103C0000101C0000201E0000200E000040070000400380008001C0010000E00200
00381C000007E0001C247DA223>67 D<FFFFFFC00F8007C0078001C0078000C007800040
078000400780006007800020078000200780002007802020078020000780200007802000
078060000780E00007FFE0000780E0000780600007802000078020000780200007802008
0780000807800008078000100780001007800010078000300780003007800070078000E0
0F8003E0FFFFFFE01D227EA121>69 D<FFFFFFC00F8007C0078001C0078000C007800040
078000400780006007800020078000200780002007802020078020000780200007802000
078060000780E00007FFE0000780E0000780600007802000078020000780200007802000
078000000780000007800000078000000780000007800000078000000780000007800000
0FC00000FFFE00001B227EA120>I<FFFC0FC00780078007800780078007800780078007
800780078007800780078007800780078007800780078007800780078007800780078007
800780078007800FC0FFFC0E227EA112>73 D<FFC00003FF0FC00003F007C00003E005E0
0005E005E00005E004F00009E004F00009E004F00009E004780011E004780011E0047800
11E0043C0021E0043C0021E0043C0021E0041E0041E0041E0041E0040F0081E0040F0081
E0040F0081E004078101E004078101E004078101E00403C201E00403C201E00401E401E0
0401E401E00401E401E00400F801E00400F801E00400F801E004007001E00E007001E01F
007003F0FFE0203FFF28227EA12D>77 D<FF8007FF07C000F807C0007005E0002004F000
2004F0002004780020047C0020043C0020041E0020041F0020040F002004078020040780
200403C0200401E0200401E0200400F0200400F8200400782004003C2004003E2004001E
2004000F2004000F20040007A0040003E0040003E0040001E0040001E0040000E00E0000
601F000060FFE0002020227EA125>I<FFFFF0000F803C0007800F0007800780078007C0
078003C0078003E0078003E0078003E0078003E0078003E0078003E0078003C0078007C0
0780078007800F0007803C0007FFF0000780000007800000078000000780000007800000
078000000780000007800000078000000780000007800000078000000780000007800000
0FC00000FFFC00001B227EA121>80 D<FFFFE000000F803C000007800E00000780078000
078007C000078003C000078003E000078003E000078003E000078003E000078003E00007
8003C000078007C000078007800007800E000007803C000007FFE0000007807000000780
38000007801C000007801E000007800E000007800F000007800F000007800F000007800F
000007800F800007800F800007800F800007800F808007800FC080078007C0800FC003C1
00FFFC01E2000000007C0021237EA124>82 D<03F0200C0C601802603001E07000E06000
60E00060E00060E00020E00020E00020F00000F000007800007F00003FF0001FFE000FFF
0003FF80003FC00007E00001E00000F00000F0000070800070800070800070800070C000
60C00060E000C0F000C0C80180C6070081FC0014247DA21B>I<7FFFFFF8780780786007
8018400780084007800840078008C007800C800780048007800480078004800780040007
800000078000000780000007800000078000000780000007800000078000000780000007
800000078000000780000007800000078000000780000007800000078000000780000007
80000007800000078000000FC00003FFFF001E227EA123>I<0FE0001838003C0C003C0E
0018070000070000070000070000FF0007C7001E07003C0700780700700700F00708F007
08F00708F00F087817083C23900FC1E015157E9418>97 D<0E0000FE00001E00000E0000
0E00000E00000E00000E00000E00000E00000E00000E00000E00000E00000E1F000E61C0
0E80600F00300E00380E003C0E001C0E001E0E001E0E001E0E001E0E001E0E001E0E001E
0E001C0E003C0E00380F00700C80600C41C0083F0017237FA21B>I<01FE000703000C07
801C0780380300780000700000F00000F00000F00000F00000F00000F00000F000007000
007800403800401C00800C010007060001F80012157E9416>I<0000E0000FE00001E000
00E00000E00000E00000E00000E00000E00000E00000E00000E00000E00000E001F8E007
04E00C02E01C01E03800E07800E07000E0F000E0F000E0F000E0F000E0F000E0F000E0F0
00E07000E07800E03800E01801E00C02E0070CF001F0FE17237EA21B>I<01FC00070700
0C03801C01C03801C07801E07000E0F000E0FFFFE0F00000F00000F00000F00000F00000
7000007800203800201C00400E008007030000FC0013157F9416>I<003C00C6018F038F
030F070007000700070007000700070007000700FFF80700070007000700070007000700
0700070007000700070007000700070007000700070007807FF8102380A20F>I<000070
01F198071E180E0E181C07001C07003C07803C07803C07803C07801C07001C07000E0E00
0F1C0019F0001000001000001800001800001FFE000FFFC00FFFE03800F0600030400018
C00018C00018C000186000306000303800E00E038003FE0015217F9518>I<0E0000FE00
001E00000E00000E00000E00000E00000E00000E00000E00000E00000E00000E00000E00
000E1F800E60C00E80E00F00700F00700E00700E00700E00700E00700E00700E00700E00
700E00700E00700E00700E00700E00700E00700E00700E0070FFE7FF18237FA21B>I<1C
001E003E001E001C00000000000000000000000000000000000E00FE001E000E000E000E
000E000E000E000E000E000E000E000E000E000E000E000E000E000E00FFC00A227FA10E
>I<01C003E003E003E001C00000000000000000000000000000000001E00FE001E000E0
00E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E0
00E000E000E000E060E0F0C0F18061803E000B2C82A10F>I<0E0000FE00001E00000E00
000E00000E00000E00000E00000E00000E00000E00000E00000E00000E00000E03FC0E01
F00E01C00E01800E02000E04000E08000E10000E38000EF8000F1C000E1E000E0E000E07
000E07800E03C00E01C00E01E00E00F00E00F8FFE3FE17237FA21A>I<0E00FE001E000E
000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E
000E000E000E000E000E000E000E000E000E000E000E000E00FFE00B237FA20E>I<0E1F
C07F00FE60E183801E807201C00F003C00E00F003C00E00E003800E00E003800E00E0038
00E00E003800E00E003800E00E003800E00E003800E00E003800E00E003800E00E003800
E00E003800E00E003800E00E003800E00E003800E00E003800E0FFE3FF8FFE27157F942A
>I<0E1F80FE60C01E80E00F00700F00700E00700E00700E00700E00700E00700E00700E
00700E00700E00700E00700E00700E00700E00700E00700E0070FFE7FF18157F941B>I<
01FC000707000C01801800C03800E0700070700070F00078F00078F00078F00078F00078
F00078F000787000707800F03800E01C01C00E038007070001FC0015157F9418>I<0E1F
00FE61C00E80600F00700E00380E003C0E001C0E001E0E001E0E001E0E001E0E001E0E00
1E0E001E0E003C0E003C0E00380F00700E80E00E41C00E3F000E00000E00000E00000E00
000E00000E00000E00000E00000E0000FFE000171F7F941B>I<0E3CFE461E8F0F0F0F06
0F000E000E000E000E000E000E000E000E000E000E000E000E000E000F00FFF010157F94
13>114 D<0F8830786018C018C008C008E008F0007F803FE00FF001F8003C801C800C80
0CC00CC008E018D0308FC00E157E9413>I<02000200020002000600060006000E001E00
3E00FFF80E000E000E000E000E000E000E000E000E000E000E000E040E040E040E040E04
0E040708030801F00E1F7F9E13>I<0E0070FE07F01E00F00E00700E00700E00700E0070
0E00700E00700E00700E00700E00700E00700E00700E00700E00700E00F00E00F0060170
03827800FC7F18157F941B>I<FFC1FE1E00780E00300E00200E00200700400700400380
8003808003808001C10001C10000E20000E20000E2000074000074000038000038000038
0000100017157F941A>I<FF8FF8FF1E01E03C1C01C0180E01C0180E01E0100E01E01007
026020070270200702702003843040038438400384384001C8188001C81C8001C81C8000
F00D0000F00F0000F00F0000600600006006000060060020157F9423>I<FFC1FE1E0078
0E00300E00200E002007004007004003808003808003808001C10001C10000E20000E200
00E200007400007400003800003800003800001000001000002000002000002000004000
F04000F08000F180004300003C0000171F7F941A>121 D E /Fs
20 118 df<FFFF80FFFF80FFFF8011037F9016>45 D<FFFFFFE00000FFFFFFFC000007E0
007F000003E0000F800003E00003C00003E00001E00003E00000F00003E00000780003E0
00003C0003E000001E0003E000001E0003E000000F0003E000000F0003E000000F8003E0
0000078003E0000007C003E0000007C003E0000003C003E0000003C003E0000003E003E0
000003E003E0000003E003E0000003E003E0000003E003E0000003E003E0000003E003E0
000003E003E0000003E003E0000003E003E0000003E003E0000003C003E0000003C003E0
000007C003E0000007C003E00000078003E00000078003E000000F8003E000000F0003E0
00001F0003E000001E0003E000003C0003E00000780003E00000F80003E00001F00003E0
0003E00003E0000F800007E0003F0000FFFFFFFC0000FFFFFFE000002B317CB033>68
D<FFFF80FFFF8007F00003E00003E00003E00003E00003E00003E00003E00003E00003E0
0003E00003E00003E00003E00003E00003E00003E00003E00003E00003E00003E00003E0
0003E00003E00003E00003E00003E00003E00003E00003E00003E00003E00003E00003E0
0003E00003E00003E00003E00003E00003E00003E00003E00003E00003E00007F000FFFF
80FFFF8011317DB017>73 D<FFF00000007FF8FFF00000007FF807F00000007F0002F800
0000BE0002F8000000BE0002F8000000BE00027C0000013E00027C0000013E00023E0000
023E00023E0000023E00023E0000023E00021F0000043E00021F0000043E00021F000004
3E00020F8000083E00020F8000083E00020F8000083E000207C000103E000207C000103E
000207C000103E000203E000203E000203E000203E000201F000403E000201F000403E00
0201F000403E000200F800803E000200F800803E000200F800803E0002007C01003E0002
007C01003E0002007C01003E0002003E02003E0002003E02003E0002003E02003E000200
1F04003E0002001F04003E0002000F88003E0002000F88003E0002000F88003E00020007
D0003E00020007D0003E00020007D0003E00020003E0003E00020003E0003E00020003E0
003E00070001C0003E000F8001C0007F00FFF801C00FFFF8FFF800800FFFF835317CB03D
>77 D<FFFFFFC000FFFFFFF80007E0007E0003E0001F0003E000078003E00003C003E000
01E003E00001F003E00001F003E00000F003E00000F803E00000F803E00000F803E00000
F803E00000F803E00000F803E00000F003E00001F003E00001E003E00003E003E00003C0
03E000078003E0001F0003E0007C0003FFFFF00003E000000003E000000003E000000003
E000000003E000000003E000000003E000000003E000000003E000000003E000000003E0
00000003E000000003E000000003E000000003E000000003E000000003E000000003E000
000003E000000003E000000003E000000007F0000000FFFF800000FFFF80000025317CB0
2D>80 D<007F802001FFE02007C078600F001C601E0006E03C0003E0380001E0780000E0
700000E070000060F0000060F0000060F0000020F0000020F0000020F8000020F8000000
7C0000007E0000003F0000003FC000001FF800000FFF800007FFF80003FFFC0000FFFF00
000FFF800000FFC000001FE0000007E0000003F0000001F0000000F0000000F8000000F8
8000007880000078800000788000007880000078C0000078C0000070E00000F0E00000E0
F00000E0F80001C0EC000380C7000700C1F01E00807FFC00800FF0001D337CB125>83
D<00FE00000303C0000C00E00010007000100038003C003C003E001C003E001E003E001E
0008001E0000001E0000001E0000001E00000FFE0000FC1E0003E01E000F801E001F001E
003E001E003C001E007C001E00F8001E04F8001E04F8001E04F8003E04F8003E0478003E
047C005E043E008F080F0307F003FC03E01E1F7D9E21>97 D<003F8000E0600380180700
040F00041E001E1C003E3C003E7C003E7C0008780000F80000F80000F80000F80000F800
00F80000F80000F80000F800007800007C00007C00003C00011E00011E00020F00020700
0403801800E060003F80181F7D9E1D>99 D<000001E000003FE000003FE0000003E00000
01E0000001E0000001E0000001E0000001E0000001E0000001E0000001E0000001E00000
01E0000001E0000001E0000001E0000001E0000001E0001F81E000F061E001C019E00780
05E00F0003E00E0003E01E0001E03C0001E03C0001E07C0001E0780001E0F80001E0F800
01E0F80001E0F80001E0F80001E0F80001E0F80001E0F80001E0F80001E0780001E07800
01E03C0001E03C0001E01C0001E01E0003E00E0005E0070009E0038011F000E061FF003F
81FF20327DB125>I<003F800000E0E0000380380007003C000E001E001E001E001C000F
003C000F007C000F0078000F8078000780F8000780F8000780FFFFFF80F8000000F80000
00F8000000F8000000F8000000F8000000780000007C0000003C0000003C0000801E0000
800E0001000F0002000780020001C00C0000F03000001FC000191F7E9E1D>I<0007E000
1C1000383800707C00E07C01E07C01C03803C00003C00003C00003C00003C00003C00003
C00003C00003C00003C00003C00003C000FFFFC0FFFFC003C00003C00003C00003C00003
C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003
C00003C00003C00003C00003C00003C00003C00003C00003C00003C00007E0007FFF007F
FF0016327FB114>I<000000F0007F030801C1C41C0380E81C070070080F0078001E003C
001E003C003E003E003E003E003E003E003E003E003E003E003E003E001E003C001E003C
000F007800070070000780E00009C1C000087F0000180000001800000018000000180000
00180000001C0000000E0000000FFFF80007FFFF0003FFFF800E000FC0180001E0300000
F070000070E0000038E0000038E0000038E0000038E00000387000007070000070380000
E01C0001C00700070001C01C00003FE0001E2F7E9F21>I<07000F801F801F800F800700
000000000000000000000000000000000000000000000780FF80FF800F80078007800780
078007800780078007800780078007800780078007800780078007800780078007800780
0780078007800FC0FFF8FFF80D307EAF12>105 D<0780FE001FC000FF83078060F000FF
8C03C18078000F9001E2003C0007A001E4003C0007A000F4001E0007C000F8001E0007C0
00F8001E00078000F0001E00078000F0001E00078000F0001E00078000F0001E00078000
F0001E00078000F0001E00078000F0001E00078000F0001E00078000F0001E00078000F0
001E00078000F0001E00078000F0001E00078000F0001E00078000F0001E00078000F000
1E00078000F0001E00078000F0001E00078000F0001E00078000F0001E00078000F0001E
000FC001F8003F00FFFC1FFF83FFF0FFFC1FFF83FFF0341F7E9E38>109
D<0780FE0000FF83078000FF8C03C0000F9001E00007A001E00007A000F00007C000F000
07C000F000078000F000078000F000078000F000078000F000078000F000078000F00007
8000F000078000F000078000F000078000F000078000F000078000F000078000F0000780
00F000078000F000078000F000078000F000078000F000078000F000078000F0000FC001
F800FFFC1FFF80FFFC1FFF80211F7E9E25>I<001FC00000F0780001C01C00070007000F
0007801E0003C01C0001C03C0001E03C0001E0780000F0780000F0780000F0F80000F8F8
0000F8F80000F8F80000F8F80000F8F80000F8F80000F8F80000F8780000F07C0001F03C
0001E03C0001E01E0003C01E0003C00F00078007800F0001C01C0000F07800001FC0001D
1F7E9E21>I<0783E0FF8C18FF907C0F907C07A07C07C03807C00007C00007C000078000
078000078000078000078000078000078000078000078000078000078000078000078000
0780000780000780000780000780000780000FC000FFFE00FFFE00161F7E9E19>114
D<01FC100E03301800F0300070600030E00030E00010E00010E00010F00010F800007E00
003FF0001FFF000FFFC003FFE0003FF00001F80000F880003C80003C80001CC0001CC000
1CE0001CE00018F00038F00030CC0060C301C080FE00161F7E9E1A>I<00400000400000
400000400000400000C00000C00000C00001C00001C00003C00007C0000FC0001FFFE0FF
FFE003C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003
C00003C00003C00003C00003C00003C01003C01003C01003C01003C01003C01003C01003
C01001C02001E02000E0400078C0001F00142C7FAB19>I<078000F000FF801FF000FF80
1FF0000F8001F000078000F000078000F000078000F000078000F000078000F000078000
F000078000F000078000F000078000F000078000F000078000F000078000F000078000F0
00078000F000078000F000078000F000078000F000078000F000078000F000078001F000
078001F000078001F000038002F00003C004F00001C008F800007030FF80001FC0FF8021
1F7E9E25>I E /Ft 5 85 df<00000000600000000000700000000000F00000000001F0
0000000001F00000000003F00000000003F00000000007F00000000007F0000000000FF0
000000000FF0000000001BF00000000033F00000000033F00000000063F00000000063F8
00000000C1F800000000C1F80000000181F80000000381F80000000301F80000000601F8
0000000601F80000000C01F80000000C01F80000001801F80000001801F80000003001F8
0000006001F80000006001F8000000C001F8000000C001FC000001FFFFFC000001FFFFFC
0000030000FC0000070000FC0000060000FC00000C0000FC00000C0000FC0000180000FC
0000180000FC0000300000FC0000700000FC0000600000FC0000E00000FC0001E00000FC
0003E00000FE000FF00001FE00FFFE003FFFF0FFFE003FFFF02C327CB135>65
D<000FFFFFFF0000000FFFFFFFC00000003F8007F00000003F8001F80000003F00007C00
00003F00007E0000007F00003E0000007F00001F0000007E00001F0000007E00001F8000
00FE00000F800000FE00000F800000FC00000FC00000FC00000FC00001FC00000FC00001
FC00000FC00001F800000FC00001F800000FC00003F800000FC00003F800001FC00003F0
00001FC00003F000001FC00007F000001FC00007F000001F800007E000003F800007E000
003F80000FE000003F80000FE000003F00000FC000007F00000FC000007F00001FC00000
7E00001FC00000FE00001F800000FC00001F800000FC00003F800001F800003F800001F0
00003F000003F000003F000007E000007F000007C000007F00000FC000007E00001F8000
007E00003F000000FE00007E000000FE0000F8000000FC0001F0000000FC0007E0000001
FC003F800000FFFFFFFE000000FFFFFFF000000032317CB036>68
D<000FFFFFFFFE000FFFFFFFFE00003F8000FE00003F80003E00003F00001E00003F0000
1E00007F00000C00007F00000C00007E00000C00007E00000C0000FE00000C0000FE0000
0C0000FC00000C0000FC00000C0001FC00001C0001FC00C0180001F800C0000001F800C0
000003F801C0000003F801C0000003F00180000003F00380000007F00F80000007FFFF80
000007FFFF00000007E00F0000000FE0070000000FE0070000000FC0060000000FC00600
00001FC00E0000001FC00E0000001F800C0000001F80000000003F80000000003F800000
00003F00000000003F00000000007F00000000007F00000000007E00000000007E000000
0000FE0000000000FE0000000000FC0000000000FC0000000001FC00000000FFFFFC0000
00FFFFFC0000002F317CB02F>70 D<000FFFFFF800000FFFFFFF0000003F801FC000003F
8007E000003F0003F000003F0001F800007F0000FC00007F0000FC00007E0000FC00007E
0000FC0000FE0000FC0000FE0001FC0000FC0001FC0000FC0001FC0001FC0001F80001FC
0003F80001F80003F00001F80007E00003F80007E00003F8000F800003F0003F000003F0
007E000007F003F8000007FFFFE0000007FFFF80000007E007C000000FE003F000000FE0
01F000000FC000F800000FC000F800001FC000FC00001FC000FC00001F8000FC00001F80
00FC00003F8001FC00003F8001FC00003F0001FC00003F0001FC00007F0003F800007F00
03F800007E0003F800007E0003F80600FE0003F80E00FE0003F80C00FC0003F80C00FC00
03F81C01FC0001F838FFFFF000FC70FFFFF0007FE0000000001F802F327CB034>82
D<07FFFFFFFFF807FFFFFFFFF80FE007F001F80F8007F000F80E0007E000701E0007E000
701C000FE0007018000FE0007038000FC0007038000FC0007030001FC0006070001FC000
6060001F80006060001F80006060003F8000E0E0003F8000C000003F00000000003F0000
0000007F00000000007F00000000007E00000000007E0000000000FE0000000000FE0000
000000FC0000000000FC0000000001FC0000000001FC0000000001F80000000001F80000
000003F80000000003F80000000003F00000000003F00000000007F00000000007F00000
000007E00000000007E0000000000FE0000000000FE0000000000FC0000000000FC00000
00001FC0000000001FC0000000001F80000000003F80000000007FC00000007FFFFFC000
007FFFFFC000002D3174B033>84 D E end
%%EndProlog
%%BeginSetup
%%Feature: *Resolution 300dpi
TeXDict begin

%%EndSetup
%%Page: 0 1
0 0 bop 795 908 a Ft(D)26 b(R)g(A)f(F)h(T)225 999 y Fs(Do)r(cumen)n(t)
20 b(for)i(a)f(Standard)g(Message-P)n(assing)f(In)n(terface)621
1194 y Fr(Message)c(P)o(assing)h(In)o(terface)e(F)l(orum)766
1320 y(Septem)o(b)q(er)g(14,)h(1993)87 1378 y(This)g(w)o(ork)g(w)o(as)h
(supp)q(orted)g(b)o(y)f(ARP)l(A)g(and)g(NSF)g(under)g(con)o(tract)g(n)o
(um)o(b)q(er)f(###,)g(b)o(y)g(the)192 1436 y(National)h(Science)f(F)l
(oundation)i(Science)e(and)i(T)l(ec)o(hnology)f(Cen)o(ter)f(Co)q(op)q
(erativ)o(e)76 1494 y(Agreemen)o(t)e(No.)22 b(CCR-8809615,)d(and)e(b)o
(y)e(the)h(Commission)e(of)j(the)f(Europ)q(ean)i(Comm)o(unit)n(y)654
1552 y(through)f(Esprit)f(pro)s(ject)g(P6643.)p eop
%%Page: 1 2
1 1 bop 166 45 a Fq(This)20 b(is)h(the)f(result)g(of)f(a)h(LaT)l(eX)g
(run)g(of)g(a)f(draft)g(of)h(a)f(single)j(c)o(hapter)d(of)h(the)g(MPIF)
f(Final)75 102 y(Rep)q(ort)d(do)q(cumen)o(t.)969 2828
y(i)p eop
%%Page: 1 3
1 2 bop 75 356 a Fp(Section)35 b(1)75 564 y Fo(Collecti)q(v)m(e)42
b(Comm)l(unication)75 805 y Fn(1.1)59 b(Intro)r(duction)75
906 y Fq(Collectiv)o(e)13 b(comm)o(unication)f(is)g(de\014ned)h(to)e(b)
q(e)h(comm)o(unication)g(that)f(in)o(v)o(olv)o(es)h(a)f(group)g(of)g
(pro)q(cesses.)75 963 y(The)k(functions)h(pro)o(vided)g(b)o(y)g(the)f
(MPI)g(collectiv)o(e)i(comm)o(unication)f(include:)143
1055 y Fm(\017)23 b Fq(Broadcast)14 b(from)g(one)i(mem)o(b)q(er)f(to)g
(all)h(mem)o(b)q(ers)f(of)g(a)g(group.)143 1148 y Fm(\017)23
b Fq(Barrier)15 b(across)f(all)j(group)d(mem)o(b)q(ers)143
1242 y Fm(\017)23 b Fq(Gather)14 b(data)h(from)f(all)i(group)f(mem)o(b)
q(ers)g(to)g(one)g(mem)o(b)q(er.)143 1335 y Fm(\017)23
b Fq(Scatter)14 b(data)h(from)f(one)i(mem)o(b)q(er)f(to)g(all)h(mem)o
(b)q(ers)f(of)g(a)g(group.)143 1428 y Fm(\017)23 b Fq(Global)15
b(op)q(erations)g(suc)o(h)h(as)e(sum,)h(max,)f(min,)i(etc.,)e(where)h
(the)g(result)g(is)h(kno)o(wn)e(b)o(y)h(all)h(group)189
1485 y(mem)o(b)q(ers)e(and)h(a)g(v)m(ariation)g(where)g(the)g(result)g
(is)g(kno)o(wn)f(b)o(y)h(only)g(one)g(mem)o(b)q(er.)20
b(The)15 b(abilit)o(y)189 1541 y(to)f(ha)o(v)o(e)h(user)g(de\014ned)i
(global)f(op)q(erations.)143 1634 y Fm(\017)23 b Fq(Scan)15
b(across)g(all)h(mem)o(b)q(ers)f(of)g(a)g(group)g(\(also)g(called)h
(parallel)h(pre\014x\).)143 1728 y Fm(\017)23 b Fq(Broadcast)14
b(from)g(all)j(mem)o(b)q(ers)e(to)f(all)j(mem)o(b)q(ers)e(of)g(a)g
(group.)143 1821 y Fm(\017)23 b Fq(Scatter)c(\(or)g(Gather\))g(data)g
(from)g(all)i(mem)o(b)q(ers)f(to)f(all)i(mem)o(b)q(ers)f(of)g(a)f
(group)h(\(also)f(called)189 1877 y(complete)d(exc)o(hange)f(or)g
(all-to-all\).)75 1970 y(While)j(v)o(endors)f(ma)o(y)f(optimize)i
(certain)g(collectiv)o(e)g(routines)g(for)e(their)h(arc)o(hitectures,)h
(a)e(complete)75 2026 y(library)d(of)f(the)g(collectiv)o(e)i(comm)o
(unication)f(routines)g(can)g(b)q(e)g(written)f(en)o(tirely)h(using)g
(p)q(oin)o(t-to-p)q(oin)o(t)75 2083 y(comm)o(unication)j(functions.)166
2139 y(The)d(syn)o(tax)f(and)i(seman)o(tics)f(of)f(the)h(collectiv)o(e)
i(op)q(erations)e(are)g(de\014ned)h(so)f(as)g(to)f(b)q(e)i(consisten)o
(t)75 2195 y(with)22 b(the)f(syn)o(tax)g(and)h(seman)o(tics)f(of)g(the)
h(p)q(oin)o(t-to-p)q(oin)o(t)g(op)q(erations.)39 b(A)21
b(collectiv)o(e)j(op)q(eration)75 2252 y(is)f(executed)h(b)o(y)f(ha)o
(ving)g(all)g(pro)q(cesses)h(in)f(the)g(group)g(call)g(the)g(comm)o
(unication)h(routine,)h(with)75 2308 y(matc)o(hing)14
b(parameters.)19 b(One)14 b(of)g(the)g(k)o(ey)g(parameters)f(is)i(a)e
(comm)o(unicator)h(that)f(de\014nes)i(the)f(group)75
2365 y(of)i(participating)h(pro)q(cesses)f(and)h(pro)o(vides)f(a)g(con)
o(text)f(for)h(the)g(op)q(eration.)23 b(The)16 b(reader)g(is)h
(referred)75 2421 y(to)g(c)o(hapter)g Fl(??)26 b Fq(for)17
b(information)g(concerning)i(comm)o(unication)f(bu\013ers)f(and)h
(their)f(manipulations)75 2478 y(and)g(t)o(yp)q(e)g(matc)o(hing)g
(rules;)i(and)e(to)f(c)o(hapter)h Fl(??)25 b Fq(for)17
b(information)g(on)g(ho)o(w)g(to)f(de\014ne)i(groups)f(and)75
2534 y(create)e(comm)o(unicators.)166 2591 y(Collectiv)o(e)i(routines)g
(can)f(\(but)g(are)g(not)g(required)h(to\))e(return)i(as)e(so)q(on)h
(as)g(their)h(participation)75 2647 y(in)i(the)g(collectiv)o(e)h(comm)o
(unication)f(is)g(complete.)31 b(The)19 b(completion)g(of)f(a)h(call)g
(indicates)h(that)e(the)75 2704 y(caller)d(is)g(no)o(w)e(free)h(to)g
(access)g(the)g(lo)q(cations)h(in)g(the)f(comm)o(unication)g(bu\013er,)
g(or)g(an)o(y)g(other)f(lo)q(cation)-32 46 y Fk(1)-32
103 y(2)-32 159 y(3)-32 215 y(4)-32 272 y(5)-32 328 y(6)-32
385 y(7)-32 441 y(8)-32 498 y(9)-40 554 y(10)-40 611
y(11)-40 667 y(12)-40 724 y(13)-40 780 y(14)-40 836 y(15)-40
893 y(16)-40 949 y(17)-40 1006 y(18)-40 1062 y(19)-40
1119 y(20)-40 1175 y(21)-40 1232 y(22)-40 1288 y(23)-40
1345 y(24)-40 1401 y(25)-40 1457 y(26)-40 1514 y(27)-40
1570 y(28)-40 1627 y(29)-40 1683 y(30)-40 1740 y(31)-40
1796 y(32)-40 1853 y(33)-40 1909 y(34)-40 1966 y(35)-40
2022 y(36)-40 2078 y(37)-40 2135 y(38)-40 2191 y(39)-40
2248 y(40)-40 2304 y(41)-40 2361 y(42)-40 2417 y(43)-40
2474 y(44)-40 2530 y(45)-40 2587 y(46)-40 2643 y(47)-40
2699 y(48)p eop
%%Page: 2 4
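% Annotation (not part of the draft): the text above observes that a
% complete library of the collective routines can be written entirely
% with point-to-point communication.  A hedged C-style sketch, with
% made-up point-to-point names (send, recv, my_rank and group_size are
% assumptions, not the draft's API):
%
%   void naive_bcast(void *buf, int cnt, type_t type, int root, comm_t comm)
%   {
%       if (my_rank(comm) == root) {
%           for (int r = 0; r < group_size(comm); r++)
%               if (r != root) send(buf, cnt, type, r, comm);
%       } else {
%           recv(buf, cnt, type, root, comm);
%       }
%   }
%
% A real implementation would broadcast along a spanning tree for
% logarithmic depth; the linear loop only shows that no primitive beyond
% send/recv is required.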
2 3 bop 75 -100 a Fq(2)747 b Fj(SECTION)16 b(1.)35 b(COLLECTIVE)16
b(COMMUNICA)l(TION)75 45 y Fq(that)d(can)g(b)q(e)i(referenced)f(b)o(y)g
(the)f(collectiv)o(e)i(op)q(eration.)20 b(It)14 b(do)q(es)f(not)h
(indicate)h(that)d(other)i(pro)q(cesses)75 102 y(in)g(the)f(group)f(ha)
o(v)o(e)h(started)f(the)h(op)q(eration)g(\(unless)h(otherwise)f
(indicated)i(in)f(the)f(description)h(of)f(the)75 158
y(op)q(eration\).)19 b(The)11 b(successful)i(completion)g(of)e(a)h
(collectiv)o(e)h(comm)o(unication)f(call)h(ma)o(y)e(dep)q(end)i(on)f
(the)75 214 y(execution)i(of)e(a)h(matc)o(hing)f(call)i(at)e(all)i(pro)
q(cesses)f(in)h(the)f(group.)18 b(Th)o(us,)13 b(a)g(collectiv)o(e)h
(comm)o(unication)75 271 y(call)h(ma)o(y)l(,)e(or)g(ma)o(y)g(not,)h(ha)
o(v)o(e)f(the)h(e\013ect)g(of)f(sync)o(hronizing)i(all)g(calling)g(pro)
q(cesses.)20 b(A)14 b(more)f(detailed)75 327 y(discussion)g(of)e(the)g
(correct)g(use)g(of)g(the)g(collectiv)o(e)i(routines)f(can)f(b)q(e)h
(found)f(at)g(the)g(end)h(of)f(this)h(c)o(hapter.)166
496 y Fi(Discussion:)33 b Fh(The)13 b(collectiv)o(e)g(op)q(erations)g
(do)g(not)f(accept)j(a)d(message)h(tag)f(parameter.)17
b(The)d(rationale)75 553 y(for)j(not)g(using)g(tags)g(is)h(that)f(the)h
(need)g(for)f(distinguishing)f(collectiv)o(e)h(op)q(erations)h(with)f
(the)h(same)e(con)o(text)75 609 y(seldom)g(arises)i(\(since)g(the)g(op)
q(erations)f(are)h(blo)q(c)o(king\);)f(the)h(tag)f(\014eld)g(can)h(b)q
(e)f(used)i(b)o(y)e(the)g(p)q(oin)o(t-to-p)q(oin)o(t)75
666 y(messages)d(that)g(implemen)o(t)d(the)j(collectiv)o(e)g(comm)o
(unication.)75 1062 y Fn(1.2)59 b(Communication)18 b(F)n(unctions)75
1222 y Fq(The)d(k)o(ey)h(concept)f(of)g(the)g(collectiv)o(e)i
(functions)f(is)g(to)f(ha)o(v)o(e)f(a)h(\\group")g(of)g(participating)h
(pro)q(cesses.)75 1278 y(The)j(routines)f(do)g(not)g(ha)o(v)o(e)g(a)g
(group)g(iden)o(ti\014er)i(as)e(an)g(explicit)j(parameter.)28
b(Instead,)19 b(there)f(is)h(a)75 1335 y(comm)o(unicator)f(parameter.)
29 b(In)19 b(this)g(c)o(hapter)g(a)f(comm)o(unicator)g(can)h(b)q(e)g
(though)o(t)f(of)g(as)g(a)g(group)75 1391 y(iden)o(ti\014er)f(merged)e
(with)h(a)e(con)o(text.)75 1705 y Fn(1.3)59 b(Ba)n(rrier)21
b(synchronization)75 1912 y Fg(MPI)p 160 1912 14 2 v
16 w(BARRIER\()16 b(comm)d(\))117 2019 y Fh(IN)155 b
Fg(comm)470 b Fh(comm)o(unicator)11 b(handle)166 2174
y Fg(MPI)p 251 2174 V 16 w(BARRIER)k Fq(blo)q(c)o(ks)g(the)f(caller)h
(un)o(til)g(all)f(group)g(mem)o(b)q(ers)g(ha)o(v)o(e)f(called)j(it;)e
(the)g(call)h(returns)75 2230 y(at)g(an)o(y)f(pro)q(cess)i(only)g
(after)e(all)i(group)f(mem)o(b)q(ers)g(ha)o(v)o(e)g(en)o(tered)h(the)f
(call.)75 2544 y Fn(1.4)59 b(Data)19 b(move)g(functions)75
2704 y Fq(Figure)c(1.1)g(illustrates)h(the)g(di\013eren)o(t)h
(collectiv)o(e)h(mo)o(v)o(e)d(functions)i(supp)q(orted)g(b)o(y)f(MPI.)
1967 46 y Fk(1)1967 103 y(2)1967 159 y(3)1967 215 y(4)1967
272 y(5)1967 328 y(6)1967 385 y(7)1967 441 y(8)1967 498
y(9)1959 554 y(10)1959 611 y(11)1959 667 y(12)1959 724
y(13)1959 780 y(14)1959 836 y(15)1959 893 y(16)1959 949
y(17)1959 1006 y(18)1959 1062 y(19)1959 1119 y(20)1959
1175 y(21)1959 1232 y(22)1959 1288 y(23)1959 1345 y(24)1959
1401 y(25)1959 1457 y(26)1959 1514 y(27)1959 1570 y(28)1959
1627 y(29)1959 1683 y(30)1959 1740 y(31)1959 1796 y(32)1959
1853 y(33)1959 1909 y(34)1959 1966 y(35)1959 2022 y(36)1959
2078 y(37)1959 2135 y(38)1959 2191 y(39)1959 2248 y(40)1959
2304 y(41)1959 2361 y(42)1959 2417 y(43)1959 2474 y(44)1959
2530 y(45)1959 2587 y(46)1959 2643 y(47)1959 2699 y(48)p
eop
%%Page: 3 5
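% Annotation (not part of the draft): a minimal usage sketch of the
% barrier described in the previous section.  The draft fixes no language
% binding here, so the C-style call is an assumption:
%
%   /* every process in comm's group makes the same call and blocks
%      until all group members have entered it */
%   MPI_BARRIER( comm );
%
% Placing such a barrier before a timed collective call is the usual
% idiom for measuring the collective operation in isolation.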
3 4 bop 75 -100 a Fj(1.4.)34 b(D)o(A)l(T)l(A)15 b(MO)o(VE)g(FUNCTIONS)
1099 b Fq(3)75 2420 y @beginspecial @setspecial
%%BeginDocument: coll-fig1.ps
/arrowdict 13 dict def                      % Local storage for the procedure
					    % ``arrow.''
							
/arrow                                      % The procedure ``arrow'' adds an
  { arrowdict begin                         % arrow shape to the current path.
      /headlength exch def                  % It takes seven arguments: the x
      /halfheadthickness exch 2 div def     % and y coordinates of the tail
      /halfthickness exch 2 div def         % (imagine that a line has been
      /tipy exch def /tipx exch def         % drawn down the center of the
      /taily exch def /tailx exch def       % arrow from the tip to the tail,
					    % then x and y lie on this line),
					    % the x and y coordinates of the
					    % tip of the arrow, the thickness
					    % of the arrow in the tail
					    % portion, the thickness of the
					    % arrow at the widest part of the
					    % arrowhead and the length of the
					    % arrowhead.
							
      /dx tipx tailx sub def                % Compute the differences in x and
      /dy tipy taily sub def                % y for the tip and tail. These
      /arrowlength dx dx mul dy dy mul add  % will be used to compute the
	sqrt def                            % length of the arrow and to
      /angle dy dx atan def                 % compute the angle of direction
					    % that the arrow is facing with
					    % respect to the current user
					    % coordinate system origin.
      /base arrowlength headlength sub def  % Compute where the base of the
					    % arrowhead will be.
								
      /savematrix matrix currentmatrix def  % Save the current user coordinate
					    % system. We are using the same
					    % strategy to localize the effect
					    % of transformations as was used
					    % in the program to draw an
					    % ellipse.
      tailx taily translate                 % Translate to the starting point
					    % of the tail.
      angle rotate                          % Rotate the x-axis to correspond
					    % with the center line of the
					    % arrow.
      0 halfthickness neg moveto            % Add the arrow shape to the
					    % current path.
      base halfthickness neg lineto
      base halfheadthickness neg lineto
      arrowlength 0 lineto
      base halfheadthickness lineto
      base halfthickness lineto
      0 halfthickness lineto
      closepath
	       
      savematrix setmatrix                  % Restore the current user
					    % coordinate system.
    end
  } def
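% Example call (values here are illustrative, not taken from the figure):
% the seven arguments are read as
%   tailx taily tipx tipy thickness headthickness headlength arrow
% so ``100 100 200 100 4 24 18 arrow fill'' would fill an arrow from
% (100,100) to (200,100) with a 4-unit shaft, a 24-unit-wide head and an
% 18-unit-long head, matching calls such as
% ``140 225 200 225 12 24 18 arrow stroke'' further below.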
/Box
{ /height exch def
  /length exch def

   length 0 rlineto
   0 height rlineto
   length neg 0 rlineto
   closepath
} def

/Gdict 200 dict def
/Grid
{ 
  Gdict begin
  /ny exch def
  /nx exch def
  /dely exch def
  /delx exch def
  /leny { ny dely mul} def
  /lenx { nx delx mul} def
  currentpoint
  /ypos exch def
  /xpos exch def

  /y ypos def
  /x xpos def

  0 1 ny { pop x y moveto lenx 0 rlineto stroke /y y dely add def} for
  /y ypos def
  /x xpos def
  0 1 nx { pop x y moveto 0 leny rlineto stroke /x x delx add def} for
  end
} def
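% Note on usage: Grid reads its arguments as ``delx dely nx ny Grid'' and
% draws from the currentpoint, so ``0 150 moveto 20 20 6 6 Grid'' (as
% used below) strokes a 6 x 6 grid of 20-point cells whose lower-left
% corner is at (0,150).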

/GLdict 300 dict def
/GridLabels
{ 
  GLdict begin

  /shift exch def
  /raise exch def
  /yoff exch def
  /xoff exch def
  /p1 exch def
  /p2 exch def
  /ny exch def
  /nx exch def
  /dely exch def
  /delx exch def

  /Darray exch def
  /leny { ny dely mul} def
  /lenx { nx delx mul} def
  currentpoint
  /ypos exch def
  /xpos exch def

  /y ypos def
  /x xpos def

  /dx3 delx 3 div def
  /dy3 dely 3 div def

  /ix -1 def
  /iy ny 1 sub def
  Darray{
     aload pop 
    /Subc  exch def
    /Text exch def
    /ix ix 1 add def
    ix nx ge { /ix 0 def /iy iy 1 sub def} if
    /x xpos delx ix 0.5 add mul add 
    /Helvetica findfont p1 scalefont setfont Text stringwidth pop 
    /Helvetica findfont p2 scalefont setfont Subc stringwidth pop 
    add xoff add 2 div sub shift add def
    /y ypos dely iy 0.5 add mul add raise add def
    x y moveto 
    /Helvetica findfont p1 scalefont setfont Text show
    xoff yoff rmoveto
    /Helvetica findfont p2 scalefont setfont Subc show
  } forall
  end
  clear
} def
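% Note on usage: GridLabels is called as
%   array delx dely nx ny p2 p1 xoff yoff raise shift GridLabels
% where ``array'' holds one [ (Text) (Subscript) ] pair per cell, filled
% row by row from the top row down; empty pairs [()()] leave a cell
% blank.  In the calls below the main text is set at p1 = 12 pt and the
% subscript at p2 = 9 pt, offset by (xoff, yoff) relative to the text.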

2 setlinecap
6.5 72 mul 320 sub 2 div 0 translate 
0 150 moveto 
20 20 6 6 Grid 

0 150 moveto 
[
[(A)(0)]
[(A)(1)]
[(A)(2)]
[(A)(3)]
[(A)(4)]
[(A)(5)]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
 ] 20 20 6 6 9 12 1 -5 -2 2 GridLabels

140 225 200 225 12 24 18 arrow stroke
200 195 140 195 12 24 18 arrow stroke

/Helvetica findfont 12 scalefont setfont
(one-all scatter) dup stringwidth pop 170 exch 2 div sub 242 moveto show
(one-all gather) dup stringwidth pop 170 exch 2 div sub 170 moveto show

220 150 moveto 
20 20 6 6 Grid 

220 150 moveto 
[
[(A)(0)]
[()()]
[()()]
[()()]
[()()]
[()()]
[(A)(1)]
[()()]
[()()]
[()()]
[()()]
[()()]
[(A)(2)]
[()()]
[()()]
[()()]
[()()]
[()()]
[(A)(3)]
[()()]
[()()]
[()()]
[()()]
[()()]
[(A)(4)]
[()()]
[()()]
[()()]
[()()]
[()()]
[(A)(5)]
[()()]
[()()]
[()()]
[()()]
[()()]
 ] 20 20 6 6 9 12 1 -5 -2 2 GridLabels

0 0 moveto 
20 20 6 6 Grid 

0 0 moveto 
[
[(A)(0)]
[(A)(1)]
[(A)(2)]
[(A)(3)]
[(A)(4)]
[(A)(5)]
[(B)(0)]
[(B)(1)]
[(B)(2)]
[(B)(3)]
[(B)(4)]
[(B)(5)]
[(C)(0)]
[(C)(1)]
[(C)(2)]
[(C)(3)]
[(C)(4)]
[(C)(5)]
[(D)(0)]
[(D)(1)]
[(D)(2)]
[(D)(3)]
[(D)(4)]
[(D)(5)]
[(E)(0)]
[(E)(1)]
[(E)(2)]
[(E)(3)]
[(E)(4)]
[(E)(5)]
[(F)(0)]
[(F)(1)]
[(F)(2)]
[(F)(3)]
[(F)(4)]
[(F)(5)]
 ] 20 20 6 6 9 12 1 -5 -2 2 GridLabels

220 0 moveto 
20 20 6 6 Grid 

220 0 moveto 
[
[(A)(0)]
[(B)(0)]
[(C)(0)]
[(D)(0)]
[(E)(0)]
[(F)(0)]
[(A)(1)]
[(B)(1)]
[(C)(1)]
[(D)(1)]
[(E)(1)]
[(F)(1)]
[(A)(2)]
[(B)(2)]
[(C)(2)]
[(D)(2)]
[(E)(2)]
[(F)(2)]
[(A)(3)]
[(B)(3)]
[(C)(3)]
[(D)(3)]
[(E)(3)]
[(F)(3)]
[(A)(4)]
[(B)(4)]
[(C)(4)]
[(D)(4)]
[(E)(4)]
[(F)(4)]
[(A)(5)]
[(B)(5)]
[(C)(5)]
[(D)(5)]
[(E)(5)]
[(F)(5)]
 ] 20 20 6 6 9 12 1 -5 -2 2 GridLabels

140 60 200 60 12 24 18 arrow stroke

/Helvetica findfont 12 scalefont setfont
(all-all scatter) dup stringwidth pop 170 exch 2 div sub 77 moveto show

0 300 moveto 
20 20 6 6 Grid 

0 300 moveto 
[
[(A)(0)]
[()()]
[()()]
[()()]
[()()]
[()()]
[(B)(0)]
[()()]
[()()]
[()()]
[()()]
[()()]
[(C)(0)]
[()()]
[()()]
[()()]
[()()]
[()()]
[(D)(0)]
[()()]
[()()]
[()()]
[()()]
[()()]
[(E)(0)]
[()()]
[()()]
[()()]
[()()]
[()()]
[(F)(0)]
[()()]
[()()]
[()()]
[()()]
[()()]
 ] 20 20 6 6 9 12 1 -5 -2 2 GridLabels

140 360 200 360 12 24 18 arrow stroke

/Helvetica findfont 12 scalefont setfont
(all-all broadcast) dup stringwidth pop 170 exch 2 div sub 377 moveto show

220 300 moveto 
20 20 6 6 Grid 

220 300 moveto 
[
[(A)(0)]
[(B)(0)]
[(C)(0)]
[(D)(0)]
[(E)(0)]
[(F)(0)]
[(A)(0)]
[(B)(0)]
[(C)(0)]
[(D)(0)]
[(E)(0)]
[(F)(0)]
[(A)(0)]
[(B)(0)]
[(C)(0)]
[(D)(0)]
[(E)(0)]
[(F)(0)]
[(A)(0)]
[(B)(0)]
[(C)(0)]
[(D)(0)]
[(E)(0)]
[(F)(0)]
[(A)(0)]
[(B)(0)]
[(C)(0)]
[(D)(0)]
[(E)(0)]
[(F)(0)]
[(A)(0)]
[(B)(0)]
[(C)(0)]
[(D)(0)]
[(E)(0)]
[(F)(0)]
 ] 20 20 6 6 9 12 1 -5 -2 2 GridLabels

0 450 moveto 
20 20 6 6 Grid 

0 450 moveto 
[
[(A)(0)]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
[()()]
 ] 20 20 6 6 9 12 1 -5 -2 2 GridLabels

140 510 200 510 12 24 18 arrow stroke

/Helvetica findfont 12 scalefont setfont
0 575 moveto
(data) show
(data) stringwidth pop 4 add 575 4 add (data) stringwidth pop 33 add 575 4 add
1 4 5 arrow fill
(one-all broadcast) dup stringwidth pop 170 exch 2 div sub 527 moveto show
gsave
0 570 (processes) stringwidth pop sub translate
90 rotate
0 5 moveto
(processes) show
-4 8 -33 8 1 4 5 arrow fill
grestore

220 450 moveto 
20 20 6 6 Grid 

220 450 moveto 
[
[(A)(0)]
[()()]
[()()]
[()()]
[()()]
[()()]
[(A)(0)]
[()()]
[()()]
[()()]
[()()]
[()()]
[(A)(0)]
[()()]
[()()]
[()()]
[()()]
[()()]
[(A)(0)]
[()()]
[()()]
[()()]
[()()]
[()()]
[(A)(0)]
[()()]
[()()]
[()()]
[()()]
[()()]
[(A)(0)]
[()()]
[()()]
[()()]
[()()]
[()()]
 ] 20 20 6 6 9 12 1 -5 -2 2 GridLabels

showpage

%%EndDocument
 @endspecial 98 x(Figure)15 b(1.1:)k(Collectiv)o(e)e(mo)o(v)o(e)d
(functions)i(illustrated)h(for)e(a)f(group)h(of)g(six)h(pro)q(cesses.)k
(In)c(eac)o(h)f(case,)75 2574 y(eac)o(h)g(ro)o(w)g(of)g(b)q(o)o(xes)g
(represen)o(ts)g(data)g(lo)q(cations)h(in)g(one)g(pro)q(cess.)k(Th)o
(us,)15 b(in)h(the)f(one-all)i(broadcast,)75 2631 y(initially)k(just)e
(the)f(\014rst)h(pro)q(cess)g(con)o(tains)f(the)h(data)f
Ff(A)1085 2638 y Fe(0)1105 2631 y Fq(,)h(but)g(after)e(the)i(broadcast)
f(all)i(pro)q(cesses)75 2687 y(con)o(tain)15 b(it.)-32
46 y Fk(1)-32 103 y(2)-32 159 y(3)-32 215 y(4)-32 272
y(5)-32 328 y(6)-32 385 y(7)-32 441 y(8)-32 498 y(9)-40
554 y(10)-40 611 y(11)-40 667 y(12)-40 724 y(13)-40 780
y(14)-40 836 y(15)-40 893 y(16)-40 949 y(17)-40 1006
y(18)-40 1062 y(19)-40 1119 y(20)-40 1175 y(21)-40 1232
y(22)-40 1288 y(23)-40 1345 y(24)-40 1401 y(25)-40 1457
y(26)-40 1514 y(27)-40 1570 y(28)-40 1627 y(29)-40 1683
y(30)-40 1740 y(31)-40 1796 y(32)-40 1853 y(33)-40 1909
y(34)-40 1966 y(35)-40 2022 y(36)-40 2078 y(37)-40 2135
y(38)-40 2191 y(39)-40 2248 y(40)-40 2304 y(41)-40 2361
y(42)-40 2417 y(43)-40 2474 y(44)-40 2530 y(45)-40 2587
y(46)-40 2643 y(47)-40 2699 y(48)p eop
%%Page: 4 6
4 5 bop 75 -100 a Fq(4)747 b Fj(SECTION)16 b(1.)35 b(COLLECTIVE)16
b(COMMUNICA)l(TION)75 45 y Fd(1.4.1)49 b(Broadcast)90
181 y Fg(MPI)p 175 181 14 2 v 16 w(BCAST\()16 b(bu\013er,)f(cnt,)g(t)o
(yp)q(e,)h(ro)q(ot,)f(comm)e(\))117 260 y(IN/OUT)38 b(bu\013er)478
b Fh(starting)14 b(address)h(of)f(bu\013er)117 339 y(IN)171
b Fg(cnt)512 b Fh(n)o(um)o(b)q(er)13 b(of)h(en)o(tries)h(in)e(bu\013er)
117 417 y(IN)171 b Fg(t)o(yp)q(e)491 b Fh(data)14 b(t)o(yp)q(e)g(of)f
(bu\013er)i(\(p)q(ossibly)f(general\))117 496 y(IN)171
b Fg(ro)q(ot)492 b Fh(rank)14 b(of)f(broadcast)i(ro)q(ot)117
574 y(IN)171 b Fg(comm)454 b Fh(comm)o(unicator)11 b(handle)166
700 y Fg(MPI)p 251 700 V 16 w(BCAST)g Fq(broadcasts)e(a)h(message)g
(from)g(the)g(pro)q(cess)h(with)f(rank)21 b Fg(ro)q(ot)10
b Fq(to)f(all)j(other)e(pro)q(cesses)75 757 y(of)15 b(the)g(group.)20
b(It)15 b(is)h(called)h(b)o(y)e(all)h(mem)o(b)q(ers)g(of)f(the)g(group)g
(using)h(the)f(same)g(argumen)o(ts)f(for)30 b Fg(cnt,)16
b(t)o(yp)q(e,)75 813 y(comm,)j(and)i(ro)q(ot)p Fq(.)34
b(On)20 b(return)g(the)g(con)o(ten)o(ts)f(of)h(the)g(bu\013er)g(of)g
(the)g(pro)q(cess)g(with)g(rank)40 b Fg(ro)q(ot)19 b
Fq(is)75 870 y(con)o(tained)d(in)g(the)f(bu\013er)g(of)g(the)g(calling)
1.4.2  Gather

MPI_GATHER( sendbuf, sendcnt, sendtype, recvbuf, maxrecvcnt, recvcnts, recvtype, root, comm )

    IN   sendbuf      starting address of send buffer
    IN   sendcnt      number of elements in send buffer (integer)
    IN   sendtype     data type of send buffer elements
    OUT  recvbuf      address of receive buffer - significant only at root
    IN   maxrecvcnt   maximum number of elements in receive buffer -
                      significant only at root
    OUT  recvcnts     integer array of size MPI_GSIZE returning the number
                      of elements sent by each processor - significant
                      only at root
    IN   recvtype     data type of receive buffer elements - significant
                      only at root
    IN   root         rank of receiving process (integer)
    IN   comm         communicator handle

Each process (including the root process) sends the contents of its send
buffer to the root process.  The root process places all the incoming
messages in the locations specified by recvbuf and recvtype.  The
receive buffer is ignored for all non-root processes.  The receive
buffer of the root process is assumed contiguous and partitioned into
MPI_GSIZE consecutive blocks.  The data sent from the process with rank
i is stored in the i-th block.  sendcnt can be different for each group
member, and these values, which are the sizes of the blocks, are
returned in the array recvcnts on the root process: recvcnts[i] =
sendcnt on the process with rank i.  The function is called with the
same values for sendtype, root, and comm at all participating processes.
1.4.  DATA MOVEMENT FUNCTIONS

1.4.3  Scatter

MPI_SCATTER( sendbuf, sendcnts, sendtype, recvbuf, maxrecvcnt, recvcnt, recvtype, root, comm )

    IN   sendbuf      address of send buffer - significant only at root
    IN   sendcnts     integer array of size MPI_GSIZE specifying the
                      number of elements to send to each processor -
                      significant only at root
    IN   sendtype     data type of send buffer elements
    OUT  recvbuf      address of receive buffer
    IN   maxrecvcnt   maximum number of elements in receive buffer
                      (integer)
    OUT  recvcnt      number of elements in receive buffer (integer)
    IN   recvtype     data type of receive buffer elements
    IN   root         rank of sending process (integer)
    IN   comm         communicator handle

The root process sends the i-th portion of its send buffer to the
process with rank i; each process (including the root process) stores
the incoming message in its receive buffer.  The send buffer of the root
process is assumed contiguous and partitioned into MPI_GSIZE consecutive
blocks.  The i-th block consists of sendcnts[i] elements.  The i-th
block is sent to the process with rank i in the group and stored in its
receive buffer.  The routine is called by all members of the group using
the same arguments for recvtype, root, and comm.

Note that MPI_SCATTER is the reverse operation to MPI_GATHER.
1.4.4  All-to-all broadcast

MPI_ALLCAST( sendbuf, sendcnt, sendtype, recvbuf, maxrecvcnt, recvcnts, recvtype, comm )

    IN   sendbuf      starting address of send buffer
    IN   sendcnt      number of elements in send buffer (integer)
    IN   sendtype     data type of send buffer elements
    OUT  recvbuf      address of receive buffer
    IN   maxrecvcnt   maximum number of elements in receive buffer
    OUT  recvcnts     integer array of size MPI_GSIZE returning the
                      number of elements sent by each processor
    IN   recvtype     data type of receive buffer elements
    IN   comm         communicator handle

Each process in the group broadcasts its entire send buffer to all
processes (including itself); each send buffer can have a different
number of elements.  Each process concatenates the incoming messages, in
the order of the senders' ranks, and stores them in its receive buffer.
The number of elements in the i-th sender's contribution is returned in
recvcnts[i].  The routine is called by all members of the group using
the same arguments for sendtype, maxrecvcnt, and comm.

MPI_ALLCAST is equivalent to n executions of MPI_BCAST, with each
process once the root.
1.4.5  All-to-all scatter-gather

MPI_ALLTOALL( sendbuf, sendcnts, sendtype, recvbuf, maxrecvcnt, recvcnts, recvtype, comm )

    IN   sendbuf      starting address of send buffer
    IN   sendcnts     integer array of size MPI_GSIZE specifying the
                      number of elements to send to each processor
    IN   sendtype     data type of send buffer elements
    OUT  recvbuf      address of receive buffer
    IN   maxrecvcnt   maximum number of elements in receive buffer
    OUT  recvcnts     integer array of size MPI_GSIZE returning the
                      number of elements received from each processor
    IN   recvtype     data type of receive buffer elements
    IN   comm         communicator handle

The send buffer of each process is partitioned into MPI_GSIZE
consecutive blocks.  The number of elements in the i-th block is given
by sendcnts[i].  Each process in the group sends the i-th block of its
send buffer to the process with rank i (itself included).  Each process
concatenates the incoming messages, in the order of the senders' ranks,
and stores them in its receive buffer.  The numbers of elements in the
received blocks are returned in the array recvcnts, such that the
process with rank i gets recvcnts[k] equal to sendcnts[i] on the process
with rank k.  The routine is called by all members of the group using
the same arguments for sendtype and comm.

An all-to-all scatter-gather is the equivalent of n scatters (or n
gathers) executed with each process being once the root.
1.5  Global Compute Operations

The functions in this section perform one of the following operations
across all the members of a group:

    global max on integer and floating point data types
    global min on integer and floating point data types
    global sum on integer and floating point data types
    global product on integer and floating point data types
    global AND on logical and integer data types
    global OR on logical and integer data types
    global XOR on logical and integer data types
    global max and who (rank) has it
    global min and who (rank) has it
    user defined (associative) operation
    user defined (associative and commutative) operation

1.5.1  Reduce

MPI_REDUCE( sendbuf, recvbuf, cnt, type, op, root, comm )

    IN   sendbuf   address of send buffer
    OUT  recvbuf   address of receive buffer - significant only at root
    IN   cnt       number of elements in input buffer (integer)
    IN   type      data type of elements of input buffer
    IN   op        operation
    IN   root      rank of root process (integer)
    IN   comm      communicator handle

Combines the values provided in the send buffer of each process in the
group, using the operation op, and returns the combined value in the
receive buffer of the process with rank root.  Each process can provide
one value, or a sequence of values, in which case the combine operation
is executed point-wise on each entry of the sequence.  For example, if
the operation is MPI_MAX and the send buffer contains two floating
point numbers, then recvbuf(1) = global max(sendbuf(1)) and recvbuf(2)
= global max(sendbuf(2)).  All send buffers should define sequences of
entries of equal length, all of the same data type, where the type is
one of those allowed for operands of op.  For all operations except
MPI_MINLOC and MPI_MAXLOC, the number and type of elements in the send
buffer are the same as for the receive buffer.  For MPI_MINLOC and
MPI_MAXLOC, the receive buffer will contain cnt elements of the same
type as the elements in the input buffer, followed by cnt integers
(ranks).

The operation defined by op is associative and commutative, and the
implementation can take advantage of associativity and commutativity in
order to change the order of evaluation.  The routine is called by all
group members using the same arguments for cnt, type, op, root, and
comm.

We list below the supported options for op.

    MPI_MAX      maximum
    MPI_MIN      minimum
    MPI_SUM      sum
    MPI_PROD     product
    MPI_AND      and (logical or bit-wise integer)
    MPI_OR       or (logical or bit-wise integer)
    MPI_XOR      xor (logical or bit-wise integer)
    MPI_MAXLOC   maximum value and rank of process with maximum value
                 (rank of first process with maximum value, in case of
                 ties)
    MPI_MINLOC   minimum value and rank of process with minimum value
                 (rank of first process with minimum value, in case of
                 ties)

All operations, with the exception of MPI_MAXLOC and MPI_MINLOC, return
a value which has the same datatype as the operands.  Each operand of
MPI_MAXLOC and MPI_MINLOC can be thought of as a pair (v, i): i is the
rank of the calling process, which is passed implicitly, and v is the
value that is explicitly passed to the call.  MPI_MAXLOC and MPI_MINLOC
return (explicitly) a pair (value, rank).

When MPI_MINLOC or MPI_MAXLOC is invoked, the input buffer should
contain m elements of the same type, to which the operation MPI_MIN or
MPI_MAX can be applied.  The operation returns at the root m elements
of the same type as the inputs, followed by m integers (ranks).  The
output buffer should be defined accordingly.

The operation that defines MPI_MAXLOC is

    ( u )     ( v )     ( w )
    (   )  o  (   )  =  (   )
    ( i )     ( j )     ( k )

where

    w = max(u, v)

and

        ( i          if u > v
    k = ( min(i, j)  if u = v
        ( j          if u < v

Note that this operation is associative and commutative.

A similar definition can be given for MPI_MINLOC.
Discussion:

We define MPI_MINLOC to return a vector of values, followed by a vector
of ranks.  The alternative is for MPI_MINLOC to return a vector of
(value, rank) pairs, i.e., a vector of structures.  This second choice
is less convenient for Fortran.  Another alternative is to have
MPI_MINLOC return two output buffers, but then it needs to be invoked
differently than the other operations.

The computation can still be pipelined, provided that the location of
the first rank entry in the output buffer can be computed up front.

Implementation note:

The operations can be applied to operands of different types in
different calls: e.g., MPI_SUM may require an integer sum in one call,
and a complex sum in another.  Since we require that all elements be of
the same datatype, it is not necessary to store a full signature with
each buffer: it is only necessary to store the datatype of the elements
when all elements are of the same type, and to store a flag indicating
that the buffer is not homogeneous, otherwise.

Missing:

Need to define the types compatible with each operation.  This includes
MPI_BYTE for the logical operations, and whatever Fortran/C allow for
all operations.
MPI_USER_REDUCE( sendbuf, recvbuf, cnt, type, function, root, comm )

    IN   sendbuf    starting address of send buffer
    OUT  recvbuf    starting address of receive buffer - significant
                    only at root
    IN   cnt        number of elements in input buffer (integer)
    IN   type       data type of elements of input buffer
    IN   function   user defined function
    IN   root       rank of root process (integer)
    IN   comm       communicator handle

Similar to the reduce operation function above, except that a user
supplied function is used.  function is a function with three
arguments.  A C prototype for such a function is f( invec, inoutvec,
*len ).  Both invec and inoutvec are arrays with *len entries.  The
function computes point-wise a commutative and associative operation on
each pair of entries and returns the result in inoutvec.  Pseudo-code
for function is given below, where op is the commutative and
associative operation defined by function.

    for(i=0; i < *len; i++) {
        inoutvec[i] op= invec[i];
    }

The type of the elements of invec and of inoutvec matches the type of
the elements of the send buffers and the receive buffer.
MPI_USER_REDUCEA( sendbuf, recvbuf, cnt, type, function, root, comm )

    IN   sendbuf    starting address of send buffer
    OUT  recvbuf    starting address of receive buffer - significant
                    only at root
    IN   cnt        number of elements in input buffer (integer)
    IN   type       data type of elements of input buffer
    IN   function   user defined function
    IN   root       rank of root process (integer)
    IN   comm       communicator handle

Identical to MPI_USER_REDUCE, except that the operation defined by
function is not required to be commutative, but only associative.
Thus, the order of computation can be modified only using
associativity.

Implementation note:

The code for MPI_USER_REDUCEA can be used to provide an identical
implementation for MPI_USER_REDUCE.

Discussion:

The addition of the third parameter, *len, in function allows the
system to avoid calling function for each element in the input buffer;
rather, the system can choose to apply function to chunks of inputs,
where the size of the chunk is chosen by the system so as to optimize
communication and computation pipelining.  E.g., *len could be set to
the typical packet size in the communication subsystem.

MPI includes variants of each of the reduce operations where the result
is known to all processes in the group on return.

MPI_ALLREDUCE( sendbuf, recvbuf, cnt, type, op, comm )

    IN   sendbuf   starting address of send buffer
    OUT  recvbuf   starting address of receive buffer
    IN   cnt       number of elements in input buffer (integer)
    IN   type      data type of elements of input buffer
    IN   op        operation
    IN   comm      communicator handle

Same as the MPI_REDUCE operation function, except that the result
appears in the receive buffer of all the group members.

MPI_USER_ALLREDUCE( sendbuf, recvbuf, cnt, type, function, comm )

    IN   sendbuf    starting address of send buffer
    OUT  recvbuf    starting address of receive buffer
    IN   cnt        number of elements in input buffer (integer)
    IN   type       data type of elements of input buffer
    IN   function   user defined function
    IN   comm       communicator handle

Same as the MPI_USER_REDUCE operation function, except that the result
appears in the receive buffer of all the group members.
MPI_USER_ALLREDUCEA( sendbuf, recvbuf, cnt, type, function, comm )

    IN   sendbuf    starting address of send buffer
    OUT  recvbuf    starting address of receive buffer
    IN   cnt        number of elements in input buffer (integer)
    IN   type       data type of elements of input buffer
    IN   function   user defined function
    IN   comm       communicator handle

Same as MPI_USER_REDUCEA, except that the result appears in the receive
buffer of all the group members.

Implementation note:

The allreduce operations can be implemented as a reduce, followed by a
broadcast.  However, a direct implementation can lead to better
performance.

MPI also includes variants of each of the reduce operations where the
result is scattered to all processes in the group on return.

MPI_REDUCE_SCATTER( sendbuf, recvbuf, distcnts, type, op, comm )

    IN   sendbuf    starting address of send buffer
    OUT  recvbuf    starting address of receive buffer
    IN   distcnts   integer array specifying the number of elements in
                    the result distributed to each process.  Array must
                    be identical on all calling processes.
    IN   type       data type of elements of input buffer
    IN   op         operation
y(IN)171 b Fg(comm)454 b Fh(comm)o(unicator)11 b(handle)166
2404 y Fg(MPI)p 251 2404 V 16 w(REDUCE)p 443 2404 V 17
w(SCA)l(TTER)16 b Fq(\014rst)f(do)q(es)h(a)f(comp)q(onen)o(t)o(wise)g
(reduction)h(on)g(v)o(ectors)e(pro)o(vided)i(b)o(y)75
2460 y(the)j(pro)q(cesses.)32 b(Next,)20 b(the)f(resulting)i(v)o(ector)
d(of)h(results)g(is)h(split)g(in)o(to)g(disjoin)o(t)f(segmen)o(ts,)h
(where)75 2517 y(segmen)o(t)15 b Fb(i)g Fq(has)g(length)h
Fg(discnts[i])p Fq(;)h(the)e Fb(i)p Fq(-th)g(segmen)o(t)g(is)h(sen)o(t)
f(to)f(pro)q(cess)i(with)f(rank)g Fb(i)p Fq(.)166 2591
y(This)21 b(routine)f(is)h(functionally)h(equiv)m(alen)o(t)g(to:)29
b(A)20 b Fg(MPI)p 1185 2591 V 16 w(REDUCE)h Fq(op)q(eration)g(function)
g(with)75 2647 y Fg(cnt)15 b Fq(equal)g(to)f(the)g(sum)h(of)e
Fg(distcnts[i])k Fq(follo)o(w)o(ed)e(b)o(y)f Fg(MPI)p
1065 2647 V 16 w(SCA)l(TTER)h Fq(with)g(mpiargsendcn)o(ts)g(equal)g(to)
75 2704 y(mpiargdistcn)o(ts.)20 b(Ho)o(w)o(ev)o(er,)14
b(it)h(can)h(b)q(e)g(implemen)o(ted)h(to)d(run)i(substan)o(tially)g
(faster.)-32 46 y Fk(1)-32 103 y(2)-32 159 y(3)-32 215
y(4)-32 272 y(5)-32 328 y(6)-32 385 y(7)-32 441 y(8)-32
498 y(9)-40 554 y(10)-40 611 y(11)-40 667 y(12)-40 724
y(13)-40 780 y(14)-40 836 y(15)-40 893 y(16)-40 949 y(17)-40
1006 y(18)-40 1062 y(19)-40 1119 y(20)-40 1175 y(21)-40
1232 y(22)-40 1288 y(23)-40 1345 y(24)-40 1401 y(25)-40
1457 y(26)-40 1514 y(27)-40 1570 y(28)-40 1627 y(29)-40
1683 y(30)-40 1740 y(31)-40 1796 y(32)-40 1853 y(33)-40
1909 y(34)-40 1966 y(35)-40 2022 y(36)-40 2078 y(37)-40
2135 y(38)-40 2191 y(39)-40 2248 y(40)-40 2304 y(41)-40
2361 y(42)-40 2417 y(43)-40 2474 y(44)-40 2530 y(45)-40
2587 y(46)-40 2643 y(47)-40 2699 y(48)p eop
MPI_USER_REDUCE_SCATTER( sendbuf, recvbuf, distcnts, type, function, comm)

    IN    sendbuf     starting address of send buffer
    OUT   recvbuf     starting address of receive buffer
    IN    distcnts    integer array specifying the number of elements in
                      result distributed to each process.  Array must be
                      identical on all calling processes.
    IN    type        data type of elements of input buffer
    IN    function    user defined function
    IN    comm        communicator handle

Same as the MPI_REDUCE_SCATTER operation function except that the user
specifies the reduction operation as in MPI_USER_REDUCE.

MPI_USER_REDUCE_SCATTERA( sendbuf, recvbuf, distcnts, type, function, comm)

    IN    sendbuf     starting address of send buffer
    OUT   recvbuf     starting address of receive buffer
    IN    distcnts    integer array specifying the number of elements in
                      result distributed to each process.  Array must be
                      identical on all calling processes.
    IN    type        data type of elements of input buffer
    IN    function    user defined function
    IN    comm        communicator handle

Same as the MPI_USER_REDUCE_SCATTER operation function except that the
user-specified reduction operation need only be associative, as in
MPI_USER_REDUCEA.

Implementation note: The REDUCE_SCATTER operations can be implemented
as a reduce, followed by a scatter.  However, a direct implementation
can lead to better performance.
1.5.2  Scan

MPI_SCAN( sendbuf, recvbuf, cnt, type, op, comm )

    IN    sendbuf     starting address of send buffer
    OUT   recvbuf     starting address of receive buffer
    IN    cnt         number of elements in input buffer (integer)
    IN    type        data type of elements of input buffer
    IN    op          operation
    IN    comm        communicator handle

MPI_SCAN is used to perform a parallel prefix with respect to an
associative and commutative reduction operation on data distributed
across the group.  The operation returns, in the receive buffer of the
process with rank i, the reduction of the values in the send buffers of
the processes with ranks 0,...,i.  The types of operations supported,
their semantics, and the constraints on send and receive buffers are as
for MPI_REDUCE.

MPI_USER_SCAN( sendbuf, recvbuf, cnt, type, function, comm)

    IN    sendbuf     address of input buffer
    OUT   recvbuf     address of output buffer
    IN    cnt         number of elements in input and output buffer
                      (integer)
    IN    type        data type of buffer
    IN    function    user provided function
    IN    comm        communicator handle

Same as the MPI_SCAN operation function except that a user supplied
function is used.  function is an associative and commutative function
with an input vector, an inout vector, and a length argument.  The
types of the two vectors and of the returned values all agree.  See
MPI_USER_REDUCE for more details.

MPI_USER_SCANA( sendbuf, recvbuf, cnt, type, function, comm)

    IN    sendbuf     address of input buffer
    OUT   recvbuf     address of output buffer
    IN    cnt         number of elements in input and output buffer
                      (integer)
    IN    type        data type of buffer
    IN    function    user defined function
    IN    comm        communicator handle

Same as MPI_USER_SCAN, except that the user-defined operation need not
be commutative.
Implementation note: MPI_USER_SCAN can be implemented as
MPI_USER_SCANA.

1.6  Correctness

A correct program should invoke collective communications so that
deadlock will not occur, whether collective communication is
synchronizing or not.  The following two examples illustrate dangerous
use of collective routines.  The first example is erroneous.

/* Example A */
MPI_rank(comm, &rank);
switch (rank)
    {
    case 0: { MPI_bcast(var1, cnt, type, 0, comm);
              MPI_send(var2, cnt, type, 1, tag, comm);
              break;
            }
    case 1: { MPI_recv(var2, cnt, type, 0, tag, comm);
              MPI_bcast(var1, cnt, type, 0, comm);
              break;
            }
    }

Process zero executes a broadcast, followed by a blocking send
operation; process one first executes a matching blocking receive,
followed by the matching broadcast call.  This program may deadlock.
The broadcast call on process zero may block until process one executes
the matching broadcast call, so that the send is not executed.  Process
one blocks on the receive and never executes the broadcast.

The following example is correct, but nondeterministic:

/* Example B */
MPI_rank(comm, &rank);
switch (rank)
    {
    case 0: { MPI_bcast(var1, cnt, type, 0, comm);
              MPI_send(var2, cnt, type, 1, tag, comm);
              break;
            }
    case 1: { MPI_recv(var2, cnt, type, MPI_SOURCE_ANY, tag, comm);
              MPI_bcast(var1, cnt, type, 0, comm);
              MPI_recv(var2, cnt, type, MPI_SOURCE_ANY, tag, comm);
              break;
            }
    case 2: { MPI_send(var2, cnt, type, 1, tag, comm);
              MPI_bcast(var1, cnt, type, 0, comm);
              break;
            }
    }
All three processes participate in a broadcast.  Process 0 sends a
message to process 1 after the broadcast, and process 2 sends a message
to process 1 before the broadcast.  Process 1 receives before and after
the broadcast, with a wildcard source parameter.

Two possible executions, with different matchings of sends and
receives, are illustrated below.

        First Execution

      0            1            2
                       /----- send
             recv <---/
  broadcast    broadcast    broadcast
  send ----\
            \--> recv

        Second Execution

      0            1            2
  broadcast
  send ----\
            \--> recv
               broadcast    broadcast
                       /----- send
             recv <---/

Note that the second execution has the peculiar effect that a send
executed after the broadcast is received at another node before the
broadcast.

Discussion: An alternative design is to require that all collective
communication calls are synchronizing.  In this case, the second
program is deterministic and only the first execution may occur.  This
will make a difference only for collective operations where not all
processes both send and receive (broadcast, reduce, scatter, gather).

It is the user's responsibility to make sure that there are no two
concurrently executing collective calls that use the same communicator
on the same process.  (Since all collective communication calls are
blocking, this restriction only affects multithreaded implementations.)
On the other hand, it is legitimate for one process to start a new
collective communication call even though a previous call that uses the
same communicator has not yet terminated on another process, as
illustrated in the following example:

/* Example C */
  MPI_bcast(var1, cnt, type, 0, comm);
  MPI_bcast(var2, cnt, type, 1, comm);

In a nonsynchronizing implementation of broadcast, process zero may
start executing the second broadcast before process one has terminated
the first broadcast.  Both process zero and process one may terminate
their two broadcast calls before other processes have started their
calls.  It is the implementor's responsibility to ensure this will not
cause any error.
Implementation note: Assume that broadcast is implemented using
point-to-point MPI communication, and that the following two rules are
satisfied:

 1. All receives specify their source explicitly (no wildcards).

 2. Each process sends all messages that pertain to one collective call
    before sending any message that pertains to a subsequent collective
    call.

Then messages belonging to successive broadcasts cannot be confused, as
the order of point-to-point messages is preserved.  This is true, in
general, for any collective library.

A collective communication may execute in a context while
point-to-point communications that use the same context are pending, or
occur concurrently.  This is illustrated in Example B above: the first
process may receive a message sent with the context of communicator
comm while it is executing a broadcast with the same communicator.  It
is the implementer's responsibility to ensure this will not cause any
confusion.

Implementation note: Assume that collective communications are
implemented using point-to-point MPI communication.  Then, in order to
avoid confusion, whenever a communicator is created, a "hidden
communicator" needs to be created for collective communication.  A
direct implementation of MPI collective communication can achieve a
similar effect more cheaply, e.g., by using a hidden tag or context bit
to indicate whether the communicator is used for point-to-point or
collective communication.

An alternative choice is to require that a communicator be quiescent
when used in a collective communication: no messages using a context
can be pending at a process when this process starts executing a
collective communication with this context, nor can any new message
with this context arrive during the execution of this collective
communication, unless it was sent as part of the execution of the
collective call itself.

This approach has the advantage of simplifying the layering of
collective communications on top of point-to-point communication (no
need for hidden contexts).  Also, it imposes on the collective
communication library the same restrictions that hold for any other
collective library.  It has the disadvantage of restricting the use of
collective communications.

The question is whether we want to view the collective communication
operations as part of the basic communication services of MPI, or
whether we want to see them as a library layered on top of these basic
services.
From owner-mpi-collcomm@CS.UTK.EDU Fri Sep 17 15:46:10 1993
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib)
	id AA11214; Fri, 17 Sep 93 15:46:10 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA01454; Fri, 17 Sep 93 15:40:22 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Fri, 17 Sep 1993 15:40:21 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from rios2.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA01436; Fri, 17 Sep 93 15:40:20 -0400
Received: by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03)
          id AA17312; Fri, 17 Sep 1993 15:40:18 -0400
Date: Fri, 17 Sep 1993 15:40:18 -0400
From: walker@rios2.epm.ornl.gov (David Walker)
Message-Id: <9309171940.AA17312@rios2.epm.ornl.gov>
To: mpi-collcomm@cs.utk.edu
Subject: Data movement routines


The restrictions on sendtype and recvtype in the gather, scatter,
allcast and alltoall routines seem rather odd to me. I think we should
say that the type signature of the sendtype and recvtype
must be the same, but the displacements may differ.

Thus for example in gathering data from processes 1 and 2 to process
0 we might have:

sendtype = {(int, 0),(int,  8)} in process 1
sendtype = {(int, 0),(int, 12)} in process 2

and

recvtype = {(int, 0)} in process 0   (or MPI_INT)

These datatypes could be used to send to 0 every second integer
from an array in process 1, and every third integer from an array on
process 2. These data are packed into a contiguous array
on process 0.

More generally the sendtype may have been generated by an indexed
constructor, in which case the displacements certainly won't be the same
in all processes.  

Similar arguments apply to the scatter, allcast and alltoall routines.

In general we require that all datatypes be type consistent, i.e.
the types match up but the displacements may differ.

David
From owner-mpi-collcomm@CS.UTK.EDU Fri Sep 17 15:55:19 1993
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib)
	id AA11300; Fri, 17 Sep 93 15:55:19 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA02315; Fri, 17 Sep 93 15:52:26 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Fri, 17 Sep 1993 15:52:25 EDT
Errors-To: owner-mpi-collcomm@CS.UTK.EDU
Received: from rios2.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA02307; Fri, 17 Sep 93 15:52:24 -0400
Received: by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03)
          id AA12467; Fri, 17 Sep 1993 15:52:23 -0400
Message-Id: <9309171952.AA12467@rios2.epm.ornl.gov>
To: mpi-collcomm@cs.utk.edu
Subject: more on gather
Date: Fri, 17 Sep 93 15:52:23 -0500
From: David W. Walker <walker@rios2.epm.ornl.gov>


Some of you may have noticed that the sendtypes in my example don't quite
do what I say they do. But the point I'm making is still valid.

David
From owner-mpi-collcomm@CS.UTK.EDU Fri Jan 21 16:35:59 1994
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (8.6.4/2.8t-netlib)
	id QAA25014; Fri, 21 Jan 1994 16:35:56 -0500
Received: from localhost by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK)
	id QAA24069; Fri, 21 Jan 1994 16:36:06 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Fri, 21 Jan 1994 16:36:02 EST
Errors-to: owner-mpi-collcomm@CS.UTK.EDU
Received: from bigblu0.epm.ornl.gov by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK)
	id QAA24042; Fri, 21 Jan 1994 16:35:53 -0500
Received: by bigblu0.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03)
          id AA15599; Fri, 21 Jan 1994 16:35:57 -0500
Date: Fri, 21 Jan 1994 16:35:57 -0500
From: geist@bigblu0.epm.ornl.gov (Al Geist)
Message-Id: <9401212135.AA15599@bigblu0.epm.ornl.gov>
To: mpi-collcomm@CS.UTK.EDU
Subject: Latest draft of collective chapter. 


Marc Snir found and corrected some mistakes in the collective chapter.
In particular, he fixed the examples to use MPI_COMMIT(&name) rather than
MPI_COMMIT(name), added a sentence that ordering may change the outcome
of MPI_REDUCE, and made some changes in the way the user function in
MPI_USER_REDUCE is defined. My thanks to Marc.

I'm sorry I didn't get this out to the reflector more quickly;
I was busy having a new baby boy.

Al
------------

\include{chapter-head}
% Version as of Aug 3, 1993
% Minor edits by S. Otto, August 7, 1993
% added a few labels, August 9, 1993 -- SO
% added littlefield's ammendments Sept 13, 1993
% added back simple versions of 5 functions Sept 26, 1993
% version of Oct 18th
% Minor corrections by M. Snir Jan. 4, 1994

\chapter{Collective Communication}
\label{sec:coll}
\label{chap:coll}
\footnotetext[1]{Version of Jan 4, 1994 -- with some corrections}

\section{Introduction}

Collective communication is defined to be communication that involves
a group of processes.
The functions provided by MPI collective communication include:
\begin{itemize}
\item
Broadcast from one member to all members of a group.
\item
Barrier synchronization across all group members.
\item
Gather data from all group members to one member.
\item
Scatter data from one member to all members of a group.
\item
Global operations such as sum, max, min, etc., where the result
is known by all group members, and a variation where the result is
known by only one member. The ability to have user-defined
global operations.
\item
Scan across all members of a group (also called parallel prefix).
\item
Broadcast from all members to all members of a group.
\item
Scatter/Gather data from all members to all members of a group
(also called complete exchange or all-to-all).
\end{itemize}
While vendors may optimize some of these collective routines for
their architectures, a complete library of the collective communication
routines can be written entirely using the MPI point-to-point communication
functions and a few auxiliary functions.

A collective operation is executed by having all processes in the group
call the
communication routine, with matching arguments.
The syntax and semantics of the collective operations are
defined to be consistent with the syntax and semantics of the
point-to-point operations. Thus general datatypes are allowed
and must match between sending and receiving processes as specified
in the chapter on point-to-point functions.
One of the key arguments is a communicator that defines the group
of participating processes and provides a context for the operation.
Several collective routines such as broadcast and gather have
a single originating or receiving process. In this chapter such processes
are called the {\em root}.
Some arguments in the collective functions are specified as
``significant only at root''. These arguments are ignored for all
participants except the root, and can be set to any value.
The reader is referred to chapter~\ref{chap:pt2pt}
for information concerning communication buffers,
general datatypes and type matching rules; and to
chapter~\ref{chap:context} for information on how to define groups and
create communicators.

Collective routines can (but are not required to) return as soon as their
participation in the collective communication is complete.  The completion
of a call indicates that the caller is now free to access the locations in the
communication buffer.  It does not indicate that other processes in
the group have started the operation (unless otherwise indicated in the
description of the operation).   The successful completion of
a collective communication call may depend on the execution of a matching call
at all processes in the group.   Thus, a collective communication call may, or
may not, have the effect of synchronizing all calling processes.
Collective communication calls may use the same
communicators as point-to-point communication; MPI guarantees that
messages generated on behalf of collective communication calls will not
be confused with messages generated by point-to-point communication.

A more detailed discussion of the correct use of the collective
routines can be found at the end of this chapter.

\discuss{
The collective operations do not accept a message tag argument.
The rationale for not using tags is that the need for distinguishing collective
operations with the same context seldom arises (since the operations are
blocking); the tag field can be used by the point-to-point messages that
implement the collective communication.
}

\section{Communication Functions}

The key concept of the collective functions is to have a ``group''
of participating processes. The routines do not have a group identifier
as an explicit argument. Instead, there is a communicator argument.
In this chapter a communicator can be thought of as a group identifier
linked with a context. (Inter-communicators, that is, communicators
between two groups, are not allowed in the collective functions.)


\section{Barrier synchronization}
\label{sec:coll-barrier}

\begin{funcdef}{MPI\_BARRIER( comm )}
\funcarg{\IN}{comm}{communicator (handle)}
\end{funcdef}

\mpibind{MPI\_Barrier(MPI\_Comm~comm )}

\mpifbind{MPI\_BARRIER(COMM, IERROR) \fargs INTEGER COMM, IERROR}


\func{MPI\_BARRIER} blocks the caller until all group members have called
it; the call returns at any process only after all group members have
entered the call.
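A minimal usage sketch (hypothetical; it assumes an implementation of
this draft together with a predefined initial communicator, here called
{\tt MPI\_COMM\_WORLD}) separates two phases of a computation:

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    /* phase 1: e.g., each process produces data that others will read */
    MPI_Barrier(MPI_COMM_WORLD);   /* returns only after all group
                                      members have entered the call */
    /* phase 2: safe to consume the data produced in phase 1 */
    MPI_Finalize();
    return 0;
}
```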

\section{Data move functions}

Figure \ref{fig:collcom} illustrates the different collective move
functions supported by MPI.
\begin{figure}
\vbox to8truein{
\vfil
\special{psfile=coll-fig1.ps}}
\caption{Collective move functions illustrated
for a group of six processes. In each case, each row of boxes
represents data locations in one process. Thus, in the one-all broadcast,
initially just the first process contains the data $A_0$, but after the
broadcast all processes contain it.}
\label{fig:collcom}
\end{figure}

\subsection{Broadcast}
\label{subsec:coll-broadcast}

\begin{funcdef}{ MPI\_BCAST( buffer, count, datatype, root, comm )}
\funcarg{\INOUT}{buffer}{starting address of buffer (choice)}
\funcarg{\IN}{ count}{ number of entries in buffer (integer)}
\funcarg{\IN}{ datatype}{ data type of buffer (handle)}
\funcarg{\IN}{ root}{ rank of broadcast root (integer)}
\funcarg{\IN}{ comm}{ communicator (handle)}
\end{funcdef}

\mpibind{ MPI\_Bcast(void*~buf, int~count, MPI\_Datatype~datatype,
int~root, MPI\_Comm~comm )}

\mpifbind{ MPI\_BCAST(BUFFER, COUNT, DATATYPE, ROOT, COMM, IERROR)
\fargs <type>  BUFFER(*) \\ INTEGER COUNT, DATATYPE, ROOT, COMM, IERROR}


\func{MPI\_BCAST} broadcasts a message from
the process with rank \mpiarg{ root} to all other processes
of the group. It is called by all members of the group using the same
arguments for \mpiarg{ comm, root} and matching arguments for
\mpiarg{ count, datatype}.
On return, the contents of the buffer of the process with rank \mpiarg{ root}
are contained in the buffer of every calling process.
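For example, the following sketch (hypothetical; it assumes the
predefined communicator {\tt MPI\_COMM\_WORLD}) broadcasts 100 ints
from process 0 to every process in the group:

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    int vals[100];
    int root = 0, myrank, i;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    if (myrank == root) {
        /* only the root's buffer contents matter on entry */
        for (i = 0; i < 100; ++i) vals[i] = i;
    }
    /* every process calls MPI_Bcast with the same root and comm;
       on return, all processes hold the root's 100 ints */
    MPI_Bcast(vals, 100, MPI_INT, root, MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}
```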

\subsection{Gather}
\label{subsec:coll-gather}

\begin{funcdef}{MPI\_GATHER( sendbuf, sendcount, sendtype, recvbuf,
recvcount, recvtype, root, comm) }
\funcarg{\IN}{ sendbuf}{ starting address of send buffer (choice)}
\funcarg{\IN}{ sendcount}{ number of elements in send buffer (integer)}
\funcarg{\IN}{ sendtype}{ data type of send buffer elements (handle)}
\funcarg{\OUT}{ recvbuf}{ address of receive buffer (choice,
significant only at root)}
\funcarg{\IN}{ recvcount}{ number of elements for any single receive (integer,
significant only at root)}
\funcarg{\IN}{ recvtype}{ data type of recv buffer elements
(significant only at root) (handle)}
\funcarg{\IN}{ root}{ rank of receiving process (integer)}
\funcarg{\IN}{ comm}{ communicator (handle)}
\end{funcdef}

\mpibind{MPI\_Gather(void*~sendbuf, int~sendcount,
MPI\_Datatype~sendtype, void*~recvbuf, int~recvcount,
MPI\_Datatype~recvtype, int~root, MPI\_Comm~comm) }

\mpifbind{MPI\_GATHER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT,
RECVTYPE, ROOT, COMM, IERROR) \fargs <type> SENDBUF(*), RECVBUF(*) \\
INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR}


Each process (including the root process) sends the contents of its send
buffer to the root process.  The root process receives the messages and
stores them in rank order.
The outcome is as if each of the {\tt n} processes in the group
had executed a call to
\[\tt
MPI\_Send(sendbuf, sendcount, sendtype, root , ...),
\]
and the
root had executed {\tt n} calls to
\[\tt
MPI\_Recv(recvbuf[i], recvcount, recvtype, i ,...),
\]
where,
\[\tt recvbuf[i] = recvbuf + i \times recvcount \times extent(recvtype). \]

An alternative description is that the {\tt n} messages sent by the
processes in the group are concatenated in rank order, and the
resulting message is received by the root as if by a call to
\mpifunc{MPI\_RECV(recvbuf, recvcount $\times$n, recvtype, ...)}.

The receive buffer is ignored for all non-root processes.

General derived datatypes are allowed for both \mpiarg{ sendtype}
and \mpiarg{ recvtype}.
The type signature of \mpiarg{ sendcount, sendtype} on process i
must be equal to the type signature of
\mpiarg{ recvcount, recvtype} at the root.
Note that the amount of data sent must be equal to the amount received
(pairwise between each process and the root).
\func{MPI\_GATHER} and all other data movement collective routines
make this restriction and provide no facility (such as the status
argument of \func{MPI\_RECV}) for discovering how much data was sent.

All arguments to the function are significant on process \mpiarg{root},
while on other processes, only arguments \mpiarg{sendbuf, sendcount,
sendtype, root, comm} are significant.
The arguments \mpiarg{root} and \mpiarg{comm}
must have identical values on all processes.


%See section \ref{coll:sec-operational-defn} for an operational
%definition of this function in terms of point-to-point functions.

\begin{funcdef}{MPI\_GATHERV( sendbuf, sendcount, sendtype, recvbuf,
recvcounts, displs, recvtype, root, comm) }
\funcarg{\IN}{ sendbuf}{ starting address of send buffer (choice)}
\funcarg{\IN}{ sendcount}{ number of elements in send buffer (integer)}
\funcarg{\IN}{ sendtype}{ data type of send buffer elements (handle)}
\funcarg{\OUT}{ recvbuf}{ address of receive buffer (choice,
significant only at root)}
\funcarg{\IN}{ recvcounts}{ integer array (of length group size)
containing the number of elements that are received from each process
(significant only at root)}
\funcarg{\IN}{ displs}{ integer array (of length group size).  Entry
{\tt i} specifies the displacement relative to \mpiarg{recvbuf} at
which to place the incoming data from process {\tt i} (significant only
at root)}
\funcarg{\IN}{ recvtype}{ data type of recv buffer elements
(significant only at root) (handle)}
\funcarg{\IN}{ root}{ rank of receiving process (integer)}
\funcarg{\IN}{ comm}{ communicator (handle)}
\end{funcdef}

\mpibind{MPI\_Gatherv(void*~sendbuf, int~sendcount,
MPI\_Datatype~sendtype, void*~recvbuf, int~*recvcounts, int~*displs,
MPI\_Datatype~recvtype, int~root, MPI\_Comm~comm) }

\mpifbind{MPI\_GATHERV(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF,
RECVCOUNTS, DISPLS, RECVTYPE, ROOT, COMM, IERROR) \fargs <type>
SENDBUF(*), RECVBUF(*) \\ INTEGER SENDCOUNT, SENDTYPE, RECVCOUNTS(*),
DISPLS(*), RECVTYPE, ROOT, COMM, IERROR}


\func{MPI\_GATHERV} extends the functionality of \func{MPI\_GATHER}
by allowing a varying count of data from each process (since
\mpiarg{recvcounts}
is now an array), and also allows more flexibility as to where the data
is placed on the root (by providing the new argument, \mpiarg{displs}).

The outcome is as if each process has sent a message to the root,
\[\tt
MPI\_Send(sendbuf, sendcount, sendtype, root, ...),
\]
and the root executed {\tt n} receives,
\[\tt
MPI\_Recv(recvbuf+displs[i]\times extent(recvtype), recvcounts[i],
recvtype, i, ...).
\]

Messages are placed in the receive buffer of the root process
in rank order, that is, the data sent from process {\tt j} is
placed in the {\tt j}-th portion of the receive buffer \mpiarg{recvbuf}
on process \mpiarg{root}.  The {\tt j}-th portion of \mpiarg{recvbuf}
begins at offset \mpiarg{displs[j]} elements (in terms of
\mpiarg{recvtype}) into \mpiarg{recvbuf}.

The type signature implied by \mpiarg{sendcount, sendtype} on process {\tt i}
must be equal to the type signature implied by \mpiarg{recvcounts[i], recvtype}
at the root (however, the type maps may be different).

All arguments to the function are significant on process \mpiarg{root},
while on other processes, only arguments \mpiarg{sendbuf, sendcount,
sendtype, root, comm} are significant.
The arguments \mpiarg{root} and \mpiarg{comm}
must have identical values on all processes.

For both functions, the specification of count(s), type(s), (displacements),
should not cause any location on the root to be written more than
once.  Such a call is erroneous.

%See section \ref{coll:sec-operational-defn} for an operational
%definition of this function in terms of point-to-point functions.

We illustrate the matching conditions with the following examples.

% Steve Otto has examples for here
\subsection{Examples of Usage of \func{MPI\_GATHER}, \func{MPI\_GATHERV}}

\subsubsection{Example 1}

Gather 100 ints from every process in group to root. See figure
\ref{fig-Example1}.

\begin{verbatim}
    MPI_Comm comm;
    int gsize,sendarray[100];
    int root, *rbuf;

    ...

    /* The variable comm is set elsewhere in the program
     */
    MPI_Comm_size( comm, &gsize);
    rbuf = (int *)malloc(gsize*100*sizeof(int));
    MPI_Gather( sendarray, 100, MPI_INT, rbuf, 100, MPI_INT, root, comm);
\end{verbatim}

\begin{figure}
\centerline{\hbox{
\psfig{figure=mycoll-fig2.ps,width=3.50in}}}
  \small
  \caption{The root process gathers 100 {\tt int}s from each process
  in the group.
  }
  \label{fig-Example1}
\end{figure}

\subsubsection{Example 2}

Do the same as in previous example, but use a derived datatype.  Note that
the type cannot be the entire set of {\tt gsize*100 int}s since type matching
is defined pairwise between the root and each process in the gather.

\begin{verbatim}
    MPI_Comm comm;
    int gsize,sendarray[100];
    int root, *rbuf;
    MPI_Datatype rtype;

    ...

    /* The variable comm is set elsewhere in the program
     */
    MPI_Comm_size( comm, &gsize);
    MPI_Type_contiguous( 100, MPI_INT, &rtype );
    MPI_Type_commit( &rtype );
    rbuf = (int *)malloc(gsize*100*sizeof(int));
    MPI_Gather( sendarray, 100, MPI_INT, rbuf, 1, rtype, root, comm);
\end{verbatim}

\subsubsection{Example 3}
\label{coll:example3}

Now have each process send 100 ints to root, but place each set (of 100)
{\em stride} ints apart at receiving end. Use \func{MPI\_GATHERV}
and the \mpiarg{displs}
argument to achieve this effect. Assume $stride \geq 100$.
See figure \ref{fig-Example3}.

\begin{verbatim}
    MPI_Comm comm;
    int gsize,sendarray[100];
    int root, *rbuf, stride;
    int *displs,i,*rcounts;

    ...

    MPI_Comm_size( comm, &gsize);
    rbuf = (int *)malloc(gsize*stride*sizeof(int));
    displs = (int *)malloc(gsize*sizeof(int));
    rcounts = (int *)malloc(gsize*sizeof(int));
    for (i=0; i<gsize; ++i) {
        displs[i] = i*stride;
        rcounts[i] = 100;
    }
    MPI_Gatherv( sendarray, 100, MPI_INT, rbuf, rcounts, displs, MPI_INT,
                                                               root, comm);
\end{verbatim}

Note that the program is erroneous if $stride < 100$.

\begin{figure}
\centerline{\hbox{
\psfig{figure=mycoll-fig3.ps,width=3.50in}}}
  \small
  \caption{The root process gathers 100 {\tt int}s from each process
  in the group, each set is placed {\tt stride} ints apart.
  }
  \label{fig-Example3}
\end{figure}

\subsubsection{Example 4}

Same as Example 3 on the receiving side, but send the
100 ints from the 0th column of a
100$\times$150 int array, in C.  See figure \ref{fig-Example4}.

\begin{verbatim}
    MPI_Comm comm;
    int gsize,sendarray[100][150];
    int root, *rbuf, stride;
    MPI_Datatype stype;
    int *displs,i,*rcounts;

    ...

    MPI_Comm_size( comm, &gsize);
    rbuf = (int *)malloc(gsize*stride*sizeof(int));
    displs = (int *)malloc(gsize*sizeof(int));
    rcounts = (int *)malloc(gsize*sizeof(int));
    for (i=0; i<gsize; ++i) {
        displs[i] = i*stride;
        rcounts[i] = 100;
    }
    /* Create datatype for 1 column of array
     */
    MPI_Type_vector( 100, 1, 150, MPI_INT, &stype);
    MPI_Type_commit( &stype );
    MPI_Gatherv( sendarray, 1, stype, rbuf, rcounts, displs, MPI_INT,
                                                             root, comm);
\end{verbatim}

\begin{figure}
\centerline{\hbox{
\psfig{figure=mycoll-fig4.ps,width=4.00in}}}
  \small
  \caption{The root process gathers column {\tt 0} of a 100$\times$150
  C array, and each set is placed {\tt stride} ints apart.
  }
  \label{fig-Example4}
\end{figure}

\subsubsection{Example 5}

Process i sends (100-i) ints from the i-th column of a
100 $\times$ 150 int array, in C.  It is received into a buffer with stride,
as in the previous two examples. See figure \ref{fig-Example5}.

\begin{verbatim}
    MPI_Comm comm;
    int gsize,sendarray[100][150],*sptr;
    int root, *rbuf, stride, myrank;
    MPI_Datatype stype;
    int *displs,i,*rcounts;

    ...

    MPI_Comm_size( comm, &gsize);
    MPI_Comm_rank( comm, &myrank );
    rbuf = (int *)malloc(gsize*stride*sizeof(int));
    displs = (int *)malloc(gsize*sizeof(int));
    rcounts = (int *)malloc(gsize*sizeof(int));
    for (i=0; i<gsize; ++i) {
        displs[i] = i*stride;
        rcounts[i] = 100-i;     /* note change from previous example */
    }
    /* Create datatype for the column we are sending
     */
    MPI_Type_vector( 100-myrank, 1, 150, MPI_INT, &stype);
    MPI_Type_commit( &stype );
    /* sptr is the address of start of "myrank" column
     */
    sptr = &sendarray[0][myrank];
    MPI_Gatherv( sptr, 1, stype, rbuf, rcounts, displs, MPI_INT, root, comm);
\end{verbatim}

Note that a different amount of data is received from each process.

\begin{figure}
\centerline{\hbox{
\psfig{figure=mycoll-fig5.ps,width=4.00in}}}
  \small
  \caption{The root process gathers {100-i} ints from
  column {\tt i} of a 100$\times$150
  C array, and each set is placed {\tt stride} ints apart.
  }
  \label{fig-Example5}
\end{figure}

\subsubsection{Example 6}

Same as Example 5, but done in a different way at the sending end.
We create a datatype that causes the correct striding at the
sending end so that we read a column of a C array --- this
was also done in Example 3, 2nd part, section \ref{subsec:pt2pt-examples}.

\begin{verbatim}
    MPI_Comm comm;
    int gsize,sendarray[100][150],*sptr;
    int root, *rbuf, stride, myrank, disp[2], blocklen[2];
    MPI_Datatype stype,type[2];
    int *displs,i,*rcounts;

    ...

    MPI_Comm_size( comm, &gsize);
    MPI_Comm_rank( comm, &myrank );
    rbuf = (int *)malloc(gsize*stride*sizeof(int));
    displs = (int *)malloc(gsize*sizeof(int));
    rcounts = (int *)malloc(gsize*sizeof(int));
    for (i=0; i<gsize; ++i) {
        displs[i] = i*stride;
        rcounts[i] = 100-i;
    }
    /* Create datatype for one int, with extent of entire row
     */
    disp[0] = 0;       disp[1] = 150*sizeof(int);
    type[0] = MPI_INT; type[1] = MPI_UB;
    blocklen[0] = 1;   blocklen[1] = 1;
    MPI_Type_struct( 2, blocklen, disp, type, &stype );
    MPI_Type_commit( &stype );
    sptr = &sendarray[0][myrank];
    MPI_Gatherv( sptr, 100-myrank, stype, rbuf, rcounts, displs, MPI_INT,
                                                               root, comm);
\end{verbatim}

\subsubsection{Example 7}

Same as Example 5 at sending side, but at receiving side we make the
stride between received blocks vary from block to block.
See figure \ref{fig-Example7}.

\begin{verbatim}
    MPI_Comm comm;
    int gsize,sendarray[100][150],*sptr;
    int root, *rbuf, *stride, myrank, bufsize;
    MPI_Datatype stype;
    int *displs,i,*rcounts,offset;

    ...

    MPI_Comm_size( comm, &gsize);
    MPI_Comm_rank( comm, &myrank );

    stride = (int *)malloc(gsize*sizeof(int));
    ...
    /* stride[i] for i = 0 to gsize-1 is set somehow
     */

    /* set up displs and rcounts vectors first
     */
    displs = (int *)malloc(gsize*sizeof(int));
    rcounts = (int *)malloc(gsize*sizeof(int));
    offset = 0;
    for (i=0; i<gsize; ++i) {
        displs[i] = offset;
        offset += stride[i];
        rcounts[i] = 100-i;
    }
    /* the required buffer size for rbuf is now easily obtained
     */
    bufsize = displs[gsize-1]+rcounts[gsize-1];
    rbuf = (int *)malloc(bufsize*sizeof(int));
    /* Create datatype for the column we are sending
     */
    MPI_Type_vector( 100-myrank, 1, 150, MPI_INT, &stype);
    MPI_Type_commit( &stype );
    sptr = &sendarray[0][myrank];
    MPI_Gatherv( sptr, 1, stype, rbuf, rcounts, displs, MPI_INT,
                                                        root, comm);
\end{verbatim}

\begin{figure}
\centerline{\hbox{
\psfig{figure=mycoll-fig6.ps,width=4.00in}}}
  \small
  \caption{The root process gathers {100-i} ints from
  column {\tt i} of a 100$\times$150
  C array, and each set is placed {\tt stride[i]} ints apart (a varying
stride).
  }
  \label{fig-Example7}
\end{figure}

\subsubsection{Example 8}

Process i sends {\tt num} ints from the i-th column of a
100 $\times$ 150 int array, in C.  The complicating factor is that
the various values of {\tt num} are not known to {\tt root}, so a
separate gather must first be run to find these out.  The data is
placed contiguously at the receiving end.

\begin{verbatim}
    MPI_Comm comm;
    int gsize,sendarray[100][150],*sptr;
    int root, *rbuf, stride, myrank, disp[2], blocklen[2];
    MPI_Datatype stype,type[2];
    int *displs,i,*rcounts,num;

    ...

    MPI_Comm_size( comm, &gsize);
    MPI_Comm_rank( comm, &myrank );

    /* First, gather nums to root
     */
    rcounts = (int *)malloc(gsize*sizeof(int));
    MPI_Gather( &num, 1, MPI_INT, rcounts, 1, MPI_INT, root, comm);
    /* root now has correct rcounts, using these we set displs[] so
     * that data is placed contiguously (or concatenated) at receive end
     */
    displs = (int *)malloc(gsize*sizeof(int));
    displs[0] = 0;
    for (i=1; i<gsize; ++i) {
        displs[i] = displs[i-1]+rcounts[i-1];
    }
    /* And, create receive buffer
     */
    rbuf = (int *)malloc((displs[gsize-1]+rcounts[gsize-1])*sizeof(int));
    /* Create datatype for one int, with extent of entire row
     */
    disp[0] = 0;       disp[1] = 150*sizeof(int);
    type[0] = MPI_INT; type[1] = MPI_UB;
    blocklen[0] = 1;   blocklen[1] = 1;
    MPI_Type_struct( 2, blocklen, disp, type, &stype );
    MPI_Type_commit( &stype );
    sptr = &sendarray[0][myrank];
    MPI_Gatherv( sptr, num, stype, rbuf, rcounts, displs, MPI_INT,
                                                               root, comm);
\end{verbatim}

\subsection{Scatter}
\label{subsec:coll-scatter}

\begin{funcdef}{MPI\_SCATTER( sendbuf, sendcount, sendtype, recvbuf,
recvcount, recvtype, root, comm)}
\funcarg{\IN}{ sendbuf}{ address of send buffer (choice, significant
only at root)}
\funcarg{\IN}{ sendcount}{ number of elements sent to each process
(integer, significant only at root)}
\funcarg{\IN}{ sendtype}{ data type of send buffer elements
(significant only at root) (handle)}
\funcarg{\OUT}{ recvbuf}{ address of receive buffer (choice)}
\funcarg{\IN}{ recvcount}{ number of elements in receive buffer (integer)}
\funcarg{\IN}{ recvtype}{ data type of receive buffer elements (handle)}
\funcarg{\IN}{ root}{  rank of sending process (integer)}
\funcarg{\IN}{ comm}{ communicator (handle)}
\end{funcdef}

\mpibind{MPI\_Scatter(void*~sendbuf, int~sendcount,
MPI\_Datatype~sendtype, void*~recvbuf, int~recvcount,
MPI\_Datatype~recvtype, int~root, MPI\_Comm~comm)}

\mpifbind{MPI\_SCATTER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF,
RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR) \fargs <type> SENDBUF(*),
RECVBUF(*) \\ INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, ROOT,
COMM, IERROR}


\func{ MPI\_SCATTER} is the inverse operation to \func{MPI\_GATHER}.

The root process sends a message to each process (including itself);
each process stores the incoming message in its receive buffer.
The outcome is as if the root executed {\tt n} send operations,
\[\tt
MPI\_Send(sendbuf+i\times sendcount \times extent(sendtype), sendcount,
sendtype, i,...),
\]
and each process executed a receive,
\[\tt
MPI\_Recv(recvbuf, recvcount, recvtype, i,...).
\]

An alternative description is that the root sends a message with
\mpifunc{MPI\_Send(sendbuf, sendcount$\times$n, sendtype, ...)}; this
message is split into {\tt n} equal segments, the $i$-th segment is
sent to the $i$-th process in the group, and each process receives
this message as above.

The type signature associated with \mpiarg{sendcount, sendtype} at the root
must be equal to the type signature associated with
\mpiarg{recvcount, recvtype} at all
processes (however, the type maps may be different).

All arguments to the function are significant on process \mpiarg{root},
while on other processes, only arguments \mpiarg{recvbuf, recvcount,
recvtype, root, comm} are significant.
The arguments \mpiarg{root} and \mpiarg{comm}
must have identical values on all processes.


\begin{funcdef}{MPI\_SCATTERV( sendbuf, sendcounts, displs, sendtype,
recvbuf, recvcount, recvtype, root, comm)}
\funcarg{\IN}{ sendbuf}{ address of send buffer (choice, significant
only at root)}
\funcarg{\IN}{ sendcounts}{ integer array (of length group size)
specifying the number of elements to send to each processor }
\funcarg{\IN}{ displs}{ integer array (of length group size).  Entry
{\tt i} specifies the displacement (relative to \mpiarg{sendbuf}) from
which to take the outgoing data to process {\tt i}}
\funcarg{\IN}{ sendtype}{ data type of send buffer elements (handle)}
\funcarg{\OUT}{ recvbuf}{ address of receive buffer (choice)}
\funcarg{\IN}{ recvcount}{ number of elements in receive buffer (integer)}
\funcarg{\IN}{ recvtype}{ data type of receive buffer elements (handle)}
\funcarg{\IN}{ root}{  rank of sending process (integer)}
\funcarg{\IN}{ comm}{ communicator (handle)}
\end{funcdef}

\mpibind{MPI\_Scatterv(void*~sendbuf, int~*sendcounts, int~*displs,
MPI\_Datatype~sendtype, void*~recvbuf, int~recvcount,
MPI\_Datatype~recvtype, int~root, MPI\_Comm~comm)}

\mpifbind{MPI\_SCATTERV(SENDBUF, SENDCOUNTS, DISPLS, SENDTYPE, RECVBUF,
RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR) \fargs <type> SENDBUF(*),
RECVBUF(*) \\ INTEGER SENDCOUNTS(*), DISPLS(*), SENDTYPE, RECVCOUNT,
RECVTYPE, ROOT, COMM, IERROR}


\func{ MPI\_SCATTERV} is the inverse operation to \func{MPI\_GATHERV}.

\func{MPI\_SCATTERV} extends the functionality of \func{MPI\_SCATTER}
by allowing a varying count of data to be sent to each process
(since \mpiarg{sendcounts}
is now an array), and also allows more flexibility as to where the data
is taken from on the root (by providing the new argument, \mpiarg{displs}).

The outcome is as if the root executed {\tt n} send operations,
\[\tt
MPI\_Send(sendbuf+displs[i]\times extent(sendtype), sendcounts[i],
sendtype, i,...),
\]
and each process executed a receive,
\[\tt
MPI\_Recv(recvbuf, recvcount, recvtype, i,...).
\]

The type signature implied by \mpiarg{sendcounts[i], sendtype} at the root
must be equal to the type signature implied by
\mpiarg{recvcount, recvtype} at process
{\tt i} (however, the type maps may be different).

All arguments to the function are significant on process \mpiarg{root},
while on other processes, only arguments \mpiarg{recvbuf, recvcount,
recvtype, root, comm} are significant.
The arguments \mpiarg{root} and \mpiarg{comm}
must have identical values on all processes.

For both functions, the specification of count(s), type(s), (displacements),
should not cause any location on the root to be read more than
once.

\subsection{Examples of Usage of \func{MPI\_SCATTER}, \func{MPI\_SCATTERV}}

\subsubsection{Example 9}

The reverse of Example 1.
Scatter sets of 100 ints from the root to each process in the group.
See figure \ref{fig-Example9}.

\begin{verbatim}
    MPI_Comm comm;
    int gsize,*sendbuf;
    int root, rbuf[100];

    ...

    /* The variable comm is set elsewhere in the program
     */
    MPI_Comm_size( comm, &gsize);
    sendbuf = (int *)malloc(gsize*100*sizeof(int));
    MPI_Scatter( sendbuf, 100, MPI_INT, rbuf, 100, MPI_INT, root, comm);
\end{verbatim}

\begin{figure}
\centerline{\hbox{
\psfig{figure=mycoll-fig7.ps,width=3.50in}}}
  \small
  \caption{The root process scatters sets of 100 {\tt int}s to each process
  in the group.
  }
  \label{fig-Example9}
\end{figure}

\subsubsection{Example 10}
\label{coll:example10}

The reverse of Example 3.
The root process scatters sets of 100 ints to the other processes,
but the sets of 100 are {\em stride} ints apart in the sending buffer.
Requires use of \func{MPI\_SCATTERV}.
Assume $stride \geq 100$.  See figure \ref{fig-Example10}.

\begin{verbatim}
    MPI_Comm comm;
    int gsize,*sendbuf;
    int root, rbuf[100], i, *displs, *scounts, stride;

    ...

    MPI_Comm_size( comm, &gsize);
    sendbuf = (int *)malloc(gsize*stride*sizeof(int));
    displs = (int *)malloc(gsize*sizeof(int));
    scounts = (int *)malloc(gsize*sizeof(int));
    for (i=0; i<gsize; ++i) {
        displs[i] = i*stride;
        scounts[i] = 100;
    }
    MPI_Scatterv( sendbuf, scounts, displs, MPI_INT, rbuf, 100, MPI_INT,
                                                              root, comm);
\end{verbatim}

\begin{figure}
\centerline{\hbox{
\psfig{figure=mycoll-fig8.ps,width=3.50in}}}
  \small
  \caption{The root process scatters sets of 100 {\tt int}s, moving by
  {\tt stride} ints from send to send in the scatter.
  }
  \label{fig-Example10}
\end{figure}

\subsubsection{Example 11}

The reverse of Example 7.
We have a varying stride between blocks at sending (root) side,
at the receiving side we receive into the i-th column of a 100$\times$150
C array.
See figure \ref{fig-Example11}.

\begin{verbatim}
    MPI_Comm comm;
    int gsize,recvarray[100][150],*rptr;
    int root, *sendbuf, myrank, bufsize, *stride;
    MPI_Datatype rtype;
    int i, *displs, *scounts, offset;

    ...

    MPI_Comm_size( comm, &gsize);
    MPI_Comm_rank( comm, &myrank );

    stride = (int *)malloc(gsize*sizeof(int));
    ...
    /* stride[i] for i = 0 to gsize-1 is set somehow
     */

    /* sendbuf comes from elsewhere
     */
    ...
    displs = (int *)malloc(gsize*sizeof(int));
    scounts = (int *)malloc(gsize*sizeof(int));
    offset = 0;
    for (i=0; i<gsize; ++i) {
        displs[i] = offset;
        offset += stride[i];
        scounts[i] = 100 - i;
    }
    /* Create datatype for the column we are receiving
     */
    MPI_Type_vector( 100-myrank, 1, 150, MPI_INT, &rtype);
    MPI_Type_commit( &rtype );
    rptr = &recvarray[0][myrank];
    MPI_Scatterv( sendbuf, scounts, displs, MPI_INT, rptr, 1, rtype,
                                                            root, comm);

\end{verbatim}

\begin{figure}
\centerline{\hbox{
\psfig{figure=mycoll-fig9.ps,width=4.00in}}}
  \small
  \caption{The root scatters blocks of {100-i} ints into
  column {\tt i} of a 100$\times$150
  C array.  At the sending side, the blocks are {\tt stride[i]} ints apart.
  }
  \label{fig-Example11}
\end{figure}

\subsection{Gather-to-all}
\label{subsec:coll-allcast}

\begin{funcdef}{MPI\_ALLGATHER( sendbuf, sendcount, sendtype, recvbuf,
recvcount, recvtype, comm)}
\funcarg{\IN}{ sendbuf}{ starting address of send buffer (choice)}
\funcarg{\IN}{ sendcount}{ number of elements in send buffer (integer)}
\funcarg{\IN}{ sendtype}{ data type of send buffer elements (handle)}
\funcarg{\OUT}{ recvbuf}{ address of receive buffer (choice)}
\funcarg{\IN}{ recvcount}{ number of elements received from any
process (integer)}
\funcarg{\IN}{ recvtype}{ data type of receive buffer elements (handle)}
\funcarg{\IN}{ comm}{  communicator (handle)}
\end{funcdef}

\mpibind{MPI\_Allgather(void*~sendbuf, int~sendcount,
MPI\_Datatype~sendtype, void*~recvbuf, int~recvcount,
MPI\_Datatype~recvtype, MPI\_Comm~comm)}

\mpifbind{MPI\_ALLGATHER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF,
RECVCOUNT, RECVTYPE, COMM, IERROR) \fargs <type> SENDBUF(*), RECVBUF(*)
\\ INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, COMM, IERROR}


\func{MPI\_ALLGATHER} can be thought of as \func{MPI\_GATHER}, but
where all processes receive the result, instead of just the root.
The {\tt j}-th block of data sent from each process is received
by every process and placed in the {\tt j}-th block of the
buffer \mpiarg{recvbuf}.

The type signature associated with \mpiarg{sendcount, sendtype},
at a process must be equal to the type signature associated with
\mpiarg{recvcount, recvtype} at any other process.

Thus, the outcome of a call to \mpifunc{MPI\_ALLGATHER(...)} is as if
all processes executed {\tt n} calls to
\[\tt
MPI\_GATHER(sendbuf,sendcount,sendtype,recvbuf,recvcount,recvtype,root,comm),
\]
for $\tt root = 0 , \cdots, n-1$.
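A sketch analogous to Example 1, but with every process, not just the
root, ending up with the full result (hypothetical; it assumes the
predefined communicator {\tt MPI\_COMM\_WORLD}):

```c
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int gsize, myrank, i;
    int sendarray[100];
    int *rbuf;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &gsize);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    for (i = 0; i < 100; ++i)
        sendarray[i] = myrank;   /* this process's contribution */
    rbuf = (int *)malloc(gsize*100*sizeof(int));
    /* each process receives all gsize blocks, in rank order */
    MPI_Allgather(sendarray, 100, MPI_INT, rbuf, 100, MPI_INT,
                  MPI_COMM_WORLD);
    free(rbuf);
    MPI_Finalize();
    return 0;
}
```

Note that \mpiarg{recvcount} is the count received from each single
process, not the total.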

\begin{funcdef}{MPI\_ALLGATHERV( sendbuf, sendcount, sendtype, recvbuf,
recvcounts, displs, recvtype, comm)}
\funcarg{\IN}{ sendbuf}{ starting address of send buffer (choice)}
\funcarg{\IN}{ sendcount}{ number of elements in send buffer (integer)}
\funcarg{\IN}{ sendtype}{ data type of send buffer elements (handle)}
\funcarg{\OUT}{ recvbuf}{ address of receive buffer (choice)}
\funcarg{\IN}{ recvcounts}{ integer array (of length group size)
containing the number of elements that are received from each process}
\funcarg{\IN}{ displs}{ integer array (of length group size).  Entry
{\tt i} specifies the displacement (relative to \mpiarg{recvbuf}) at
which to place the incoming data from process {\tt i}}
\funcarg{\IN}{ recvtype}{ data type of receive buffer elements (handle)}
\funcarg{\IN}{ comm}{  communicator (handle)}
\end{funcdef}

\mpibind{MPI\_Allgatherv(void*~sendbuf, int~sendcount,
MPI\_Datatype~sendtype, void*~recvbuf, int~*recvcounts, int~*displs,
MPI\_Datatype~recvtype, MPI\_Comm~comm)}

\mpifbind{MPI\_ALLGATHERV(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF,
RECVCOUNTS, DISPLS, RECVTYPE, COMM, IERROR) \fargs <type> SENDBUF(*),
RECVBUF(*) \\ INTEGER SENDCOUNT, SENDTYPE, RECVCOUNTS(*), DISPLS(*),
RECVTYPE, COMM, IERROR}


\func{MPI\_ALLGATHERV} can be thought of as \func{MPI\_GATHERV}, but
where all processes receive the result, instead of just the root.
The {\tt j}-th block of data sent from each process is received
by every process and placed in the {\tt j}-th block of the
buffer \mpiarg{recvbuf}.  These blocks need not all be the same size.

The type signature associated with \mpiarg{sendcount, sendtype},
at process {\tt j} must be equal to the type signature associated with
\mpiarg{recvcounts[j], recvtype} at any other process.

The outcome is as if all processes executed calls to
\[\tt
MPI\_GATHERV(sendbuf,sendcount,sendtype,recvbuf,displs,recvcounts,recvtype,
\]
\[\tt
root,comm),
\]
for $\tt root = 0 , \cdots, n-1$.

For both \func{MPI\_ALLGATHER} and \func{MPI\_ALLGATHERV}, all arguments
on all processes are significant.  The argument \mpiarg{comm}
must have identical values on all processes.

\subsection{All-to-All Scatter/Gather}
\label{subsec:coll-alltoall}

\begin{funcdef}{MPI\_ALLTOALL(sendbuf, sendcount, sendtype, recvbuf,
recvcount, recvtype, comm)}
\funcarg{\IN}{ sendbuf}{ starting address of send buffer (choice)}
\funcarg{\IN}{ sendcount}{ number of elements in send buffer (integer)}
\funcarg{\IN}{ sendtype}{ data type of send buffer elements (handle)}
\funcarg{\OUT}{ recvbuf}{ address of receive buffer (choice)}
\funcarg{\IN}{ recvcount}{ number of elements received from any
process (integer)}
\funcarg{\IN}{ recvtype}{ data type of receive buffer elements (handle)}
\funcarg{\IN}{ comm}{ communicator (handle)}
\end{funcdef}

\mpibind{MPI\_Alltoall(void*~sendbuf, int~sendcount,
MPI\_Datatype~sendtype, void*~recvbuf, int~recvcount,
MPI\_Datatype~recvtype, MPI\_Comm~comm)}

\mpifbind{MPI\_ALLTOALL(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF,
RECVCOUNT, RECVTYPE, COMM, IERROR) \fargs <type> SENDBUF(*), RECVBUF(*)
\\ INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, COMM, IERROR}


\func{MPI\_ALLTOALL} is an extension of \func{MPI\_ALLGATHER} to the case
where each process sends distinct data to each of the receivers.
The {\tt j}-th block sent from process {\tt i} is received by process {\tt j}
and is placed in the {\tt i}-th block of \mpiarg{recvbuf}.

The type signature associated with \mpiarg{sendcount, sendtype}
at a process must be equal to the type signature associated with
\mpiarg{recvcount, recvtype} at any other process.

The outcome is as if each process executed a send to each
process (itself included)
with a call to
\[\tt
MPI\_Send(sendbuf+i\times sendcount \times
extent(sendtype),sendcount,sendtype,i, ...),
\]
and a receive from every other process
with a call to
\[\tt
MPI\_Recv(recvbuf+i\times recvcount \times
extent(recvtype),recvcount,recvtype,i,...).
\]
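The index arithmetic above can be illustrated by a serial sketch (a
stand-in for the parallel operation, not MPI itself), with one array per
simulated process and integer elements.

```c
#include <assert.h>

/* Serial sketch of the MPI_ALLTOALL rule: block j of sender i's
 * buffer becomes block i of receiver j's buffer; every block
 * holds `count` ints. */
static void sim_alltoall(int n, int count,
                         const int *const sendbufs[], int *const recvbufs[])
{
    for (int i = 0; i < n; ++i)        /* sender   */
        for (int j = 0; j < n; ++j)    /* receiver */
            for (int k = 0; k < count; ++k)
                recvbufs[j][i * count + k] = sendbufs[i][j * count + k];
}
```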

\begin{funcdef}{MPI\_ALLTOALLV(sendbuf, sendcounts, sdispls, sendtype,
recvbuf, recvcounts, rdispls, recvtype, comm)}
\funcarg{\IN}{ sendbuf}{ starting address of send buffer (choice)}
\funcarg{\IN}{ sendcounts}{ integer array (of length group size)
specifying the number of elements to send to each process}
\funcarg{\IN}{ sdispls}{ integer array (of length group size).  Entry
{\tt j} specifies the displacement (relative to \mpiarg{sendbuf}) from
which to take the outgoing data destined for process {\tt j}}
\funcarg{\IN}{ sendtype}{ data type of send buffer elements (handle)}
\funcarg{\OUT}{ recvbuf}{ address of receive buffer (choice)}
\funcarg{\IN}{ recvcounts}{ integer array (of length group size)
specifying the maximum number of elements that can be received from
each process}
\funcarg{\IN}{ rdispls}{ integer array (of length group size).  Entry
{\tt i} specifies the displacement (relative to \mpiarg{recvbuf}) at
which to place the incoming data from process {\tt i}}
\funcarg{\IN}{ recvtype}{ data type of receive buffer elements (handle)}
\funcarg{\IN}{ comm}{ communicator (handle)}
\end{funcdef}

\mpibind{MPI\_Alltoallv(void*~sendbuf, int~*sendcounts, int~*sdispls,
MPI\_Datatype~sendtype, void*~recvbuf, int~*recvcounts, int~*rdispls,
MPI\_Datatype~recvtype, MPI\_Comm~comm)}

\mpifbind{MPI\_ALLTOALLV(SENDBUF, SENDCOUNTS, SDISPLS, SENDTYPE,
RECVBUF, RECVCOUNTS, RDISPLS, RECVTYPE, COMM, IERROR) \fargs <type>
SENDBUF(*), RECVBUF(*) \\ INTEGER SENDCOUNTS(*), SDISPLS(*), SENDTYPE,
RECVCOUNTS(*), RDISPLS(*), RECVTYPE, COMM, IERROR}


\func{MPI\_ALLTOALLV} adds flexibility to \func{MPI\_ALLTOALL} in that
the location of data for the send is specified by \mpiarg{sdispls}
and the location of the placement of the data on the receive side
is specified by \mpiarg{rdispls}.

The {\tt j}-th block sent from process {\tt i} is received by process {\tt j}
and is placed in the {\tt i}-th block of \mpiarg{recvbuf}.  These blocks
need not all have the same size.

The type signature associated with
\mpiarg{sendcounts[j], sendtype} at process {\tt i} must be equal
to the type signature
associated with \mpiarg{recvcounts[i], recvtype} at process {\tt j}.

The outcome is as if each process sent a message to each other process
with a call to
\[\tt
MPI\_Send(sendbuf+sdispls[i],sendcounts[i],sendtype,i,...),
\]
and received a message from every other process with
a call to
\[\tt
MPI\_Recv(recvbuf+rdispls[i],recvcounts[i],recvtype,i,...).
\]

For both \func{MPI\_ALLTOALL} and \func{MPI\_ALLTOALLV}, all arguments
on all processes are significant.  The argument \mpiarg{comm}
must have identical values on all processes.
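A serial sketch of the vector variant follows (illustrative, not MPI).
For simplicity all counts and displacements are packed into
$n \times n$ arrays: \const{sendcounts[i*n+j]} is what process {\tt i}
sends to process {\tt j}, and \const{rdispls[j*n+i]} is where process
{\tt j} places the block arriving from process {\tt i}.

```c
#include <assert.h>

/* Serial sketch of MPI_ALLTOALLV: copy sendcounts[i*n+j] elements
 * starting at sdispls[i*n+j] of sender i's buffer into receiver j's
 * buffer at rdispls[j*n+i].  Blocks may differ in size. */
static void sim_alltoallv(int n,
                          const int *const sendbufs[], int *const recvbufs[],
                          const int sendcounts[], const int sdispls[],
                          const int rdispls[])
{
    for (int i = 0; i < n; ++i)        /* sender   */
        for (int j = 0; j < n; ++j)    /* receiver */
            for (int k = 0; k < sendcounts[i * n + j]; ++k)
                recvbufs[j][rdispls[j * n + i] + k] =
                    sendbufs[i][sdispls[i * n + j] + k];
}
```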

\discuss{
The definition of the \mpifunc{MPI\_xxxV} operations gives as much
flexibility as one would achieve by specifying {\tt n} independent point to
point communications, with one exception: all messages use the same
datatype, and messages are scattered from (or gathered to) sequential
storage.}

\implement{
Although the discussion of collective communication in terms of point
to point operation implies that each message is transferred directly
from sender to receiver, implementations may use a tree communication
pattern, where messages are forwarded by intermediate nodes where they
are split (for scatter) or concatenated (for gather), if this
is more efficient.
}
\section{Global Compute Operations}

The functions in this section perform one of the following operations
across all the members of a group:
\begin{enumerate}
\item[] global max on integer and floating point data types
\item[] global min on integer and floating point data types
\item[] global sum on integer and floating point data types
\item[] global product on integer and floating point data types
\item[] global AND on logical and integer data types
\item[] global OR on logical and integer data types
\item[] global XOR on logical and integer data types
\item[] rank of process with maximum value
\item[] rank of process with minimum value
\item[] user defined (associative) operation
\item[] user defined (associative and commutative) operation
\end{enumerate}

\subsection{Reduce}
\label{subsec:coll-reduce}

\begin{funcdef}{MPI\_REDUCE( sendbuf, recvbuf, count, datatype, op,
root, comm)}
\funcarg{\IN}{ sendbuf}{ address of send buffer (choice)}
\funcarg{\OUT}{ recvbuf}{ address of receive buffer (choice,
significant only at root)}
\funcarg{\IN}{ count}{ number of elements in send buffer (integer)}
\funcarg{\IN}{ datatype}{ data type of elements of send buffer (basic
types only) (handle)}
\funcarg{\IN}{ op}{ operation (state)}
\funcarg{\IN}{ root}{ rank of root process (integer)}
\funcarg{\IN}{ comm}{  communicator (handle)}
\end{funcdef}

\mpibind{MPI\_Reduce(void*~sendbuf, void*~recvbuf, int~count,
MPI\_Datatype~datatype, MPI\_Op~op, int~root, MPI\_Comm~comm)}

\mpifbind{MPI\_REDUCE(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, ROOT,
COMM, IERROR) \fargs <type> SENDBUF(*), RECVBUF(*) \\ INTEGER COUNT,
DATATYPE, OP, ROOT, COMM, IERROR}


Combines the values provided in the send buffer of each process in the
group, using the operation \mpiarg{ op}, and returns the combined value in
the receive buffer of the process with rank \mpiarg{ root}.
The routine is called by all group members using the same arguments
for \mpiarg{ count, datatype, op, root} and \mpiarg{ comm}.
Each process can provide one value, or a sequence of values, in which case the
combine operation is executed element-wise on each entry of the sequence.
For example, if the operation is \const{MPI\_MAX} and the send buffer
contains two
floating point numbers, then recvbuf(1) $=$ global max(sendbuf(1)) and
recvbuf(2) $=$ global max(sendbuf(2)). All send
buffers should define sequences of equal length of entries all of the same
data type, where the type is a {\bf basic} MPI datatype and
one of those allowed for operands of \mpiarg{ op} (see below).
For all operations, the number and type of elements in the send buffer
must be the same as for the receive buffer.

%For \const{ MPI\_MINLOC} and \const{ MPI\_MAXLOC}, the receive buffer will
%be filled with \mpiarg{ count} integers (ranks).

The operation
defined by \mpiarg{ op} is associative and commutative, and the
implementation can
take advantage of associativity and commutativity in order to change
order of evaluation.
\change
This may change the result of the reduction, for operations that are not
strictly associative and commutative, such as floating point addition.
\mpifunc{MPI\_REDUCE} should be used only when such changes are acceptable.
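The point about floating point addition can be seen in a few lines of C
(a self-contained illustration, not part of MPI): summing the same three
doubles in two different association orders can give different results.

```c
#include <assert.h>

/* Floating-point addition is not associative: (a + b) + c and
 * a + (b + c) may round differently, which is why a reduction that
 * reorders evaluation can change the result. */
static double sum_left(double a, double b, double c)  { return (a + b) + c; }
static double sum_right(double a, double b, double c) { return a + (b + c); }
```

With exactly representable small integers the two orders agree, but when
a large and a small magnitude mix, rounding makes them differ.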

\implement{
It is strongly recommended that \mpifunc{MPI\_REDUCE} be implemented so
that the
same result be obtained whenever the function is applied on the same arguments,
appearing in the same order.  Note that this may
prevent optimizations that take
advantage of the physical location of processors.
}

We list below the supported options for \mpiarg{ op}.
\begin{constlist}
\constitem{MPI\_MAX}{ maximum}
\constitem{MPI\_MIN}{ minimum}
\constitem{MPI\_SUM}{ sum}
\constitem{MPI\_PROD}{ product}
\constitem{MPI\_LAND}{ logical and }
\constitem{MPI\_BAND}{ bit-wise and }
\constitem{MPI\_LOR}{ logical or }
\constitem{MPI\_BOR}{ bit-wise or }
\constitem{MPI\_LXOR}{ logical xor}
\constitem{MPI\_BXOR}{ bit-wise xor}
\constitem{MPI\_MAXLOC}{ maximum value and rank of process with it}
\constitem{MPI\_MINLOC}{ minimum value and rank of process with it}
\end{constlist}

\commentOut{
\const{ MPI\_MAXLOC} and \const{ MPI\_MINLOC} return
(explicitly)
the {\tt rank} of the process containing the maximum (or minimum) value.
In the case of ties, that is, equal values, the lower rank is always
returned.

\const{ MPI\_MAXLOC} and
\const{ MPI\_MINLOC} can be thought of as commutative and associative
binary operations that apply
on pairs {\tt (v, i)}: \const{ MPI\_MINLOC} returns the smaller pair in
lexicographic order, and similarly for \const{ MPI\_MAXLOC}.
The input for
{\tt i} is the rank of the calling process
that is passed implicitly; only {\tt i} is returned in the output.
} %endcommentOut

The \const{ MPI\_MINLOC} (\const{ MPI\_MAXLOC}) operations return
both minimum (maximum) values and the ranks of processes containing those
values.  The potentially mixed-type nature of the output buffer
is a concern, so MPI treats the buffers uniformly
and coerces the ranks to the same type as the values.
When \const{ MPI\_MINLOC} or \const{ MPI\_MAXLOC}
are invoked, the input buffer should
contain $m$ elements of a data type to which the operation
\const{ MPI\_MIN} or
\const{ MPI\_MAX} can be applied, followed by space for another $m$ elements
of the same type.  Internally, the function will coerce the rank of the
calling process (an integer type) to the type of the data values, and carry
these values along during the reduction.
The operation returns at the root the
$m$ minimum (or maximum) values, followed by the $m$ ranks of the processes
containing these values.  To recover the ranks of the processes as integers,
the second set of values can now be copied out of the buffer and coerced
to integer values.
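The buffer convention just described can be sketched serially in C
(hypothetical helper, not MPI): each contribution is $2m$ doubles, the
$m$ values followed by the contributing rank coerced to \const{double},
and the combining step keeps the larger value and, on ties, the lower rank.

```c
#include <assert.h>

/* Sketch of the MAXLOC buffer convention: a[] and b[] each hold m
 * values followed by m ranks coerced to double.  Merge b into a,
 * keeping the larger value and, on equal values, the lower rank. */
static void maxloc_combine(double a[], const double b[], int m)
{
    for (int i = 0; i < m; ++i) {
        if (b[i] > a[i] || (b[i] == a[i] && b[m + i] < a[m + i])) {
            a[i]     = b[i];
            a[m + i] = b[m + i];
        }
    }
}
```

At the end, the second half of the surviving buffer is copied out and
coerced back to integer ranks, as the text describes.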

\discuss{
This solution will run into trouble if the values are of type \const{MPI\_CHAR}
and the group contains more than 128 processes.  It forces two superfluous type
conversions: e.g., MPI converts ranks to floating point and, next, the user
converts back to integer. Besides, it's ugly. Any better suggestions?
}


The operation that defines \const{ MPI\_MAXLOC} is

\[
\left( \begin{array}{c} u \\ i \end{array} \right)
\circ
\left( \begin{array}{c} v \\ j \end{array} \right)
=
\left( \begin{array}{c} w \\ k \end{array} \right)
\]
where
\[
w = \max (u,v)
\]
and
\[
k = \left\{ \begin{array}{ll}
    i & \mbox{if $u > v$} \\
    \min(i,j) & \mbox{if $u=v$} \\
    j & \mbox{if $u < v$}
\end{array}
\right.
\]

Note that ties are resolved in favor of the process with lower
rank and hence this operation is associative and commutative.

\const{ MPI\_MINLOC} is defined similarly:

\[
\left( \begin{array}{c} u \\ i \end{array} \right)
\circ
\left( \begin{array}{c} v \\ j \end{array} \right)
=
\left( \begin{array}{c} w \\ k \end{array} \right)
\]
where
\[
w = \min (u,v)
\]
and
\[
k = \left\{ \begin{array}{ll}
    i & \mbox{if $u < v$} \\
    \min(i,j) & \mbox{if $u=v$} \\
    j & \mbox{if $u > v$}
\end{array}
\right.
\]


\subsubsection{Operation / Type Compatibility}

Not every option for \mpiarg{op} in \func{MPI\_REDUCE} applies
to every MPI basic datatype.  We enumerate the allowed
combinations here.  First, define groups of MPI basic datatypes
in the following way.

\begin{description}
\item[C integer:]{\tt MPI\_CHAR,
MPI\_INT, MPI\_LONG, MPI\_SHORT, \linebreak
MPI\_UNSIGNED\_SHORT, MPI\_UNSIGNED, MPI\_UNSIGNED\_LONG}
\item[Fortran integer:]{\tt MPI\_INTEGER}
\item[Floating point:]{\tt MPI\_FLOAT, MPI\_DOUBLE,
MPI\_REAL, \linebreak MPI\_DOUBLE\_PRECISION}
\item[Logical:]{\tt MPI\_LOGICAL}
\item[Complex:]{\tt MPI\_COMPLEX}
\item[Byte:]{\tt MPI\_BYTE}
\end{description}

Now, the valid datatypes for each option are specified below.

\begin{constlist}
\constitem{MPI\_MAX, MPI\_MIN, MPI\_MAXLOC, MPI\_MINLOC}{ C integer,
Fortran integer, Floating point}
\constitem{MPI\_SUM, MPI\_PROD}{  C integer, Fortran integer, Floating
point, Complex}
\constitem{MPI\_LAND, MPI\_LOR, MPI\_LXOR}{ C integer, Logical}
\constitem{MPI\_BAND, MPI\_BOR, MPI\_BXOR}{ C integer, Byte}
\end{constlist}

\subsubsection{Example of Reduce}

Each process has an array of 30 {\tt double}s, in C.  For each
of the 30 locations, compute the value and rank of the process containing
the largest value.

\begin{verbatim}
    ...
    /* each process has an array of 30 double: a[]
     */
    double a[30];
    double in[60],out[60];
    int i,outranks[30];

    for (i=0; i<30; ++i) {
        in[i] = a[i];
    }
    MPI_Reduce( in, out, 30, MPI_DOUBLE, MPI_MAXLOC, root, comm );
    /* At this point, the answer resides on process root
     */
    if (myrank == root) {
        /* read ranks out and coerce back to ints
         */
        for (i=0; i<30; ++i) {
            outranks[i] = out[30+i];
        }
    }
\end{verbatim}

\subsection{User-Reduce}
\label{subsec:coll-user-reduce}

\begin{funcdef}{MPI\_USER\_REDUCE( sendbuf, recvbuf, count, datatype,
function, root, comm)}
\funcarg{\IN}{ sendbuf}{ starting address of send buffer (choice)}
\funcarg{\OUT}{ recvbuf}{ starting address of receive buffer (choice,
significant only at root)}
\funcarg{\IN}{ count}{ number of elements in input buffer (integer)}
\funcarg{\IN}{ datatype}{ data type of elements of input buffer (handle)}
\funcarg{\IN}{ function}{ user defined function (function)}
\funcarg{\IN}{ root}{ rank of root process (integer)}
\funcarg{\IN}{ comm}{ communicator (handle)}
\end{funcdef}

\change
\mpibind{MPI\_User\_reduce(void*~sendbuf, void*~recvbuf, int~count,
MPI\_Datatype~datatype, MPI\_Uop~function, int~root, MPI\_Comm~comm)}

\mpifbind{MPI\_USER\_REDUCE(SENDBUF, RECVBUF, COUNT, DATATYPE,
FUNCTION, ROOT, COMM, IERROR) \fargs <type> SENDBUF(*), RECVBUF(*) \\
EXTERNAL FUNCTION \\ INTEGER COUNT, DATATYPE, ROOT, COMM, IERROR}

Similar to the reduce operation function above except that a user
supplied function is used.  \mpiarg{function} is a
function with three arguments.   The C type for such a function is

\begin{verbatim}
typedef void MPI_Uop( void *invec, void *inoutvec, int *len );
\end{verbatim}

If the function is passed actual arguments
\mpiarg{(void *)invec, (void *)inoutvec, len} then
\mpiarg{ *invec} and \mpiarg{ *inoutvec} should be arrays with
\mpiarg{ *len} values.
The type of the elements of \mpiarg{ *invec}
and of \mpiarg{ *inoutvec} match the type of
the elements of the send buffers and the receive buffer.
The function computes element-wise a commutative and
associative operation on each pair of entries and
returns the result in \mpiarg{ *inoutvec}.
A pseudo-code for \mpiarg{ function} is given below, where {\tt op} is the
commutative and associative operation defined by \mpiarg{ function}.

\begin{verbatim}
            for(i=0; i < *len; i++) {
                    inoutvec[i] op= invec[i]
            }
\end{verbatim}

No MPI functions may be called inside the user defined function.

The corresponding Fortran declaration is

\begin{verbatim}
SUBROUTINE UOP(INVEC, INOUTVEC, LEN)
<type> INVEC(LEN), INOUTVEC(LEN)
INTEGER LEN
\end{verbatim}

\begin{rationale}
The addition of the third argument,
\mpiarg{ len}, in \mpiarg{ function} allows the
system to avoid calling \mpiarg{ function} for each
element in the input buffer.
Rather, the system can choose to apply
\mpiarg{ function} to chunks of input.
\end{rationale}

\discuss{
One could take advantage of the more lenient C typing rules and declare
the user function to be of type \const{void Uop()}; such a declaration
would not constrain the types of the arguments and would avoid an
additional typecast within the function body.  However, such usage is not
compatible with C++.
}

\subsubsection{Example of User-Reduce}

Compute the product of an array of complex numbers, in C.

\begin{verbatim}
typedef struct {
    double real,imag;
} Complex;

/* the user-defined function
 */

void myProd( void *in, void *inout, int *len )
{
    int i;
    Complex c, *invec, *inoutvec;

    invec = (Complex *)in;
    inoutvec = (Complex *)inout;
    for (i=0; i< *len; ++i) {
        c.real = inoutvec->real*invec->real -
                   inoutvec->imag*invec->imag;
        c.imag = inoutvec->real*invec->imag +
                   inoutvec->imag*invec->real;
        *inoutvec = c;
        invec++; inoutvec++;
    }
}

/* and, to call it...
 */
...

    /* each process has an array of 100 Complexes
     */
    Complex a[100], answer[100];
    MPI_Datatype ctype;

    /* explain to MPI how type Complex is defined
     */
    MPI_Type_contiguous( 2, MPI_DOUBLE, &ctype );
    MPI_Type_commit( &ctype );

    MPI_User_reduce( a, answer, 100, ctype, myProd, root, comm );

    /* At this point, the answer, which consists of 100 Complexes,
     * resides on process root
     */
\end{verbatim}

\begin{funcdef}{MPI\_USER\_REDUCEA( sendbuf, recvbuf, count, datatype,
function, root, comm)}
\funcarg{\IN}{ sendbuf}{ starting address of send buffer (choice)}
\funcarg{\OUT}{ recvbuf}{ starting address of receive buffer (choice,
significant only at root)}
\funcarg{\IN}{ count}{ number of elements in input buffer (integer)}
\funcarg{\IN}{ datatype}{ data type of elements of input buffer (handle)}
\funcarg{\IN}{ function}{ user defined function (function)}
\funcarg{\IN}{ root}{ rank of root process (integer)}
\funcarg{\IN}{ comm}{ communicator (handle)}
\end{funcdef}

\mpibind{MPI\_User\_reducea(void*~sendbuf, void*~recvbuf, int~count,
MPI\_Datatype~datatype, MPI\_Uop~function, int~root, MPI\_Comm~comm)}

\mpifbind{MPI\_USER\_REDUCEA(SENDBUF, RECVBUF, COUNT, DATATYPE,
FUNCTION, ROOT, COMM, IERROR) \fargs <type> SENDBUF(*), RECVBUF(*) \\
EXTERNAL FUNCTION \\ INTEGER COUNT, DATATYPE, ROOT, COMM, IERROR}


Identical to \func{MPI\_USER\_REDUCE}, except
that the operation defined by \mpiarg{ function}
is not required to be commutative, but only associative.  Thus, the
order of computation can be modified only using associativity.
Use of this function means that the implementation {\em cannot}
reorder the computation of the reduce by assuming commutativity.

\implement{

\func{MPI\_USER\_REDUCEA} and
\func{MPI\_USER\_REDUCE} can have identical implementations,
if one does not wish to take advantage of commutativity.
}


\commentOut{
\implement{ The addition of the third argument,
\mpiarg{ *len} in \mpiarg{ function} allow the
system to avoid calling
\mpiarg{ function} for each element in the input buffer;
rather, the system can
choose to apply \mpiarg{ function} to chunks of inputs, where
the size of the chunk is chosen by the system so as to optimize
communication and computation pipelining.  E.g., \mpiarg{ *len}
could be set to be
the typical packet size in the communication subsystem.
}
} %end commentOut

\subsection{All-Reduce}
\label{subsec:coll-all-reduce}

MPI includes variants of each of the reduce operations
where the result is known to all processes in the group on return.
MPI requires that all processes participating in any of the
all-reduce operations receive exactly identical results.

\begin{funcdef}{MPI\_ALLREDUCE( sendbuf, recvbuf, count, datatype, op, comm)}
\funcarg{\IN}{ sendbuf}{ starting address of send buffer (choice)}
\funcarg{\OUT}{ recvbuf}{ starting address of receive buffer (choice)}
\funcarg{\IN}{ count}{ number of elements in send buffer (integer)}
\funcarg{\IN}{ datatype}{ data type of elements of send buffer (handle)}
\funcarg{\IN}{ op}{ operation (state)}
\funcarg{\IN}{ comm}{ communicator (handle)}
\end{funcdef}

\mpibind{MPI\_Allreduce(void*~sendbuf, void*~recvbuf, int~count,
MPI\_Datatype~datatype, MPI\_Op~op, MPI\_Comm~comm)}

\mpifbind{MPI\_ALLREDUCE(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, COMM,
IERROR) \fargs <type> SENDBUF(*), RECVBUF(*) \\ INTEGER COUNT,
DATATYPE, OP, COMM, IERROR}


Same as the \func{MPI\_REDUCE} operation function except that the result
appears in the receive buffer of all the group members.

\begin{funcdef}{MPI\_USER\_ALLREDUCE( sendbuf, recvbuf, count,
datatype, function, comm)}
\funcarg{\IN}{ sendbuf}{ starting address of send buffer (choice)}
\funcarg{\OUT}{ recvbuf}{ starting address of receive buffer (choice)}
\funcarg{\IN}{ count}{ number of elements in send buffer (integer)}
\funcarg{\IN}{ datatype}{ data type of elements of send buffer (handle)}
\funcarg{\IN}{ function}{ user defined function (function)}
\funcarg{\IN}{ comm}{ communicator (handle)}
\end{funcdef}

\mpibind{MPI\_User\_allreduce(void*~sendbuf, void*~recvbuf, int~count,
MPI\_Datatype~datatype, MPI\_Uop~function, MPI\_Comm~comm)}

\mpifbind{MPI\_USER\_ALLREDUCE(SENDBUF, RECVBUF, COUNT, DATATYPE,
FUNCTION, COMM, IERROR) \fargs <type> SENDBUF(*), RECVBUF(*) \\
EXTERNAL FUNCTION \\ INTEGER COUNT, DATATYPE, COMM, IERROR}


Same as the \func{MPI\_USER\_REDUCE} operation function except that the result
appears in the receive buffer of all the group members.

\begin{funcdef}{MPI\_USER\_ALLREDUCEA( sendbuf, recvbuf, count,
datatype, function, comm)}
\funcarg{\IN}{ sendbuf}{ starting address of send buffer (choice)}
\funcarg{\OUT}{ recvbuf}{ starting address of receive buffer (choice)}
\funcarg{\IN}{ count}{ number of elements in send buffer (integer)}
\funcarg{\IN}{ datatype}{ data type of elements of send buffer (handle)}
\funcarg{\IN}{ function}{ user defined function (function)}
\funcarg{\IN}{ comm}{ communicator (handle)}
\end{funcdef}

\mpibind{MPI\_User\_allreducea(void*~sendbuf, void*~recvbuf, int~count,
MPI\_Datatype~datatype, MPI\_Uop~function, MPI\_Comm~comm)}

\mpifbind{MPI\_USER\_ALLREDUCEA(SENDBUF, RECVBUF, COUNT, DATATYPE,
FUNCTION, COMM, IERROR) \fargs <type> SENDBUF(*), RECVBUF(*) \\
EXTERNAL FUNCTION \\ INTEGER COUNT, DATATYPE, COMM, IERROR}


Same as \func{MPI\_USER\_REDUCEA}, except that the result appears
in the receive buffer of all the group members.

\implement{
The all-reduce operations can be implemented as a reduce, followed by a
broadcast.  However, a direct implementation can lead to better performance.
}

\subsection{Reduce-Scatter}
\label{subsec:coll-reduce-scatter}

MPI also includes variants of each of the reduce operations
where the result is scattered to all processes in the group on return.

\begin{funcdef}{MPI\_REDUCE\_SCATTER( sendbuf, recvbuf, recvcounts,
datatype, op, comm)}
\funcarg{\IN}{ sendbuf}{ starting address of send buffer (choice)}
\funcarg{\OUT}{ recvbuf}{ starting address of receive buffer (choice)}
\funcarg{\IN}{ recvcounts}{ integer array specifying the
number of elements in result distributed to each process.
Array must be identical on all calling processes.}
\funcarg{\IN}{ datatype}{ data type of elements of input buffer (handle)}
\funcarg{\IN}{ op}{ operation (state)}
\funcarg{\IN}{ comm}{ communicator (handle)}
\end{funcdef}

\mpibind{MPI\_Reduce\_scatter(void*~sendbuf, void*~recvbuf,
int~*recvcounts, MPI\_Datatype~datatype, MPI\_Op~op, MPI\_Comm~comm)}

\mpifbind{MPI\_REDUCE\_SCATTER(SENDBUF, RECVBUF, RECVCOUNTS, DATATYPE,
OP, COMM, IERROR) \fargs <type> SENDBUF(*), RECVBUF(*) \\ INTEGER
RECVCOUNTS(*), DATATYPE, OP, COMM, IERROR}


\func{MPI\_REDUCE\_SCATTER} first does
a componentwise reduction on vectors provided by the processes.
Next, the resulting vector of results is split into {\tt n} disjoint
segments, where {\tt n} is the number of members in the group;
segment {\tt i} contains \mpiarg{recvcounts[i]} elements.
The {\tt i}-th segment is sent to process with rank {\tt i}.

\implement{The \mpifunc{MPI\_REDUCE\_SCATTER}
routine is functionally equivalent to
an \func{MPI\_REDUCE} operation with \mpiarg{count} equal to
the sum of the \mpiarg{recvcounts[i]}, followed by
\linebreak
\func{MPI\_SCATTERV} with \mpiarg{sendcounts} equal to \mpiarg{recvcounts}.
However, a direct implementation may run faster.}
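The reduce-then-scatter decomposition can be sketched serially
(illustrative helper, not MPI), using sum as the reduction operation:

```c
#include <assert.h>

/* Serial sketch of MPI_REDUCE_SCATTER with MPI_SUM on ints:
 * elementwise-sum the n input vectors of length `total`, then hand
 * segment p (recvcounts[p] consecutive elements) to process p. */
static void sim_reduce_scatter(int n, int total,
                               const int *const sendbufs[],
                               const int recvcounts[], int *const recvbufs[])
{
    int sum[64];                          /* sketch: assumes total <= 64 */
    for (int k = 0; k < total; ++k) {
        sum[k] = 0;
        for (int p = 0; p < n; ++p)
            sum[k] += sendbufs[p][k];
    }
    int off = 0;
    for (int p = 0; p < n; ++p)           /* scatter segment p */
        for (int k = 0; k < recvcounts[p]; ++k)
            recvbufs[p][k] = sum[off++];
}
```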

\begin{funcdef}{MPI\_USER\_REDUCE\_SCATTER( sendbuf, recvbuf, recvcnts, type,
function, comm)}
\funcarg{\IN}{ sendbuf}{ starting address of send buffer (choice)}
\funcarg{\OUT}{ recvbuf}{ starting address of receive buffer (choice)}
\funcarg{\IN}{ recvcnts}{ integer array specifying the
number of elements in result distributed to each process.
Array must be identical on all calling processes.}
\funcarg{\IN}{ type}{ data type of elements of input buffer (handle)}
\funcarg{\IN}{ function}{ user defined function (function)}
\funcarg{\IN}{ comm}{ communicator (handle)}
\end{funcdef}

Same as the \func{MPI\_REDUCE\_SCATTER} operation function
except that the user specifies the reduction operation
as in \func{MPI\_USER\_REDUCE}.

\begin{funcdef}{MPI\_USER\_REDUCE\_SCATTERA( sendbuf, recvbuf, recvcnts, type,
function, comm)}
\funcarg{\IN}{ sendbuf}{ starting address of send buffer (choice)}
\funcarg{\OUT}{ recvbuf}{ starting address of receive buffer (choice)}
\funcarg{\IN}{ recvcnts}{ integer array specifying the
number of elements in result distributed to each process.
Array must be identical on all calling processes.}
\funcarg{\IN}{ type}{ data type of elements of input buffer (handle)}
\funcarg{\IN}{ function}{ user defined function (function)}
\funcarg{\IN}{ comm}{ communicator (handle)}
\end{funcdef}

The ``A'' version of \func{MPI\_USER\_REDUCE\_SCATTER}.

\subsection{Scan}
\label{subsec:coll-scan}

\begin{funcdef}{MPI\_SCAN( sendbuf, recvbuf, count, datatype, op, comm )}
\funcarg{\IN}{ sendbuf}{starting address of send buffer (choice)}
\funcarg{\OUT}{ recvbuf}{ starting address of receive buffer (choice)}
\funcarg{\IN}{ count}{ number of elements in input buffer (integer)}
\funcarg{\IN}{ datatype}{ data type of elements of input buffer (handle)}
\funcarg{\IN}{ op}{ operation (state)}
\funcarg{\IN}{ comm}{ communicator (handle)}
\end{funcdef}

\mpibind{MPI\_Scan(void*~sendbuf, void*~recvbuf, int~count,
MPI\_Datatype~datatype, MPI\_Op~op, MPI\_Comm~comm )}

\mpifbind{MPI\_SCAN(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, COMM,
IERROR) \fargs <type> SENDBUF(*), RECVBUF(*) \\ INTEGER COUNT,
DATATYPE, OP, COMM, IERROR}


\func{MPI\_SCAN} is used to perform a parallel prefix with respect to
an associative and commutative reduction operation on data distributed across
the group.
The operation returns in the receive buffer of the process with rank
{\tt i} the
reduction of the values in the send buffers of processes with ranks {\tt
0,...,i}.  The type of operations supported, their semantics, and the
constraints on send and receive buffers are as for \func{MPI\_REDUCE}.
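The prefix rule can be sketched serially for \const{MPI\_SUM} with a
\mpiarg{count} of one (a stand-in for the parallel operation, not MPI
itself): process {\tt i}'s result is the running sum over ranks
{\tt 0,...,i}.

```c
#include <assert.h>

/* Serial sketch of MPI_SCAN with op = sum and count = 1:
 * recv[i] holds the reduction of send[0..i] (inclusive prefix). */
static void sim_scan_sum(int n, const int send[], int recv[])
{
    int acc = 0;
    for (int i = 0; i < n; ++i) {
        acc += send[i];
        recv[i] = acc;
    }
}
```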

\begin{funcdef}{MPI\_USER\_SCAN( sendbuf, recvbuf, count, datatype,
function, comm)}
\funcarg{\IN}{ sendbuf}{ address of input buffer}
\funcarg{\OUT}{ recvbuf}{ address of output buffer}
\funcarg{\IN}{ count}{ number of elements in input and output buffer (integer)}
\funcarg{\IN}{ datatype}{ data type of buffer (handle)}
\funcarg{\IN}{ function}{ user provided function}
\funcarg{\IN}{ comm}{ communicator (handle)}
\end{funcdef}

\mpibind{MPI\_User\_scan(void*~sendbuf, void*~recvbuf, int~count,
MPI\_Datatype~datatype, MPI\_Uop~function, MPI\_Comm~comm)}

\mpifbind{MPI\_USER\_SCAN(SENDBUF, RECVBUF, COUNT, DATATYPE, FUNCTION,
COMM, IERROR) \fargs <type> SENDBUF(*), RECVBUF(*) \\ EXTERNAL FUNCTION
\\ INTEGER COUNT, DATATYPE, COMM, IERROR}


Same as the \func{MPI\_SCAN} operation function except that a user
supplied function is used.  \mpiarg{ function} is an associative and
commutative
function with an input vector, an inout vector, and a length argument.
The types of the two vectors and of the returned values all agree.
See \func{MPI\_USER\_REDUCE} for more details.

\begin{funcdef}{MPI\_USER\_SCANA( sendbuf, recvbuf, count, datatype,
function, comm)}
\funcarg{\IN}{ sendbuf}{ address of input buffer (choice)}
\funcarg{\OUT}{ recvbuf}{ address of output buffer (choice)}
\funcarg{\IN}{ count}{ number of elements in input and output buffer (integer)}
\funcarg{\IN}{ datatype}{ data type of buffer (handle)}
\funcarg{\IN}{ function}{ user defined function (function)}
\funcarg{\IN}{ comm}{ communicator (handle)}
\end{funcdef}

\mpibind{MPI\_User\_scana(void*~sendbuf, void*~recvbuf, int~count,
MPI\_Datatype~datatype, MPI\_Uop~function, MPI\_Comm~comm)}

\mpifbind{MPI\_USER\_SCANA(SENDBUF, RECVBUF, COUNT, DATATYPE, FUNCTION,
COMM, IERROR) \fargs <type> SENDBUF(*), RECVBUF(*) \\ EXTERNAL FUNCTION
\\ INTEGER COUNT, DATATYPE, COMM, IERROR}


Same as \func{MPI\_USER\_SCAN}, except that the user-defined operation
need not be commutative.

\implement{

\func{MPI\_USER\_SCAN} could be implemented as \func{MPI\_USER\_SCANA},
though for some architectures it may be possible to arrive at a
faster \func{MPI\_USER\_SCAN} by taking advantage of commutativity.

}

\section{Correctness}
\label{coll:correct}

\subsection{Synchronization Side-Effects}

A correct program must invoke collective communications so that deadlock
cannot occur, whether or not the collective communication is synchronizing.
The following examples illustrate dangerous uses of collective routines.

The first example is erroneous.

\begin{verbatim}
/* Example A */
switch(rank)
   {
   case 0: { MPI_Bcast(&var1, count, type, 0, comm);
             MPI_Send(&var2, count, type, 1, tag, comm);
             break;
           }
   case 1: { MPI_Recv(&var2, count, type, 0, tag, comm);
             MPI_Bcast(&var1, count, type, 0, comm);
             break;
           }
   }
\end{verbatim}

Process zero executes a broadcast, followed by a blocking send operation;
process one first executes a matching blocking receive,
followed by the matching broadcast call.
This program may deadlock.  The broadcast call on process zero
{\em may} block until process one executes the matching
broadcast call, so that the
send is not executed.  Process one will definitely block on the
receive and so, in this case, never executes the
broadcast.

The following example is correct, but non-deterministic:

\begin{verbatim}
/* Example B */
switch(rank)
   {
    case 0: { MPI_Bcast(&var1, count, type, 0, comm);
              MPI_Send(&var2, count, type, 1, tag, comm);
              break;
            }
    case 1: { MPI_Recv(&var2, count, type, MPI_ANY_SOURCE, tag, comm);
              MPI_Bcast(&var1, count, type, 0, comm);
              MPI_Recv(&var2, count, type, MPI_ANY_SOURCE, tag, comm);
              break;
            }
    case 2: { MPI_Send(&var2, count, type, 1, tag, comm);
              MPI_Bcast(&var1, count, type, 0, comm);
              break;
            }
    }
\end{verbatim}

All three processes participate in a broadcast.  Process 0 sends a message to
process 1 after the broadcast, and process 2 sends a message
to process 1 before
the broadcast.  Process 1 receives before and after the broadcast, with a
wildcard source argument.

Two possible executions, with different matchings of sends and receives, are
illustrated in figure \ref{fig-coll-matchings}.

\commentOut{
\begin{verbatim}
           First Execution

    0             1               2
                        /-----  send
                recv <-/
broadcast     broadcast       broadcast
  send ---\
           \--> recv
\end{verbatim}

\begin{verbatim}
           Second Execution

   0              1               2
broadcast
  send ---\
           \-->  recv
               broadcast       broadcast
                           /---  send
                 recv <---/
\end{verbatim}
} % end commentOut -- NOTE that above was wrong!

\begin{figure}
\centerline{\hbox{
\psfig{figure=coll-matchings.ps,width=4.00in}}}
  \small
  \caption{A race condition causes non-deterministic matching of sends
  and receives.  One cannot rely on synchronization from a broadcast
  to make the program deterministic.}
  \label{fig-coll-matchings}
\end{figure}


Note that the second execution has the peculiar effect that a send executed
after the broadcast is received at another node before the broadcast.
This example illustrates the fact that one should not rely on
collective communication functions to have particular synchronization
effects.  To assume that collective communication functions do or
do not have certain synchronizing side-effects is non-portable.

\discuss{

An alternative design is to require that all collective communication calls be
synchronizing.  In that case, the program above is deterministic and only the
first execution may occur.  This makes a difference only for
collective operations in which not all processes both send and receive
(broadcast, reduce, scatter, gather).

}

\subsection{Multiple Collective Calls}

It is the user's responsibility to ensure that no two collective calls that
use the same communicator execute concurrently on the same process.
Since all collective communication calls are blocking, this restriction
affects only multithreaded implementations.  On the other hand, it is
legitimate for one process to start a new collective communication call even
though a previous call that uses the same communicator has not yet terminated
on another process, as illustrated in the following example:

\begin{verbatim}
/* Example C */
 MPI_Bcast(&var1, count, type, 0, comm);
 MPI_Bcast(&var2, count, type, 1, comm);
\end{verbatim}

In a nonsynchronizing implementation of broadcast, process zero may start
executing the second broadcast before process one has terminated the first
broadcast.  Processes zero and one may both complete their two broadcast
calls before the other processes have started theirs.  It is the
implementor's responsibility to ensure that this causes no error.

\implement{

Assume that broadcast is implemented using point-to-point MPI communication,
and that the following two rules are satisfied:
\begin{enumerate}
\item
All receives specify their source explicitly (no wildcards).
\item
Each process sends all messages that pertain to one collective call before
sending any message that pertains to a subsequent collective call.
\end{enumerate}

Then messages belonging to successive broadcasts cannot be confused,
since the order of point-to-point messages between each pair of processes
is preserved.  This holds, in general, for any collective library.

}

A collective communication may execute in a context while
point-to-point communications that use the same context are pending, or
the two may occur concurrently.  This is illustrated by example B above:
the first process may receive a message sent with the context of communicator
\mpiarg{comm} while it is executing a broadcast with the same communicator.
It is the implementor's responsibility to ensure that this causes no
confusion.

\implement{
Assume that collective communications are implemented using point-to-point MPI
communication.  Then, in order to avoid confusion, whenever a communicator is
created, a ``hidden communicator'' needs to be created for collective
communication.
A direct implementation of MPI collective communication can achieve the same
effect more cheaply, e.g., by using a hidden tag or context bit to indicate
whether the communicator is being used for point-to-point or collective
communication.

}

\commentOut{
\section{Operational or Point to Point Definition of Collective Routines}
\label{coll:sec-operational-defn}

\subsection{Definition of Gather, Operational Semantics}

A simple, C implementation of \func{MPI\_GATHER} in terms of point to point
functions is given below.  Note that we are {\em not} saying that a
realistic version of \func{MPI\_GATHER} would be implemented this way;
rather, we attempt to clarify the semantics of gather by reducing many
of the questions of semantics to corresponding questions about the
point to point communication functions.

% But this is dangerous...because the following functions are deadlock-prone.

\begin{verbatim}
MPI_Gather(void *sbuf, int scount, MPI_Datatype stype, void *rbuf, int rcount,
                        MPI_Datatype rtype, int root, MPI_Comm comm)
{
    MPI_Group mygroup;
    int i,myrank,gsize;
    MPI_Status status;

    /* 1st, find comm' -- the communicator corresponding to comm, but with a
     * special context for collective operations.
     */

    /* Then, determine group, group size, and my rank in it.
     */
    MPI_Comm_group( comm', &mygroup );
    MPI_Group_size( mygroup, &gsize );
    MPI_Comm_rank( comm', &myrank );

    /* Everyone sends to root.  Note that on root, we send to self --
     * deadlock possibility?
     */
    MPI_Send( sbuf, scount, stype, root, GATHER_TAG, comm' );

    /* Now, root receives messages in group rank order
     */
    if ( myrank == root ) {
        for (i=0; i<gsize; ++i) {
            MPI_Recv( rbuf+i*rcount*MPI_Type_extent(rtype), rcount, rtype,
                                        i, GATHER_TAG, comm', &status );
        }
    }
}
\end{verbatim}

\subsection{Definition of Gatherv, Operational Semantics}

A simple, C implementation of \func{MPI\_GATHERV} in terms of point to point
functions is given below.

\begin{verbatim}
MPI_Gatherv(void *sbuf, int scount, MPI_Datatype stype, void *rbuf,
            int displs[], int rcount[], MPI_Datatype rtype, int root,
            MPI_Comm comm)
{
    MPI_Group mygroup;
    int i,myrank,gsize;
    MPI_Status status;

    /* 1st, find comm' -- the communicator corresponding to comm, but with a
     * special context for collective operations.
     */

    /* Then, determine group, group size, and my rank in it.
     */
    MPI_Comm_group( comm', &mygroup );
    MPI_Group_size( mygroup, &gsize );
    MPI_Comm_rank( comm', &myrank );

    /* Everyone sends to root.  Note that on root, we send to self --
     * deadlock possibility?
     */
    MPI_Send( sbuf, scount, stype, root, GATHER_TAG, comm' );

    /* Now, root receives messages in group rank order
     */
    if ( myrank == root ) {
        for (i=0; i<gsize; ++i) {
            MPI_Recv( rbuf+displs[i]*MPI_Type_extent(rtype), rcount[i],
                                    rtype, i, GATHER_TAG, comm', &status );
        }
    }
}
\end{verbatim}

\subsection{Definition of Scatter, Operational Semantics}

A simple, C implementation of \func{MPI\_SCATTER} in terms of point to point
functions is given below.

\begin{verbatim}
MPI_Scatter(void *sbuf, int scount, MPI_Datatype stype, void *rbuf,
                int rcount, MPI_Datatype rtype, int root, MPI_Comm comm)
{
    MPI_Group mygroup;
    int i,myrank,gsize;
    MPI_Status status;

    /* 1st, find comm' -- the communicator corresponding to comm, but with a
     * special context for collective operations.
     */

    /* Then, determine group, group size, and my rank in it.
     */
    MPI_Comm_group( comm', &mygroup );
    MPI_Group_size( mygroup, &gsize );
    MPI_Comm_rank( comm', &myrank );

    /* root sends messages.   Note that on root, we send to self --
     * deadlock possibility?
     */
    if ( myrank == root ) {
        for (i=0; i<gsize; ++i) {
            MPI_Send( sbuf+i*scount*MPI_Type_extent(stype), scount, stype,
                                        i, SCATTER_TAG, comm' );
        }
    }
    /* Everyone now receives.
     */
    MPI_Recv( rbuf, rcount, rtype, root, SCATTER_TAG, comm', &status );

}
\end{verbatim}

\subsection{Definition of Scatterv, Operational Semantics}

A simple, C implementation of \func{MPI\_SCATTERV} in terms of point to point
functions is given below.

\begin{verbatim}
MPI_Scatterv(void *sbuf, int displs[], int scount[], MPI_Datatype stype,
    void *rbuf, int rcount, MPI_Datatype rtype, int root, MPI_Comm comm)
{
    MPI_Group mygroup;
    int i,myrank,gsize;
    MPI_Status status;

    /* 1st, find comm' -- the communicator corresponding to comm, but with a
     * special context for collective operations.
     */

    /* Then, determine group, group size, and my rank in it.
     */
    MPI_Comm_group( comm', &mygroup );
    MPI_Group_size( mygroup, &gsize );
    MPI_Comm_rank( comm', &myrank );

    /* root sends messages.   Note that on root, we send to self --
     * deadlock possibility?
     */
    if ( myrank == root ) {
        for (i=0; i<gsize; ++i) {
            MPI_Send( sbuf+displs[i]*MPI_Type_extent(stype), scount[i], stype,
                                        i, SCATTER_TAG, comm' );
        }
    }
    /* Everyone now receives.
     */
    MPI_Recv( rbuf, rcount, rtype, root, SCATTER_TAG, comm', &status );

}
\end{verbatim}

\subsection{Definition of All-Gather, Operational Semantics}

\begin{verbatim}
MPI_Allgather(void *sbuf, int scount, MPI_Datatype stype, void *rbuf,
              int rcount, MPI_Datatype rtype, MPI_Comm comm)
{
    MPI_Group mygroup;
    int i,myrank,gsize;
    MPI_Status status;

    /* 1st, find comm' -- the communicator corresponding to comm, but with a
     * special context for collective operations.
     */

    /* Then, determine group, group size, and my rank in it.
     */
    MPI_Comm_group( comm', &mygroup );
    MPI_Group_size( mygroup, &gsize );
    MPI_Comm_rank( comm', &myrank );

    /* Loop through all possible roots
     */
    for (i=0; i<gsize; ++i) {
        MPI_Send( sbuf, scount, stype, i, ALLGATHER_TAG, comm' );
    }
    for (i=0; i<gsize; ++i) {
        MPI_Recv( rbuf+i*rcount*MPI_Type_extent(rtype), rcount, rtype,
                                    i, ALLGATHER_TAG, comm', &status );
    }
}
\end{verbatim}

\subsection{Definition of All-Gatherv, Operational Semantics}

\begin{verbatim}
MPI_Allgatherv( void *sbuf, int scount, MPI_Datatype stype, void *rbuf,
    int displs[], int rcount[], MPI_Datatype rtype, MPI_Comm comm)
{
    MPI_Group mygroup;
    int i,myrank,gsize;
    MPI_Status status;

    /* 1st, find comm' -- the communicator corresponding to comm, but with a
     * special context for collective operations.
     */

    /* Then, determine group, group size, and my rank in it.
     */
    MPI_Comm_group( comm', &mygroup );
    MPI_Group_size( mygroup, &gsize );
    MPI_Comm_rank( comm', &myrank );

    /* Loop through all possible roots
     */
    for (i=0; i<gsize; ++i) {
        MPI_Send( sbuf, scount, stype, i, ALLGATHERV_TAG, comm' );
    }
    for (i=0; i<gsize; ++i) {
        MPI_Recv( rbuf+displs[i]*MPI_Type_extent(rtype), rcount[i],
                            rtype, i, ALLGATHERV_TAG, comm', &status );
    }
}
\end{verbatim}

\subsection{Definition of All-to-All, Operational Semantics}

\begin{verbatim}
MPI_Alltoall(void *sbuf, int scount, MPI_Datatype stype, void *rbuf,
             int rcount, MPI_Datatype rtype, MPI_Comm comm)
{
    MPI_Group mygroup;
    int i,myrank,gsize;
    MPI_Status status;

    /* 1st, find comm' -- the communicator corresponding to comm, but with a
     * special context for collective operations.
     */

    /* Then, determine group, group size, and my rank in it.
     */
    MPI_Comm_group( comm', &mygroup );
    MPI_Group_size( mygroup, &gsize );
    MPI_Comm_rank( comm', &myrank );

    for (i=0; i<gsize; ++i) {
        MPI_Send( sbuf+i*scount*MPI_Type_extent(stype), scount, stype, i,
                                                      ALLTOALL_TAG, comm' );
    }
    for (i=0; i<gsize; ++i) {
        MPI_Recv( rbuf+i*rcount*MPI_Type_extent(rtype), rcount, rtype,
                                    i, ALLTOALL_TAG, comm', &status );
    }
}
\end{verbatim}

\subsection{Definition of All-to-Allv, Operational Semantics}

\begin{verbatim}
MPI_Alltoallv(void *sbuf, int sdispls[], int scount[], MPI_Datatype stype,
    void *rbuf, int rdispls[], int rcount[], MPI_Datatype rtype,
    MPI_Comm comm)
{
    MPI_Group mygroup;
    int i,myrank,gsize;
    MPI_Status status;

    /* 1st, find comm' -- the communicator corresponding to comm, but with a
     * special context for collective operations.
     */

    /* Then, determine group, group size, and my rank in it.
     */
    MPI_Comm_group( comm', &mygroup );
    MPI_Group_size( mygroup, &gsize );
    MPI_Comm_rank( comm', &myrank );

    for (i=0; i<gsize; ++i) {
        MPI_Send( sbuf+sdispls[i]*MPI_Type_extent(stype), scount[i], stype, i,
                                                      ALLTOALL_TAG, comm' );
    }
    for (i=0; i<gsize; ++i) {
        MPI_Recv( rbuf+rdispls[i]*MPI_Type_extent(rtype), rcount[i], rtype,
                                    i, ALLTOALL_TAG, comm', &status );
    }
}
\end{verbatim}

} %end commentOut

\end{document}

From owner-mpi-collcomm@CS.UTK.EDU Mon Feb 21 09:12:08 1994
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (8.6.4/2.8t-netlib)
	id JAA08810; Mon, 21 Feb 1994 09:12:00 -0500
Received: from localhost by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK)
	id JAA08849; Mon, 21 Feb 1994 09:11:16 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Mon, 21 Feb 1994 09:11:11 EST
Errors-to: owner-mpi-collcomm@CS.UTK.EDU
Received: from watson.ibm.com by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK)
	id JAA08674; Mon, 21 Feb 1994 09:10:43 -0500
Message-Id: <199402211410.JAA08674@CS.UTK.EDU>
Received: from YKTVMV by watson.ibm.com (IBM VM SMTP V2R3) with BSMTP id 8719;
   Mon, 21 Feb 94 09:10:43 EST
Date: Mon, 21 Feb 94 09:10:13 EST
From: "Marc Snir ((914) 945-3204 (862)" <snir@watson.ibm.com>
To: mpi-collcomm@CS.UTK.EDU
Subject: suggestions for change in collective comm chapter
Reply-To: SNIR@watson.ibm.com

------------------------------- Referenced Note ---------------------------
%!PS-Adobe-2.0
%%Title: TeX output 1994.02.20:2122
%%Creator: DVILASER/PS, ArborText, Inc.
%%BoundingBox: (atend)
%%Pages: (atend)
%%DocumentFonts: (atend)
%%EndComments

%!
%  Dvips.pro - included prolog for DviLaser-generated PostScript files.
%
%  Copyright (c) 1986-89, ArborText, Inc.
%  Permission to copy is granted so long as the PostScript code
%  is not resold or used in a commercial product.
%
%  $Header: dvips.pro,v 1.3 90/12/20 14:51:43 jsg Exp $

systemdict /setpacking known  % use array packing mode if its available
  {/savepackingmode currentpacking def
   true setpacking}
  if

/$DviLaser 400 dict def

% Begin document
/BeginDviLaserDoc {
  vmstatus pop pop 0 eq
    { $DviLaser begin
      InitializeState }
    { /DviLaserJob save def
      $DviLaser begin
      InitializeState
      /DviLaserFonts save def }
    ifelse
} bind def

% End document
/EndDviLaserDoc {
  vmstatus pop pop 0 eq
    { end }
    { DviLaserFonts restore
      end
      DviLaserJob restore }
    ifelse
} bind def

$DviLaser begin

/tempstr 64 string def
/tempint 0 def
/tempmatrix matrix def

%
%  Debugging routines
%
/DebugMode false def

/PrintInt {
  tempstr cvs print
} bind def

/PrintLn {
  (\n) print flush
} bind def

/PrintVMStats {
  print
  PrintLn
  (VM status - ) print
  vmstatus
  3 copy
  PrintInt (\(total\), ) print
  PrintInt (\(used\), ) print
  pop
  exch sub
  PrintInt (\(remaining\), ) print
  PrintInt (\(level\)) print
  PrintLn
} bind def

/VMS /PrintVMStats load def

/VMSDebug {
  DebugMode
    {PrintVMStats}
    {pop}
    ifelse
} bind def

(beginning of common prolog) VMSDebug

% Make it easy to bind definitions.
/bdef { bind def } bind def
/xdef { exch def } bdef

% Begin page
/BP {
  /Magnification xdef
  /DviLaserPage save def
  (beginning of page) VMSDebug
} bdef

% End page
/EP {
  DviLaserPage restore
} bdef

% Exit page (temporarily) to add fonts/characters.
/XP {
  % Save current point information so it can be reset later.
  /Xpos where {pop Xpos} {0} ifelse
  /Ypos where {pop Ypos} {0} ifelse
  /currentpoint cvx stopped {0 0 moveto currentpoint} if
  /DviLaserPage where {pop DviLaserPage restore} if
  moveto
  /Ypos xdef
  /Xpos xdef
} bdef

% Resume page
/RP {
  /DviLaserPage save def
} bdef

% Purge all fonts to reclaim memory space.
/PF {
  GlobalMode
  LocalMode
} bdef

% Switch to base save/restore level, saving state information.
/GlobalMode {
  /UserSave where {pop UserSave} if  % invoke "UserSave" if available
  PortraitMode
  PaperWidth
  PaperHeight
  PxlResolution
  Resolution
  Magnification
  Ymax
  RasterScaleFactor
  % Save current point information so it can be reset later.
  /currentpoint cvx stopped {0 0 moveto currentpoint} if
  /DviLaserPage where {pop DviLaserPage restore} if
  DviLaserFonts restore
  RecoverState
} bdef

% Preserve state at the base level.
/RecoverState {
  10 copy
  /Ypos xdef
  /Xpos xdef
  /RasterScaleFactor xdef
  /Ymax xdef
  /Magnification xdef
  /Resolution xdef
  /PxlResolution xdef
  /PaperHeight xdef
  /PaperWidth xdef
  /PortraitMode xdef
  DoInitialScaling
  PortraitMode not {PaperWidth 0 SetupLandscape} if
  Xpos Ypos moveto
} bdef

% Initialize state variables to default values.
/InitializeState {
  /Resolution 3600.0 def
  /PxlResolution 300.0 def
  /RasterScaleFactor PxlResolution Resolution div def
  /PortraitMode true def
  11.0 Resolution mul /PaperHeight xdef
  8.5 Resolution mul /PaperWidth xdef
  /Ymax PaperHeight def
  /Magnification 1000.0 def
  /Xpos 0.0 def
  /Ypos 0.0 def
  /InitialMatrix matrix currentmatrix def
} bdef

% Switch from base save/restore level, restoring state information.
/LocalMode {
  /Ypos xdef
  /Xpos xdef
  /RasterScaleFactor xdef
  /Ymax xdef
  /Magnification xdef
  /Resolution xdef
  /PxlResolution xdef
  /PaperHeight xdef
  /PaperWidth xdef
  /PortraitMode xdef
  DoInitialScaling
  PortraitMode not {PaperWidth 0 SetupLandscape} if
  Xpos Ypos moveto
  /UserRestore where {pop UserRestore} if  % invoke "UserRestore" if available
  /DviLaserFonts save def
  /DviLaserPage save def
} bdef

% Abbreviations
/S /show load def
/SV /save load def
/RST /restore load def

/Yadjust {Ymax exch sub} bdef

% (x,y) position absolute, just set Xpos & Ypos, don't move.
/SXY {
  Yadjust
  /Ypos xdef /Xpos xdef
} bdef

% (x,y) position absolute
/XY {
  Yadjust
  2 copy /Ypos xdef /Xpos xdef
  moveto
} bdef

% (x,0) position absolute
/X {
  currentpoint exch pop
  2 copy /Ypos xdef /Xpos xdef
  moveto
} bdef

% (0,y) position absolute
/Y {
  currentpoint pop exch Yadjust
  2 copy /Ypos xdef /Xpos xdef
  moveto
} bdef

% (x,y) position relative
/xy {
  neg rmoveto
  currentpoint /Ypos xdef /Xpos xdef
} bdef

% (x,0) position relative
/x {
  0.0 rmoveto
  currentpoint /Ypos xdef /Xpos xdef
} bdef

% (0,y) position relative
/y {
  0.0 exch neg rmoveto
  currentpoint /Ypos xdef /Xpos xdef
  } bdef

% Print a rule.  In order to get correct pixel size and positioning,
% we usually create a temporary font in which the rule is the only character.
% When the rule is large, however, we fill a rectangle instead.
/R {
  /ht xdef
  /wd xdef
  ht 1950 le wd 1950 le and PxlResolution 400 le and
    {save
    /tfd 6 dict def
    tfd begin
      /FontType 3 def
      /FontMatrix [1 0 0 1 0 0] def
      /FontBBox [0 0 wd ht] def
      /Encoding 256 array dup 97 /a put def
      /BuildChar {
        pop   % ignore character code
        pop   % ignore font dict, too
        wd 0 0 0 wd ht setcachedevice
        wd ht true
        [1 0 0 -1 0 ht] {<FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF>} imagemask
        } def
      end % tfd
    /tf tfd definefont setfont
    (a) show
    restore
    }
    {gsave
    0 setgray
    currentpoint
    newpath
      moveto
      0.0 ht rlineto
      wd 0.0 rlineto
      0.0 ht neg rlineto
      wd neg 0.0 rlineto
    closepath fill
    grestore
    }
  ifelse
  wd 0.0 rmoveto
  currentpoint /Ypos xdef /Xpos xdef
} bdef

%
%  <PXL-file resolution(pix/inch)> <resolution(pix/inch)> RES
%
/RES {
  /Resolution xdef
  /PxlResolution xdef
  /RasterScaleFactor PxlResolution Resolution div def
  DoInitialScaling
} bdef

%
% Do initial scaling.
%
/DoInitialScaling {
  InitialMatrix setmatrix
  72.0 Resolution div dup scale
} bdef

%
%  <paper-height(pix)> <paper-width(pix)> PM
%
/PM {
  XP
  /PaperWidth xdef
  /PaperHeight xdef
  /Ymax PaperHeight def
  /PortraitMode true def
  DoInitialScaling
  RP
} bdef

%
%  <paper-height(pix)> <paper-width(pix)> LM
%
/LM {
  XP
  /PaperWidth xdef
  /PaperHeight xdef
  /Ymax PaperWidth def
  /PortraitMode false def
  DoInitialScaling
  PaperWidth 0 SetupLandscape
  RP
} bdef

% Change magnification setting
/MAG {
  XP
  /Magnification xdef
  RP
} bdef

%
%  Switch to landscape mode
%
/SetupLandscape {
  translate
  90.0 rotate
} bdef

%
%  <mode> SPB - begin "\special" mode
%
%  This is the PostScript procedure used to transfer from the internal
%  environment used for the DVI translation code emitted by DVIPS to
%  a standard PostScript environment.
%
%  Parameters: 0 - Local
%              1 - Global
%              2 - Inline
%
/SPB {
  /spc_mode xdef
  spc_mode 0 eq spc_mode 2 eq or
    {XP}
    {spc_mode 1 eq {GlobalMode} if}
    ifelse
  Resolution 72.0 div dup scale        % Restore default scaling...
  Magnification 1000.0 div dup scale   % Adjust for any magnification...
  /Xpos Xpos 72.0 Resolution div mul 1000.0 Magnification div mul def
  /Ypos Ypos 72.0 Resolution div mul 1000.0 Magnification div mul def
} bdef

%
%  <mode> SPE - end "\special" mode
%
%  This is the PostScript procedure used to reenter the internal
%  environment used for the DVI translation code emitted by DVIPS from
%  the standard PostScript environment provided for processing user-supplied
%  PostScript code.
%
%  Parameters: 0 - Local
%              1 - Global
%              2 - Inline
%
/SPE {
  /spc_mode xdef
  1000.0 Magnification div dup scale   % Un-adjust for any magnification...
  72.0 Resolution div dup scale        % Restore default internal scaling...
  spc_mode 0 eq spc_mode 2 eq or
    {RP}
    {spc_mode 1 eq {LocalMode} if}
    ifelse
} bdef

%
%  <num-copies> PP
%
/PP {
  /#copies xdef
  showpage
  /#copies 1 def
} bdef

%
%  /font-name <point-size(pix)> DMF
%
/DMF {
  /psz xdef
  /nam xdef
  nam findfont psz scalefont setfont
} bdef

%
%  /abcd (xxx) str-concat  ==> /abcdxxx
%
/str-concatstr 64 string def

/str-concat {
  /xxx xdef
  /nam xdef
  /namstr nam str-concatstr cvs def
  /newnam namstr length xxx length add string def
  newnam 0 namstr putinterval
  newnam namstr length xxx putinterval
  newnam cvn
} bdef

%
%  /abcdef 2 str-strip ==> /cdef
%
/str-strip {
  /num xdef
  /nam xdef
  /namstr nam tempstr cvs def
  /newlen namstr length num sub def
  namstr num newlen getinterval
  cvn
} bdef

%
%  <old-dict> copydict ==> new-dict on stack
%
/copydict {
  dup length 1 add dict /newdict xdef
    {1 index /FID ne
      {newdict 3 1 roll put}
      {pop pop}
     ifelse
    } forall
  newdict
} bdef

%
%  <font-type> DefineCMEncoding
%
/DefineCMEncoding {
  /EncodeType xdef

  /CMEncoding 256 array def
  /Times-Roman findfont /Encoding get aload pop CMEncoding astore pop

  EncodeType 11 eq {Do-CM-rm-encoding} if
  EncodeType 12 eq {Do-CM-it-encoding} if
  EncodeType 13 eq {Do-CM-tt-encoding} if
} bdef

%
%  Do special mappings for the various CM-font types.  Characters that
%  get "covered up" are repositioned in the range (128,128+32).
%
/Do-standard-CM-encodings {
  CMEncoding
  dup 0 /.notdef put
  dup 1 /.notdef put
  dup 2 /.notdef put
  dup 3 /.notdef put
  dup 4 /.notdef put
  dup 5 /.notdef put
  dup 6 /.notdef put
  dup 7 /.notdef put

  dup 8 /.notdef put
  dup 9 /.notdef put
  dup 10 /.notdef put
  dup 11 /.notdef put
  dup 12 /fi put
  dup 13 /fl put
  dup 14 /.notdef put
  dup 15 /.notdef put

  dup 16 /dotlessi put
  dup 17 /.notdef put
  dup 18 /grave put
  dup 19 /acute put
  dup 20 /caron put
  dup 21 /breve put
  dup 22 /macron put
  dup 23 /ring put

  dup 24 /cedilla put
  dup 25 /germandbls put
  dup 26 /ae put
  dup 27 /oe put
  dup 28 /oslash put
  dup 29 /AE put
  dup 30 /OE put
  dup 31 /Oslash put

  dup 127 /dieresis put

  dup 128 /space put
  dup 129 /quotedbl put
  dup 130 /sterling put
  dup 131 /dollar put
  dup 132 /less put
  dup 133 /greater put
  dup 134 /backslash put
  dup 135 /asciicircum put

  dup 136 /underscore put
  dup 137 /braceleft put
  dup 138 /bar put
  dup 139 /braceright put
  dup 140 /asciitilde put
  pop
} bdef

/Do-CM-rm-encoding {
  Do-standard-CM-encodings
  CMEncoding
  dup 32 /.notdef put
  dup 34 /quotedblright put
  dup 60 /exclamdown put
  dup 62 /questiondown put
  dup 92 /quotedblleft put
  dup 94 /circumflex put
  dup 95 /dotaccent put
  dup 123 /endash put
  dup 124 /emdash put
  dup 125 /hungarumlaut put
  dup 126 /tilde put
  pop
} bdef

/Do-CM-it-encoding {
  Do-standard-CM-encodings
  CMEncoding
  dup 32 /.notdef put
  dup 34 /quotedblright put
  dup 36 /sterling put
  dup 60 /exclamdown put
  dup 62 /questiondown put
  dup 92 /quotedblleft put
  dup 94 /circumflex put
  dup 95 /dotaccent put
  dup 123 /endash put
  dup 124 /emdash put
  dup 125 /hungarumlaut put
  dup 126 /tilde put
  pop
} bdef

/Do-CM-tt-encoding {
  Do-standard-CM-encodings
  CMEncoding
  dup 12 /.notdef put
  dup 13 /quotesingle put
  dup 14 /exclamdown put
  dup 15 /questiondown put
  dup 94 /circumflex put
  dup 126 /tilde put
  pop
} bdef

%
% Routines to handle packing/unpacking numbers.
%
%  <target> <pos> <num> PackHW --> <new target>
%
/PackHW {
  /num xdef
  /pos xdef
  /target xdef
  num 16#0000FFFF and 1 pos sub 16 mul bitshift
    target or
} bdef

%
%  <target> <pos> <num> PackByte --> <new target>
%
/PackByte {
  /num xdef
  /pos xdef
  /target xdef
  num 16#000000FF and 3 pos sub 8 mul bitshift
    target or
} bdef

%
%  <pos> <num> UnpkHW --> <unpacked value>
%
/UnpkHW {
  /num xdef
  /pos xdef
  num 1 pos sub -16 mul bitshift 16#0000FFFF and
  dup 16#00007FFF gt {16#00010000 sub} if
} bdef

%
%  <pos> <num> UnpkByte --> <unpacked value>
%
/UnpkByte {
  /num xdef
  /pos xdef
  num 3 pos sub -8 mul bitshift 16#000000FF and
  dup 16#0000007F gt {16#00000100 sub} if
} bdef

%
%  <int-font-name> <ext-font-name> <pt-sz(pix)> <type> <loaded-fg> DefineCMFont
%
%    type 10: "as-is" PostScript font
%    type 11: CM-mapped PostScript font - roman
%    type 12: CM-mapped PostScript font - text italic
%    type 13: CM-mapped PostScript font - typewriter type
%
/int-dict-name {int (-dict) str-concat} bdef
/int-dict {int (-dict) str-concat cvx load} bdef

/DF {
  true  % signal that the font is already loaded
  DefineCMFont
} bdef

/DNF {
  false  % signal that the font is not already loaded
  DefineCMFont
} bdef

/DefineCMFont {
  /loaded xdef
  /typ xdef
  /psz xdef
  /ext xdef
  /int xdef

  typ 10 ne
    { % font_type = 11, 12, 13
    loaded not
      { /fnam ext 3 str-strip def
        fnam findfont copydict /newdict xdef
        typ DefineCMEncoding
        newdict /Encoding CMEncoding put
        ext newdict definefont pop
      } if
    int-dict-name ext findfont psz scalefont def
    currentdict int [int-dict /setfont cvx] cvx put
    }
    { % font_type = 10
    /fnam ext def
    int-dict-name fnam findfont psz scalefont def
    currentdict int [int-dict /setfont cvx] cvx put
    }
  ifelse
} bdef

%
%  <int-font-name> <ext-font-name> <pt-sz(pix)> <PXL mag> <num-chars>
%      [llx lly urx ury] <newfont-fg> DefinePXLFont
%

/PXLF {
  true  % signal that the font is already loaded
  DefinePXLFont
} bdef

/PXLNF {
  false  % signal that the font is not already loaded
  DefinePXLFont
} bdef

/PXLBuildCharDict 17 dict def

/CMEncodingArray 256 array def
0 1 255 {CMEncodingArray exch dup tempstr cvs cvn put} for

/RasterConvert {RasterScaleFactor div} bdef

/TransformBBox {
  aload pop

  /BB-ury xdef
  /BB-urx xdef
  /BB-lly xdef
  /BB-llx xdef

  [BB-llx RasterConvert BB-lly RasterConvert
   BB-urx RasterConvert BB-ury RasterConvert]
} bdef

/DefinePXLFont {
  /newfont xdef
  /bb xdef
  /num xdef
  /psz xdef
  /dsz xdef
  /pxlmag xdef
  /ext xdef
  /int xdef

  /fnam ext (-) str-concat pxlmag tempstr cvs str-concat def

  newfont not {
    int-dict-name 13 dict def

    int-dict begin
      /FontType 3 def
      /FontMatrix [1 dsz div 0 0 1 dsz div 0 0] def
      /FontBBox bb TransformBBox def
      /Encoding CMEncodingArray def
      /CharDict 1 dict def
      CharDict begin
        /Char-Info num array def
        end

      /BuildChar
        {
          PXLBuildCharDict begin
            /char xdef
            /fontdict xdef

            fontdict /CharDict get /Char-Info get char get aload pop

            /rasters xdef
            /PackedWord1 xdef

            0 PackedWord1 UnpkHW 16#7FFF ne
              { /PackedWord2 xdef
                /wx 0 PackedWord1 UnpkHW def
                /rows 2 PackedWord1 UnpkByte def
                /cols 3 PackedWord1 UnpkByte def
                /llx 0 PackedWord2 UnpkByte def
                /lly 1 PackedWord2 UnpkByte def
                /urx 2 PackedWord2 UnpkByte def
                /ury 3 PackedWord2 UnpkByte def }
              { /PackedWord2 xdef
                /PackedWord3 xdef
                /PackedWord4 xdef
                /wx 1 PackedWord1 UnpkHW def
                /rows 0 PackedWord2 UnpkHW def
                /cols 1 PackedWord2 UnpkHW def
                /llx 0 PackedWord3 UnpkHW def
                /lly 1 PackedWord3 UnpkHW def
                /urx 0 PackedWord4 UnpkHW def
                /ury 1 PackedWord4 UnpkHW def }
               ifelse

            rows 0 lt
              { /rows rows neg def
                /runlength 1 def }
              { /runlength 0 def }
             ifelse

            wx 0
            llx RasterConvert lly RasterConvert
            urx RasterConvert ury RasterConvert setcachedevice
            rows 0 ne
              {
              gsave
                cols rows true
                RasterScaleFactor 0 0 RasterScaleFactor neg llx neg ury
                  tempmatrix astore
                {GenerateRasters} imagemask
              grestore
              } if
            end
        } def
      end

      fnam int-dict definefont pop
    } if

  int-dict-name fnam findfont psz scalefont def
  currentdict int [int-dict /setfont cvx] cvx put
} bdef

%
%  <int-font-name> <code> <wx> <llx> <lly> <urx> <ury> <rows> <cols> <runlength> <rasters> PXLC
%
/PXLC {

  /rasters xdef
  /runlength xdef
  /cols xdef
  /rows xdef
  /ury xdef
  /urx xdef
  /lly xdef
  /llx xdef
  /wx xdef
  /code xdef
  /int xdef

  % See if the long or short format is required
  true cols CKSZ rows CKSZ ury CKSZ urx CKSZ lly CKSZ llx CKSZ
    TackRunLengthToRows
    { int-dict /CharDict get /Char-Info get code
        [0 0 llx PackByte 1 lly PackByte 2 urx PackByte 3 ury PackByte
         0 0 wx PackHW 2 rows PackByte 3 cols PackByte
         rasters] put}
    { int-dict /CharDict get /Char-Info get code
        [0 0 urx PackHW 1 ury PackHW
         0 0 llx PackHW 1 lly PackHW
         0 0 rows PackHW 1 cols PackHW
         0 0 16#7FFF PackHW 1 wx PackHW
         rasters] put}
    ifelse
} bdef

/CKSZ {abs 127 le and} bdef
/TackRunLengthToRows {runlength 0 ne {/rows rows neg def} if} bdef

%
%  <wx> <dsz> <psz> <llx> <lly> <urx> <ury> <rows> <cols> <runlength> <rasters> PLOTC
%
/PLOTC {
  /rasters xdef
  /runlength xdef
  /cols xdef
  /rows xdef
  /ury xdef
  /urx xdef
  /lly xdef
  /llx xdef
  /psz xdef
  /dsz xdef
  /wx xdef

  % "Plot" a character's raster pattern.
  rows 0 ne
    {
    gsave
      currentpoint translate
      psz dsz div dup scale
      cols rows true
      RasterScaleFactor 0 0 RasterScaleFactor neg llx neg ury
        tempmatrix astore
      {GenerateRasters} imagemask
    grestore
    } if
  wx x
} bdef

% Routine to generate rasters for "imagemask".
/GenerateRasters {
  rasters
  runlength 1 eq {RunLengthToRasters} if
} bdef

% Routine to convert from runlength encoding back to rasters.
/RunLengthToRasters {
  % ...not done yet...
} bdef

%
%  These procedures handle bitmap processing.
%
%  <bitmap columns> <bitmap rows> <bitmap pix/inch> <magnification> BMbeg
%
/BMbeg {
  /BMmagnification xdef
  /BMresolution xdef
  /BMrows xdef
  /BMcols xdef

  /BMcurrentrow 0 def
  gsave
    0.0 setgray
    Resolution BMresolution div dup scale
    currentpoint translate
    BMmagnification 1000.0 div dup scale
    0.0 BMrows moveto
    BMrows dup scale
    currentpoint translate
    /BMCheckpoint save def
  } bdef

/BMend {
  BMCheckpoint restore
  grestore
  } bdef

%
%  <hex raster bitmap> <rows> BMswath
%
/BMswath {
  /rows xdef
  /rasters xdef

  BMcols rows true
  [BMrows 0 0 BMrows neg 0 BMcurrentrow neg]
  {rasters}
  imagemask

  /BMcurrentrow BMcurrentrow rows add def
  BMcurrentrow % save this on the stack around a restore...
  BMCheckpoint restore
  /BMcurrentrow xdef
  /BMCheckpoint save def
  } bdef

%
%  Procedures for implementing the "rotate <theta>" special:
%  <theta> ROTB -
%        - ROTE -

/ROTB {
  XP
  gsave
  Xpos Ypos translate
  rotate % using <theta> from the stack
  Xpos neg Ypos neg translate
  RP
  } bdef

/ROTE {XP grestore RP} bdef
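%
%  Sketch of a "rotate" special (angle in degrees, value illustrative):
%
%    45 ROTB
%    ...material typeset at 45 degrees about the current position...
%    ROTE
%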

%
%  Procedures for implementing the "epsfile <filename> [<mag>]" special:
%  <llx> <lly> <mag> EPSB -
%  - EPSE -

/EPSB {
  0 SPB
  save
  4 1 roll % push the savelevel below the parameters
  /showpage {} def
  Xpos Ypos translate
  1000 div dup scale % using <mag> from the stack
  neg exch neg exch translate % using <llx> <lly> from the stack
  } bdef

/EPSE {restore 0 SPE} bdef
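%
%  Sketch of an "epsfile" special (values illustrative; <mag> is in
%  thousandths, so 1000 means unit scale):
%
%    72 72 1000 EPSB
%    ...included EPS file body...
%    EPSE
%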

%
%  Procedure for implementing revision bars:
%  <bary1> <bary2> <barx> <barw> REVB -
%  The bar is a line of width barw drawn from (barx,bary1) to (barx,bary2).

/REVB {
  /barw xdef
  /barx xdef
  /bary2 xdef
  /bary1 xdef
  gsave
    barw setlinewidth
    barx bary1 Yadjust moveto
    barx bary2 Yadjust lineto
    stroke
  grestore
  } bdef
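%
%  Illustrative call: a 4-unit-wide bar from (1200,500) to (1200,900),
%  with Yadjust applied to both endpoints:
%
%    500 900 1200 4 REVB
%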

%
%
%  A small dictionary and two procedures to facilitate The Publisher's
%  implementation of gray table cells:
%                               <ptnum> GRSP -
%  <ultpnum> <lrptnum> <graylev> <freq> GRFB -
%
%  GRSP saves the current DVI location so that it can be retrieved later
%  by the index <ptnum>.  GRFB fills a box whose corners are given by the
%  indexes <ultpnum> and <lrptnum> with a halftone gray of the given
%  level and frequency.  The dictionary GRPM holds the coordinates of
%  points marking the corners of gray table cells.

/GRPM 40 dict def

/GRSP {GRPM exch [Xpos Ypos] put} bdef

/GRFB {
  /GRfreq xdef
  /GRgraylev xdef
  GRPM exch get aload pop /GRlry xdef /GRlrx xdef
  GRPM exch get aload pop /GRuly xdef /GRulx xdef
  gsave
    % set the screen frequency if it isn't zero
    GRfreq 0 ne
      {currentscreen
      3 -1 roll pop GRfreq 3 1 roll
      setscreen}
    if
    % set the gray level
    GRgraylev setgray
    % draw and fill the path
    GRulx GRuly moveto
    GRlrx GRuly lineto
    GRlrx GRlry lineto
    GRulx GRlry lineto
    closepath
    fill
  grestore
  } bdef
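%
%  Sketch of the intended pairing (point numbers and values illustrative):
%
%    1 GRSP               % record the upper-left corner at the current position
%    ...move to the cell's lower-right corner...
%    2 GRSP               % record the lower-right corner
%    1 2 0.85 60 GRFB     % fill the cell at gray level 0.85, 60 lines/inch
%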


%
%  Procedures for implementing the "paper <source>" option:
%  <name> <eop> SPS          -
%         <eop> paper-manual -
%  etc.  The boolean <eop> is passed so that a paper source procedure
%  knows if it is being called at the beginning (false) or end
%  (true) of a page.

/SPS {
  /eop xdef
  /name xdef
  name where {pop eop name cvx exec} if
  } bdef

/paper-manual {
    {statusdict /manualfeed known
      {statusdict /manualfeed true put}
    if}
  if
  } bdef

/paper-automatic {
    {statusdict /manualfeed known
      {statusdict /manualfeed false put}
    if}
  if
  } bdef

/paper-top-tray {
    {}
    {statusdict /setpapertray known
      {statusdict begin gsave 0 setpapertray grestore end}
    if}
  ifelse
  } bdef

/paper-bottom-tray {
    {}
    {statusdict /setpapertray known
      {statusdict begin gsave 1 setpapertray grestore end}
    if}
  ifelse
  } bdef

/paper-both-trays {
    {}
    {statusdict /setpapertray known
      {statusdict begin gsave 2 setpapertray grestore end}
    if}
  ifelse
  } bdef
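%
%  The driver invokes these paper-source procedures through SPS; for
%  example, the page setup later in this document issues:
%
%    /paper-automatic false SPS
%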

(end of common prolog) VMSDebug

end

systemdict /setpacking known
  {savepackingmode setpacking}
  if

%
% End of included prolog section.
%

%%EndProlog
%%BeginSetup
BeginDviLaserDoc
300 300 RES
%%EndSetup


%%PageBoundingBox: (atend)
%%BeginPageSetup
1000 BP 3300 2550 PM /paper-automatic false SPS 375 0 XY
%%EndPageSetup
375 657 XY
%
%  [Rasterized text elided: the remainder of the page body consists of
%   character-bitmap calls of the form "SV ... <hex rasters> PLOTC RST".
%   The recoverable content appears to be a heading reading "Chapter 1"
%   followed by a larger title beginning "Collective Communication".]
%
779 989 XY
SV 66 103 103.279 4 0 61 72 72 64 0
<00000000007FC000 00000000FFFFC000 00000000FFFFC000 00000000FFFFC000
 00000000FFFFC000 00000000FFFFC000 0000000003FFC000 0000000001FFC000
 0000000001FFC000 0000000001FFC000 0000000001FFC000 0000000001FFC000
 0000000001FFC000 0000000001FFC000 0000000001FFC000 0000000001FFC000
 0000000001FFC000 0000000001FFC000 0000000001FFC000 0000000001FFC000
 0000000001FFC000 0000000001FFC000 0000000001FFC000 0000000001FFC000
 0000000001FFC000 0000000001FFC000 00000FFC01FFC000 0000FFFF81FFC000
 0007FFFFE1FFC000 001FFFFFF9FFC000 007FFC03FFFFC000 00FFF0007FFFC000
 01FFC0001FFFC000 03FF80000FFFC000 07FF000007FFC000 0FFE000003FFC000
 0FFE000003FFC000 1FFC000003FFC000 1FFC000003FFC000 3FFC000003FFC000
 3FFC000003FFC000 7FF8000003FFC000 7FF8000003FFC000 7FF8000003FFC000
 FFF8000003FFC000 FFF8000003FFC000 FFF8000003FFC000 FFF8000003FFC000
 FFF8000003FFC000 FFF8000003FFC000 FFF8000003FFC000 FFF8000003FFC000
 FFF8000003FFC000 FFF8000003FFC000 7FF8000003FFC000 7FF8000003FFC000
 7FF8000003FFC000 3FF8000003FFC000 3FFC000003FFC000 3FFC000003FFC000
 1FFC000003FFC000 1FFC000003FFC000 0FFE000007FFC000 07FF00000FFFC000
 03FF00001FFFC000 01FFC0003FFFC000 00FFE000FFFFE000 007FF807FBFFFF80
 001FFFFFF3FFFF80 0007FFFFC3FFFF80 0001FFFF03FFFF80 00001FF803FFFF80>
PLOTC RST
885 989 XY
SV 53 103 103.279 4 0 49 46 46 48 0
<00001FFFC000 0000FFFFF800 0007FFFFFE00 001FFFFFFF80 007FFC00FFC0
 00FFE001FFC0 01FFC003FFE0 03FF8003FFE0 07FF0003FFE0 0FFE0003FFE0
 0FFE0003FFE0 1FFC0001FFC0 1FFC0001FFC0 3FFC0000FF80 3FFC00003E00
 7FF800000000 7FF800000000 7FF800000000 FFF800000000 FFF800000000
 FFF800000000 FFF800000000 FFF800000000 FFF800000000 FFF800000000
 FFF800000000 FFF800000000 FFF800000000 7FF800000000 7FF800000000
 7FFC00000000 3FFC00000000 3FFC00000000 1FFC000000F8 1FFE000000F8
 0FFE000000F8 0FFF000001F0 07FF800003F0 03FFC00007E0 01FFE0000FC0
 00FFF0001F80 007FFE00FF00 001FFFFFFE00 0007FFFFF800 0000FFFFE000
 00001FFE0000>
PLOTC RST
934 989 XY
SV 66 103 103.279 4 0 63 72 72 64 0
<007FC00000000000 FFFFC00000000000 FFFFC00000000000 FFFFC00000000000
 FFFFC00000000000 FFFFC00000000000 03FFC00000000000 01FFC00000000000
 01FFC00000000000 01FFC00000000000 01FFC00000000000 01FFC00000000000
 01FFC00000000000 01FFC00000000000 01FFC00000000000 01FFC00000000000
 01FFC00000000000 01FFC00000000000 01FFC00000000000 01FFC00000000000
 01FFC00000000000 01FFC00000000000 01FFC00000000000 01FFC00000000000
 01FFC00000000000 01FFC00000000000 01FFC001FFC00000 01FFC00FFFF80000
 01FFC03FFFFE0000 01FFC0FFFFFF0000 01FFC1FC07FF8000 01FFC3E003FFC000
 01FFC7C001FFC000 01FFCF0001FFE000 01FFDE0000FFE000 01FFDC0000FFE000
 01FFFC0000FFF000 01FFF80000FFF000 01FFF00000FFF000 01FFF00000FFF000
 01FFF00000FFF000 01FFE00000FFF000 01FFE00000FFF000 01FFE00000FFF000
 01FFE00000FFF000 01FFE00000FFF000 01FFE00000FFF000 01FFE00000FFF000
 01FFE00000FFF000 01FFE00000FFF000 01FFE00000FFF000 01FFE00000FFF000
 01FFE00000FFF000 01FFE00000FFF000 01FFE00000FFF000 01FFE00000FFF000
 01FFE00000FFF000 01FFE00000FFF000 01FFE00000FFF000 01FFE00000FFF000
 01FFE00000FFF000 01FFE00000FFF000 01FFE00000FFF000 01FFE00000FFF000
 01FFE00000FFF000 01FFE00000FFF000 01FFE00000FFF000 FFFFFFC07FFFFFE0
 FFFFFFC07FFFFFE0 FFFFFFC07FFFFFE0 FFFFFFC07FFFFFE0 FFFFFFC07FFFFFE0>
PLOTC RST
1000 989 XY
SV 58 103 103.279 3 0 57 46 46 56 0
<0007FFFC000000 007FFFFFC00000 01FFFFFFF80000 03FFFFFFFE0000
 07FE001FFF0000 07FF0003FFC000 0FFF8001FFE000 0FFF8000FFF000
 0FFF80007FF000 0FFF80007FF800 0FFF80007FF800 07FF00003FFC00
 07FF00003FFC00 03FE00003FFC00 00F800003FFC00 000000003FFC00
 000000003FFC00 000000003FFC00 000000003FFC00 000007FFFFFC00
 0000FFFFFFFC00 0007FFFFFFFC00 003FFFE03FFC00 00FFFE003FFC00
 03FFF0003FFC00 07FFC0003FFC00 0FFF00003FFC00 1FFE00003FFC00
 3FFC00003FFC00 7FF800003FFC00 7FF800003FFC00 FFF000003FFC00
 FFF000003FFC00 FFF000003FFC00 FFF000003FFC00 FFF000003FFC00
 FFF000007FFC00 7FF80000FFFC00 7FF80001EFFC00 3FFC0003EFFC00
 3FFF0007CFFF00 0FFFC03F8FFFF8 07FFFFFF07FFFC 01FFFFFC03FFFC
 007FFFF001FFFC 0003FF80007FF8>
PLOTC RST
1058 989 XY
SV 66 103 103.279 4 0 63 46 46 64 0
<007FC001FFC00000 FFFFC00FFFF80000 FFFFC03FFFFE0000 FFFFC0FFFFFF0000
 FFFFC1FC07FF8000 FFFFC3E003FFC000 03FFC7C001FFC000 01FFCF0001FFE000
 01FFDE0000FFE000 01FFDC0000FFE000 01FFFC0000FFF000 01FFF80000FFF000
 01FFF00000FFF000 01FFF00000FFF000 01FFF00000FFF000 01FFE00000FFF000
 01FFE00000FFF000 01FFE00000FFF000 01FFE00000FFF000 01FFE00000FFF000
 01FFE00000FFF000 01FFE00000FFF000 01FFE00000FFF000 01FFE00000FFF000
 01FFE00000FFF000 01FFE00000FFF000 01FFE00000FFF000 01FFE00000FFF000
 01FFE00000FFF000 01FFE00000FFF000 01FFE00000FFF000 01FFE00000FFF000
 01FFE00000FFF000 01FFE00000FFF000 01FFE00000FFF000 01FFE00000FFF000
 01FFE00000FFF000 01FFE00000FFF000 01FFE00000FFF000 01FFE00000FFF000
 01FFE00000FFF000 FFFFFFC07FFFFFE0 FFFFFFC07FFFFFE0 FFFFFFC07FFFFFE0
 FFFFFFC07FFFFFE0 FFFFFFC07FFFFFE0>
PLOTC RST
1124 989 XY
SV 59 103 103.279 3 -21 56 47 68 56 0
<00000000001F80 00007FF000FFE0 0007FFFF03FFF0 001FFFFFC7FFF0
 007FFFFFFFC7F8 00FFE03FFE0FF8 01FF800FFC0FF8 03FF0007FE0FF8
 07FE0003FF07F0 07FE0003FF07F0 0FFC0001FF81C0 0FFC0001FF8000
 0FFC0001FF8000 1FFC0001FFC000 1FFC0001FFC000 1FFC0001FFC000
 1FFC0001FFC000 1FFC0001FFC000 1FFC0001FFC000 0FFC0001FF8000
 0FFC0001FF8000 0FFC0001FF8000 07FE0003FF0000 07FE0003FF0000
 03FF0007FE0000 01FF800FFC0000 00FFE03FF80000 01FFFFFFF00000
 01DFFFFFC00000 03C7FFFF000000 03C07FF0000000 07C00000000000
 07C00000000000 07C00000000000 07C00000000000 07E00000000000
 07F00000000000 07F80000000000 07FFFFFFF00000 07FFFFFFFF0000
 03FFFFFFFFE000 03FFFFFFFFF800 01FFFFFFFFFE00 01FFFFFFFFFF00
 00FFFFFFFFFF80 007FFFFFFFFF80 03FFFFFFFFFFC0 0FFFFFFFFFFFC0
 1FF800001FFFE0 3FE0000001FFE0 7FC00000007FF0 7FC00000003FF0
 FF800000001FF0 FF800000001FF0 FF800000001FF0 FF800000001FF0
 FF800000001FF0 7FC00000003FE0 7FC00000003FE0 3FE00000007FC0
 3FF0000000FFC0 1FFC000003FF80 0FFF00000FFF00 03FFF000FFFC00
 00FFFFFFFFF000 003FFFFFFFC000 0007FFFFFE0000 00003FFFC00000>
PLOTC RST
1183 989 XY
SV 54 103 103.279 3 0 50 46 46 48 0
<00001FFE0000 0001FFFFE000 0007FFFFF800 001FFFFFFE00 007FFC07FF00
 00FFE001FF80 01FFC0007FC0 03FF80003FE0 07FF00003FF0 0FFE00001FF0
 1FFE00000FF8 1FFC00000FF8 3FFC00000FFC 3FFC000007FC 7FFC000007FC
 7FF8000007FC 7FF8000007FE 7FF8000007FE FFF8000007FE FFF8000007FE
 FFFFFFFFFFFE FFFFFFFFFFFE FFFFFFFFFFFE FFFFFFFFFFFC FFF800000000
 FFF800000000 FFF800000000 FFF800000000 7FF800000000 7FF800000000
 7FFC00000000 3FFC00000000 3FFC00000000 3FFC0000001C 1FFE0000003E
 0FFE0000003E 07FF0000007E 07FF000000FC 03FF800001F8 01FFC00003F0
 007FF0001FE0 003FFE00FFC0 001FFFFFFF80 0007FFFFFE00 0000FFFFF800
 00000FFF8000>
PLOTC RST
1238 989 XY
SV 47 103 103.279 4 0 42 46 46 40 0
<000FFF00E0 007FFFF3E0 01FFFFFFE0 07FFFFFFE0 0FF800FFE0 1FC0001FE0
 3F80000FE0 3F000007E0 7F000003E0 7F000003E0 FF000003E0 FF000003E0
 FF800003E0 FFC0000000 FFF0000000 FFFE000000 FFFFF80000 7FFFFFC000
 7FFFFFF000 3FFFFFFC00 1FFFFFFF00 0FFFFFFF80 07FFFFFFC0 03FFFFFFE0
 00FFFFFFF0 003FFFFFF0 0003FFFFF8 00001FFFF8 000000FFFC 0000001FFC
 7800000FFC F8000007FC F8000003FC FC000003FC FC000003FC FE000003F8
 FE000003F8 FF000003F8 FF800007F0 FFC0000FF0 FFF0001FE0 FFFC00FFC0
 FFFFFFFF80 FC7FFFFE00 F81FFFF800 E003FF8000>
PLOTC RST
XP /F74 /cmss10 432 59.8 59.8 128 [-3 -16 59 45] PXLNF RP
XP /F74 49 30 5 0 25 40 40 24 0
<001800 003800 00F800 07F800 FFF800 FFF800 F8F800 00F800 00F800
 00F800 00F800 00F800 00F800 00F800 00F800 00F800 00F800 00F800
 00F800 00F800 00F800 00F800 00F800 00F800 00F800 00F800 00F800
 00F800 00F800 00F800 00F800 00F800 00F800 00F800 00F800 00F800
 00F800 7FFFF0 7FFFF0 7FFFF0>
PXLC RP
375 1230 XY F74(1)S
XP /F74 46 17 6 0 11 5 5 8 0
<F8 F8 F8 F8 F8>
PXLC RP
405 1230 XY F74(.1)S
XP /F74 67 38 4 -1 34 43 44 32 0
<0001FF00 000FFFE0 003FFFF8 007FFFF8 00FE01F8 01F80030 03F00010
 07C00000 0F800000 1F800000 1F000000 3E000000 3E000000 7E000000
 7C000000 7C000000 7C000000 F8000000 F8000000 F8000000 F8000000
 F8000000 F8000000 F8000000 F8000000 F8000000 F8000000 7C000000
 7C000000 7C000000 7E000000 3E000000 3E000000 1F000000 1F800000
 0F800000 07C00000 03F00004 01F8001C 00FE00FC 007FFFFC 003FFFF8
 000FFFE0 0001FF00>
PXLC RP
511 1230 XY F74(C)S
XP /F74 111 30 2 0 27 27 27 32 0
<007F0000 01FFC000 07FFF000 0FFFF800 1FC1FC00 3F007E00 3E003E00
 7C001F00 7C001F00 78000F00 F8000F80 F8000F80 F8000F80 F8000F80
 F8000F80 F8000F80 F8000F80 7C001F00 7C001F00 7E003F00 3E003E00
 3F007E00 1FC1FC00 0FFFF800 07FFF000 01FFC000 007F0000>
PXLC RP
549 1230 XY F74(o)S
XP /F74 108 14 4 0 9 42 42 8 0
<F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8
 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8>
PXLC RP
579 1230 XY F74(ll)S
XP /F74 101 27 2 0 24 27 27 24 0
<007E00 03FF80 07FFC0 0FFFE0 1F83F0 3F00F0 3E0078 7C0078 7C0038
 78003C FFFFFC FFFFFC FFFFFC FFFFFC F80000 F80000 F80000 780000
 7C0000 7C0000 3E0000 3F000C 1FC07C 0FFFFC 07FFFC 01FFF0 007F80>
PXLC RP
608 1230 XY F74(e)S
XP /F74 99 27 2 0 24 27 27 24 0
<007FC0 01FFF0 07FFFC 0FFFFC 1FC07C 1F0008 3E0000 7C0000 7C0000
 7C0000 F80000 F80000 F80000 F80000 F80000 F80000 F80000 7C0000
 7C0000 7E0000 3E0000 1F000C 1FC07C 0FFFFC 07FFFC 01FFF0 007F80>
PXLC RP
634 1230 XY F74(c)S
XP /F74 116 22 1 0 20 34 34 24 0
<07C000 07C000 07C000 07C000 07C000 07C000 07C000 FFFFC0 FFFFC0
 FFFFC0 07C000 07C000 07C000 07C000 07C000 07C000 07C000 07C000
 07C000 07C000 07C000 07C000 07C000 07C000 07C000 07C000 07C000
 07C000 07C040 07E1C0 03FFE0 03FFE0 01FF80 00FC00>
PXLC RP
661 1230 XY F74(t)S
XP /F74 105 14 4 0 9 42 42 8 0
<F8 F8 F8 F8 F8 00 00 00 00 00 00 00 00 00 00 F8 F8 F8 F8 F8 F8 F8 F8
 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8 F8>
PXLC RP
683 1230 XY F74(i)S
XP /F74 118 28 1 0 26 27 27 32 0
<F8000F80 FC000F80 7C001F00 7C001F00 7E001F00 3E003E00 3E003E00
 1F003C00 1F007C00 1F007C00 0F807800 0F80F800 0F80F800 07C0F000
 07C1F000 07C1F000 03E1E000 03E3E000 01E3C000 01E3C000 01F3C000
 00F78000 00F78000 00F78000 007F0000 007F0000 007F0000>
PXLC RP
697 1230 XY F74(ve)S
XP /F74 100 31 2 0 25 42 42 24 0
<00003E 00003E 00003E 00003E 00003E 00003E 00003E 00003E 00003E
 00003E 00003E 00003E 00003E 00003E 00003E 00FC3E 03FF3E 07FFFE
 0FFFFE 1FC1FE 3F007E 3E003E 7C003E 7C003E FC003E F8003E F8003E
 F8003E F8003E F8003E F8003E F8003E FC003E 7C003E 7C003E 3E007E
 3F00FE 1FC1FE 0FFFFE 07FFBE 03FF3E 00FC3E>
PXLC RP
771 1230 XY F74(d)S
XP /F74 97 29 2 0 23 27 27 24 0
<01FE00 0FFF80 3FFFC0 3FFFE0 3C03F0 3001F0 0001F8 0000F8 0000F8
 0000F8 0000F8 0000F8 007FF8 07FFF8 1FFFF8 3FE0F8 7F00F8 FC00F8
 F800F8 F800F8 F800F8 FC01F8 7E07F8 7FFFF8 3FFFF8 1FFCF8 0FE0F8>
PXLC RP
802 1230 XY F74(ata)S
XP /F74 109 47 5 0 42 27 27 40 0
<F83F003F00 F8FFC0FFC0 FBFFE3FFE0 FFFFF7FFF0 FF83F783F0 FE01FE01F8
 FC00FC00F8 FC00FC00F8 FC00FC00F8 F800F800F8 F800F800F8 F800F800F8
 F800F800F8 F800F800F8 F800F800F8 F800F800F8 F800F800F8 F800F800F8
 F800F800F8 F800F800F8 F800F800F8 F800F800F8 F800F800F8 F800F800F8
 F800F800F8 F800F800F8 F800F800F8>
PXLC RP
901 1230 XY F74(moveme)S
XP /F74 110 31 5 0 25 27 27 24 0
<F83F00 F8FF80 FBFFC0 FFFFE0 FF07E0 FE03F0 FC01F0 FC01F0 FC01F0
 F801F0 F801F0 F801F0 F801F0 F801F0 F801F0 F801F0 F801F0 F801F0
 F801F0 F801F0 F801F0 F801F0 F801F0 F801F0 F801F0 F801F0 F801F0>
PXLC RP
1106 1230 XY F74(nt)S
XP /F74 115 23 2 0 21 27 27 24 0
<03FC00 1FFF80 3FFFC0 7FFFC0 7C07C0 F80080 F80000 F80000 F80000
 FC0000 7F8000 7FF800 3FFE00 1FFF00 07FF80 00FFC0 000FE0 0007E0
 0003E0 0003E0 4003E0 E007E0 FC0FC0 FFFFC0 7FFF80 1FFE00 03F800>
PXLC RP
1159 1230 XY F74(s)S
XP /F74 123 30 0 15 29 19 4 32 0
<FFFFFFF8 FFFFFFF8 FFFFFFF8 FFFFFFF8>
PXLC RP
1202 1230 XY F74({)S 19 x(sim)S
XP /F74 112 31 5 -12 28 27 39 24 0
<F83F00 F9FFC0 FBFFE0 FFFFF0 FF07F0 FC01F8 F800FC F800FC F8007C
 F8007E F8003E F8003E F8003E F8003E F8003E F8003E F8003E F8007C
 F8007C F800FC FC00F8 FC03F8 FF07F0 FFFFE0 FBFFC0 F9FF80 F87E00
 F80000 F80000 F80000 F80000 F80000 F80000 F80000 F80000 F80000
 F80000 F80000 F80000>
PXLC RP
1336 1230 XY F74(pli)S
XP /F74 12 32 1 0 26 43 43 32 0
<00000F80 003F8F80 007F8F80 00FF8F80 01FF8F80 03E00000 03C00000
 07C00000 07C00000 07C00000 07C00000 07C00000 07C00000 07C00000
 07C00000 07C00000 FFFF8F80 FFFF8F80 FFFF8F80 07C00F80 07C00F80
 07C00F80 07C00F80 07C00F80 07C00F80 07C00F80 07C00F80 07C00F80
 07C00F80 07C00F80 07C00F80 07C00F80 07C00F80 07C00F80 07C00F80
 07C00F80 07C00F80 07C00F80 07C00F80 07C00F80 07C00F80 07C00F80
 07C00F80>
PXLC RP
1396 1230 XY F74(\014cation)S
XP /F34 /cmr10 329 45.5 45.5 128 [-2 -12 45 34] PXLNF RP
XP /F34 80 31 2 0 27 31 31 32 0
<FFFFE000 0F807800 07801C00 07801E00 07800F00 07800F80 07800F80
 07800F80 07800F80 07800F80 07800F80 07800F00 07801E00 07801C00
 07807800 07FFE000 07800000 07800000 07800000 07800000 07800000
 07800000 07800000 07800000 07800000 07800000 07800000 07800000
 07800000 0FC00000 FFFC0000>
PXLC RP
375 1331 XY F34(P)S
XP /F34 114 18 1 0 16 20 20 16 0
<0E78 FE8C 0F1E 0F1E 0F0C 0E00 0E00 0E00 0E00 0E00 0E00 0E00 0E00
 0E00 0E00 0E00 0E00 0E00 0E00 FFE0>
PXLC RP
406 1331 XY F34(r)S
XP /F34 111 23 1 0 21 20 20 24 0
<01F800 070E00 1C0380 3801C0 3801C0 7000E0 7000E0 F000F0 F000F0
 F000F0 F000F0 F000F0 F000F0 7000E0 7000E0 3801C0 3801C0 1C0380
 070E00 01F800>
PXLC RP
424 1331 XY F34(o)S
XP /F34 112 25 1 -9 22 20 29 24 0
<0E3E00 FEC380 0F01C0 0F00E0 0E00E0 0E00F0 0E0070 0E0078 0E0078
 0E0078 0E0078 0E0078 0E0078 0E0070 0E00F0 0E00E0 0F01E0 0F01C0
 0EC300 0E3E00 0E0000 0E0000 0E0000 0E0000 0E0000 0E0000 0E0000
 0E0000 FFE000>
PXLC RP
446 1331 XY F34(p)S 1 x(o)S
XP /F34 115 18 2 0 15 20 20 16 0
<1F90 3070 4030 C010 C010 C010 E000 7800 7F80 3FE0 0FF0 0070 8038
 8018 8018 C018 C018 E030 D060 8F80>
PXLC RP
496 1331 XY F34(s)S
XP /F34 97 23 2 0 22 20 20 24 0
<1FE000 303000 781800 781C00 300E00 000E00 000E00 000E00 00FE00
 078E00 1E0E00 380E00 780E00 F00E10 F00E10 F00E10 F01E10 781E10
 386720 0F83C0>
PXLC RP
514 1331 XY F34(a)S
XP /F34 108 13 0 0 11 32 32 16 0
<0E00 FE00 0E00 0E00 0E00 0E00 0E00 0E00 0E00 0E00 0E00 0E00 0E00
 0E00 0E00 0E00 0E00 0E00 0E00 0E00 0E00 0E00 0E00 0E00 0E00 0E00
 0E00 0E00 0E00 0E00 0E00 FFE0>
PXLC RP
536 1331 XY F34(l)S
XP /F34 116 18 1 0 14 28 28 16 0
<0200 0200 0200 0600 0600 0E00 0E00 3E00 FFF8 0E00 0E00 0E00 0E00
 0E00 0E00 0E00 0E00 0E00 0E00 0E00 0E08 0E08 0E08 0E08 0E08 0610
 0310 01E0>
PXLC RP
570 1331 XY F34(to)S
XP /F34 100 25 2 0 23 32 32 24 0
<000380 003F80 000380 000380 000380 000380 000380 000380 000380
 000380 000380 000380 03E380 061B80 1C0780 380380 380380 700380
 700380 F00380 F00380 F00380 F00380 F00380 F00380 700380 700380
 380380 380780 1C0780 0E1B80 03E3F8>
PXLC RP
631 1331 XY F34(d)S
XP /F34 101 20 1 0 18 20 20 24 0
<03F000 0E1C00 1C0E00 380700 380700 700700 700380 F00380 F00380
 FFFF80 F00000 F00000 F00000 700000 700000 380080 180080 0C0100
 070600 01F800>
PXLC RP
656 1331 XY F34(elete)S 20 x(t)S
XP /F34 104 25 1 0 23 32 32 24 0
<0E0000 FE0000 0E0000 0E0000 0E0000 0E0000 0E0000 0E0000 0E0000
 0E0000 0E0000 0E0000 0E3E00 0E4300 0E8180 0F01C0 0F01C0 0E01C0
 0E01C0 0E01C0 0E01C0 0E01C0 0E01C0 0E01C0 0E01C0 0E01C0 0E01C0
 0E01C0 0E01C0 0E01C0 0E01C0 FFE7FC>
PXLC RP
785 1331 XY F34(he)S
XP /F41 /cmss10 329 45.5 45.5 128 [-3 -12 44 34] PXLNF RP
XP /F41 114 16 3 0 14 20 20 16 0
<F0E0 F3E0 F7E0 FF00 FC00 FC00 F800 F800 F000 F000 F000 F000 F000
 F000 F000 F000 F000 F000 F000 F000>
PXLC RP
851 1331 XY F41(r)S
XP /F41 101 20 1 0 18 20 20 24 0
<03F000 0FFC00 1FFE00 3E1F00 3C0700 780700 700380 FFFF80 FFFF80
 FFFF80 F00000 F00000 F00000 700000 780000 3C0100 3E0700 1FFF00
 07FE00 01F800>
PXLC RP
867 1331 XY F41(e)S
XP /F41 99 20 2 0 18 20 20 16 0
<03F0 0FFC 1FFE 3E0E 3C02 7800 7800 F000 F000 F000 F000 F000 F000
 7800 7800 3C01 3E0F 1FFF 0FFE 03F0>
PXLC RP
887 1331 XY F41(c)S
XP /F41 118 21 1 0 19 20 20 24 0
<F003C0 F003C0 780380 780780 780780 3C0F00 3C0F00 3C0F00 1E0E00
 1E1E00 1E1E00 0E1C00 0F3C00 0F3C00 073800 073800 073800 03B000
 03F000 01E000>
PXLC RP
907 1331 XY F41(v)S
XP /F41 116 16 1 0 13 26 26 16 0
<1E00 1E00 1E00 1E00 1E00 1E00 FFF0 FFF0 FFF0 1E00 1E00 1E00 1E00
 1E00 1E00 1E00 1E00 1E00 1E00 1E00 1E00 1E00 1E20 1FF0 0FF0 07C0>
PXLC RP
928 1331 XY F41(t)S
XP /F41 121 21 1 -9 19 20 29 24 0
<F003C0 F003C0 780780 780780 7C0780 3C0F00 3C0F00 1E0F00 1E1E00
 0E1E00 0F1C00 0F1C00 073C00 073800 03B800 03B800 03B000 01B000
 01F000 00E000 00E000 01C000 01C000 01C000 038000 078000 7F0000
 7E0000 7C0000>
PXLC RP
943 1331 XY F41(y)S
XP /F41 112 23 3 -9 20 20 29 24 0
<F1F000 F7FC00 FFFE00 FC3E00 F81F00 F00F00 F00F80 F00780 F00780
 F00780 F00780 F00780 F00780 F00F00 F00F00 F81F00 FC3E00 FFFC00
 F7F800 F1E000 F00000 F00000 F00000 F00000 F00000 F00000 F00000
 F00000 F00000>
PXLC RP
964 1331 XY F41(p)S 1 x(e)S 22 x F34(ar)S
XP /F34 103 23 1 -10 21 21 31 24 0
<0000E0 03E330 0E3C30 1C1C30 380E00 780F00 780F00 780F00 780F00
 780F00 380E00 1C1C00 1E3800 33E000 200000 200000 300000 300000
 3FFE00 1FFF80 0FFFC0 3001E0 600070 C00030 C00030 C00030 C00030
 600060 3000C0 1C0380 03FC00>
PXLC RP
1070 1331 XY F34(g)S
XP /F34 117 25 1 0 23 20 20 24 0
<0E01C0 FE1FC0 0E01C0 0E01C0 0E01C0 0E01C0 0E01C0 0E01C0 0E01C0
 0E01C0 0E01C0 0E01C0 0E01C0 0E01C0 0E01C0 0E01C0 0E03C0 0603C0
 030DC0 01F1FC>
PXLC RP
1093 1331 XY F34(u)S
XP /F34 109 38 1 0 36 20 20 40 0
<0E1F01F000 FE61861800 0E81C81C00 0F00F00E00 0F00F00E00 0E00E00E00
 0E00E00E00 0E00E00E00 0E00E00E00 0E00E00E00 0E00E00E00 0E00E00E00
 0E00E00E00 0E00E00E00 0E00E00E00 0E00E00E00 0E00E00E00 0E00E00E00
 0E00E00E00 FFE7FE7FE0>
PXLC RP
1118 1331 XY F34(me)S
XP /F34 110 25 1 0 23 20 20 24 0
<0E3E00 FE4300 0E8180 0F01C0 0F01C0 0E01C0 0E01C0 0E01C0 0E01C0
 0E01C0 0E01C0 0E01C0 0E01C0 0E01C0 0E01C0 0E01C0 0E01C0 0E01C0
 0E01C0 FFE7FC>
PXLC RP
1176 1331 XY F34(n)S -1 x(t)S
XP /F34 102 14 0 0 16 32 32 16 0
<007C 00C6 018F 038F 0706 0700 0700 0700 0700 0700 0700 0700 FFF0
 0700 0700 0700 0700 0700 0700 0700 0700 0700 0700 0700 0700 0700
 0700 0700 0700 0700 0700 7FF0>
PXLC RP
1239 1331 XY F34(from)S
XP /F41 77 40 5 0 34 32 32 32 0
<F80001F8 FC0003F8 FC0003F8 F4000378 F6000778 F6000778 F6000778
 F3000E78 F3000E78 F3000E78 F3801E78 F3801E78 F1801C78 F1C03C78
 F1C03C78 F0C03878 F0C03878 F0E07878 F0E07878 F0607078 F070F078
 F070F078 F030E078 F039E078 F039E078 F019C078 F019C078 F019C078
 F00F8078 F00F8078 F00F8078 F0000078>
PXLC RP
1352 1331 XY F41(M)S
XP /F41 80 29 5 0 25 32 32 24 0
<FFF800 FFFF00 FFFF80 F00FC0 F003E0 F001E0 F000F0 F000F0 F000F0
 F000F0 F000F0 F000F0 F000F0 F001E0 F003E0 F00FC0 FFFF80 FFFF00
 FFF800 F00000 F00000 F00000 F00000 F00000 F00000 F00000 F00000
 F00000 F00000 F00000 F00000 F00000>
PXLC RP
1391 1331 XY F41(P)S
XP /F41 73 13 4 0 8 32 32 8 0
<F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0
 F0 F0 F0 F0 F0 F0 F0 F0 F0>
PXLC RP
1420 1331 XY F41(I)S 1436 1331 XY 14 2 R
XP /F41 71 30 3 -1 26 33 34 24 0
<001FE0 00FFF8 01FFFE 03E03E 07800E 0F0000 1E0000 3E0000 3C0000
 7C0000 780000 780000 780000 F00000 F00000 F00000 F00000 F00000
 F00000 F003FE F003FE 7803FE 78001E 78001E 7C001E 3C001E 3E001E
 1E001E 0F001E 07801E 03E03E 01FFFE 00FFF8 001FC0>
PXLC RP
1449 1331 XY F41(G)S
XP /F41 65 30 1 0 28 32 32 32 0
<001F0000 001F0000 003F8000 003B8000 003B8000 007BC000 0073C000
 0071C000 00F1E000 00E1E000 00E0E000 01E0F000 01E0F000 01C0F000
 03C07800 03C07800 03807800 07803C00 07803C00 07003C00 0FFFFE00
 0FFFFE00 0FFFFE00 1E000F00 1E000F00 3C000F80 3C000780 3C000780
 780007C0 780003C0 780003C0 F00003E0>
PXLC RP
1480 1331 XY F41(A)S
XP /F41 84 31 2 0 28 32 32 32 0
<FFFFFFC0 FFFFFFC0 FFFFFFC0 001E0000 001E0000 001E0000 001E0000
 001E0000 001E0000 001E0000 001E0000 001E0000 001E0000 001E0000
 001E0000 001E0000 001E0000 001E0000 001E0000 001E0000 001E0000
 001E0000 001E0000 001E0000 001E0000 001E0000 001E0000 001E0000
 001E0000 001E0000 001E0000 001E0000>
PXLC RP
1506 1331 XY F41(T)S
XP /F41 72 32 5 0 26 32 32 24 0
<F00078 F00078 F00078 F00078 F00078 F00078 F00078 F00078 F00078
 F00078 F00078 F00078 F00078 F00078 FFFFF8 FFFFF8 FFFFF8 F00078
 F00078 F00078 F00078 F00078 F00078 F00078 F00078 F00078 F00078
 F00078 F00078 F00078 F00078 F00078>
PXLC RP
1537 1331 XY F41(H)S
XP /F41 69 27 5 0 24 32 32 24 0
<FFFFC0 FFFFC0 FFFFC0 F00000 F00000 F00000 F00000 F00000 F00000
 F00000 F00000 F00000 F00000 F00000 FFFF80 FFFF80 FFFF80 F00000
 F00000 F00000 F00000 F00000 F00000 F00000 F00000 F00000 F00000
 F00000 F00000 FFFFE0 FFFFE0 FFFFE0>
PXLC RP
1569 1331 XY F41(E)S
XP /F41 82 29 5 0 27 32 32 24 0
<FFF800 FFFF00 FFFF80 F007C0 F003E0 F001E0 F000F0 F000F0 F000F0
 F000F0 F000F0 F001E0 F003E0 F007C0 FFFF80 FFFF00 FFF800 F03C00
 F01E00 F01E00 F00F00 F00F00 F00780 F00780 F003C0 F001C0 F001E0
 F000F0 F000F0 F00078 F00078 F0003C>
PXLC RP
1597 1331 XY F41(R)S
XP /F34 44 13 4 -9 10 5 14 8 0
<70 F8 FC FC 74 04 04 04 08 08 10 10 20 40>
PXLC RP
1626 1331 XY F34(,)S 20 x F41(MPI)S 1743 1331 XY 14 2 R
XP /F41 83 25 2 -1 21 33 34 24 0
<01FC00 07FF80 0FFFC0 1F03C0 3C00C0 3C0000 780000 780000 780000
 780000 780000 7C0000 3C0000 3F0000 1FE000 0FFC00 07FE00 01FF00
 003F80 0007C0 0003C0 0003E0 0001E0 0001E0 0001E0 0001E0 0001E0
 0001C0 C003C0 F007C0 FC0F80 7FFF00 1FFE00 03F800>
PXLC RP
1757 1331 XY F41(S)S
XP /F41 67 29 3 -1 26 33 34 24 0
<001FC0 00FFF8 01FFFC 03E03C 07800C 0F0000 1E0000 3E0000 3C0000
 7C0000 780000 780000 780000 F00000 F00000 F00000 F00000 F00000
 F00000 F00000 F00000 780000 780000 780000 7C0000 3C0000 3E0000
 1E0000 0F0002 07800E 03E03E 01FFFC 00FFF0 001FC0>
PXLC RP
1782 1331 XY F41(CA)S -4 x(TTER)S F34(,)S 21 x(et)S
XP /F34 99 20 2 0 18 20 20 16 0
<03F8 0E0C 1C1E 381E 380C 7000 7000 F000 F000 F000 F000 F000 F000
 7000 7000 3801 3801 1C02 0E0C 03F0>
PXLC RP
2027 1331 XY F34(c)S
XP /F34 46 13 4 0 9 5 5 8 0
<70 F8 F8 F8 70>
PXLC RP
2047 1331 XY F34(.)S
XP /F34 84 33 2 0 30 31 31 32 0
<7FFFFFE0 780F01E0 600F0060 400F0020 400F0020 C00F0030 800F0010
 800F0010 800F0010 800F0010 000F0000 000F0000 000F0000 000F0000
 000F0000 000F0000 000F0000 000F0000 000F0000 000F0000 000F0000
 000F0000 000F0000 000F0000 000F0000 000F0000 000F0000 000F0000
 000F0000 001F8000 07FFFE00>
PXLC RP
2097 1331 XY F34(The)S 57 y 375 X(same)S 15 x(datat)S
XP /F34 121 24 1 -9 22 20 29 24 0
<FF83F8 1E01E0 1C00C0 0E0080 0E0080 0E0080 070100 070100 038200
 038200 038200 01C400 01C400 01EC00 00E800 00E800 007000 007000
 007000 002000 002000 004000 004000 004000 F08000 F08000 F10000
 620000 3C0000>
PXLC RP
594 1388 XY F34(yp)S 1 x(e)S
XP /F34 119 33 1 0 31 20 20 32 0
<FF9FE1FC 3C078070 1C030060 1C038020 0E038040 0E038040 0E03C040
 0707C080 0704C080 0704E080 03886100 03887100 03C87300 01D03200
 01D03A00 00F03C00 00E01C00 00E01C00 00601800 00400800>
PXLC RP
680 1388 XY F34(w)S
XP /F34 105 13 0 0 10 31 31 16 0
<1C00 1E00 3E00 1E00 1C00 0000 0000 0000 0000 0000 0000 0E00 7E00
 0E00 0E00 0E00 0E00 0E00 0E00 0E00 0E00 0E00 0E00 0E00 0E00 0E00
 0E00 0E00 0E00 0E00 FFC0>
PXLC RP
712 1388 XY F34(ill)S
XP /F34 98 25 1 0 22 32 32 24 0
<0E0000 FE0000 0E0000 0E0000 0E0000 0E0000 0E0000 0E0000 0E0000
 0E0000 0E0000 0E0000 0E3E00 0EC380 0F01C0 0F00E0 0E00E0 0E0070
 0E0070 0E0078 0E0078 0E0078 0E0078 0E0078 0E0078 0E0070 0E0070
 0E00E0 0F00E0 0D01C0 0CC300 083E00>
PXLC RP
765 1388 XY F34(b)S 1 x(e)S 16 x(used)S 16 x(b)S 1 x(oth)S 15 x
(the)S 16 x(the)S 15 x(send)S 16 x(bu)S
XP /F34 11 27 0 0 29 32 32 32 0
<001F83E0 00F06E30 01C07878 0380F878 0300F030 07007000 07007000
 07007000 07007000 07007000 07007000 07007000 FFFFFF80 07007000
 07007000 07007000 07007000 07007000 07007000 07007000 07007000
 07007000 07007000 07007000 07007000 07007000 07007000 07007000
 07007000 07007000 07007000 7FE3FF00>
PXLC RP
1349 1388 XY F34(\013er)S 15 x(and)S 15 x(the)S 16 x(recei)S
XP /F34 118 24 1 0 22 20 20 24 0
<FF83F8 1E01E0 1C00C0 0E0080 0E0080 0E0080 070100 070100 038200
 038200 038200 01C400 01C400 01EC00 00E800 00E800 007000 007000
 007000 002000>
PXLC RP
1687 1388 XY F34(v)S -1 x(e)S 15 x(bu\013fer.)S 56 y 466 X(The)S
XP /F34 106 14 -2 -9 10 31 40 16 0
<00E0 01F0 01F0 01F0 00E0 0000 0000 0000 0000 0000 0000 0070 07F0
 00F0 0070 0070 0070 0070 0070 0070 0070 0070 0070 0070 0070 0070
 0070 0070 0070 0070 0070 0070 0070 0070 0070 6070 F060 F0C0 6180
 3F00>
PXLC RP
558 1444 XY F34(justi)S
XP /F34 12 25 0 0 23 32 32 24 0
<003F00 00E0C0 01C0C0 0381E0 0701E0 0701E0 070000 070000 070000
 070000 070000 070000 FFFFE0 0700E0 0700E0 0700E0 0700E0 0700E0
 0700E0 0700E0 0700E0 0700E0 0700E0 0700E0 0700E0 0700E0 0700E0
 0700E0 0700E0 0700E0 0700E0 7FC3FE>
PXLC RP
646 1444 XY F34(\014cation)S 14 x(is)S 13 x(that)S 14 x(this)S 14 x
(simpli)S
(\014es)S 13 x(the)S 15 x(use)S 14 x(of)S 14 x(the)S 15 x(simple)S
13 x(collectiv)S -1 x(e)S 14 x(data)S 13 x(mo)S -1 x(v)S -1 x(e)S
14 x(mo)S -1 x(v)S -1 x(e)S 57 y 375 X(function.)S 34 x(The)S 20 x
(more)S 20 x(general)S 19 x(setup,)S 22 x(with)S 19 x(a)S 20 x(di)S
(\013eren)S -1 x(t)S 19 x(datat)S -1 x(yp)S 1 x(e)S 19 x(in)S 20 x
(the)S 20 x(send)S 21 x(bu\013er)S 20 x(and)S 20 x(the)S 56 y 375 X
(receiv)S -1 x(e)S 15 x(bu\013er,)S 15 x(will)S 14 x(still)S 13 x
(b)S 1 x(e)S 16 x(a)S -1 x(v)S -3 x(ailable)S 14 x(for)S 15 x(the)S
15 x F41(MPI)S 1322 1557 XY 14 2 R(GA)S -4 x(THER)S
XP /F41 86 30 1 0 28 32 32 32 0
<F00001E0 F00001E0 780003C0 780003C0 780003C0 3C000780 3C000780
 3C000780 1E000F00 1E000F00 1F000F00 0F001E00 0F001E00 07801C00
 07803C00 07803C00 03C03800 03C07800 03C07800 01E07000 01E0F000
 01E0F000 00F0E000 00F1E000 00F1E000 0071C000 007BC000 003B8000
 003B8000 003F8000 001F0000 001F0000>
PXLC RP
1512 1557 XY F41(V)S F34(,)S 15 x F41(MPI)S 1655 1557 XY
14 2 R -1 x(SCA)S -4 x(TTERV)S F34(,)S 16 x(etc.)S
1.1.1  Alternative definition of reduction functions

Discussion:  We pr
 E001C0 E001C0 E001C0 600180 700380 300300 180600 0E1C00 03F000>
PXLC RP
771 1848 XY F25(op)S 1 x(o)S
XP /F25 115 16 1 0 14 18 18 16 0
<1F90 3070 4030 C010 C010 E010 F800 7F80 3FE0 0FF0 00F8 8038 8018
 C018 C018 E010 D060 8FC0>
PXLC RP
837 1848 XY F25(se)S
XP /F25 116 16 1 0 13 26 26 16 0
<0400 0400 0400 0400 0C00 0C00 1C00 3C00 FFE0 1C00 1C00 1C00 1C00
 1C00 1C00 1C00 1C00 1C00 1C10 1C10 1C10 1C10 1C10 0C10 0E20 03C0>
PXLC RP
889 1848 XY F25(to)S
XP /F25 109 35 1 0 34 18 18 40 0
<FC7E07E000 1C83883800 1D01901800 1E01E01C00 1C01C01C00 1C01C01C00
 1C01C01C00 1C01C01C00 1C01C01C00 1C01C01C00 1C01C01C00 1C01C01C00
 1C01C01C00 1C01C01C00 1C01C01C00 1C01C01C00 1C01C01C00 FF8FF8FF80>
PXLC RP
943 1848 XY F25(mer)S
XP /F25 103 21 1 -9 19 19 28 24 0
<000380 03C4C0 0C38C0 1C3880 181800 381C00 381C00 381C00 381C00
 181800 1C3800 0C3000 13C000 100000 300000 180000 1FF800 1FFF00
 1FFF80 300380 6001C0 C000C0 C000C0 C000C0 600180 300300 1C0E00
 07F800>
PXLC RP
1012 1848 XY F25(ge)S 18 x(t)S
XP /F25 104 23 1 0 21 29 29 24 0
<FC0000 1C0000 1C0000 1C0000 1C0000 1C0000 1C0000 1C0000 1C0000
 1C0000 1C0000 1C7C00 1C8700 1D0300 1E0380 1C0380 1C0380 1C0380
 1C0380 1C0380 1C0380 1C0380 1C0380 1C0380 1C0380 1C0380 1C0380
 1C0380 FF9FF0>
PXLC RP
1085 1848 XY F25(he)S 18 x(three)S 18 x(t)S
XP /F25 121 22 1 -8 20 18 26 24 0
<FF07E0 3C0380 1C0100 1C0100 0E0200 0E0200 070400 070400 070400
 038800 038800 03D800 01D000 01D000 00E000 00E000 00E000 004000
 004000 008000 008000 F08000 F10000 F30000 660000 3C0000>
PXLC RP
1268 1848 XY F25(yp)S 1 x(e)S 18 x(o)S
XP /F25 102 13 0 0 15 29 29 16 0
<00F8 018C 071E 061E 0E0C 0E00 0E00 0E00 0E00 0E00 0E00 FFE0 0E00
 0E00 0E00 0E00 0E00 0E00 0E00 0E00 0E00 0E00 0E00 0E00 0E00 0E00
 0E00 0E00 7FE0>
PXLC RP
1371 1848 XY F25(f)S 17 x(re)S
XP /F25 100 23 2 0 21 29 29 24 0
<003F00 000700 000700 000700 000700 000700 000700 000700 000700
 000700 000700 03E700 0C1700 180F00 300700 700700 600700 E00700
 E00700 E00700 E00700 E00700 E00700 600700 700700 300700 180F00
 0C3700 07C7E0>
PXLC RP
1435 1848 XY F25(d)S
XP /F25 117 23 1 0 21 18 18 24 0
<FC1F80 1C0380 1C0380 1C0380 1C0380 1C0380 1C0380 1C0380 1C0380
 1C0380 1C0380 1C0380 1C0380 1C0380 1C0780 0C0780 0E1B80 03E3F0>
PXLC RP
1459 1848 XY F25(u)S
XP /F25 99 18 2 0 16 18 18 16 0
<07E0 0C30 1878 3078 7030 6000 E000 E000 E000 E000 E000 E000 6000
 7004 3004 1808 0C30 07C0>
PXLC RP
1482 1848 XY F25(ce)S 18 x(fu)S
XP /F25 110 23 1 0 21 18 18 24 0
<FC7C00 1C8700 1D0300 1E0380 1C0380 1C0380 1C0380 1C0380 1C0380
 1C0380 1C0380 1C0380 1C0380 1C0380 1C0380 1C0380 1C0380 FF9FF0>
PXLC RP
1572 1848 XY F25(nct)S
XP /F25 105 12 1 0 10 29 29 16 0
<1800 3C00 3C00 1800 0000 0000 0000 0000 0000 0000 0000 FC00 1C00
 1C00 1C00 1C00 1C00 1C00 1C00 1C00 1C00 1C00 1C00 1C00 1C00 1C00
 1C00 1C00 FF80>
PXLC RP
1629 1848 XY F25(ions)S
XP /F25 58 12 4 0 8 18 18 8 0
<60 F0 F0 60 00 00 00 00 00 00 00 00 00 00 60 F0 F0 60>
PXLC RP
1701 1848 XY F25(:)S 25 x(reduce)S
XP /F25 44 12 4 -8 8 4 12 8 0
<60 F0 F0 70 10 10 10 10 20 20 40 80>
PXLC RP
1855 1848 XY F25(,)S 18 x(user)S
XP /F25 45 14 0 8 11 10 2 16 0
<FFE0 FFE0>
PXLC RP
1959 1848 XY F25(-reduce)S
XP /F25 97 21 2 0 20 18 18 24 0
<1FC000 307000 783800 781C00 301C00 001C00 001C00 01FC00 0F1C00
 381C00 701C00 601C00 E01C40 E01C40 E01C40 603C40 304E80 1F8700>
PXLC RP
2108 1848 XY F25(and)S
XP /F27 /cmsy10 300 41.5 41.5 128 [-1 -40 45 32] PXLNF RP
XP /F27 32 42 3 3 40 17 14 40 0
<0200000000 0400000000 0400000000 0800000000 1000000000 2000000000
 FFFFFFFFF0 FFFFFFFFF0 2000000000 1000000000 0800000000 0400000000
 0400000000 0200000000>
PXLC RP
2175 1898 XY F27( )S 6 y 375 X F25(user-reducea,)S 21 x(and)S 18 x
(same)S 19 x(for)S 18 x(scan,)S 19 x(reduce-scatter,)S 22 x(etc)S
XP /F25 46 12 4 0 8 4 4 8 0
<60 F0 F0 60>
PXLC RP
1341 1904 XY F25(.)S
XP /F25 84 30 1 0 28 28 28 32 0
<7FFFFFC0 700F01C0 600F00C0 400F0040 400F0040 C00F0020 800F0020
 800F0020 800F0020 000F0000 000F0000 000F0000 000F0000 000F0000
 000F0000 000F0000 000F0000 000F0000 000F0000 000F0000 000F0000
 000F0000 000F0000 000F0000 000F0000 000F0000 001F8000 03FFFC00>
PXLC RP
1384 1904 XY F25(These)S 20 x(functions)S
XP /F25 119 30 1 0 28 18 18 32 0
<FF3FCFE0 3C0F0380 1C070180 1C070100 1C0B0100 0E0B8200 0E0B8200
 0E118200 0711C400 0711C400 0720C400 03A0E800 03A0E800 03C06800
 01C07000 01C07000 01803000 00802000>
PXLC RP
1692 1904 XY F25(wi)S
XP /F25 108 12 1 0 10 29 29 16 0
<FC00 1C00 1C00 1C00 1C00 1C00 1C00 1C00 1C00 1C00 1C00 1C00 1C00
 1C00 1C00 1C00 1C00 1C00 1C00 1C00 1C00 1C00 1C00 1C00 1C00 1C00
 1C00 1C00 FF80>
PXLC RP
1734 1904 XY F25(ll)S 17 x(accept)S 20 x(as)S 18 x(argumen)S -1 x
(t)S 18 x(a)S 57 y 375 X(reduce)S 16 x(function)S 13 x(o)S
XP /F25 98 23 1 0 20 29 29 24 0
<FC0000 1C0000 1C0000 1C0000 1C0000 1C0000 1C0000 1C0000 1C0000
 1C0000 1C0000 1C7C00 1D8600 1E0300 1C0180 1C01C0 1C00C0 1C00E0
 1C00E0 1C00E0 1C00E0 1C00E0 1C00E0 1C00C0 1C01C0 1C0180 1E0300
 190600 10F800>
PXLC RP
690 1961 XY F25(b)S
XP /F25 106 13 -3 -8 8 29 37 16 0
<00C0 01E0 01E0 00C0 0000 0000 0000 0000 0000 0000 0000 0FE0 00E0
 00E0 00E0 00E0 00E0 00E0 00E0 00E0 00E0 00E0 00E0 00E0 00E0 00E0
 00E0 00E0 00E0 00E0 00E0 00E0 60E0 F0C0 F1C0 6180 3E00>
PXLC RP
715 1961 XY F25(ject,)S 14 x(that)S 14 x(can)S 14 x(b)S 1 x(e)S 15 x
(either)S 15 x(a)S 13 x(prede)S
XP /F25 12 23 0 0 21 29 29 24 0
<007E00 01C180 030180 0703C0 0E03C0 0E0180 0E0000 0E0000 0E0000
 0E0000 0E0000 FFFFC0 0E01C0 0E01C0 0E01C0 0E01C0 0E01C0 0E01C0
 0E01C0 0E01C0 0E01C0 0E01C0 0E01C0 0E01C0 0E01C0 0E01C0 0E01C0
 0E01C0 7F87F8>
PXLC RP
1281 1961 XY F25(\014ned)S 14 x(op)S 1 x(erator,)S 14 x(or)S 14 x
(a)S 13 x(user-de\014ned)S 16 x(op)S 1 x(erator.)S 2100 Y 466 X F34
The functions in this section perform one of the following operations across
all the members of a group:

  global max on integer and floating point data types
  global min on integer and floating point data types
  global sum on integer and floating point data types
  global product on integer and floating point data types
  global AND on logical and integer data types
  global OR on logical and integer data types
  global XOR on logical and integer data types
  rank of process with maximum value
  rank of process with minimum value
2                CHAPTER 1. COLLECTIVE COMMUNICATION - PROPOSED CHANGES

  user defined (associative) operation
  user defined (associative and commutative) operation
MPI_REDUCE(sendbuf, recvbuf, count, datatype, op, root, comm)

  IN   sendbuf    address of send buffer (choice)
  OUT  recvbuf    address of receive buffer (choice, significant only at root)
  IN   count      number of elements in send buffer (integer)
  IN   datatype   data type of elements of send buffer (handle)
  IN   op         reduce operation (handle)
  IN   root       rank of root process (integer)
  IN   comm       communicator (handle)

int MPI_Reduce(void* sendbuf, void* recvbuf, int count,
               MPI_D
 1FFBF0 07E1F0>
PXLC RP
806 1758 XY F40(atat)S
XP /F40 121 24 1 -10 22 20 30 24 0
<7F8FF0 FF8FF8 7F8FF0 0E01C0 0E0380 0E0380 070380 070700 070700
 038700 038600 038E00 01CE00 01CE00 00CC00 00CC00 00DC00 007800
 007800 007800 007000 007000 007000 00F000 00E000 79E000 7BC000
 7F8000 3F0000 1E0000>
PXLC RP
901 1758 XY F40(y)S
XP /F40 112 24 0 -10 21 20 30 24 0
<7E3E00 FEFF80 7FFFC0 0FC1E0 0F80E0 0F0070 0E0070 0E0038 0E0038
 0E0038 0E0038 0E0038 0E0038 0F0070 0F0070 0F80E0 0FC1E0 0FFFC0
 0EFF80 0E3E00 0E0000 0E0000 0E0000 0E0000 0E0000 0E0000 0E0000
 7FC000 FFE000 7FC000>
PXLC RP
925 1758 XY F40(pe)S 24 x(datatype,)S 22 x(MPI)S 1310 1758 XY
15 2 R
XP /F40 79 24 3 0 20 28 28 24 0
<0FF800 3FFE00 7FFF00 780F00 700700 F00780 E00380 E00380 E00380
 E00380 E00380 E00380 E00380 E00380 E00380 E00380 E00380 E00380
 E00380 E00380 E00380 E00380 F00780 700700 780F00 7FFF00 3FFE00
 0FF800>
PXLC RP
1324 1758 XY F40(Op)S 24 x(op,)S 23 x(int)S 24 x(root,)S 23 x(MPI)S
1804 1758 XY 15 2 R
XP /F40 67 24 2 0 21 28 28 24 0
<00F8E0 03FEE0 07FFE0 0F07E0 1E03E0 3C01E0 3800E0 7000E0 7000E0
 700000 E00000 E00000 E00000 E00000 E00000 E00000 E00000 E00000
 700000 7000E0 7000E0 3800E0 3C00E0 1E01C0 0F07C0 07FF80 03FE00
 00F800>
PXLC RP
1819 1758 XY F40(Co)S
XP /F40 109 24 -1 0 24 20 20 32 0
<7CE0E000 FFFBF800 7FFFF800 1F1F1C00 1E1E1C00 1E1E1C00 1C1C1C00
 1C1C1C00 1C1C1C00 1C1C1C00 1C1C1C00 1C1C1C00 1C1C1C00 1C1C1C00
 1C1C1C00 1C1C1C00 1C1C1C00 7F1F1F00 FFBFBF80 7F1F1F00>
PXLC RP
1866 1758 XY F40(mm)S 24 x(comm)S
XP /F40 41 24 4 -4 16 32 36 16 0
<6000 F000 7800 3C00 1E00 0F00 0780 0380 01C0 01C0 00E0 00E0 00E0
 00E0 0070 0070 0070 0070 0070 0070 0070 0070 00E0 00E0 00E0 00E0
 01C0 01C0 0380 0780 0F00 1E00 3C00 7800 F000 6000>
PXLC RP
2033 1758 XY F40(\))S 1863 Y 375 X(MPI)S 449 1863 XY 15 2 R(R)S
XP /F40 69 24 1 0 22 28 28 24 0
<FFFFF0 FFFFF0 FFFFF0 1C0070 1C0070 1C0070 1C0070 1C0000 1C0000
 1C0E00 1C0E00 1C0E00 1FFE00 1FFE00 1FFE00 1C0E00 1C0E00 1C0E00
 1C0000 1C0000 1C0038 1C0038 1C0038 1C0038 1C0038 FFFFF8 FFFFF8
 FFFFF8>
PXLC RP
488 1863 XY F40(ED)S
XP /F40 85 24 0 0 23 28 28 24 0
<FF83FE FF83FE FF83FE 1C0070 1C0070 1C0070 1C0070 1C0070 1C0070
 1C0070 1C0070 1C0070 1C0070 1C0070 1C0070 1C0070 1C0070 1C0070
 1C0070 1C0070 1C0070 1C0070 0E00E0 0F01E0 0783C0 03FF80 01FF00
 007C00>
PXLC RP
535 1863 XY F40(UCE\()S
XP /F40 83 24 2 0 21 28 28 24 0
<03F380 1FFF80 3FFF80 7C0F80 700780 E00380 E00380 E00380 E00000
 700000 780000 3F0000 1FF000 07FE00 00FF00 000F80 0003C0 0001C0
 0000E0 0000E0 6000E0 E000E0 E001E0 F001C0 F80780 FFFF80 FFFE00
 E7F800>
PXLC RP
631 1863 XY F40(SE)S
XP /F40 78 24 1 0 22 28 28 24 0
<7E07F0 FF0FF8 7F07F0 1D81C0 1D81C0 1D81C0 1DC1C0 1CC1C0 1CC1C0
 1CE1C0 1CE1C0 1CE1C0 1C61C0 1C71C0 1C71C0 1C31C0 1C39C0 1C39C0
 1C39C0 1C19C0 1C19C0 1C1DC0 1C0DC0 1C0DC0 1C0DC0 7F07C0 FF87C0
 7F03C0>
PXLC RP
679 1863 XY F40(ND)S
XP /F40 66 24 1 0 21 28 28 24 0
<FFFC00 FFFF00 FFFF80 1C03C0 1C01C0 1C00E0 1C00E0 1C00E0 1C00E0
 1C01E0 1C01C0 1C07C0 1FFF80 1FFF00 1FFFC0 1C03C0 1C00E0 1C00F0
 1C0070 1C0070 1C0070 1C0070 1C00F0 1C00E0 1C03E0 FFFFC0 FFFF80
 FFFE00>
PXLC RP
726 1863 XY F40(BU)S
XP /F40 70 24 2 0 21 28 28 24 0
<FFFFE0 FFFFE0 FFFFE0 1C00E0 1C00E0 1C00E0 1C00E0 1C0000 1C0000
 1C1C00 1C1C00 1C1C00 1FFC00 1FFC00 1FFC00 1C1C00 1C1C00 1C1C00
 1C0000 1C0000 1C0000 1C0000 1C0000 1C0000 1C0000 FFC000 FFC000
 FFC000>
PXLC RP
774 1863 XY F40(F,)S 24 x(REC)S
XP /F40 86 24 1 0 22 28 28 24 0
<FF07F8 FF07F8 FF07F8 1C01C0 1C01C0 1C01C0 1C01C0 0E0380 0E0380
 0E0380 0E0380 0F0780 070700 070700 070700 070700 038E00 038E00
 038E00 038E00 018C00 01DC00 01DC00 01DC00 00D800 00F800 00F800
 007000>
PXLC RP
917 1863 XY F40(VBUF,)S 23 x(COUN)S
XP /F40 84 24 1 0 22 28 28 24 0
<7FFFF8 FFFFF8 FFFFF8 E07038 E07038 E07038 E07038 007000 007000
 007000 007000 007000 007000 007000 007000 007000 007000 007000
 007000 007000 007000 007000 007000 007000 007000 07FF00 07FF00
 07FF00>
PXLC RP
1156 1863 XY F40(T,)S 23 x(D)S
XP /F40 65 24 1 0 22 28 28 24 0
<007000 00F800 00F800 00D800 00D800 01DC00 01DC00 01DC00 018C00
 038E00 038E00 038E00 038E00 030600 070700 070700 070700 070700
 0FFF80 0FFF80 0FFF80 0E0380 0E0380 1C01C0 1C01C0 7F07F0 FF8FF8
 7F07F0>
PXLC RP
1251 1863 XY F40(ATAT)S
XP /F40 89 24 1 0 22 28 28 24 0
<FF07F8 FF07F8 FF07F8 1C01C0 1E03C0 0E0380 0F0780 070700 070700
 038E00 038E00 01DC00 01DC00 01DC00 00F800 00F800 007000 007000
 007000 007000 007000 007000 007000 007000 007000 01FC00 03FE00
 01FC00>
PXLC RP
1347 1863 XY F40(YPE,)S 23 x(OP,)S 23 x(ROOT,)S 24 x(COMM,)S 23 x
(IERROR\))S
XP /F40 60 24 3 2 20 26 24 24 0
<000300 000780 001F80 003F00 007E00 01FC00 03F000 07E000 1FC000
 3F0000 7E0000 FC0000 FC0000 7E0000 3F0000 1FC000 07E000 03F000
 01FC00 007E00 003F00 001F80 000780 000300>
PXLC RP
470 1920 XY F40(<type)S
XP /F40 62 24 3 2 20 26 24 24 0
<600000 F00000 FC0000 7E0000 3F0000 1FC000 07E000 03F000 01FC00
 007E00 003F00 001F80 001F80 003F00 007E00 01FC00 03F000 07E000
 1FC000 3F0000 7E0000 FC0000 F00000 600000>
PXLC RP
590 1920 XY F40(>)S 23 x(SENDBUF\(*\),)S 23 x(RECVBUF\(*\))S 56 y
470 X(INTE)S
XP /F40 71 24 2 0 22 28 28 24 0
<01F1C0 03FDC0 0FFFC0 1F0FC0 1C03C0 3803C0 3801C0 7001C0 7001C0
 700000 E00000 E00000 E00000 E00000 E00000 E00FF0 E01FF0 E00FF0
 7001C0 7001C0 7003C0 3803C0 3803C0 1C07C0 1F0FC0 0FFFC0 03FDC0
 01F1C0>
PXLC RP
566 1976 XY F40(GER)S 23 x(COUNT,)S 23 x(DATATYPE,)S 23 x(OP,)S 23 x
(ROOT,)S 24 x(COMM,)S 23 x(IERROR)S
Combines the values provided in the send buffer of each process in the
group, using the operation op, and returns the combined value in the
receive buffer of the process with rank root.  The routine is called by
all group members using the same arguments for count, datatype, op,
root and comm.

Each process can provide one value, or a sequence of values, in which
case the combine operation is executed element-wise on each entry of
the sequence.

For example, if the operation is MPI_MAX and the send buffer contains
two floating point numbers, then recvbuf(1) = global max(sendbuf(1))
and recvbuf(2) = global max(sendbuf(2)).  All send buffers should
define sequences of equal length, with entries all of the same data
type, where the type is a basic MPI datatype and one of those allowed
for operands of op (see below).  For all operations, the number and
the type of the elements in the send buffer are the same as for the
receive buffers.
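The element-wise rule can be illustrated with a small sequential sketch in C. This is not MPI code: the helper name, the flat array layout (one send buffer of count doubles per rank, stored back to back), and the hard-coded MPI_MAX combine are all invented for the illustration; a real MPI_Reduce runs across processes.

```c
#include <stddef.h>

/* Sequential illustration of the element-wise combine rule with
 * op = MPI_MAX (hypothetical helper, not part of MPI): sendbufs holds
 * nprocs send buffers of count doubles each, back to back; recvbuf[k]
 * receives the maximum over entry k of every buffer. */
static void reduce_max_elementwise(const double *sendbufs, int nprocs,
                                   int count, double *recvbuf)
{
    for (int k = 0; k < count; k++) {
        double m = sendbufs[k];                    /* entry k of rank 0 */
        for (int p = 1; p < nprocs; p++) {
            double v = sendbufs[(size_t)p * count + k];
            if (v > m)
                m = v;
        }
        recvbuf[k] = m;                            /* combined entry k */
    }
}
```

With two ranks sending (1.0, 5.0) and (4.0, 2.0), recvbuf becomes (4.0, 5.0): each entry of the sequence is reduced independently, matching the recvbuf(1)/recvbuf(2) example above.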
The operation defined by op is associative, or associative and
commutative; the implementation can take advantage of associativity
and commutativity in order to change the order of evaluation.  This
may change the result of the reduction for operations that are not
strictly associative and commutative, such as floating point addition.
MPI_REDUCE should be used only when such changes are acceptable.
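The floating point caveat is easy to demonstrate.  The two helpers below (invented for this illustration, not MPI routines) compute the same three-term sum with different groupings, which is exactly the freedom a reduction implementation has when it reorders the combine:

```c
/* Double addition rounds after every step, so regrouping can change
 * the result; this is why a reordered sum reduction need not be
 * bitwise reproducible. */
double sum3_left(double a, double b, double c)  { return (a + b) + c; }
double sum3_right(double a, double b, double c) { return a + (b + c); }
```

For a = 1e16 and b = c = 1.0, the left grouping loses each 1.0 to rounding and yields 1e16, while the right grouping first forms 2.0 and yields 1e16 + 2.0.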
1.1. COLLECTIVE DATA MOVEMENTS - SIMPLIFICATION                        3

MPI_ALLREDUCE( sendbuf, recvbuf, count, datatype, op, comm)

IN   sendbuf    starting address of send buffer (choice)
OUT  recvbuf    starting address of receive buffer (choice)
IN   count      number of elements in send buffer (integer)
IN   datatype   data type of elements of send buffer (handle)
IN   op         reduction operation (handle)
IN   comm       communicator (handle)

int MPI_Allreduce(void* sendbuf, void* recvbuf, int count,
                  MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)

MPI_ALLREDUCE(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, COMM, IERROR)
    <type> SENDBUF(*), RECVBUF(*)
    INTEGER COUNT, DATATYPE, OP, COMM, IERROR

Same as MPI_REDUCE, except that the result appears in the receive
buffer of all the group members.
MPI_REDUCE_SCATTER( sendbuf, recvbuf, recvcounts, datatype, op, comm)

IN   sendbuf     starting address of send buffer (choice)
OUT  recvbuf     starting address of receive buffer (choice)
IN   recvcounts  integer array specifying the number of elements of the
                 result distributed to each process; must be identical
                 on all calling processes (integer)
IN   datatype    data type of elements of input buffer (handle)
IN   op          reduction operation (handle)
IN   comm        communicator (handle)

int MPI_Reduce_scatter(void* sendbuf, void* recvbuf, int *recvcounts,
                       MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)

MPI_REDUCE_SCATTER(SENDBUF, RECVBUF, RECVCOUNTS, DATATYPE, OP, COMM,
                   IERROR)
    <type> SENDBUF(*), RECVBUF(*)
    INTEGER RECVCOUNTS(*), DATATYPE, OP, COMM, IERROR

MPI_REDUCE_SCATTER first does a component-wise reduction on the
vectors provided by the processes.  Next, the resulting vector of
results is split into n disjoint segments, where n is the number of
members in the group; segment i contains recvcounts[i] elements.  The
i-th segment is sent to the process with rank i.
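The reduce-then-scatter behavior can be sketched sequentially.  The helper and its flat array layout are assumptions made for the illustration; a real MPI_Reduce_scatter runs across processes and may fuse the two phases:

```c
#include <stddef.h>

/* Sequential illustration of MPI_REDUCE_SCATTER with op = MPI_SUM:
 * first reduce the full vectors element-wise, then hand segment p of
 * the reduced vector (recvcounts[p] entries) to rank p.  recvbufs is
 * the concatenation of every rank's receive buffer. */
static void reduce_scatter_sum(const double *sendbufs, int nprocs,
                               const int *recvcounts, double *recvbufs)
{
    int total = 0;                       /* length of each send vector */
    for (int p = 0; p < nprocs; p++)
        total += recvcounts[p];

    int offset = 0;
    for (int p = 0; p < nprocs; p++) {   /* segment destined for rank p */
        for (int k = 0; k < recvcounts[p]; k++) {
            double s = 0.0;
            for (int q = 0; q < nprocs; q++)  /* reduce entry offset+k */
                s += sendbufs[(size_t)q * total + offset + k];
            recvbufs[offset + k] = s;
        }
        offset += recvcounts[p];
    }
}
```

With two ranks sending (1, 2) and (3, 4) and recvcounts = (1, 1), the reduced vector is (4, 6); rank 0 receives 4 and rank 1 receives 6.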
4              CHAPTER 1. COLLECTIVE COMMUNICATION - PROPOSED CHANGES

MPI_SCAN( sendbuf, recvbuf, count, datatype, op, comm )

IN   sendbuf    starting address of send buffer (choice)
OUT  recvbuf    starting address of receive buffer (choice)
IN   count      number of elements in input buffer (integer)
IN   datatype   data type of elements of input buffer (handle)
IN   op         reduction operation (handle)
IN   comm       communicator (handle)

int MPI_Scan(void* sendbuf, void* recvbuf, int count,
             MPI_Datatype datatype, MPI_Op op, MPI_Comm comm )

MPI_SCAN(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, COMM, IERROR)
    <type> SENDBUF(*), RECVBUF(*)
    INTEGER COUNT, DATATYPE, OP, COMM, IERROR

MPI_SCAN is used to perform a parallel prefix with respect to an
associative and commutative reduction operation on data distributed
across the group.  The operation returns, in the receive buffer of the
process with rank i, the reduction of the values in the send buffers
of the processes with ranks 0,...,i.  The types of operations
supported, their semantics, and the constraints on send and receive
buffers are as for MPI_REDUCE.
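The prefix semantics can be sketched sequentially as well (hypothetical helper with the same flat layout assumption as above, not MPI code):

```c
#include <stddef.h>

/* Sequential illustration of MPI_SCAN with op = MPI_SUM: rank p's
 * receive buffer gets, entry by entry, the sum over the send buffers
 * of ranks 0..p.  sendbufs/recvbufs hold nprocs buffers of count
 * doubles each, back to back. */
static void scan_sum(const double *sendbufs, int nprocs, int count,
                     double *recvbufs)
{
    for (int k = 0; k < count; k++) {
        double s = 0.0;
        for (int p = 0; p < nprocs; p++) {
            s += sendbufs[(size_t)p * count + k];  /* reduce ranks 0..p */
            recvbufs[(size_t)p * count + k] = s;   /* prefix for rank p */
        }
    }
}
```

With three ranks each contributing the single value 1, 2 or 3, the ranks receive 1, 3 and 6 respectively.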
15 x(follo)S -1 x(wing)S 13 x(prede\014ned)S 17 x(reduction)S 15 x
(op)S 1 x(erators)S 14 x(are)S 15 x(de\014ned)S 17 x(b)S -1 x(y)S
15 x(MPI)S 1713 Y 417 X F32(MPI)S 493 1713 XY 13 2 R(MAX)S 1247 X
F34(maxim)S -1 x(um)S 55 y 417 X F32(MPI)S 493 1768 XY 13 2 R(MI)S
XP /F32 78 29 4 0 24 29 29 24 0
<FC0070 FC0070 FE0070 EE0070 EF0070 E70070 E70070 E78070 E38070
 E3C070 E3C070 E1E070 E1E070 E0E070 E0F070 E07070 E07870 E07870
 E03C70 E03C70 E01C70 E01E70 E00E70 E00E70 E00F70 E00770 E007F0
 E003F0 E003F0>
PXLC RP
554 1768 XY F32(N)S 1247 X F34(minim)S -1 x(um)S 55 y 417 X F32
(MPI)S 493 1823 XY 13 2 R
XP /F32 83 23 2 -1 20 30 31 24 0
<03F800 0FFE00 1C0F00 380700 700300 600000 E00000 E00000 E00000
 E00000 F00000 780000 7F0000 3FE000 1FFC00 07FE00 01FF00 001F80
 000780 0003C0 0003C0 0001C0 0001C0 0001C0 0001C0 C00180 E00380
 F00700 7C0E00 1FFC00 07F000>
PXLC RP
506 1823 XY F32(S)S
XP /F32 85 29 4 -1 24 29 30 24 0
<F00070 F00070 F00070 F00070 F00070 F00070 F00070 F00070 F00070
 F00070 F00070 F00070 F00070 F00070 F00070 F00070 F00070 F00070
 F00070 F00070 F00070 F00070 F00070 7800E0 7800E0 3C01C0 1E0380
 0F0780 07FE00 01F800>
PXLC RP
529 1823 XY F32(UM)S 1247 X F34(sum)S 55 y 417 X F32(MPI)S
493 1878 XY 13 2 R(P)S
XP /F32 82 27 4 0 25 29 29 24 0
<FFF800 FFFF00 F00F80 F003C0 F001E0 F000F0 F000F0 F000F0 F000F0
 F000F0 F001E0 F003E0 F00FC0 FFFF80 FFFF00 FFF800 F03C00 F01C00
 F01E00 F00F00 F00F00 F00780 F00780 F003C0 F003C0 F001E0 F000F0
 F000F0 F00078>
PXLC RP
532 1878 XY F32(R)S
XP /F32 79 31 2 -1 28 30 31 32 0
<003F0000 01FFE000 03FFF000 07C0F800 0F807C00 1E001E00 3E001F00
 3C000F00 78000780 78000780 78000780 F00003C0 F00003C0 F00003C0
 F00003C0 F00003C0 F00003C0 F00003C0 F00003C0 F80007C0 78000780
 78000780 7C000F80 3C000F00 3E001F00 1F003E00 0F807C00 07C0F800
 03FFF000 01FFE000 003F0000>
PXLC RP
559 1878 XY F32(O)S
XP /F32 68 30 4 0 27 29 29 24 0
<FFFC00 FFFF00 F00F80 F003E0 F001F0 F000F0 F00078 F00038 F0003C
 F0003C F0001C F0001E F0001E F0001E F0001E F0001E F0001E F0001E
 F0001E F0003C F0003C F0003C F00078 F000F0 F000F0 F003E0 F00FC0
 FFFF00 FFFC00>
PXLC RP
590 1878 XY F32(D)S 1247 X F34(pro)S 1 x(duct)S 55 y 417 X F32(MPI)S
493 1933 XY 13 2 R
XP /F32 76 22 4 0 19 29 29 16 0
<F000 F000 F000 F000 F000 F000 F000 F000 F000 F000 F000 F000 F000
 F000 F000 F000 F000 F000 F000 F000 F000 F000 F000 F000 F000 F000
 F000 FFFE FFFE>
PXLC RP
506 1933 XY F32(LAND)S 1247 X F34(logical)S 13 x(and)S 55 y 417 X
F32(MPI)S 493 1988 XY 13 2 R
XP /F32 66 28 4 0 25 29 29 24 0
<FFF800 FFFF00 F00F80 F003C0 F001E0 F000F0 F000F0 F000F0 F000F0
 F000F0 F001E0 F007C0 FFFF80 FFFE00 FFFF80 F03FC0 F003E0 F001F0
 F000F0 F00078 F00078 F00078 F00078 F00078 F000F0 F001E0 F007C0
 FFFF80 FFFC00>
PXLC RP
506 1988 XY F32(BAND)S 1247 X F34(bit-wise)S 14 x(and)S 55 y 417 X
F32(MPI)S 493 2043 XY 13 2 R(LOR)S 1247 X F34(logical)S 13 x(or)S
56 y 417 X F32(MPI)S 493 2099 XY 13 2 R(BOR)S 1247 X F34(bit-wise)S
14 x(or)S 55 y 417 X F32(MPI)S 493 2154 XY 13 2 R(LX)S -1 x(OR)S
1247 X F34(logical)S 13 x(xor)S 55 y 417 X F32(MPI)S 493 2209 XY
13 2 R(BX)S -1 x(OR)S 1247 X F34(bit-wise)S 14 x(xor)S 55 y 417 X
F32(MPI)S 493 2264 XY 13 2 R(MAXLO)S
XP /F32 67 27 3 -1 24 30 31 24 0
<003FC0 00FFF0 03C0F0 078030 0F0000 1E0000 3C0000 3C0000 780000
 780000 780000 F00000 F00000 F00000 F00000 F00000 F00000 F00000
 F00000 F00000 780000 780000 780000 3C0000 3C0000 1E0000 0F0008
 078018 03C078 00FFF0 003F80>
PXLC RP
651 2264 XY F32(C)S 1247 X F34(maxim)S -1 x(um)S 14 x(v)S -3 x
(alue)S 15 x(and)S 16 x(rank)S 15 x(of)S 14 x(pro)S 1 x(cess)S 16 x
(with)S 14 x(it)S 55 y 417 X F32(MPI)S 493 2319 XY 13 2 R(MINLOC)S
1247 X F34(minim)S -1 x(um)S 14 x(v)S -3 x(alue)S 15 x(and)S 16 x
(rank)S 15 x(of)S 15 x(pro)S 1 x(cess)S 15 x(with)S 15 x(it)S 2402 Y
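The MAXLOC/MINLOC entries reduce (value, rank) pairs rather than bare values. A minimal C sketch of the MAXLOC combining rule (the struct and function names are hypothetical, and the tie-break toward the lower rank is an assumption consistent with later MPI practice, not something the table above specifies):

```c
/* (value, rank) pair as reduced by a MAXLOC-style operator. */
typedef struct { double value; int rank; } ValRank;

/* Hypothetical combiner sketching MPI_MAXLOC: keep the larger value;
   on a tie, keep the lower rank (assumed tie-break). */
ValRank maxloc(ValRank a, ValRank b) {
    if (a.value > b.value) return a;
    if (b.value > a.value) return b;
    return (a.rank < b.rank) ? a : b;
}
```

Applying this combiner across all processes yields both the global maximum and the rank of a process holding it, which is what makes MAXLOC more than a plain MAX.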
    In addition, users can define their own reduction operators, using the
MPI_OP_CREATE call.

MPI_OP_CREATE(function, commute, op)

    IN    function    function to be used for reduction (function)
    IN    commute     true if operation is commutative; false, otherwise
    OUT   op          reduce operation (handle)

int MPI_Op_create(MPI_Uop function, int commute, MPI_Op *op)

MPI_OP_CREATE(FUNCTION, COMMUTE, OP, IERROR)

1.1. COLLECTIVE DATA MOVEMENTS - SIMPLIFICATION                             5

    EXTERNAL FUNCTION
    LOGICAL COMMUTE
    INTEGER OP, IERROR
    MPI_OP_CREATE creates an operation that can be later used in a call to
MPI_REDUCE or any other reduction function. The operation thus defined
should be associative. If commute = true, then the operation should be
commutative and associative.
    function is a function with four arguments. The C type for such a
function is

typedef void MPI_Uop(void *, void *, int *, MPI_Datatype *);
    If the function is passed actual arguments (void *)invec,
(void *)inoutvec, &len, &type, then invec and inoutvec should be arrays
with *len values, of a type that matches the MPI datatype type. The
function computes element-wise an associative (associative and commutative
if commute = true) operation on each pair of entries and returns the
result in inoutvec. A pseudo-code for function is given below, where op is
the associative (associative and commutative) operation defined by
function.

          for(i=0; i < *len; i++) {
              inoutvec[i] op= invec[i]
          }

    The Fortran declaration for it is

FUNCTION UOP(INVEC(*), INOUTVEC(*), LEN, TYPE)
<type> INVEC(LEN), INOUTVEC(LEN)
INTEGER LEN, TYPE
    No MPI communication function may be invoked in the body of the
user-provided reduce function. The behavior of the reduce function need
not be defined for all MPI types; the function should be used only with
operands of those types for which its behavior is defined. In particular,
the user may define a reduction function that works for only one specific
type, and ignores the type argument.
    When a call to MPI_REDUCE() or another MPI reduce function invokes a
user-defined reduction operator, the user-provided function is passed the
datatype argument of MPI_REDUCE().

    Discussion: The addition of a type argument allows user reduction
functions to be overloaded, in the same way that the MPI predefined
functions are overloaded. In particular, this allows one to implement the
predefined MPI reduction functions with a library of user-function
definitions, and also to overload the MPI functions with additional
types, e.g. a C complex sum reduction using MPI_SUM.
MPI_TYPE_FREE(op)

    INOUT  op    user defined reduction operation to be freed (handle)

int MPI_Op_free(MPI_Op *op)

6               CHAPTER 1. COLLECTIVE COMMUNICATION - PROPOSED CHANGES

MPI_OP_FREE(OP, IERROR)
    INTEGER OP, IERROR

    Marks a reduction operation for deallocation and returns a null
handle. The operation is freed after all pending MPI calls that use it
have completed.
    Example (not relevant to the alternative definition - just nice):
Matrix-vector product A * b = c.
    The n x n matrix A is partitioned among k processes, each holding m
consecutive columns; k * m = n. The vectors b and c are partitioned with
m entries per process. To shorten the example, we use Fortran 90 array
notation and intrinsics.

REAL A(N,M), B(M), C(M), TEMP(N)
INTEGER COUNT(K)
...
DO I=0, K-1
   TEMP(I*M+1 : (I+1)*M) = MATMUL( A(I*M+1 : (I+1)*M , :), B)
END DO
COUNT = M
CALL MPI_REDUCE_SCATTER(TEMP, MPI_REAL, C, MPI_REAL, COUNT,
                        MPI_RSUM, MPI_COMM_WORLD, IERR)
From owner-mpi-collcomm@CS.UTK.EDU Mon Feb 21 13:26:49 1994
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (8.6.4/2.8t-netlib)
	id NAA11415; Mon, 21 Feb 1994 13:26:48 -0500
Received: from localhost by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK)
	id NAA28459; Mon, 21 Feb 1994 13:25:24 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Mon, 21 Feb 1994 13:25:22 EST
Errors-to: owner-mpi-collcomm@CS.UTK.EDU
Received: from msr.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK)
	id NAA28451; Mon, 21 Feb 1994 13:25:21 -0500
Received: by msr.EPM.ORNL.GOV (4.1/1.34)
	id AA25132; Mon, 21 Feb 94 13:25:21 EST
Date: Mon, 21 Feb 94 13:25:21 EST
From: geist@msr.epm.ornl.gov (Al Geist)
Message-Id: <9402211825.AA25132@msr.EPM.ORNL.GOV>
To: mpi-collcomm@CS.UTK.EDU
Subject: Comments on Snir's proposed changes to Collective


Hi Gang,

I have read through Marc's latest suggestions. Some I recognize from the
European comments.

The first suggestion is to delete recvtype from MPI_GATHER and MPI_SCATTER.
Recall that this is a redundant argument in these routines
that was left in just for consistency with the "V" versions of these routines.
Full generality is still maintained in the GATHERV and SCATTERV versions.
I have no problem with this suggestion.

The second suggestion I really like. I had proposed it originally
but never could get the idea to catch on. Marc's proposal gets around
the original arguments against the idea, which is
to merge reduce, user-reduce, and user-reducea,
and similarly to merge the scan variants.
Everyone knows I am a BIG fan of reducing the number of MPI routines
and of making the interface easier to use. The second suggestion does both.

I would like to see the "OP" argument as the first argument in the new syntax,
because when reading code, what the reduce is doing will not be hidden
three continuation lines down.

Two new functions are added in Marc's proposal:
MPI_OP_CREATE and MPI_OP_FREE (which Marc misspelled MPI_TYPE_FREE).

-----------------------------

One comment on the collective draft was to make the scan functions exclusive
rather than inclusive, i.e. scan would operate over 0 to (me-1)
rather than 0 to me as presently defined.
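For concreteness, the two variants differ as follows, shown here as serial prefix sums in plain (non-MPI) C, with one contribution per "rank" (the function names are illustrative, not MPI calls):

```c
/* Inclusive scan: rank i's result combines contributions 0..i. */
void inclusive_scan(const int *v, int *out, int n) {
    int acc = 0;
    for (int i = 0; i < n; i++) {
        acc += v[i];
        out[i] = acc;
    }
}

/* Exclusive scan: rank i's result combines contributions 0..i-1;
   rank 0 receives the identity element (0 for addition). */
void exclusive_scan(const int *v, int *out, int n) {
    int acc = 0;
    for (int i = 0; i < n; i++) {
        out[i] = acc;
        acc += v[i];
    }
}
```

The exclusive form forces the question of what rank 0 receives, which is why the choice depends on having an identity element for the operation.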

I have little experience with scan and defer judgement on such a suggested 
change to those on the collective committee who use scan a lot.

Things to talk about at the meeting.
See you there.
Al Geist
From owner-mpi-collcomm@CS.UTK.EDU Wed Mar 16 17:57:38 1994
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.8t-netlib)
	id RAA06663; Wed, 16 Mar 1994 17:57:38 -0500
Received: from localhost by CS.UTK.EDU with SMTP (cf v2.8s-UTK)
	id RAA26793; Wed, 16 Mar 1994 17:57:12 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Wed, 16 Mar 1994 17:57:11 EST
Errors-to: owner-mpi-collcomm@CS.UTK.EDU
Received: from canidae.cpsacs.msu.edu by CS.UTK.EDU with SMTP (cf v2.8s-UTK)
	id RAA26745; Wed, 16 Mar 1994 17:56:17 -0500
Received: from pit-bull (pit-bull.cps.msu.edu) by canidae.cpsacs.msu.edu (5.0/SMI-SVR4)
	id AA22241; Wed, 16 Mar 1994 17:56:24 +0500
Received: by pit-bull (5.0/SMI-SVR4)
	id AA10316; Wed, 16 Mar 1994 17:56:23 +0500
Date: Wed, 16 Mar 1994 17:56:23 +0500
From: kalns@canidae.cps.msu.edu
Message-Id: <9403162256.AA10316@pit-bull>
To: mpi-collcomm@CS.UTK.EDU, mpi-ptop@CS.UTK.EDU
Subject: help with p.tops and collcomm
Cc: kalns@canidae.cps.msu.edu

Hello -

I have a question that relates to virtual topologies and
collective communication.  Here is the situation:
Suppose that we create two virtual processor grids, 
a 2x3 and a 3x2, by means of MPI_MAKE_CART for each.
The result will be two communicators, say 'comm_2x3' and
'comm_3x2'.  Next I want to perform an MPI_ALLTOALL
from the processes in 'comm_2x3' to the ones in 'comm_3x2'.
This is not possible since MPI_ALLTOALL takes one 
(and only one) communicator as an argument. I looked at
'intercommunicators' but those can't be used in collective
communications. 

Is there a way around this, or am I misreading something?

Thanks in advance,
Edgar

----------------------------------------------------------------------
| Edgar T. Kalns		     | Internet: kalns@cps.msu.edu   |
| Advanced Computing Systems Lab     | Tel: (517) 353-8666           |
| A-714 Wells Hall		     |			             |
| Department of Computer Science     | 	                             |
| Michigan State University          |                               |
| East Lansing, MI 48824, USA        |		                     | 
----------------------------------------------------------------------


From owner-mpi-collcomm@CS.UTK.EDU Thu Mar 17 02:00:18 1994
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.8t-netlib)
	id CAA07808; Thu, 17 Mar 1994 02:00:18 -0500
Received: from localhost by CS.UTK.EDU with SMTP (cf v2.8s-UTK)
	id CAA28288; Thu, 17 Mar 1994 02:00:30 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Thu, 17 Mar 1994 02:00:29 EST
Errors-to: owner-mpi-collcomm@CS.UTK.EDU
Received: from gmdzi.gmd.de by CS.UTK.EDU with SMTP (cf v2.8s-UTK)
	id CAA28236; Thu, 17 Mar 1994 02:00:15 -0500
Received: from f1neuman.gmd.de (f1neuman) by gmdzi.gmd.de with SMTP id AA04717
  (5.65c8/IDA-1.4.4); Thu, 17 Mar 1994 08:00:20 +0100
Received: by f1neuman.gmd.de (AIX 3.2/UCB 5.64/4.03)
          id AA17501; Thu, 17 Mar 1994 08:00:29 GMT
Date: Thu, 17 Mar 1994 08:00:29 GMT
From: Rolf.Hempel@gmd.de (Rolf Hempel)
Message-Id: <9403170800.AA17501@f1neuman.gmd.de>
To: kalns@cps.msu.edu
Subject: Re:  help with p.top and collcomm
Cc: geist@msr.epm.ornl.gov, gmap10@f1neuman.gmd.de, mpi-collcomm@CS.UTK.EDU,
        mpi-ptop@CS.UTK.EDU

Edgar,

I agree with Al that there is no direct way of doing what you would
like to do. The only possibility I see is to merge the two communicators
(i.e. the two groups) first, and do an MPI_ALLTOALL in the combined
group. Since you can trace back the process ranks in the original groups
(and, therefore, also their topological coordinates), you know the
addresses in the combined group. The major drawback of this solution,
however, is that half of the messages will be empty, because they go
to processes in the same original group. So, there seems to be no
satisfactory solution to the problem.
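Rolf's "half the messages are empty" point can be checked by simple counting. A sketch (the function is illustrative, and the assumption that the first group occupies ranks 0..p-1 of the merged group is mine):

```c
/* Merge two disjoint groups of size p into one group of size 2p
   (assumed ordering: group one holds ranks 0..p-1) and count how many
   of the (2p)^2 ALLTOALL messages connect processes from the same
   original group, i.e. would carry no useful data. */
int empty_messages(int p) {
    int n = 2 * p, count = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            if ((i < p) == (j < p))   /* sender and receiver from same group */
                count++;
    return count;                     /* always exactly n*n / 2 */
}
```

Since p*p pairs lie within each original group, 2*p*p of the 4*p*p messages are intra-group: exactly half, independent of p.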

The situation is different if the two communicators are built on the
same process group (only the topological structure is different). Then,
you can use MPI_ALLTOALL on one of the communicators, and use
the available coordinate translation functions for computing the
addresses. Since I don't know your application, I can't say whether
you really need to use disjoint process groups for the two
topologies.

I'm afraid that's all MPI can do for you in this situation.

- Rolf
From owner-mpi-collcomm@CS.UTK.EDU Sat Mar 19 10:20:05 1994
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.8t-netlib)
	id KAA26454; Sat, 19 Mar 1994 10:20:04 -0500
Received: from localhost by CS.UTK.EDU with SMTP (cf v2.8s-UTK)
	id KAA22367; Sat, 19 Mar 1994 10:19:50 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Sat, 19 Mar 1994 10:19:49 EST
Errors-to: owner-mpi-collcomm@CS.UTK.EDU
Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (cf v2.8s-UTK)
	id KAA22360; Sat, 19 Mar 1994 10:19:46 -0500
Received:  by Aurora.CS.MsState.Edu (4.1/6.0s-FWP);
	   id AA09367; Sat, 19 Mar 94 09:17:52 CST
Date: Sat, 19 Mar 94 09:17:52 CST
From: Tony Skjellum <tony@Aurora.CS.MsState.Edu>
Message-Id: <9403191517.AA09367@Aurora.CS.MsState.Edu>
To: mpi-collcomm@CS.UTK.EDU, mpi-ptop@CS.UTK.EDU, kalns@canidae.cps.msu.edu
Subject: Re: help with p.tops and collcomm

Inter-communicator collective operations are not currently supported by MPI,
though they can be layered on top of MPI in a reasonable fashion.

So, you are correct, but the problem is fixable by doing an inter-communicator
merge, provided the groups don't overlap.  If they overlap, further systematic
steps are needed to layer correct code on top of MPI for this.  Also note
that intercommunicators do not currently support MPI-defined topology 
capabilities.

-Tony Skjellum



----- Begin Included Message -----

From owner-mpi-collcomm@CS.UTK.EDU Wed Mar 16 17:02:11 1994
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Wed, 16 Mar 1994 17:57:11 EST
Date: Wed, 16 Mar 1994 17:56:23 +0500
From: kalns@canidae.cps.msu.edu
To: mpi-collcomm@CS.UTK.EDU, mpi-ptop@CS.UTK.EDU
Subject: help with p.tops and collcomm
Cc: kalns@canidae.cps.msu.edu
Content-Length: 1180

Hello -

I have a question that relates to virtual topologies and
collective communication.  Here is the situation:
Suppose that we create two virtual processor grids, 
a 2x3 and a 3x2, by means of MPI_MAKE_CART for each.
The result will be two communicators, say 'comm_2x3' and
'comm_3x2'.  Next I want to perform an MPI_ALLTOALL
from the processes in 'comm_2x3' to the ones in 'comm_3x2'.
This is not possible since MPI_ALLTOALL takes one 
(and only one) communicator as an argument. I looked at
'intercommunicators' but those can't be used in collective
communications. 

Is there a way around this, or am I misreading something?

Thanks in advance,
Edgar

----------------------------------------------------------------------
| Edgar T. Kalns		     | Internet: kalns@cps.msu.edu   |
| Advanced Computing Systems Lab     | Tel: (517) 353-8666           |
| A-714 Wells Hall		     |			             |
| Department of Computer Science     | 	                             |
| Michigan State University          |                               |
| East Lansing, MI 48824, USA        |		                     | 
----------------------------------------------------------------------




----- End Included Message -----

From owner-mpi-collcomm@CS.UTK.EDU Sat Jan 28 21:46:56 1995
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id VAA17660; Sat, 28 Jan 1995 21:46:56 -0500
Received: from localhost by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id VAA03806; Sat, 28 Jan 1995 21:46:04 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Sat, 28 Jan 1995 21:46:02 EST
Errors-to: owner-mpi-collcomm@CS.UTK.EDU
Received: from zingo by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id VAA03799; Sat, 28 Jan 1995 21:46:00 -0500
Received: by zingo (920330.SGI/YDL1.4-910307.16)
	id AA08103(zingo); Sat, 28 Jan 95 21:45:02 -0500
Received:  by juliet (5.52/cliff's joyful mailer #2)
	id AA00608(juliet); Sat, 28 Jan 95 21:45:00 EST
Date: Sat, 28 Jan 95 21:45:00 EST
From: eugen@research.nj.nec.com (Eugen Schenfeld)
Message-Id: <9501290245.AA00608@juliet>
To: kalns@canidae.cps.msu.edu, mpi-ptop@CS.UTK.EDU, mpi-collcomm@CS.UTK.EDU
Subject: MPPOI'95 - CFP



                          Call for Papers
                 The Second International Conference on 
    MASSIVELY PARALLEL PROCESSING USING OPTICAL INTERCONNECTIONS (MPPOI)

                           October 23-24, 1995
                           San  Antonio, Texas 

                             Sponsored by:
           ACM Special Interest Group on Architecture (SIGARCH)
                   The Optical Society of America (OSA)
      Institute for Electrical and Electronic Engineers (IEEE - pending) 
                NSF - National Science Foundation (pending) 

The second annual conference on Massively Parallel Processing Architectures 
using Optical Interconnections (MPPOI'95) will be held on Oct. 23-24, 1995 
in San Antonio, Texas. The conference will focus on the potential for using 
optical interconnections in massively parallel processing systems, and their 
effect on system and algorithm design. Optics offers many benefits for 
interconnecting large numbers of processing elements, but may require us to 
rethink how we build parallel computer systems and communication networks, 
and how we write applications.  Fully exploring the capabilities of optical 
interconnection networks requires an interdisciplinary effort.  It is 
critical that researchers in all areas of the field are aware of each 
other's work and results. The intent of MPPOI is to assemble the leading 
researchers and to build toward a synergistic approach to MPP architectures, 
optical interconnections, operating systems, and software development.

The topics of interest include but are not limited to the following:
      Optical interconnections
      Reconfigurable Architectures
      Embedding and mapping of applications and algorithms
      Packaging and layout of optical interconnections
      Electro-optical, and opto-electronic components 
      Relative merits of optical technologies (free-space, fibers, wave guides)
      Passive optical elements 
      Algorithms and applications exploiting MPP-OI
      Data distribution and partitioning 
      Characterizing parallel applications exploiting MPP-OI
      Cost/performance studies

The conference will feature invited speakers, followed by several sessions of
submitted papers, and will conclude with a panel discussion. Authors are 
invited to submit manuscripts which demonstrate original unpublished research 
in areas of computer architecture and optical interconnections. Papers 
submitted must not be under consideration for another conference. 

SUBMITTING PAPERS:  All papers will be reviewed by at least 2 members of
the program committee.  Send eight (8) copies of the complete paper 
(not to exceed 15 single spaced, single sided pages) to:

Dr. Eugen Schenfeld
MPPOI'95 Conference Chair
NEC Research Institute
4 Independence Way
Princeton, NJ 08540, USA
(voice) (609)951-2742
(fax)   (609)951-2482
email:  MPPOI@RESEARCH.NJ.NEC.COM

============================================================================
DEADLINE: Papers must be sent so that they arrive on or before April 1, 1995
============================================================================

Manuscripts must be received by April 1st, 1995.  Due to the 
large number of anticipated submissions, manuscripts arriving later than
the above date risk rejection. Notification of review decisions will 
be mailed by July 1st, 1995.  Camera ready papers are due 
August 1st, 1995.  Fax or electronic submissions will not be
considered. Proceedings will be published by the IEEE CS Press and will 
be available at the symposium.


FOR MORE INFORMATION: Please write (email) to the Conference Chair.


PROGRAM COMMITTEE:

Alan Craig, Air Force Office of Scientific Research (AFOSR)
Karsten Decker, Swiss Scientific Computing Center, Manno, Switzerland
Jack Dennis, Lab. of CS, MIT, Boston, MA
Patrick Dowd, Dept. of ECE, SUNY at Buffalo, Buffalo, NY
Mary Eshaghian, Dept. of CS, NJIT, Newark, NJ
John Feo, Comp. Res. Grp., Lawrence Livermore Nat. Lab., Livermore, CA
Michael Flynn, Department of EE, Stanford University, Stanford, CA
Edward Frietman, Faculty of Applied Physics, Delft U., Delft, The Netherlands
Asher Friesem, Dept. of Electronics, Weizmann Inst., Israel
Kanad Ghose, Dept. of CS, SUNY at Binghamton, Binghamton NY
Allan Gottlieb, Dept. of CS, New-York University, New-York, NY
Joe Goodman, Department of EE, Stanford University, Stanford, CA
Alan Huang, Computer Systems Research Lab., Bell Labs., Holmdel, NJ
Yoshiki Ichioka, Dept. of Applied Physics, Osaka U., Osaka, Japan
Leah Jamieson, School of EE, Purdue University, West Lafayette, IN
Lennart Johnsson, Div. of Applied Science, Harvard U. and TMC, Cambridge, MA
Ahmed Louri, Dept. of ECE, U. of Arizona, Tucson, AZ
Kenichi Kasahara, Opto-Electronics Basic Res. Lab., NEC Corporation, Japan
Israel Koren, Dept. of ECS, U. of Mass, Amherst, MA
Raymond Kostuk, Dept. of ECE, U. of Arizona, Tucson, AZ
Philippe Lalanne, Inst. D'Optique, Orsay, France
Sing Lee, Dept. of EE, UCSD, La Jolla, CA
Steve Levitan, Department of EE, U. of Pittsburgh, Pittsburgh, PA
Adolf Lohmann, Institute of Physics, U. of Erlangen, Erlangen, Germany
Miroslaw Malek, Dept. of ECE, U. of Texas at Austin, Austin TX
Rami Melhem, Dept. of CS, University of Pittsburgh, Pittsburgh, PA
J. R. Moulic, IBM T. J. Watson Research Center, Yorktown Heights, NY
Miles Murdocca, Department of CS, Rutgers University, New Brunswick, NJ
John Neff, Opto-elec. Comp. Sys., U. of Colorado, Boulder, CO
Paul Prucnal, Department of EE, Princeton U., Princeton, NJ
Donna Quammen, Dept of CS, George Mason University, USA
John Reif, Department of CS, Duke University, Durham, NC
Anthonie Ruighaver, Dept. of CS, U. of Melbourne, Victoria, Australia
A. A. Sawchuk, Dept. of EE, USC, Los-Angeles, CA
Eugen Schenfeld, NEC Research Institute, Princeton, NJ
Charles W. Stirk, Optoelectronic Data Systems, Inc., Boulder, CO 
Pearl Wang, Dept. of CS, George Mason University, USA
Les Valiant, Div. of Applied Science, Harvard University, Cambridge, MA
Albert Zomaya, Dept. of EAEE, U. of Western Australia, Western Australia

===========================================================


        Eugen Schenfeld 

        NEC Research Institute
        4 Independence Way
        Princeton, NJ 08540

phone:  609 951 2742
  fax:  609 951 2482
email:  eugen@research.nj.nec.com (Inet)


From owner-mpi-collcomm@CS.UTK.EDU Tue Feb 14 21:59:08 1995
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id VAA20490; Tue, 14 Feb 1995 21:59:07 -0500
Received: from localhost by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id VAA13282; Tue, 14 Feb 1995 21:59:43 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Tue, 14 Feb 1995 21:59:41 EST
Errors-to: owner-mpi-collcomm@CS.UTK.EDU
Received: from zingo by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id VAA13226; Tue, 14 Feb 1995 21:58:36 -0500
Received: by zingo (920330.SGI/YDL1.4-910307.16)
	id AA15396(zingo); Tue, 14 Feb 95 21:53:04 -0500
Received:  by juliet (5.52/cliff's joyful mailer #2)
	id AA23938(juliet); Tue, 14 Feb 95 20:59:11 EST
Date: Tue, 14 Feb 95 20:59:11 EST
From: eugen@research.nj.nec.com (Eugen Schenfeld)
Message-Id: <9502150159.AA23938@juliet>
To: mppoi@research.nj.nec.com
Subject: MPPOI'95 - CFP

 
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+PLEASE DISTRIBUTE  PLEASE DISTRIBUTE  PLEASE DISTRIBUTE  PLEASE DISTRIBUTE+
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 
============================================================================
DEADLINE: PAPERS MUST BE SENT SO THAT THEY ARRIVE ON OR BEFORE APRIL 1, 1995
============================================================================
 



                          Call for Papers
                 The Second International Conference on 
    MASSIVELY PARALLEL PROCESSING USING OPTICAL INTERCONNECTIONS (MPPOI)

                           October 23-24, 1995
                           San  Antonio, Texas 

                             Sponsored by:
           ACM Special Interest Group on Architecture (SIGARCH)
                   The Optical Society of America (OSA)
          The International Society for Optical Engineering (SPIE)
         IEEE CS TCCA (Technical Committee on Computer Architecture) 
                NSF - National Science Foundation (pending) 

The second annual conference on Massively Parallel Processing Architectures 
using Optical Interconnections (MPPOI'95) will be held on Oct. 23-24, 1995 
in San Antonio, Texas. The conference will focus on the potential for using 
optical interconnections in massively parallel processing systems, and their 
effect on system and algorithm design. Optics offers many benefits for 
interconnecting large numbers of processing elements, but may require us to 
rethink how we build parallel computer systems and communication networks, 
and how we write applications.  Fully exploring the capabilities of optical 
interconnection networks requires an interdisciplinary effort.  It is 
critical that researchers in all areas of the field are aware of each 
other's work and results. The intent of MPPOI is to assemble the leading 
researchers and to build toward a synergistic approach to MPP architectures, 
optical interconnections, operating systems, and software development.

The topics of interest include but are not limited to the following:
      Optical interconnections
      Reconfigurable Architectures
      Embedding and mapping of applications and algorithms
      Packaging and layout of optical interconnections
      Electro-optical, and opto-electronic components 
      Relative merits of optical technologies (free-space, fibers, wave guides)
      Passive optical elements 
      Algorithms and applications exploiting MPP-OI
      Data distribution and partitioning 
      Characterizing parallel applications exploiting MPP-OI
      Cost/performance studies

The conference will feature invited speakers, followed by several sessions of
submitted papers, and will conclude with a panel discussion. Authors are 
invited to submit manuscripts which demonstrate original unpublished research 
in areas of computer architecture and optical interconnections. Papers 
submitted must not be under consideration for another conference. 

SUBMITTING PAPERS:  All papers will be reviewed by at least 2 members of
the program committee.  Send eight (8) copies of the complete paper 
(not to exceed 15 single spaced, single sided pages) to:

Dr. Eugen Schenfeld
MPPOI'95 Conference Chair
NEC Research Institute
4 Independence Way
Princeton, NJ 08540, USA
(voice) (609)951-2742
(fax)   (609)951-2482
email:  MPPOI@RESEARCH.NJ.NEC.COM

============================================================================
DEADLINE: Papers must be sent so that they arrive on or before April 1, 1995
============================================================================

Manuscripts must be received by April 1st, 1995.  Due to the 
large number of anticipated submissions, manuscripts arriving later than
the above date risk rejection. Notification of review decisions will 
be mailed by July 1st, 1995.  Camera ready papers are due 
August 1st, 1995.  Fax or electronic submissions will not be
considered. Proceedings will be published by the IEEE CS Press and will 
be available at the conference.


FOR MORE INFORMATION: Please write (email) to the Conference Chair.


PROGRAM COMMITTEE:

Pierre Chavel, Institut d'Optique, Orsay, France
Alan Craig, Air Force Office of Scientific Research (AFOSR)
Karsten Decker, Swiss Scientific Computing Center, Manno, Switzerland
Jack Dennis, Lab. of CS, MIT, Boston, MA
Patrick Dowd, Dept. of ECE, SUNY at Buffalo, Buffalo, NY
Mary Eshaghian, Dept. of CS, NJIT, Newark, NJ
John Feo, Comp. Res. Grp., Lawrence Livermore Nat. Lab., Livermore, CA
Michael Flynn, Department of EE, Stanford University, Stanford, CA
Edward Frietman, Faculty of Applied Physics, Delft U., Delft, The Netherlands
Asher Friesem, Dept. of Electronics, Weizmann Inst., Israel
Kanad Ghose, Dept. of CS, SUNY at Binghamton, Binghamton NY
Allan Gottlieb, Dept. of CS, New-York University, New-York, NY
Joe Goodman, Department of EE, Stanford University, Stanford, CA
Alan Huang, Terabit Corp., Middletown, NJ
Oscar H. Ibarra, Department of Computer Science, UCSB, CA 
Yoshiki Ichioka, Dept. of Applied Physics, Osaka U., Osaka, Japan
Leah Jamieson, School of EE, Purdue University, West Lafayette, IN
Lennart Johnsson, Div. of Applied Science, Harvard U. and TMC, Cambridge, MA
Ahmed Louri, Dept. of ECE, U. of Arizona, Tucson, AZ
Kenichi Kasahara, Opto-Electronics Basic Res. Lab., NEC Corporation, Japan
Israel Koren, Dept. of ECS, U. of Mass, Amherst, MA
Raymond Kostuk, Dept. of ECE, U. of Arizona, Tucson, AZ
Sing Lee, Dept. of EE, UCSD, La Jolla, CA
Steve Levitan, Department of EE, U. of Pittsburgh, Pittsburgh, PA
Adolf Lohmann, Institute of Physics, U. of Erlangen, Erlangen, Germany
Miroslaw Malek, Dept. of ECE, U. of Texas at Austin, Austin TX
Rami Melhem, Dept. of CS, University of Pittsburgh, Pittsburgh, PA
J. R. Moulic, IBM T. J. Watson Research Center, Yorktown Heights, NY
Miles Murdocca, Department of CS, Rutgers University, New Brunswick, NJ
John Neff, Opto-elec. Comp. Sys., U. of Colorado, Boulder, CO
Paul Prucnal, Department of EE, Princeton U., Princeton, NJ
Donna Quammen, Dept of CS, George Mason University, USA
John Reif, Department of CS, Duke University, Durham, NC
Anthonie Ruighaver, Dept. of CS, U. of Melbourne, Victoria, Australia
A. A. Sawchuk, Dept. of EE, USC, Los-Angeles, CA
Eugen Schenfeld, NEC Research Institute, Princeton, NJ
Charles W. Stirk, Optoelectronic Data Systems, Inc., Boulder, CO 
Pearl Wang, Dept. of CS, George Mason University, USA
Les Valiant, Div. of Applied Science, Harvard University, Cambridge, MA
Albert Zomaya, Dept. of EAEE, U. of Western Australia, Western Australia

===========================================================



        Eugen Schenfeld


From owner-mpi-collcomm@CS.UTK.EDU Fri Feb 24 02:59:56 1995
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id CAA17410; Fri, 24 Feb 1995 02:59:56 -0500
Received: from localhost by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id DAA10570; Fri, 24 Feb 1995 03:01:22 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Fri, 24 Feb 1995 03:01:21 EST
Errors-to: owner-mpi-collcomm@CS.UTK.EDU
Received: from zingo by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id DAA10466; Fri, 24 Feb 1995 03:00:37 -0500
Received: by zingo (920330.SGI/YDL1.4-910307.16)
	id AA13732(zingo); Fri, 24 Feb 95 02:56:37 -0500
Received:  by juliet (5.52/cliff's joyful mailer #2)
	id AA09338(juliet); Fri, 24 Feb 95 02:41:56 EST
Date: Fri, 24 Feb 95 02:41:56 EST
From: eugen@research.nj.nec.com (Eugen Schenfeld)
Message-Id: <9502240741.AA09338@juliet>
To: mppoi@research.nj.nec.com
Subject: MPPOI'95 - CFP and INFO

 
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+PLEASE DISTRIBUTE  PLEASE DISTRIBUTE  PLEASE DISTRIBUTE  PLEASE DISTRIBUTE+
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 
============================================================================
DEADLINE: PAPERS MUST BE SENT SO THAT THEY ARRIVE ON OR BEFORE APRIL 1, 1995
============================================================================
 



                          Call for Papers
                 The Second International Conference on 
    MASSIVELY PARALLEL PROCESSING USING OPTICAL INTERCONNECTIONS (MPPOI)

                           October 23-24, 1995
                           San  Antonio, Texas 

                             Sponsored by:
           ACM Special Interest Group on Architecture (SIGARCH)
                   The Optical Society of America (OSA)
          The International Society for Optical Engineering (SPIE)
         IEEE CS TCCA (Technical Committee on Computer Architecture) 
                NSF - National Science Foundation (pending) 

The second annual conference on Massively Parallel Processing Architectures 
using Optical Interconnections (MPPOI'95) will be held on Oct. 23-24, 1995 
in San-Antonio, Texas. The Conference will focus on the potential for using 
optical interconnections in massively parallel processing systems, and their 
effect on system and algorithm design. Optics offer many benefits for 
interconnecting large numbers of processing elements, but may require us to 
rethink how we build parallel computer systems and communication networks, 
and how we write applications.  Fully exploring the capabilities of optical 
interconnection networks requires an interdisciplinary effort.  It is 
critical that researchers in all areas of the field are aware of each 
other's work and results. The intent of MPPOI is to assemble the leading 
researchers and to build towards a synergetic approach to MPP architectures, 
optical interconnections, operating systems, and software development.

The topics of interest include but are not limited to the following:
      Optical interconnections
      Reconfigurable Architectures
      Embedding and mapping of applications and algorithms
      Packaging and layout of optical interconnections
      Electro-optical, and opto-electronic components 
      Relative merits of optical technologies (free-space, fibers, wave guides)
      Passive optical elements 
      Algorithms and applications exploiting MPP-OI
      Data distribution and partitioning 
      Characterizing parallel applications exploiting MPP-OI
      Cost/performance studies

The conference will feature invited speakers, followed by several sessions of
submitted papers, and will conclude with a panel discussion. Authors are 
invited to submit manuscripts which demonstrate original unpublished research 
in areas of computer architecture and optical interconnections. Papers 
submitted must not be under considerations for another conference. 

SUBMITTING PAPERS:  All papers will be reviewed by at least 2 members of
the program committee.  Send eight (8) copies of the complete paper 
(not to exceed 15 single spaced, single sided pages) to:

Dr. Eugen Schenfeld
MPPOI'95 Conference Chair
NEC Research Institute
4 Independence Way
Princeton, NJ 08540, USA
(voice) (609)951-2742
(fax)   (609)951-2482
email:  MPPOI@RESEARCH.NJ.NEC.COM

============================================================================
DEADLINE: Papers must be sent so that they arrive on or before April 1, 1995
============================================================================

Manuscripts must be received by April  1st, 1995.  Due to the 
large number of anticipated submissions manuscripts arriving later than
the above date risk rejection. Notification of review decisions will 
be mailed by July 1st, 1995.  Camera ready papers are due 
August 1st, 1995.  Fax or electronic submissions will not be
considered. Proceedings will be published by the IEEE CS Press and will 
be available at the conference.


FOR MORE INFORMATION: Please write (email) to the Conference Chair.


PROGRAM COMMITTEE:

Pierre Chavel, Institut d'Optique, Orsay, France
Alan Craig, Air Force Office of Scientific Research (AFOSR)
Karsten Decker, Swiss Scientific Computing Center, Manno, Switzerland
Jack Dennis, Lab. of CS, MIT, Boston, MA
Patrick Dowd, Dept. of ECE, SUNY at Buffalo, Buffalo, NY
Mary Eshaghian, Dept. of CS, NJIT, Newark, NJ
John Feo, Comp. Res. Grp., Lawrence Livermore Nat. Lab., Livermore, CA
Michael Flynn, Department of EE, Stanford University, Stanford, CA
Edward Frietman, Faculty of Applied Physics, Delft U., Delft, The Netherlands
Asher Friesem, Dept. of Electronics, Weizmann Inst., Israel
Kanad Ghose, Dept. of CS, SUNY at Binghamton, Binghamton NY
Allan Gottlieb, Dept. of CS, New-York University, New-York, NY
Joe Goodman, Department of EE, Stanford University, Stanford, CA
Alan Huang, Terabit Corp., Middletown, NJ
Oscar H. Ibarra, Department of Computer Science, UCSB, CA 
Yoshiki Ichioka, Dept. of Applied Physics, Osaka U., Osaka, Japan
Leah Jamieson, School of EE, Purdue University, West Lafayette, IN
Lennart Johnsson, Aiken Comp. Lab, Harvard University, Cambridge, MA
Ahmed Louri, Dept. of ECE, U. of Arizona, Tucson, AZ
Kenichi Kasahara, Opto-Electronics Basic Res. Lab., NEC Corporation, Japan
Israel Koren, Dept. of ECS, U. of Mass, Amherst, MA
Raymond Kostuk, Dept. of ECE, U. of Arizona, Tucson, AZ
Sing Lee, Dept. of EE, UCSD, La Jolla, CA
Steve Levitan, Department of EE, U. of Pittsburgh, Pittsburgh, PA
Adolf Lohmann, Institute of Physics, U. of Erlangen, Erlangen, Germany
Miroslaw Malek, Dept. of ECE, U. of Texas at Austin, Austin TX
Rami Melhem, Dept. of CS, University of Pittsburgh, Pittsburgh, PA
J. R. Moulic, IBM T. J. Watson Research Center, Yorktown Heights, NY
Miles Murdocca, Department of CS, Rutgers University, New Brunswick, NJ
John Neff, Opto-elec. Comp. Sys., U. of Colorado, Boulder, CO
Paul Prucnal, Department of EE, Princeton U., Princeton, NJ
Donna Quammen, Dept of CS, George Mason University, USA
John Reif, Department of CS, Duke University, Durham, NC
Anthonie Ruighaver, Dept. of CS, U. of Melbourne, Victoria, Australia
A. A. Sawchuk, Dept. of EE, USC, Los-Angeles, CA
Eugen Schenfeld, NEC Research Institute, Princeton, NJ
Charles W. Stirk, Optoelectronic Data Systems, Inc., Boulder, CO 
Pearl Wang, Dept. of CS, George Mason University, USA
Les Valiant, Div. of Applied Science, Harvard University, Cambridge, MA
Albert Zomaya, Dept. of EAEE, U. of Western Australia, Western Australia

===========================================================

***************************************************************************
TO GET THE MPPOI'94 PROC. (LAST YEAR'S), YOU CAN BUY IT FROM IEEE CS PRESS:
***************************************************************************
 
IEEE CS Press lists the 1994 MPPOI proceedings for sale at $35 for IEEE 
members or $70 for non-members. Details: ISBN 0-8186-5832-0; Catalog# 5830-02P. 
IEEE phone for orders: 1-800-CS-BOOKS (works only in the USA).
 
In Europe:
 
IEEE Europe: 13, Avenue de l'Aquilon, B-1200 Brussels, BELGIUM
Phone: 32-2-770-21-98   Fax: 32-2-770-85-05

============================================================


        Eugen Schenfeld


From owner-mpi-collcomm@CS.UTK.EDU Thu Mar 23 13:22:06 1995
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id NAA22469; Thu, 23 Mar 1995 13:22:06 -0500
Received: from localhost by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id NAA01009; Thu, 23 Mar 1995 13:21:12 -0500
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Thu, 23 Mar 1995 13:21:11 EST
Errors-to: owner-mpi-collcomm@CS.UTK.EDU
Received: from zingo by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id NAA00798; Thu, 23 Mar 1995 13:19:41 -0500
Received: by zingo (920330.SGI/YDL1.4-910307.16)
	id AA09315(zingo); Thu, 23 Mar 95 13:00:52 -0500
Received: by iris49 (5.52/cliff's joyful mailer #2)
	id AA28949(iris49); Thu, 23 Mar 95 12:32:37 EST
Date: Thu, 23 Mar 95 12:32:37 EST
From: eugen@research.nj.nec.com (Eugen Schenfeld)
Message-Id: <9503231732.AA28949@iris49>
To: mppoi@research.nj.nec.com
Subject: MPPOI'95 - LAST CFP

 
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+PLEASE DISTRIBUTE  PLEASE DISTRIBUTE  PLEASE DISTRIBUTE  PLEASE DISTRIBUTE+
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 
============================================================================
DEADLINE: PAPERS MUST BE SENT SO THAT THEY ARRIVE ON OR BEFORE APRIL 1, 1995
          FULL PAPER (up to 15 pages) should be sent.
============================================================================
 



                          Call for Papers
                 The Second International Conference on 
    MASSIVELY PARALLEL PROCESSING USING OPTICAL INTERCONNECTIONS (MPPOI)

                           October 23-24, 1995
                           San  Antonio, Texas 

                             Sponsored by:
           ACM Special Interest Group on Architecture (SIGARCH)
                   The Optical Society of America (OSA)
          The International Society for Optical Engineering (SPIE)
         IEEE CS TCCA (Technical Committee on Computer Architecture) 
                NSF - National Science Foundation (pending) 

The second annual conference on Massively Parallel Processing Architectures 
using Optical Interconnections (MPPOI'95) will be held on Oct. 23-24, 1995 
in San Antonio, Texas. The conference will focus on the potential for using 
optical interconnections in massively parallel processing systems, and their 
effect on system and algorithm design. Optics offers many benefits for 
interconnecting large numbers of processing elements, but may require us to 
rethink how we build parallel computer systems and communication networks, 
and how we write applications.  Fully exploring the capabilities of optical 
interconnection networks requires an interdisciplinary effort.  It is 
critical that researchers in all areas of the field are aware of each 
other's work and results. The intent of MPPOI is to assemble the leading 
researchers and to build toward a synergistic approach to MPP architectures, 
optical interconnections, operating systems, and software development.

The topics of interest include but are not limited to the following:
      Optical interconnections
      Reconfigurable Architectures
      Embedding and mapping of applications and algorithms
      Packaging and layout of optical interconnections
      Electro-optical, and opto-electronic components 
      Relative merits of optical technologies (free-space, fibers, wave guides)
      Passive optical elements 
      Algorithms and applications exploiting MPP-OI
      Data distribution and partitioning 
      Characterizing parallel applications exploiting MPP-OI
      Cost/performance studies

The conference will feature invited speakers, followed by several sessions of
submitted papers, and will conclude with a panel discussion. Authors are 
invited to submit manuscripts which demonstrate original unpublished research 
in areas of computer architecture and optical interconnections. Papers 
submitted must not be under consideration for another conference. 

SUBMITTING PAPERS:  All papers will be reviewed by at least 2 members of
the program committee.  Send eight (8) copies of the complete paper 
(not to exceed 15 single spaced, single sided pages) to:

Dr. Eugen Schenfeld
MPPOI'95 Conference Chair
NEC Research Institute
4 Independence Way
Princeton, NJ 08540, USA
(voice) (609)951-2742
(fax)   (609)951-2482
email:  MPPOI@RESEARCH.NJ.NEC.COM

============================================================================
DEADLINE: Papers must be sent so that they arrive on or before April 1, 1995
============================================================================

Manuscripts must be received by April 1st, 1995.  Due to the 
large number of anticipated submissions, manuscripts arriving later than
the above date risk rejection. Notification of review decisions will 
be mailed by July 1st, 1995.  Camera ready papers are due 
August 1st, 1995.  Fax or electronic submissions will not be
considered. Proceedings will be published by the IEEE CS Press and will 
be available at the conference.


FOR MORE INFORMATION: Please write (email) to the Conference Chair.


PROGRAM COMMITTEE:

Pierre Chavel, Institut d'Optique, Orsay, France
Alan Craig, Air Force Office of Scientific Research (AFOSR)
Karsten Decker, Swiss Scientific Computing Center, Manno, Switzerland
Jack Dennis, Lab. of CS, MIT, Boston, MA
Patrick Dowd, Dept. of ECE, SUNY at Buffalo, Buffalo, NY
Mary Eshaghian, Dept. of CS, NJIT, Newark, NJ
John Feo, Comp. Res. Grp., Lawrence Livermore Nat. Lab., Livermore, CA
Michael Flynn, Department of EE, Stanford University, Stanford, CA
Edward Frietman, Faculty of Applied Physics, Delft U., Delft, The Netherlands
Asher Friesem, Dept. of Electronics, Weizmann Inst., Israel
Kanad Ghose, Dept. of CS, SUNY at Binghamton, Binghamton NY
Allan Gottlieb, Dept. of CS, New-York University, New-York, NY
Joe Goodman, Department of EE, Stanford University, Stanford, CA
Alan Huang, Terabit Corp., Middletown, NJ
Oscar H. Ibarra, Department of Computer Science, UCSB, CA 
Yoshiki Ichioka, Dept. of Applied Physics, Osaka U., Osaka, Japan
Leah Jamieson, School of EE, Purdue University, West Lafayette, IN
Lennart Johnsson, Aiken Comp. Lab, Harvard University, Cambridge, MA
Kenichi Kasahara, Opto-Electronics Basic Res. Lab., NEC Corporation, Japan
Israel Koren, Dept. of ECS, U. of Mass, Amherst, MA
Raymond Kostuk, Dept. of ECE, U. of Arizona, Tucson, AZ
Ashok V. Krishnamoorthy, AT&T Bell Laboratories, Holmdel NJ
Sing Lee, Dept. of EE, UCSD, La Jolla, CA
Steve Levitan, Department of EE, U. of Pittsburgh, Pittsburgh, PA
Adolf Lohmann, Institute of Physics, U. of Erlangen, Erlangen, Germany
Ahmed Louri, Dept. of ECE, U. of Arizona, Tucson, AZ
Miroslaw Malek, Dept. of ECE, U. of Texas at Austin, Austin TX
Rami Melhem, Dept. of CS, University of Pittsburgh, Pittsburgh, PA
J. R. Moulic, IBM T. J. Watson Research Center, Yorktown Heights, NY
Miles Murdocca, Department of CS, Rutgers University, New Brunswick, NJ
John Neff, Opto-elec. Comp. Sys., U. of Colorado, Boulder, CO
Paul Prucnal, Department of EE, Princeton U., Princeton, NJ
Donna Quammen, Dept of CS, George Mason University, USA
John Reif, Department of CS, Duke University, Durham, NC
Anthonie Ruighaver, Dept. of CS, U. of Melbourne, Victoria, Australia
A. A. Sawchuk, Dept. of EE, USC, Los Angeles, CA
Eugen Schenfeld, NEC Research Institute, Princeton, NJ
Charles W. Stirk, Optoelectronic Data Systems, Inc., Boulder, CO 
Pearl Wang, Dept. of CS, George Mason University, USA
Les Valiant, Div. of Applied Science, Harvard University, Cambridge, MA
Albert Zomaya, Dept. of EAEE, U. of Western Australia, Western Australia

===========================================================

***************************************************************************
TO GET THE MPPOI'94 PROC. (LAST YEAR'S), YOU CAN BUY IT FROM IEEE CS PRESS:
***************************************************************************
 
The 1994 MPPOI Proceedings are listed for sale at $35 for IEEE members or $70 
for non-members. Details: ISBN 0-8186-5832-0; Catalog# 5830-02P. IEEE phone 
for orders: 1-800-CS-BOOKS (works only in the USA).
 
In Europe:
 
IEEE Europe: 13, Avenue de l'Aquilon, B-1200 Brussels, BELGIUM
Phone: 32-2-770-21-98   Fax: 32-2-770-85-05

============================================================


        Eugen Schenfeld


From owner-mpi-collcomm@CS.UTK.EDU Sat Jul 15 15:34:48 1995
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id PAA18183; Sat, 15 Jul 1995 15:34:47 -0400
Received: from localhost by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id PAA19254; Sat, 15 Jul 1995 15:41:57 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Sat, 15 Jul 1995 15:41:56 EDT
Errors-to: owner-mpi-collcomm@CS.UTK.EDU
Received: from zingo by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id PAA19145; Sat, 15 Jul 1995 15:40:11 -0400
Received: by zingo (940816.SGI.8.6.9/YDL1.4-910307.16)
	id PAA07370(zingo); Sat, 15 Jul 1995 15:23:01 -0400
Received: by iris49 (5.52/cliff's joyful mailer #2)
	id AA02620(iris49); Sat, 15 Jul 95 14:58:46 EDT
Date: Sat, 15 Jul 95 14:58:46 EDT
From: eugen@research.nj.nec.com (Eugen Schenfeld)
Message-Id: <9507151858.AA02620@iris49>
To: mppoi@research.nj.nec.com
Subject: MPPOI '95 FINAL PROGRAM


++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+PLEASE DISTRIBUTE  PLEASE DISTRIBUTE  PLEASE DISTRIBUTE  PLEASE DISTRIBUTE+
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

PLEASE NOTE:
============

1) A limited number of rooms at a special rate are available at the Menger Hotel
in San Antonio, first come first served (details below). Students wishing to
share a room at the Menger and needing a roommate, please email: sakr@research.nj.nec.com

2) Please help the organizers by registering early with IEEE. You may fax the
registration form with a credit card number to IEEE (details below). This will
help us better estimate the number of participants and make the needed
arrangements for the social events (lunch, dinner, reception).

THE FOLLOWING IS IN LaTeX FORMAT. This message is 32K bytes long (14 pages).
For other formats, or for more information, please email mppoi@research.nj.nec.com

=================================================================================

\documentstyle[fullpage]{article}

\begin{document}

\begin{verbatim}
==========================================================================
                 The Second International Conference on
        MASSIVELY PARALLEL PROCESSING USING OPTICAL INTERCONNECTIONS
===========================================================================

                          October 23-24, 1995
                              Menger Hotel
                        San Antonio, Texas, USA

                             SPONSORED BY:
                         IEEE Computer Society
          IEEE Technical Committee on Computer Architecture (TCCA)

                          IN COOPERATION WITH:
           ACM Special Interest Group on Architecture (SIGARCH)
          The International Society for Optical Engineering (SPIE)
            The IEEE Lasers and Electro-optics Society (LEOS)
                   The Optical Society of America (OSA)

                        ADDITIONAL SUPPORT PROVIDED BY:
                  NSF - The National Science Foundation (pending)

========================================================================
                             ADVANCE PROGRAM
========================================================================

The Second International Conference on Massively Parallel Processing
Architectures using Optical Interconnections (MPPOI '95) is a continuation of 
the very successful first meeting held last year in Cancun, Mexico. This year we 
have an exciting program featuring eight invited talks from research and
industrial leaders in the fields of parallel computer systems, optical 
interconnections and technology, parallel applications and interconnection 
networks. We also have two panels with the participation of technological and 
academic experts, representing the current thoughts and trends of the field. 
And last, but not least, there are 34 regular papers accepted for presentation 
from authors all over the world. This rich and diverse program is sure to be 
most interesting and stimulate discussions and interactions among the 
researchers of this interdisciplinary field. The organizers of MPPOI 
strongly feel that massively parallel processing needs optical interconnections 
and optical interconnections need parallel processing. The Conference's focus 
is the possible use of optical interconnections for massively parallel 
processing systems, and their effect on system and algorithm design. Optics 
offers many benefits for interconnecting large numbers of processing elements, 
but may require us to rethink how we build parallel computer systems and 
communication networks, and how we write applications.  Fully exploring the 
capabilities of optical interconnection networks requires an interdisciplinary 
effort. It is critical that researchers from all related research areas are 
aware of each other's work and results. The intent of MPPOI is to assemble the 
leading researchers and to build towards a synergetic approach to MPP 
architectures, optical interconnections, operating systems, and software 
development.
\end{verbatim}
\newpage
\begin{verbatim}
*********************************************************************
                               LOCATION
*********************************************************************

SAN ANTONIO
American humorist and homespun philosopher Will Rogers once described San
Antonio as "One of America's four unique cities". He had a natural instinct
for getting to the very essence of a subject, and his comment about San Antonio
is no exception. San Antonio truly is unique. From its founding in 1691 by
Spanish missionaries, San Antonio has grown from a sleepy little Texas pueblo
to the 9th largest city in the United States. Along the way it has been the
birthplace of the Texas revolution with the Battle of the Alamo in 1836. Today
it is home to bioscience and high-tech industry. In all, over half a dozen
cultures, from Spanish and German to Lebanese and Greek, have impacted the
growth of San Antonio. And their influence is still evident in the architecture,
festivals, cuisine and customs which all contribute to the uniqueness and charm
of the city.

THE ALAMO
An old mission-fort, the Alamo, in San Antonio, has been called the "cradle of
Texas liberty." Its gallant defense and the horrible massacre of the more than
180 men who fought there inspired the cry, "Remember the Alamo!" Texas soldiers
shouted this at the battle of San Jacinto, which brought independence to Texas.

THE MENGER HOTEL
MPPOI '95 will be held in San Antonio's Menger Hotel, a historic landmark hotel.
It is next door to the Alamo, adjacent to Rivercenter Mall, the IMAX Theater and
River Walk and two blocks to the convention center. The hotel fronts Alamo Plaza
where the Sea World shuttle and sightseeing tours depart.

SPDP'95
For those interested in attending SPDP'95: information is available at the
following Web site: http://rabbit.cs.utsa.edu/Welcome.html. You must
register for the SPDP'95 conference if you wish to attend it. The advance
program and other information may be obtained from the above location, or from:
Prof. Xiaodong Zhang, email: zhang@runner.utsa.edu , Phone: (210) 691-5541,
FAX: (210) 691-4437.

AIR TRANSPORTATION
United Airlines is the official airline of MPPOI '95. United will provide
attendees round-trip transportation to San Antonio on United, United Express or
Shuttle by United scheduled service in the United States and Canada, at either
a 5% discount off any published United, United Express or Shuttle by United
fare, including First Class, in effect when the tickets are purchased (subject
to all applicable restrictions), or a 10% discount off applicable BUA, or
similar, fares in effect when tickets are purchased 7 days in advance.
Reservations and schedule information may be obtained by calling the United
Meetings desk at 1-800-521-4041 and referencing Meeting ID Code 599XM.



ACCOMMODATION
The special MPPOI '95 Menger Hotel rate is US $90 for single or double.  Please
see the enclosed information for making your reservation directly with the hotel.

REGISTRATION
Please register for the conference using the attached form DIRECTLY with IEEE. 
TO HELP WITH THE PLANNING OF THE CONFERENCE, PLEASE ALSO SEND email or fax 
indicating the name(s) of people who have registered with IEEE and will attend 
(email: mppoi@research.nj.nec.com fax: +USA-609-951-2482 Att. Dr. Eugen Schenfeld).

LOCAL TRANSPORTATION
Star Shuttle provides van service from San Antonio International Airport to the
Menger Hotel for $6.00 per person each way. For more information and
reservations call +USA-(210)366-3183. Other transportation is available at the
airport, including taxi and buses.

CUSTOMS/PASSPORTS
Non-US nationals are advised to check with a travel agent and a US consulate
regarding the visa and passport requirements for entering the United States,
as well as US Customs regulations.

WEATHER & TIME
San Antonio's weather in late October ranges from the low 60s to the 70s Fahrenheit.
Climate is dry and perfect for sightseeing the many attractions the city and
surroundings have to offer.

JOIN US!
MPPOI'95 is in an ideal location to bring along family.  Your traveling
companions will be well entertained while you are participating in the
conference events.  For those who plan to spend the weekend before the
conference in San Antonio, we suggest consulting a travel agent and
the hotel for information on sightseeing and other local activities.  Please
note that the hotel rate is valid for the nights of Oct. 22-24. If you
wish to stay over a Saturday night (Oct. 21st), the hotel will try its best
to accommodate you at the same rate. Once you make a reservation,
please make sure to ask for the night of Oct. 21st. If not available,
you will be placed on a waiting list. Chances are you may get it, but
currently it is not possible to confirm this.

==========================================
****** NSF TRAVEL SUPPORT (PENDING) ******
==========================================

The National Science Foundation (NSF) is considering awarding travel support
for minority and female faculty members as well as for graduate students. This
travel award is pending final approval by the NSF and is available for authors
presenting papers at the MPPOI'95 conference. For details on the travel support
and to obtain a Request Form, please contact (by email, fax, or phone) the
Conference Chair at the above address.

\end{verbatim}
\newpage
\begin{verbatim}

STEERING COMMITTEE
==================

J. Goodman, Stanford University		     
L. Johnsson, University of Houston
S. Lee, University of California 	     
R. Melhem, University of Pittsburgh
E. Schenfeld, NEC Research Institute (Chair) 
P. Wang, George Mason University

CONFERENCE CHAIR:
================

        Dr. Eugen Schenfeld             (voice) (609)951-2742
        NEC Research Institute          (fax)   (609)951-2482
        4 Independence Way              email: MPPOI@RESEARCH.NJ.NEC.COM
        Princeton, NJ 08540, USA 

PUBLICITY CHAIR:   D. Quammen, George Mason University.
================

LOCAL ARRANGEMENTS CHAIR:  X. Zhang, University of Texas at San Antonio.
========================

PROGRAM COMMITTEE:
=================

Pierre Chavel, Institut d'Optique, Orsay, France
Alan Craig, Air Force Office of Scientific Research (AFOSR)
Karsten Decker, Swiss Scientific Computing Center, Manno, Switzerland
Jack Dennis, Lab. for CS, MIT, Cambridge, MA
Patrick Dowd, Dept. of ECE, SUNY at Buffalo, Buffalo, NY
Mary Eshaghian, Dept. of CS, NJIT, Newark, NJ
John Feo, Comp. Res. Grp., Lawrence Livermore Nat. Lab., Livermore, CA
Michael Flynn, Department of EE, Stanford University, Stanford, CA
Edward Frietman, Faculty of Applied Physics, Delft U., Delft, The Netherlands
Asher Friesem, Dept. of Electronics, Weizmann Inst., Israel
Kanad Ghose, Dept. of CS, SUNY at Binghamton, Binghamton NY
Allan Gottlieb, Dept. of CS, New York University, New York, NY
Joe Goodman, Department of EE, Stanford University, Stanford, CA
Alan Huang, Terabit Corp., Middletown, NJ
Oscar H. Ibarra, Department of Computer Science, UCSB, CA
Yoshiki Ichioka, Dept. of Applied Physics, Osaka U., Osaka, Japan
Leah Jamieson, School of ECE, Purdue University, West Lafayette, IN
Lennart Johnsson, Dept. of Computer Science, University of Houston, Houston TX
Kenichi Kasahara, Opto-Electronics Basic Res. Lab., NEC Corporation, Japan
Israel Koren, Dept. of ECS, U. of Mass, Amherst, MA
Raymond Kostuk, Dept. of ECE, U. of Arizona, Tucson, AZ
Ashok V. Krishnamoorthy, AT&T Bell Laboratories, Holmdel NJ
Sing Lee, Dept. of EE, UCSD, La Jolla, CA
Steve Levitan, Department of EE, U. of Pittsburgh, Pittsburgh, PA
Adolf Lohmann, Institute of Physics, U. of Erlangen, Erlangen, Germany
Ahmed Louri, Dept. of ECE, U. of Arizona, Tucson, AZ
Miroslaw Malek, Dept. of ECE, U. of Texas at Austin, Austin TX
Rami Melhem, Dept. of CS, University of Pittsburgh, Pittsburgh, PA
J. R. Moulic, IBM T. J. Watson Research Center, Yorktown Heights, NY
Miles Murdocca, Department of CS, Rutgers University, New Brunswick, NJ
John Neff, Opto-elec. Comp. Sys., U. of Colorado, Boulder, CO
Paul Prucnal, Department of EE, Princeton U., Princeton, NJ
Donna Quammen, Dept of CS, George Mason University, USA
John Reif, Department of CS, Duke University, Durham, NC
A. B. Ruighaver, Dept. of CS, U. of Melbourne, Victoria, Australia
A. A. Sawchuk, Dept. of EE, USC, Los-Angeles, CA
Eugen Schenfeld, NEC Research Institute, Princeton, NJ
Charles W. Stirk, Optoelectronic Data Systems, Inc., Boulder, CO
Pearl Wang, Dept. of CS, George Mason University, USA
Les Valiant, Div. of Applied Science, Harvard University, Cambridge, MA
Albert Zomaya, Dept. of EAEE, U. of Western Australia, Western Australia

SESSION CHAIRS
==============

P. Dowd, State University of New York at Buffalo
E. E. E. Frietman, Delft University of Technology, The Netherlands
R. Kostuk, University of Arizona at Tucson
A. Krishnamoorthy, AT&T Bell Laboratories
S. Levitan, University of Pittsburgh
Y. Li, NEC Research Institute
A. Louri, University of Arizona at Tucson
P. Wang, George Mason University

PANEL MODERATORS
================

E. E. E. Frietman, Delft University of Technology, The Netherlands
Y. Li, NEC Research Institute

INVITED SPEAKERS
================
Michael Flynn, Stanford University
G. Fox, Northeast Parallel Architectures Center at Syracuse University
S. L. Johnsson, University of Houston
H. S. Hinton, University of Colorado at Boulder
Alan Huang, Terabit Corp.
H. T. Kung, Harvard University
D. Miller, AT&T Bell Labs.
B. Smith, Tera Computers Corp.

========================================================================
\end{verbatim}
\newpage
\begin{verbatim}
_____________________________________________________________________

                      MPPOI '95 PROGRAM SCHEDULE
_____________________________________________________________________

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
***** INVITED TALKS: 40 Minutes. REGULAR TALKS: 20 Minutes *****
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

========================================
SUNDAY, OCTOBER 22, 1995
========================================

6:00 PM - 7:30 PM
REGISTRATION 

========================================
Monday, October 23, 1995
========================================

7:00 AM - 8:00 AM       
CONTINENTAL BREAKFAST 
________________________________________

7:00 AM - 11:30 AM  AND  2:00 PM - 4:00 PM
CONFERENCE REGISTRATION 
________________________________________

8:00 AM - 8:20 AM
OPENING REMARKS
Eugen Schenfeld, NEC Research Institute

8:20 AM - 10:00 AM
Session I
Chair: R. Kostuk, University of Arizona at Tucson

Large-Scale ATM Switches for Data Communications: Are These Switches
Limited by Interconnect, Memory or Else?
H. T. Kung, Harvard University (INVITED) 

Design Issues for Through-Wafer Optoelectronic Multicomputer Interconnects
P. May, N. M. Jokerst, D. S. Wills, S. Wilkinson, M. Lee, O. Vendier, S. Bond,
Z. Hou, G. Dagnall, M. A. Brooke, A. Brown, Georgia Institute of Technology

Design of a Terabit Free-Space Photonic Backplane for Parallel Computing
T. H. Szymanski, and  H. S. Hinton, McGill University and University of Colorado

Optical Interconnection Network for Massively Parallel Processors 
Using Beam-Steering Vertical Cavity Surface-Emitting Lasers
L. Fan, and M. C. Wu, University of California at Los Angeles;
H. C. Lee, and P. Grodzinski, Motorola Inc.
________________________________________

10:00 AM - 10:30 AM     
MID-MORNING BREAK
________________________________________

10:30 AM - 1:00 PM
PARALLEL SESSIONS: II AND III
_________________________________________

10:30 AM - 1:00 PM
Session II 
Chair: A. Krishnamoorthy, AT&T Bell Laboratories

The Role of Representation in Optimizing a Computer Architecture
Michael Flynn, Stanford University (INVITED)

An Evaluation of Communication Protocols for Star-Coupled
Multidimensional WDM Networks for Multiprocessors
K. R. Desai, and K. Ghose, State University of New York at Binghamton

Small Depth Beam-Steered Optical Interconnect
M. Murdocca, H. R. Nahata, and Y. Zhou, Rutgers University

Optical Fiber Interconnection System for Massively Parallel Processor Arrays
Y.-M. Zhang, X.-Q. He, G. Zhou, W.-Y. Liu, Y. Wang, 
Z.-P. Yin, and H.-Y. Wang, Tianjin University, P. R. of China

A Case Study for the Implementation of a Stochastic Bit Stream Neuron; 
The Choice Between Electrical and Optical Interconnects
M. A. Hands, W. Peiffer, H. Thienpont, A. Kirk, Vrije University, Belgium;
T. J. Hall, King's College, University of London, UK

Characterization of Massively Parallel Smart Pixels Systems for 
The Example of a Binary Associative Memory
D. Fey, Friedrich-Schiller University, Germany
________________________________________

10:30 AM - 1:00 PM
Session III 
Chair: A. Louri, University of Arizona at Tucson

The Challenges Involved in the Design of a 100 Gb/s Internet
Alan Huang, Terabit Corp. (INVITED)

Fault-tolerance in Optically Implemented Multiprocessor Networks
P. Lalwaney, and I. Koren, University of Massachusetts at Amherst

A Speed Cache Coherence Protocol for an Optical
Multi-Access Interconnect Architecture
T. M. Pinkston, and J. Ha, University of Southern California

A Reconfigurable Optical Bus Structure for Shared Memory
Multiprocessors With Improved Performance
S. Ray, and H. Jiang, University of Nebraska-Lincoln

n-Dimensional Processor Arrays with Optical dBuses
G. Liu, and K. Y. Lee, University of Denver;
H. F. Jordan, University of Colorado at Boulder

The Difficulty of Finding Good Embeddings of 
Program Graphs onto the OPAM Architecture
B. Ramamurthy, and M. Krishnamoorthy, Rensselaer Polytechnic Institute
________________________________________

1:00 PM - 2:30 PM
CONFERENCE LUNCH (PROVIDED)
________________________________________

2:30 PM - 4:10 PM
Session IV
Chair: E. E. E. Frietman, Delft University of Technology, The Netherlands

Intelligent Optical Backplanes
H. S. Hinton, University of Colorado at Boulder (INVITED)

Connection Cube and Interleaved Optical Backplane for a Multiprocessor Data Bus
R. K. Kostuk, T. J. Kim, D. Ramsey, T.-H. Oh, and R. Boye
University of Arizona at Tucson

An Efficient 3-D Optical Implementation of Binary de Bruijn 
Networks with Applications to Massively Parallel Computing
A. Louri, and  H. Sung, 
University of Arizona at Tucson

Performance Evaluation of 3D Optoelectronic Computer
Architectures on FFT and Sorting Benchmarks
G. A. Betzos, and P. A. Mitkas, Colorado State University
________________________________________

4:10 PM - 4:30 PM
AFTERNOON BREAK
________________________________________

4:30 PM - 6:30 PM
CONFERENCE PANEL I
OPTICS FOR INTERCONNECTION: INDUSTRY'S INTERESTS and RESPONSIBILITIES
MODERATOR: Y. Li, NEC Research Institute

PANELISTS: R. Chen, University of Texas at Austin; N. Dutta, AT&T Bell Labs; 
K. Kobayashi, NEC Corp.; Y. S. Liu, General Electric; B. Pecor, Cray Research;
J. Rowlette, AMP Corp.; B. Smith, Tera Computers Corp.
________________________________________

7:00 PM - 8:30 PM
ACQUAINTANCE RECEPTION

Meet some of the MPPOI participants. 
Food and small talk opportunity provided.

========================================
TUESDAY, OCTOBER 24, 1995
========================================

7:00 AM - 8:00 AM
CONTINENTAL BREAKFAST
________________________________________

7:00 AM - 11:30 AM  
CONFERENCE REGISTRATION
________________________________________

8:00 AM - 10:00 AM
Session V
Chair: S. Levitan, University of Pittsburgh

Hybrid SEED - Massively Parallel Optical Interconnections for Silicon ICs
D. Miller, AT&T Bell Labs. (INVITED)

Construction of Demonstration Parallel Optical Processors based on
CMOS/InGaAs Smart Pixel Technology
A. Walker, M. P. Y. Desmulliez, F. A. P. Tooley, 
D. T. Neilson, J. A. B. Dines, D. A. Baillie, 
S. M. Prince, L. C. Wilkinson, M. R. Taghizadeh, P. Blair, 
J. F. Snowdon, and B. S. Wherrett, Heriot-Watt University, Scotland;
C. Stanley, and F. Pottier, University of Glasgow, Scotland;
I. Underwood, and D. G. Vass, University of Edinburgh, Scotland;
W. Sibbett, and M. H. Dunn, University of St.-Andrews, Scotland.

General Purpose Bi-Directional Optical Backplane Bus
C. Zhao, S. Natarajan, and R. T. Chen, University of Texas at Austin

Efficient Communication Scheme For Distributed Parallel Processor Systems
P. Kohler, and A. Gunzinger, Swiss Federal Institute of Technology, Switzerland

What Limits Capacity and Connectivity in Optical Interconnects
Y. Li, NEC Research Institute
________________________________________

10:00 AM - 10:30 AM      
MID-MORNING BREAK
________________________________________

________________________________________

10:30 AM - 12:30 PM
Session VI
Chair: P. Dowd, State University of New York at Buffalo

Data Partitioning for Load-Balance and Communication Bandwidth Preservation
S. L. Johnsson, University of Houston (INVITED)

Embedding Rings and Meshes in Partitioned Optical Passive Stars Networks
G. Gravenstreter, and R. G. Melhem, University of Pittsburgh

Optical Thyristor Based Subsystems for Digital Parallel Processing:  
Demonstrators and Future Perspectives
H. Thienpont, A. Kirk, and I. Veretennicoff, Vrije University, Belgium;
P. Heremans, B. Knupfer, and G. Borghs, IMEC Corp., Belgium;
M. Kuijk, and R. Vounckx, Vrije University, Belgium

Computer-Aided Design of Free-Space Optoelectronic Interconnection Systems
S. P. Levitan, P. J. Marchand, M. Rempel, D. M. Chiarulli, and F. B. McCormick, 
University of Pittsburgh and University of California at San Diego

Optical Design of a Fault Tolerant Self-Routing Switch for
Massively Parallel Processing Networks
M. Guizani, M. A. Memon, and S. Ghanta, King Fahd University, Saudi Arabia
________________________________________

12:30 PM - 1:30 PM   
LUNCH (ON YOUR OWN)
________________________________________

1:30 PM - 3:50 PM      
PARALLEL SESSIONS: VII and VIII
________________________________________

1:30 PM - 3:50 PM 
Session VII
Chair: P. Wang, George Mason University

Interconnection Networks for Shared Memory Parallel Computers
B. Smith, Tera Computers Corp. (INVITED)

A Comparative Study of One-to-Many WDM Lightwave 
Interconnection Networks for Multiprocessors
H. Bourdin, and A. Ferreira, CNRS - LIP ENS Lyon, France;
K. Marcus, ARTEMIS IMAG, Grenoble, France

Planar Optical Interconnections for 100Gb/s Packet Address Detection
S. H. Song and E.-H. Lee, 
Electronics & Telecommunications Research Institute, Taejon, South Korea

A Pipelined Self-Routing Optical Multichannel Time Slot Permutation Network
R. Kannan, H. F. Jordan, K. Y. Lee, and C. Reed,
University of Denver; University of Colorado at Boulder; 
and The Institute for Defense Analysis

Optical Interconnect Design for a Manufacturable Multicomputer
R. R. Krchnavek, R. D. Chamberlain, T. Barry, V. Malhotra, and Z. Dittia,
Washington University in St. Louis, Missouri

Hypercube Interconnection in TWDM Optical Passive Star Networks
S.-K. Lee, A. D. Oh, and H.-A. Choi, George Washington University
________________________________________

1:30 PM - 3:50 PM 
Session VIII
Chair: Y. Li, NEC Research Institute

From Today's Desktop Gigaflop to Tomorrow's Central Petaflop;
From Grand Challenges to the Information Age;
The Applications Driving Parallel Computing and Their Architecture Implications
G. Fox, Northeast Parallel Architectures Center at Syracuse University (INVITED)

A Fiber-Optic Interconnection Concept for Scalable Massively Parallel Computing
M. Jonsson, K. Nilsson, and B. Svensson,
Halmstad University; and Chalmers University of Technology, Goteborg, Sweden

All-Optical Interconnects for Massively Parallel Processing
C. S. Ih, R. Tian, X. Xia, J. Chao, and Y. Wang, University of Delaware

Predictive Control of Opto-Electronic Reconfigurable 
Interconnection Networks Using Neural Networks
M. F. Sakr, S. P. Levitan, C. L. Giles, B. C. Horne, 
M. Maggini, and D. M. Chiarulli,
University of Pittsburgh; NEC Research Institute; and Firenze University, Italy

The Simultaneous Optical Multiprocessor Exchange Bus
J. Kulick, W. E. Cohen, C. Katsinis, E. Wells, A. Thomsen,
M. Abushagur, R. K. Gaede, R. Lindquist, G. Nordin, and D. Shen;
University of Alabama in Huntsville

On Some Architectural Issues of Optical Hierarchical Ring
Networks for Shared-Memory Multiprocessors
H. Jiang, C. Lam, and V. C. Hamacher, 
University of Nebraska-Lincoln; and Queen's University, Kingston, Canada
________________________________________

3:50 PM - 4:15 PM       
AFTERNOON BREAK
________________________________________




________________________________________

4:15 PM - 6:15 PM       
CONFERENCE PANEL II
OPTO-ELECTRONIC PROCESSING & NETWORKING IN MASSIVELY PARALLEL PROCESSING SYSTEMS
MODERATOR: E. E. E. Frietman, Delft University of Technology, The Netherlands

PANELISTS: C. Jesshope, University of Surrey, Surrey, UK; H. F. Jordan, 
University of Colorado at Boulder; G. D. Khoe, Eindhoven University of 
Technology, Eindhoven, The Netherlands; A. V. Krishnamoorthy, AT&T Bell Labs.; 
I. Koren, University of Massachusetts at Amherst; A. McAulay, Lehigh University;
I. MacDonald, Telecommunications Research Laboratories, Edmonton, Canada; 
M. Murdocca, Rutgers University; A. B. Ruighaver, Melbourne University, 
Australia; J. Sauer, University of Colorado at Boulder; H. Thienpont, Vrije 
Universiteit, Belgium; A. Walker, Heriot-Watt University, Edinburgh, Scotland.
________________________________________

6:15 PM - 6:30 PM 

CLOSING REMARKS: ANNOUNCING MPPOI '96 AND FUTURE MEETING PLANS
Eugen Schenfeld, NEC Research Institute
________________________________________

6:30 PM - 8:00 PM
CONFERENCE DINNER (PROVIDED)
________________________________________

==============================================================================
\end{verbatim}
\newpage
\begin{verbatim}
                           Registration Form
                               MPPOI'95
                             Menger  Hotel
                           San Antonio, Texas
                          October 23-24, 1995

      TO REGISTER, MAIL OR FAX THIS FORM TO: MPPOI registration,
      IEEE Computer Society, 1730 Massachusetts Ave., N.W.,
      Washington DC 20036-1992, USA. Fax: +USA-202-728-0884
      For information, call +USA-202-371-1013 - Sorry, no phone registration.

Name:----------------------------------------------------------------------
       Last                           First                        MI
Company:-------------------------------------------------------------------
Address:-------------------------------------------------------------------
City/State/Zip/Country:----------------------------------------------------
Daytime phone:----------------------- Fax number---------------------------
E-mail address:------------------------------------------------------------
IEEE/ACM/OSA/SPIE Member Number:   ------------------
Do you have any special needs: --------------------------------------------
---------------------------------------------------------------------------
Do not include my mailing address on:
-- Non-society mailing lists         -- Meeting Attendee lists

Please circle the appropriate registration fee:
Advance (before October 2, 1995)         Late (before October 16, 1995) / on site
  Member $300                              Member $360
  Non-member $375                          Non-member $450
  Full-time student $150                   Full-time student $180

Total enclosed:$ --------------------------------
Please make all checks payable to: IEEE Computer Society. All checks must be in
US dollars drawn on US banks. Credit card charges will appear on the statement as
"IEEE Computer Society Registration". Written requests for refunds must be
received by the IEEE office before October 2, 1995. Refunds are subject to a $50
processing fee. Methods of payment accepted (payment must accompany form):
-- Personal check               -- Company check        -- Traveler's check
-- American Express             -- Master Card          -- VISA
-- Diners Club                  -- Government purchase order (original)

Credit card number: -------------------------- Expiration date: ------------
Cardholder name   : --------------------------
Signature         : --------------------------

Non-student registration fees include conference attendance, proceedings,
continental breakfast, refreshments at breaks, the conference reception, one
conference lunch and one conference dinner. Student registration fees
***DO NOT*** include the lunch and ***DO NOT*** include the dinner.
===========================================================================
\end{verbatim}
\newpage
\begin{verbatim}
______________________________________________________________________

                     MPPOI'95 HOTEL RESERVATION
                          The Menger Hotel
                         San Antonio, Texas
______________________________________________________________________

   PLEASE MAKE RESERVATIONS WITH THE MENGER HOTEL AS SOON AS POSSIBLE TO 
   GUARANTEE THE $90 RATE (HOTEL PHONE AND FAX NUMBERS ARE GIVEN BELOW). 

 * The special MPPOI'95 group rate of US $90.00 (single or double) is available 
   from October 22 through October 25, 1995. All rates are subject to additional
   local and state taxes.  These rates will be available for reservations made 
   BEFORE September 22, 1995. Please note that the period Sep. to Nov. is the 
   high season in San Antonio and hotels are usually booked in advance. We 
   urge you to make reservations as soon as possible.
   If you wish to stay over Saturday night (Oct. 21st), the hotel will TRY its 
   best to accommodate you at the same rate. 

 * The MENGER HOTEL CONTACT POINTS: Phone: 1-800-345-9285 (USA or Canada);
   Phone: +USA-210-223-4361 (other countries);  Fax:  +USA-210-228-0022   

 * ALTERNATIVE LIST OF HOTELS (RATES AND RANK from AAA Tour Book): 

   IN CASE THE MENGER HOTEL IS FULL, here is a list of other nearby hotels (all 
   within walking distance of the Menger, in the downtown area of San Antonio). 
   These hotels have no arrangement with MPPOI and therefore you should not 
   identify yourself as a member of a group or conference. The arrangement with 
   these hotels is on a "one to one" basis, as with any other business traveler. 
   An early reservation is suggested. Also, it is always a good idea to look for 
   "specials" (e.g., advance-paid rates, weekend specials, AAA rates, etc.). Also 
   please note that from the USA you may call the 800 directory (1-800-555-1212) 
   and ask for the 800 number of the hotel chain (such hotels are marked with a 
   '#' mark below); rates and rankings are taken from the AAA Tour Book 1994:

                                AAA   Typical       Phone            Fax 
                                Rank  Rate ($)     (+USA)           (+USA)
   
   * St. Anthony Hotel           4    106-130   (210)227-4392     none listed
   * Emily Morgan                3       85     (210)225-8486     none listed
   * Crockett Hotel              3     75-105   (210)225-6500     none listed
   # Hyatt Regency               4    119-170   (210)222-1234    (210)227-4925
   * La Mansion del Rio          4    135-220   (210)225-2581    (210)226-1365
   # Holiday Inn Riverwalk       3     95-119   (210)224-2500    (210)223-1302
   # Hilton Palacio del Rio      4    154-196   (210)222-1400    (210)270-0761
   * The Fairmount Hotel         4    145-275   (210)224-8800    (210)224-2767
   # Marriott Riverwalk          4    135-150   (210)224-4555    (210)224-2754
   # La Quinta Motor Inn         3     83-90    (210)222-9181    (210)228-9816
   # Marriott Rivercenter        4      160     (210)223-1000    (210)223-6239
________________________________________________________________________________
\end{verbatim}

\end{document}


=================================================


        Eugen Schenfeld

From owner-mpi-collcomm@CS.UTK.EDU Fri Aug 11 17:04:38 1995
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id RAA02875; Fri, 11 Aug 1995 17:04:38 -0400
Received: from localhost by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id RAA06921; Fri, 11 Aug 1995 17:13:33 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Fri, 11 Aug 1995 17:13:31 EDT
Errors-to: owner-mpi-collcomm@CS.UTK.EDU
Received: from zingo by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id RAA06778; Fri, 11 Aug 1995 17:11:55 -0400
Received: by zingo (940816.SGI.8.6.9/YDL1.4-910307.16)
	id RAA24478(zingo); Fri, 11 Aug 1995 17:02:28 -0400
Received:  by iris49 (940816.SGI.8.6.9/cliff's joyful mailer #2)
	id QAA24675(iris49); Fri, 11 Aug 1995 16:14:16 -0400
Date: Fri, 11 Aug 1995 16:14:16 -0400
From: eugen@research.nj.nec.com (Eugen Schenfeld)
Message-Id: <199508112014.QAA24675@iris49>
To: mppoi@research.nj.nec.com
Subject: MPPOI'95 Advance Program


++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+PLEASE DISTRIBUTE  PLEASE DISTRIBUTE  PLEASE DISTRIBUTE  PLEASE DISTRIBUTE+
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

ADVANCE REGISTRATION DEADLINE FOR LOWER CONFERENCE FEE IS OCT. 2, 1995.

THE FOLLOWING IS IN LaTeX FORMAT. For other formats, please email:
mppoi@research.nj.nec.com

=================================================================================

\documentstyle[fullpage]{article}

\begin{document}

\begin{verbatim}
==========================================================================
                 The Second International Conference on
        MASSIVELY PARALLEL PROCESSING USING OPTICAL INTERCONNECTIONS
===========================================================================

                          October 23-24, 1995
                              Menger Hotel
                        San Antonio, Texas, USA

                             SPONSORED BY:
                         IEEE Computer Society
          IEEE Technical Committee on Computer Architecture (TCCA)

                          IN COOPERATION WITH:
           ACM Special Interest Group on Architecture (SIGARCH)
          The International Society for Optical Engineering (SPIE)
            The IEEE Lasers and Electro-optics Society (LEOS)
                   The Optical Society of America (OSA)

                        ADDITIONAL SUPPORT PROVIDED BY:
                  NSF - The National Science Foundation (pending)

========================================================================
                             ADVANCE PROGRAM
========================================================================

The Second International Conference on Massively Parallel Processing
Architectures using Optical Interconnections (MPPOI '95) is a continuation of
the very successful first meeting held last year in Cancun, Mexico. This year we 
have an exciting program featuring eight invited talks from research and
industrial leaders in the fields of parallel computer systems, optical 
interconnections and technology, parallel applications and interconnection 
networks. We also have two panels with the participation of technological and 
academic experts, representing the current thoughts and trends of the field. 
And last, but not least, there are 34 regular papers accepted for presentation 
from authors all over the world. This rich and diverse program is sure to be 
most interesting and stimulate discussions and interactions among the 
researchers of this interdisciplinary field. The organizers of MPPOI 
strongly feel that massively parallel processing needs optical interconnections 
and optical interconnections need parallel processing. The Conference's focus 
is the possible use of optical interconnections for massively parallel 
processing systems, and their effect on system and algorithm design. Optics 
offers many benefits for interconnecting large numbers of processing elements, 
but may require us to rethink how we build parallel computer systems and 
communication networks, and how we write applications.  Fully exploring the 
capabilities of optical interconnection networks requires an interdisciplinary 
effort. It is critical that researchers from all related research areas are 
aware of each other's work and results. The intent of MPPOI is to assemble the 
leading researchers and to build towards a synergistic approach to MPP 
architectures, optical interconnections, operating systems, and software 
development.
\end{verbatim}
\newpage
\begin{verbatim}
*********************************************************************
                               LOCATION
*********************************************************************

SAN ANTONIO
American humorist and homespun philosopher Will Rogers once described San
Antonio as "One of America's four unique cities." He had a natural instinct
for getting to the very essence of a subject, and his comment about San Antonio
is no exception. San Antonio truly is unique. From its founding in 1691 by
Spanish missionaries, San Antonio has grown from a sleepy little Texas pueblo
to the 9th largest city in the United States. Along the way it has been the
birthplace of the Texas revolution with the Battle of the Alamo in 1836.  It is
the new home of bioscience and hi-tech industry now. In all, over half a dozen
cultures, from Spanish and German to Lebanese and Greek, have impacted the
growth of San Antonio. And their influence is still evident in the architecture,
festivals, cuisine and customs which all contribute to the uniqueness and charm
of the city.

THE ALAMO
An old mission-fort, the Alamo, in San Antonio, has been called the "cradle of
Texas liberty." Its gallant defense and the horrible massacre of the more than
180 men who fought there inspired the cry, "Remember the Alamo!" Texas soldiers
shouted this at the battle of San Jacinto, which brought independence to Texas.

THE MENGER HOTEL
MPPOI '95 will be held in San Antonio's Menger Hotel, a historic landmark hotel.
It is next door to the Alamo, adjacent to Rivercenter Mall, the IMAX Theater and
River Walk and two blocks to the convention center. The hotel fronts Alamo Plaza
where the Sea World shuttle and sightseeing tours depart.

SPDP'95
For those interested in attending SPDP'95, information is available at the
following web site: http://rabbit.cs.utsa.edu/Welcome.html. You must register
for the SPDP'95 conference if you wish to attend it. The advance program and
other information may be obtained from the above location, or from:
Prof. Xiaodong Zhang, email: zhang@runner.utsa.edu , Phone: (210) 691-5541,
FAX: (210) 691-4437.

AIR TRANSPORTATION
United Airlines is the official airline of MPPOI '95. United will provide
attendees with round-trip transportation to San Antonio on United, United
Express, or Shuttle by United scheduled service in the United States and
Canada, at either a 5% discount off any published United, United Express, or
Shuttle by United fare (including First Class) in effect when the tickets are
purchased, subject to all applicable restrictions, or a 10% discount off
applicable BUA or similar fares in effect when tickets are purchased 7 days
in advance. Reservations and schedule information may be obtained by calling
the United Meetings desk at 1-800-521-4041 and referencing Meeting ID Code 599XM.



ACCOMMODATION
The special MPPOI '95 Menger Hotel rate is US $90 for single or double.  Please
see the enclosed information for making your reservation directly with the hotel.

REGISTRATION
Please register for the conference using the attached form DIRECTLY with IEEE. 
TO HELP WITH THE PLANNING OF THE CONFERENCE, PLEASE ALSO SEND email or fax 
indicating the name(s) of the people who register with IEEE and will attend 
(email: mppoi@research.nj.nec.com fax: +USA-609-951-2482 Att. Dr. Eugen Schenfeld).

LOCAL TRANSPORTATION
Star Shuttle provides van service from San Antonio International Airport to the
Menger Hotel for $6.00 per person each way. For more information and
reservations, call +USA-(210)366-3183. Other transportation, including taxis
and buses, is available at the airport.

CUSTOMS/PASSPORTS
Attendees who are not US nationals should check with a travel agent and a US
consulate regarding the visa and passport requirements for entering the United
States, as well as US Customs regulations.

WEATHER & TIME
San Antonio's weather in late October ranges from the low 60s to the 70s
Fahrenheit. The climate is dry and perfect for sightseeing the many attractions
the city and its surroundings have to offer.

JOIN US!
MPPOI'95 is in an ideal location to bring along family.  Your traveling
companions will be well entertained while you are participating in the
conference events.  For those who plan to spend the weekend before the
conference in San Antonio, we suggest consulting a travel agent and
the hotel for information on sightseeing and other local activities.  Please
note that the hotel rate is valid for the nights of Oct. 22-24. If you
wish to stay over Saturday night (Oct. 21st), the hotel will try its best
to accommodate you at the same rate. Once you make a reservation,
please make sure to ask for the night of Oct. 21st. If it is not available,
you will be placed on a waiting list. Chances are you may get it, but
currently it is not possible to confirm this.

==========================================
****** NSF TRAVEL SUPPORT (PENDING) ******
==========================================

The National Science Foundation (NSF) is considering awarding travel support
for minority and female faculty members, as well as for graduate students. This
travel award is pending final approval by the NSF and is available for authors
presenting papers at the MPPOI'95 conference. For details on the travel support
and to obtain a Request Form please contact (email, fax, or phone) the
Conference Chair at the above address.

\end{verbatim}
\newpage
\begin{verbatim}

STEERING COMMITTEE
==================

J. Goodman, Stanford University		     
L. Johnsson, University of Houston
S. Lee, University of California 	     
R. Melhem, University of Pittsburgh
E. Schenfeld, NEC Research Institute (Chair) 
P. Wang, George Mason University

CONFERENCE CHAIR:
================

        Dr. Eugen Schenfeld             (voice) (609)951-2742
        NEC Research Institute          (fax)   (609)951-2482
        4 Independence Way              email: MPPOI@RESEARCH.NJ.NEC.COM
        Princeton, NJ 08540, USA 

PUBLICITY CHAIR:   D. Quammen, George Mason University.
================

LOCAL ARRANGEMENTS CHAIR:  X. Zhang, University of Texas at San Antonio.
========================

PROGRAM COMMITTEE:
=================

Pierre Chavel, Institut d'Optique, Orsay, France
Alan Craig, Air Force Office of Scientific Research (AFOSR)
Karsten Decker, Swiss Scientific Computing Center, Manno, Switzerland
Jack Dennis, Lab. for CS, MIT, Boston, MA
Patrick Dowd, Dept. of ECE, SUNY at Buffalo, Buffalo, NY
Mary Eshaghian, Dept. of CS, NJIT, Newark, NJ
John Feo, Comp. Res. Grp., Lawrence Livermore Nat. Lab., Livermore, CA
Michael Flynn, Department of EE, Stanford University, Stanford, CA
Edward Frietman, Faculty of Applied Physics, Delft U., Delft, The Netherlands
Asher Friesem, Dept. of Electronics, Weizmann Inst., Israel
Kanad Ghose, Dept. of CS, SUNY at Binghamton, Binghamton NY
Allan Gottlieb, Dept. of CS, New York University, New York, NY
Joe Goodman, Department of EE, Stanford University, Stanford, CA
Alan Huang, Terabit Corp., Middletown, NJ
Oscar H. Ibarra, Department of Computer Science, UCSB, CA
Yoshiki Ichioka, Dept. of Applied Physics, Osaka U., Osaka, Japan
Leah Jamieson, School of ECE, Purdue University, West Lafayette, IN
Lennart Johnsson, Dept. of Computer Science, University of Houston, Houston TX
Kenichi Kasahara, Opto-Electronics Basic Res. Lab., NEC Corporation, Japan
Israel Koren, Dept. of ECS, U. of Mass, Amherst, MA
Raymond Kostuk, Dept. of ECE, U. of Arizona, Tucson, AZ
Ashok V. Krishnamoorthy, AT&T Bell Laboratories, Holmdel NJ
Sing Lee, Dept. of EE, UCSD, La Jolla, CA
Steve Levitan, Department of EE, U. of Pittsburgh, Pittsburgh, PA
Adolf Lohmann, Institute of Physics, U. of Erlangen, Erlangen, Germany
Ahmed Louri, Dept. of ECE, U. of Arizona, Tucson, AZ
Miroslaw Malek, Dept. of ECE, U. of Texas at Austin, Austin TX
Rami Melhem, Dept. of CS, University of Pittsburgh, Pittsburgh, PA
J. R. Moulic, IBM T. J. Watson Research Center, Yorktown Heights, NY
Miles Murdocca, Department of CS, Rutgers University, New Brunswick, NJ
John Neff, Opto-elec. Comp. Sys., U. of Colorado, Boulder, CO
Paul Prucnal, Department of EE, Princeton U., Princeton, NJ
Donna Quammen, Dept of CS, George Mason University, USA
John Reif, Department of CS, Duke University, Durham, NC
A. B. Ruighaver, Dept. of CS, U. of Melbourne, Victoria, Australia
A. A. Sawchuk, Dept. of EE, USC, Los-Angeles, CA
Eugen Schenfeld, NEC Research Institute, Princeton, NJ
Charles W. Stirk, Optoelectronic Data Systems, Inc., Boulder, CO
Pearl Wang, Dept. of CS, George Mason University, USA
Les Valiant, Div. of Applied Science, Harvard University, Cambridge, MA
Albert Zomaya, Dept. of EAEE, U. of Western Australia, Western Australia

SESSION CHAIRS
==============

P. Dowd, State University of New York at Buffalo
E. E. E. Frietman, Delft University of Technology, The Netherlands
R. Kostuk, University of Arizona at Tucson
A. Krishnamoorthy, AT&T Bell Laboratories
S. Levitan, University of Pittsburgh
Y. Li, NEC Research Institute
A. Louri, University of Arizona at Tucson
P. Wang, George Mason University

PANEL MODERATORS
================

E. E. E. Frietman, Delft University of Technology, The Netherlands
Y. Li, NEC Research Institute

INVITED SPEAKERS
================
Michael Flynn, Stanford University
G. Fox, Northeast Parallel Architectures Center at Syracuse University
S. L. Johnsson, University of Houston
H. S. Hinton, University of Colorado at Boulder
Alan Huang, Terabit Corp.
H. T. Kung, Harvard University
D. Miller, AT&T Bell Labs.
B. Smith, Tera Computers Corp.

========================================================================
\end{verbatim}
\newpage
\begin{verbatim}
_____________________________________________________________________

                      MPPOI '95 PROGRAM SCHEDULE
_____________________________________________________________________

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
***** INVITED TALKS: 40 Minutes. REGULAR TALKS: 20 Minutes *****
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

========================================
SUNDAY, OCTOBER 22, 1995
========================================

6:00 PM - 7:30 PM
REGISTRATION 

========================================
Monday, October 23, 1995
========================================

7:00 AM - 8:00 AM       
CONTINENTAL BREAKFAST 
________________________________________

7:00 AM - 11:30 AM  AND  2:00 PM - 4:00 PM
CONFERENCE REGISTRATION 
________________________________________

8:00 AM - 8:20 AM
OPENING REMARKS
Eugen Schenfeld, NEC Research Institute

8:20 AM - 10:00 AM
Session I
Chair: R. Kostuk, University of Arizona at Tucson

Hybrid SEED - Massively Parallel Optical Interconnections for Silicon ICs
D. Miller, AT&T Bell Labs. (INVITED)

Design Issues for Through-Wafer Optoelectronic Multicomputer Interconnects
P. May, N. M. Jokerst, D. S. Wills, S. Wilkinson, M. Lee, O. Vendier, S. Bond,
Z. Hou, G. Dagnall, M. A. Brooke, A. Brown, Georgia Institute of Technology

Design of a Terabit Free-Space Photonic Backplane for Parallel Computing
T. H. Szymanski, and  H. S. Hinton, McGill University and University of Colorado

Optical Interconnection Network for Massively Parallel Processors 
Using Beam-Steering Vertical Cavity Surface-Emitting Lasers
L. Fan, and M. C. Wu, University of California at Los Angeles;
H. C. Lee. and P. Grodzinski, Motorola Inc.
________________________________________

10:00 AM - 10:30 AM     
MID-MORNING BREAK
________________________________________

10:30 AM - 1:00 PM
PARALLEL SESSIONS: II AND III
_________________________________________

10:30 AM - 1:00 PM
Session II 
Chair: A. Krishnamoorthy, AT&T Bell Laboratories

The Role of Representation in Optimizing a Computer Architecture
Michael Flynn, Stanford University (INVITED)

An Evaluation of Communication Protocols for Star-Coupled
Multidimensional WDM Networks for Multiprocessors
K. R. Desai, and K. Ghose, State University of New York at Binghamton

Small Depth Beam-Steered Optical Interconnect
M. Murdocca, H. R. Nahata, and Y. Zhou, Rutgers University

Optical Fiber Interconnection System for Massively Parallel Processor Arrays
Y.-M. Zhang, X.-Q. He, G. Zhou, W.-Y. Liu, Y. Wang, 
Z.-P. Yin, and H.-Y. Wang, Tianjin University, P. R. of China

A Case Study for the Implementation of a Stochastic Bit Stream Neuron; 
The Choice Between Electrical and Optical Interconnects
M. A. Hands, W. Peiffer, H. Thienpont, A. Kirk, Vrije University, Belgium;
T. J. Hall, King's College, University of London, UK

Characterization of Massively Parallel Smart Pixels Systems for 
The Example of a Binary Associative Memory
D. Fey, Friedrich-Schiller University, Germany
________________________________________

10:30 AM - 1:00 PM
Session III 
Chair: A. Louri, University of Arizona at Tucson

The Challenges Involved in the Design of a 100 Gb/s Internet
Alan Huang, Terabit Corp. (INVITED)

Fault-tolerance in Optically Implemented Multiprocessor Networks
P. Lalwaney, and I. Koren, University of Massachusetts at Amherst

A Speed Cache Coherence Protocol for an Optical
Multi-Access Interconnect Architecture
T. M. Pinkston, and J. Ha, University of Southern California

A Reconfigurable Optical Bus Structure for Shared Memory
Multiprocessors With Improved Performance
S. Ray, and H. Jiang, University of Nebraska-Lincoln

n-Dimensional Processor Arrays with Optical dBuses
G. Liu, and K. Y. Lee, University of Denver;
H. F. Jordan, University of Colorado at Boulder

The Difficulty of Finding Good Embeddings of 
Program Graphs onto the OPAM Architecture
B. Ramamurthy, and M. Krishnamoorthy, Rensselaer Polytechnic Institute
________________________________________

1:00 PM - 2:30 PM
CONFERENCE LUNCH (PROVIDED)
________________________________________

2:30 PM - 4:10 PM
Session IV
Chair: E. E. E. Frietman, Delft University of Technology, The Netherlands

Intelligent Optical Backplanes
H. S. Hinton, University of Colorado at Boulder (INVITED)

Connection Cube and Interleaved Optical Backplane for a Multiprocessor Data Bus
R. K. Kostuk, T. J. Kim, D. Ramsey, T.-H. Oh, and R. Boye
University of Arizona at Tucson

An Efficient 3-D Optical Implementation of Binary de Bruijn 
Networks with Applications to Massively Parallel Computing
A. Louri, and  H. Sung, 
University of Arizona at Tucson

Performance Evaluation of 3D Optoelectronic Computer
Architectures on FFT and Sorting Benchmarks
G. A. Betzos, and P. A. Mitkas, Colorado State University
________________________________________

4:10 PM - 4:30 PM
AFTERNOON BREAK
________________________________________

4:30 PM - 6:30 PM
CONFERENCE PANEL I
OPTICS FOR INTERCONNECTION: INDUSTRY'S INTERESTS and RESPONSIBILITIES
MODERATOR: Y. Li, NEC Research Institute

PANELISTS: R. Chen, University of Texas at Austin; N. Dutta, AT&T Bell Labs; 
N. Henmi, NEC Corp.; Y. S. Liu, General Electric; B. Pecor, Cray Research;
J. Rowlette, AMP Corp.; B. Smith, Tera Computers Corp.
________________________________________

7:00 PM - 8:30 PM
ACQUAINTANCE RECEPTION

Meet some of the MPPOI participants. 
Food and small talk opportunity provided.

========================================
TUESDAY, OCTOBER 24, 1995
========================================

7:00 AM - 8:00 AM
CONTINENTAL BREAKFAST
________________________________________

7:00 AM - 11:30 AM  
CONFERENCE REGISTRATION
________________________________________

8:00 AM - 10:00 AM
Session V
Chair: S. Levitan, University of Pittsburgh

Flow-Controlled ATM Switches for Available Bit Rate Services
H. T. Kung, Harvard University (INVITED)

Construction of Demonstration Parallel Optical Processors based on
CMOS/InGaAs Smart Pixel Technology
A. Walker, M. P. Y. Desmulliez, F. A. P. Tooley, 
D. T. Neilson, J. A. B. Dines, D. A. Baillie, 
S. M. Prince, L. C. Wilkinson, M. R. Taghizadeh, P. Blair, 
J. F. Snowdon, and B. S. Wherrett, Heriot-Watt University, Scotland;
C. Stanley, and F. Pottier, University of Glasgow, Scotland;
I. Underwood, and D. G. Vass, University of Edinburgh, Scotland;
W. Sibbett, and M. H. Dunn, University of St.-Andrews, Scotland.

General Purpose Bi-Directional Optical Backplane Bus
C. Zhao, S. Natarajan, and R. T. Chen, University of Texas, at Austin

Efficient Communication Scheme For Distributed Parallel Processor Systems
P. Kohler, and A. Gunzinger, Swiss Federal Institute of Technology, Switzerland

What Limits Capacity and Connectivity in Optical Interconnects
Y. Li, NEC Research Institute
________________________________________

10:00 AM - 10:30 AM      
MID-MORNING BREAK
________________________________________

10:30 AM - 12:30 PM
Session VI
Chair: P. Dowd, State University of New York at Buffalo

Data Partitioning for Load-Balance and Communication Bandwidth Preservation
S. L. Johnsson, University of Houston (INVITED)

Embedding Rings and Meshes in Partitioned Optical Passive Stars Networks
G. Gravenstreter, and R. G. Melhem, University of Pittsburgh

Optical Thyristor Based Subsystems for Digital Parallel Processing:  
Demonstrators and Future Perspectives
H. Thienpont, A. Kirk, and I. Veretennicoff, Vrije University, Belgium;
P. Heremans, B. Knupfer, and G. Borghs, IMEC Corp., Belgium;
M. Kuijk, and R. Vounckx, Vrije University, Belgium

Computer-Aided Design of Free-Space Optoelectronic Interconnection Systems
S. P. Levitan, P. J. Marchand, M. Rempel, D. M. Chiarulli, and F. B. McCormick, 
University of Pittsburgh and University of California at San Diego

Optical Design of a Fault Tolerant Self-Routing Switch for
Massively Parallel Processing Networks
M. Guizani, M. A. Memon, and S. Ghanta, King Fahd University, Saudi Arabia
________________________________________

12:30 PM - 1:30 PM   
LUNCH (ON YOUR OWN)
________________________________________

1:30 PM - 3:50 PM      
PARALLEL SESSIONS: VII and VIII
________________________________________

1:30 PM - 3:50 PM 
Session VII
Chair: P. Wang, George Mason University

Interconnection Networks for Shared Memory Parallel Computers
B. Smith, Tera Computers Corp. (INVITED)

A Comparative Study of One-to-Many WDM Lightwave 
Interconnection Networks for Multiprocessors
H. Bourdin, and A. Ferreira, CNRS - LIP ENS Lyon, France;
K. Marcus, ARTEMIS IMAG, Grenoble, France

Planar Optical Interconnections for 100Gb/s Packet Address Detection
S. H. Song and E.-H. Lee, 
Electronics & Telecommunications Research Institute, Taejon, South Korea

A Pipelined Self-Routing Optical Multichannel Time Slot Permutation Network
R. Kannan, H. F. Jordan, K. Y. Lee, and C. Reed,
University of Denver; University of Colorado at Boulder; 
and The Institute for Defense Analysis

Optical Interconnect Design for a Manufacturable Multicomputer
R. R. Krchnavek, R. D. Chamberlain, T. Barry, V. Malhotra, and Z. Dittia,
Washington University at St. Louis, Missouri

Hypercube Interconnection in TWDM Optical Passive Star Networks
S.-K. Lee, A. D. Oh, and H.-A. Choi, George Washington University
________________________________________

1:30 PM - 3:50 PM 
Session VIII
Chair: Y. Li, NEC Research Institute

From Today's Desktop Gigaflop to Tomorrow's Central Petaflop;
From Grand Challenges to the Information Age;
The Applications Driving Parallel Computing and Their Architecture Implications
G. Fox, Northeast Parallel Architectures Center at Syracuse University (INVITED)

A Fiber-Optic Interconnection Concept for Scalable Massively Parallel Computing
M. Jonsson, K. Nilsson, and B. Svensson,
Halmstad University; and Chalmers University of Technology, Goteborg, Sweden

All-Optical Interconnects for Massively Parallel Processing
C. S. Ih, R. Tian, X. Xia, J. Chao, and Y. Wang, University of Delaware

Predictive Control of Opto-Electronic Reconfigurable 
Interconnection Networks Using Neural Networks
M. F. Sakr, S. P. Levitan, C. L. Giles, B. C. Horne, 
M. Maggini, and D. M. Chiarulli,
University of Pittsburgh; NEC Research Institute; and Firenze University, Italy

The Simultaneous Optical Multiprocessor Exchange Bus
J. Kulick, W. E. Cohen, C. Katsinis, E. Wells, A. Thomsen,
M. Abushagur, R. K. Gaede, R. Lindquist, G. Nordin, and D. Shen;
University of Alabama in Huntsville

On Some Architectural Issues of Optical Hierarchical Ring
Networks for Shared-Memory Multiprocessors
H. Jiang, C. Lam, and V. C. Hamacher, 
University of Nebraska-Lincoln; and Queen's University, Kingston, Canada
________________________________________

3:50 PM - 4:15 PM       
AFTERNOON BREAK
________________________________________


4:15 PM - 6:15 PM       
CONFERENCE PANEL II
OPTO-ELECTRONIC PROCESSING & NETWORKING IN MASSIVELY PARALLEL PROCESSING SYSTEMS
MODERATOR: E. E. E. Frietman, Delft University of Technology, The Netherlands

PANELISTS: C. Jesshope, University of Surrey, Surrey, UK; H. F. Jordan, 
University of Colorado at Boulder; G. D. Khoe, Eindhoven University of 
Technology, Eindhoven, The Netherlands; A. V. Krishnamoorthy, AT&T Bell Labs.; 
I. Koren, University of Massachusetts at Amherst; A. McAulay, Lehigh University;
I. MacDonald, Telecommunications Research Laboratories, Edmonton, Canada; 
M. Murdocca, Rutgers University; A. B. Ruighaver, Melbourne University, 
Australia; J. Sauer, University of Colorado at Boulder; H. Thienpont, Vrije 
Universiteit, Belgium; A. Walker, Heriot-Watt University, Edinburgh, Scotland.
________________________________________

6:15 PM - 6:30 PM 

CLOSING REMARKS: ANNOUNCING MPPOI '96 AND FUTURE MEETING PLANS
Eugen Schenfeld, NEC Research Institute
________________________________________

6:30 PM - 8:00 PM
CONFERENCE DINNER (PROVIDED)
________________________________________


==============================================================================
\end{verbatim}
\newpage
\begin{verbatim}
                           Registration Form
                               MPPOI'95
                             Menger  Hotel
                           San Antonio, Texas
                          October 23-24, 1995

      TO REGISTER, MAIL OR FAX THIS FORM TO: MPPOI registration,
      IEEE Computer Society, 1730 Massachusetts Av, N.W.,
      Washington DC 20036-1992, USA. Fax: +USA-202-728-0884
      For information, call +USA-202-371-1013 - Sorry, no phone registration.

Name:----------------------------------------------------------------------
       Last                           First                        MI
Company:-------------------------------------------------------------------
Address:-------------------------------------------------------------------
City/State/Zip/Country:----------------------------------------------------
Daytime phone:----------------------- Fax number---------------------------
E-mail address:------------------------------------------------------------
IEEE/ACM/OSA/SPIE Member Number:   ------------------
Do you have any special needs: --------------------------------------------
---------------------------------------------------------------------------
Do not include my mailing address on:
-- Non-society mailing lists         -- Meeting Attendee lists

Please circle the appropriate registration fee:
Advance (before October 2, 1995)         Late (before October 16, 1995)/on site.
  Member $300                              Member $360
  Non-member $375                          Non-member $450
  Full-time student $150                   Full-time student $180

Total enclosed:$ --------------------------------
Please make all checks payable to: IEEE Computer Society. All checks must be in
US dollars drawn on US banks. Credit card charges will appear on statement as
"IEEE Computer Society Registration". Written requests for refunds must be
received by the IEEE office before October 2, 1995. Refunds are subject to a $50
processing fee. Methods of payment accepted (payment must accompany form):
-- Personal check               -- Company check        -- Traveler's check
-- American Express             -- MasterCard           -- VISA
-- Diners Club                  -- Government purchase order (original)

Credit card number: -------------------------- Expiration date: ------------
Cardholder name   : --------------------------
Signature         : --------------------------

Non-student registration fees include conference attendance, proceedings,
continental breakfast, refreshments at breaks, conference reception, one conference
lunch and one conference dinner. Student registration fees ***DO NOT*** include
the lunch and ***DO NOT*** include the dinner.
===========================================================================
\end{verbatim}
\newpage
\begin{verbatim}
______________________________________________________________________

                     MPPOI'95 HOTEL RESERVATION
                          The Menger Hotel
                         San Antonio, Texas
______________________________________________________________________

   PLEASE MAKE RESERVATIONS WITH THE MENGER HOTEL AS SOON AS POSSIBLE TO 
   GUARANTEE THE $90 RATE (HOTEL PHONE AND FAX NUMBERS ARE GIVEN BELOW). 

 * The special MPPOI'95 group rate of US $90.00 (single or double) is available 
   from October 22 through October 25, 1995. All rates are subject to additional
   local and state taxes.  These rates will be available for reservations made 
   BEFORE September 22, 1995. Please note that the period Sep. to Nov. is the 
   high season in San Antonio and hotels are usually booked in advance. We 
   urge you to make reservations as soon as possible.
   If you wish to stay over a Sat. night (Oct. 21st), the hotel will TRY its 
   best to accommodate you at the same rate. 

 * The MENGER HOTEL CONTACT POINTS: Phone: 1-800-345-9285 (for USA, or Canada)
   Phone: +USA-210-223-4361 (other countries);  Fax:  +USA-210-228-0022   

 * ALTERNATIVE LIST OF HOTELS (RATES AND RANK from AAA Tour Book): 

   IN CASE THE MENGER HOTEL IS FULL, here is a list of other nearby hotels (all 
   within walking distance of the Menger, in the downtown area of San Antonio). 
   These hotels have no arrangement with MPPOI and therefore you should not 
   identify yourself as a member of a group or conference. The arrangement with 
   these hotels is on a "one to one" basis, as with any other business traveler. 
   An early reservation is suggested. Also, it is always a good idea to look for 
   "specials" (i.e., advance-paid rates, weekend specials, AAA rates, etc.). Also 
   please note that from the USA you may call the 800 directory (1-800-555-1212) 
   and ask for the 800 number of the hotel chain (such hotels are marked with a 
   '#' below); rates and rankings are taken from the AAA Tour Book 1994:

                                AAA   Typical       Phone            Fax 
                                Rank  Rate ($)     (+USA)           (+USA)
   
   * St. Anthony Hotel           4    106-130   (210)227-4392     none listed
   * Emily Morgan                3       85     (210)225-8486     none listed
   * Crockett Hotel              3     75-105   (210)225-6500     none listed
   # Hyatt Regency               4    119-170   (210)222-1234    (210)227-4925
   * La Mansion del Rio          4    135-220   (210)225-2581    (210)226-1365
   # Holiday Inn Riverwalk       3     95-119   (210)224-2500    (210)223-1302
   # Hilton Palacio del Rio      4    154-196   (210)222-1400    (210)270-0761
   * The Fairmount Hotel         4    145-275   (210)224-8800    (210)224-2767
   # Marriott Riverwalk          4    135-150   (210)224-4555    (210)224-2754
   # La Quinta Motor Inn         3     83-90    (210)222-9181    (210)228-9816
   # Marriott Rivercenter        4      160     (210)223-1000    (210)223-6239
________________________________________________________________________________
\end{verbatim}

\end{document}




        Eugen Schenfeld
From owner-mpi-collcomm@CS.UTK.EDU Tue Aug 29 11:25:43 1995
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id LAA02895; Tue, 29 Aug 1995 11:25:43 -0400
Received: from localhost by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id LAA29690; Tue, 29 Aug 1995 11:24:51 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Tue, 29 Aug 1995 11:24:48 EDT
Errors-to: owner-mpi-collcomm@CS.UTK.EDU
Received: from zingo by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id LAA29594; Tue, 29 Aug 1995 11:23:25 -0400
Received: by zingo (940816.SGI.8.6.9/YDL1.4-910307.16)
	id LAA24467(zingo); Tue, 29 Aug 1995 11:13:05 -0400
Received:  by iris49 (940816.SGI.8.6.9/cliff's joyful mailer #2)
	id KAA05392(iris49); Tue, 29 Aug 1995 10:25:14 -0400
Date: Tue, 29 Aug 1995 10:25:14 -0400
From: eugen@research.nj.nec.com (Eugen Schenfeld)
Message-Id: <199508291425.KAA05392@iris49>
To: mppoi95@research.nj.nec.com
Subject: MPPOI'95 NSF Travel Awards



         FINANCIAL AID FOR MINORITY FACULTY AND STUDENTS TO ATTEND
                 THE SECOND INTERNATIONAL CONFERENCE ON
        MASSIVELY PARALLEL PROCESSING USING OPTICAL INTERCONNECTIONS

                   FROM THE NATIONAL SCIENCE FOUNDATION

The National Science Foundation has provided a group travel grant to 
primarily support U.S. minority and female faculty members, as well as 
students studying in the U.S. who plan to attend the Second International 
Conference on Massively Parallel Processing Using Optical Interconnections. 
The conference will be held in San Antonio, Texas on October 23 and 24, 1995. 
Faculty and students presenting papers in the conference will have higher 
priority for this support. If you are eligible, please submit the enclosed 
application before September 30, 1995. For information on the MPPOI'95 
conference please email to: MPPOI@RESEARCH.NJ.NEC.COM.

Faculty and students awarded the grant support will be reimbursed for the
actual expenses for attending the conference up to a maximum of $825 for
faculty members and $670 for students.  Any member of the U.S. scientific
community, irrespective of nationality, who performs work in the U.S. is
eligible to apply. Applicants supported by other grants and contracts are
expected to obtain travel monies from their supporting institutions. Since
only about ten awards will be made under this program, it is important that
other sources be used where available.

Applications received by September 30 will have priority.  Those received 
afterwards will be considered on a first-come, first-served basis until funds 
run out.  Travel grant recipients are determined by a committee 
consisting of the following members:  

	Dr. Eugen Schenfeld, NEC Research Institute (chair)
	Prof. Rami Melhem, The University of Pittsburgh
	Prof. Pearl Wang, George Mason University

A short trip report along with documented conference registration and 
travel expense receipts is required within one month of the end of the 
conference. This material should be sent by award recipients to:

	Prof. Rami Melhem
	Department of Computer Science (219 MIB)
	The University of Pittsburgh,
	Pittsburgh, PA 15260.

*************************************************************************

	        APPLICATION FORM FOR MPPOI'95 TRAVEL GRANT
		     from National Science Foundation


NAME:______________________________________________________________________

TELEPHONE:_________________________ E-MAIL:________________________________

MINORITY GROUP:____________________ SEX:_____ STUDENT/FACULTY:_____________

If Student, 
please indicate Degree_____________ Expected year of completion____________

                Faculty Advisor __________________________________________

UNIVERSITY AFFILIATION: ___________________________________________________

DEPARTMENT: ________________________________SOCIAL SECURITY #: ____________

MAILING ADDRESS: __________________________________________________________

___________________________________________________________________________

___________________________________________________________________________

LIST PAPERS ACCEPTED TO MPPOI'95:
  1. TITLE:______________________________________________________________
     PAPER PRESENTER:____________________________________________________
  2. TITLE:______________________________________________________________
     PAPER PRESENTER:____________________________________________________

BRIEF DESCRIPTION OF YOUR TECHNICAL INTERESTS WITHIN THE SCOPE OF MPPOI:
___________________________________________________________________________
___________________________________________________________________________
___________________________________________________________________________
___________________________________________________________________________

PLEASE STATE OTHER SOURCES OF TRAVEL MONEY:
___________________________________________________________________________
___________________________________________________________________________
___________________________________________________________________________
___________________________________________________________________________

ESTIMATED TRAVEL EXPENSES:
    TRAVEL: (From _____________________, To _____________________)
    ESTIMATED DRIVING DISTANCE:________________________________________ or
    ROUND TRIP TRANSPORTATION COST:____________________________________
    ESTIMATED LODGING COST:____________________________________________
    REGISTRATION FEE:__________________________________________________

Return completed application form in e-mail to:

        Dr. Eugen Schenfeld - MPPOI'95 NSF Award
        NEC Research Institute
        4 Independence Way
        Princeton, NJ 08540

phone:  609 951 2742

  fax:  609 951 2482
email:  eugen@research.nj.nec.com (Inet)

==========================================================================



        Eugen Schenfeld
From owner-mpi-collcomm@CS.UTK.EDU Sat Sep  9 16:08:43 1995
Return-Path: <owner-mpi-collcomm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id QAA28685; Sat, 9 Sep 1995 16:08:43 -0400
Received: from localhost by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id QAA07590; Sat, 9 Sep 1995 16:07:30 -0400
X-Resent-To: mpi-collcomm@CS.UTK.EDU ; Sat, 9 Sep 1995 16:07:29 EDT
Errors-to: owner-mpi-collcomm@CS.UTK.EDU
Received: from zingo by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id QAA07523; Sat, 9 Sep 1995 16:06:23 -0400
Received: by zingo (940816.SGI.8.6.9/YDL1.4-910307.16)
	id PAA08396(zingo); Sat, 9 Sep 1995 15:54:04 -0400
Received:  by iris49 (940816.SGI.8.6.9/cliff's joyful mailer #2)
	id PAA02819(iris49); Sat, 9 Sep 1995 15:03:14 -0400
Date: Sat, 9 Sep 1995 15:03:14 -0400
From: eugen@research.nj.nec.com (Eugen Schenfeld)
Message-Id: <199509091903.PAA02819@iris49>
To: mppoi@research.nj.nec.com
Subject: MPPOI'95 Registration deadline+NSF travel grants

This is a reminder that the deadline to register for the MPPOI'95
conference at the reduced conference rate is Oct. 2. Registration
should be sent directly to IEEE CS (by fax if a credit card is used
for payment) at the address listed in the enclosed registration
form (part of the MPPOI'95 advance program).

Also, the NSF (National Science Foundation) has awarded MPPOI
a grant to support US minority students and junior faculty to
attend the conference. Forms to apply for this award are available 
by email. Please send email to ask for a form.

With regards,

Dr. Eugen Schenfeld
MPPOI'95 Conference Chair

Encl.
=====

THE FOLLOWING IS IN LaTeX FORMAT. 

=================================================================================

\documentstyle[fullpage]{article}

\begin{document}

\begin{verbatim}
==========================================================================
                 The Second International Conference on
        MASSIVELY PARALLEL PROCESSING USING OPTICAL INTERCONNECTIONS
===========================================================================

                          October 23-24, 1995
                              Menger Hotel
                        San Antonio, Texas,  USA

                             SPONSORED BY:
                         IEEE Computer Society
          IEEE Technical Committee on Computer Architecture (TCCA)

                          IN COOPERATION WITH:
           ACM Special Interest Group on Architecture (SIGARCH)
          The International Society for Optical Engineering (SPIE)
            The IEEE Lasers and Electro-optics Society (LEOS)
                   The Optical Society of America (OSA)

                        ADDITIONAL SUPPORT PROVIDED BY:
                  NSF - The National Science Foundation (pending)

========================================================================
                             ADVANCE PROGRAM
========================================================================

The Second International Conference on Massively Parallel Processing
Architectures using Optical Interconnections (MPPOI '95) is a continuation of 
a very successful first meeting held last year in Cancun, Mexico. This year we 
have an exciting program featuring eight invited talks from research and
industrial leaders in the fields of parallel computer systems, optical 
interconnections and technology, parallel applications and interconnection 
networks. We also have two panels with the participation of technological and 
academic experts, representing the current thoughts and trends of the field. 
And last, but not least, there are 34 regular papers accepted for presentation 
from authors all over the world. This rich and diverse program is sure to be 
most interesting and stimulate discussions and interactions among the 
researchers of this interdisciplinary field. The organizers of MPPOI 
strongly feel that massively parallel processing needs optical interconnections 
and optical interconnections need parallel processing. The Conference's focus 
is the possible use of optical interconnections for massively parallel 
processing systems, and their effect on system and algorithm design. Optics 
offers many benefits for interconnecting large numbers of processing elements, 
but may require us to rethink how we build parallel computer systems and 
communication networks, and how we write applications.  Fully exploring the 
capabilities of optical interconnection networks requires an interdisciplinary 
effort. It is critical that researchers from all related research areas are 
aware of each other's work and results. The intent of MPPOI is to assemble the 
leading researchers and to build towards a synergetic approach to MPP 
architectures, optical interconnections, operating systems, and software 
development.
\end{verbatim}
\newpage
\begin{verbatim}
*********************************************************************
                               LOCATION
*********************************************************************

SAN ANTONIO
American humorist and homespun philosopher Will Rogers once described San
Antonio as ``One of America's four unique cities''. He had a natural instinct
for getting to the very essence of a subject, and his comment about San Antonio
is no exception. San Antonio truly is unique. From its founding in 1691 by
Spanish missionaries, San Antonio has grown from a sleepy little Texas pueblo
to the 9th largest city in the United States. Along the way it has been the
birthplace of the Texas revolution with the Battle of the Alamo in 1836.  It is
the new home of bioscience and hi-tech industry now. In all, over half a dozen
cultures, from Spanish and German to Lebanese and Greek, have impacted the
growth of San Antonio. And their influence is still evident in the architecture,
festivals, cuisine and customs which all contribute to the uniqueness and charm
of the city.

THE ALAMO
An old mission-fort, the Alamo, in San Antonio, has been called the "cradle of
Texas liberty." Its gallant defense and the horrible massacre of the more than
180 men who fought there inspired the cry, "Remember the Alamo!" Texas soldiers
shouted this at the battle of San Jacinto, which brought independence to Texas.

THE MENGER HOTEL
MPPOI '95 will be held in San Antonio's Menger Hotel, a historic landmark hotel.
It is next door to the Alamo, adjacent to Rivercenter Mall, the IMAX Theater and
River Walk and two blocks to the convention center. The hotel fronts Alamo Plaza
where the Sea World shuttle and sightseeing tours depart.

SPDP'95
For those interested in attending SPDP'95: information is available at
the following web site: http://rabbit.cs.utsa.edu/Welcome.html. You must
register for the SPDP'95 conference if you wish to attend it. The advance program
and other information may be obtained from the above location, or from:
Prof. Xiaodong Zhang, email: zhang@runner.utsa.edu , Phone: (210) 691-5541,
FAX: (210) 691-4437.

AIR TRANSPORTATION
United Airlines is the official airline of MPPOI '95. United will provide attendees 
round-trip transportation to San Antonio on United, United Express or Shuttle by 
United scheduled service in the United States and Canada at either a 5% 
discount off any United, United Express or Shuttle by United published fare, 
including First Class, in effect when the tickets are purchased (subject to all 
applicable restrictions), or a 10% discount off applicable BUA, or like, fares 
in effect when tickets are purchased 7 days in advance. Reservations and schedule 
information may be obtained by calling the United Meetings desk at 1-800-521-4041 
and referencing Meeting ID Code 599XM.



ACCOMMODATION
The special MPPOI '95 Menger Hotel rate is US $90 for single or double.  Please
see the enclosed information for making your reservation directly with the hotel.

REGISTRATION
Please register for the conference using the attached form DIRECTLY with IEEE. 
TO HELP WITH THE PLANNING OF THE CONFERENCE, PLEASE ALSO SEND email or fax 
indicating the name(s) of people who register with IEEE and will attend 
(email: mppoi@research.nj.nec.com fax: +USA-609-951-2482 Att. Dr. Eugen Schenfeld).

LOCAL TRANSPORTATION
Star Shuttle provides van service from San Antonio International airport to the
Menger hotel for $6.00 per person each way. For more information and
reservations call +USA-(210)366-3183. Other transportation is available at the
airport, including taxis and buses.

CUSTOMS/PASSPORTS
Attendees of other than US nationality are advised to check with a travel
agent and with a US consulate on the visa and passport requirements for
entering the United States, as well as on US Customs regulations.

WEATHER & TIME
San Antonio's weather in late October ranges from the low 60's to the 70's
Fahrenheit. The climate is dry and perfect for sightseeing the many attractions
the city and its surroundings have to offer.

JOIN US!
MPPOI'95 is in an ideal location to bring along family.  Your traveling
companions will be well entertained while you are participating in the
conference events.  For those who plan to spend the weekend before the
conference in San Antonio, we suggest consulting a travel agent and
the hotel for information on sightseeing and other local activities.  Please
note that the hotel rate is valid for the nights of Oct. 22-24. If you
wish to stay over the Saturday night (Oct. 21st), the hotel will try its best
to accommodate you at the same rate. Once you make a reservation,
please make sure to ask for the night of Oct. 21st. If it is not available,
you will be placed on a waiting list. Chances are you may get it, but
currently it is not possible to confirm this.

==========================================
****** NSF TRAVEL SUPPORT (PENDING) ******
==========================================

The National Science Foundation (NSF) is considering awarding travel support
for minority and female faculty members as well as for graduate students. This
travel award is pending final approval by the NSF and is available to authors
presenting papers at the MPPOI'95 conference. For details on the travel support
and to obtain a Request Form, please contact (by email, fax, or phone) the
Conference Chair at the above address.

\end{verbatim}
\newpage
\begin{verbatim}

STEERING COMMITTEE
==================

J. Goodman, Stanford University		     
L. Johnsson, University of Houston
S. Lee, University of California 	     
R. Melhem, University of Pittsburgh
E. Schenfeld, NEC Research Institute (Chair) 
P. Wang, George Mason University

CONFERENCE CHAIR:
================

        Dr. Eugen Schenfeld             (voice) (609)951-2742
        NEC Research Institute          (fax)   (609)951-2482
        4 Independence Way              email: MPPOI@RESEARCH.NJ.NEC.COM
        Princeton, NJ 08540, USA 

PUBLICITY CHAIR:   D. Quammen, George Mason University.
================

LOCAL ARRANGEMENTS CHAIR:  X. Zhang, University of Texas at San Antonio.
========================

PROGRAM COMMITTEE:
=================

Pierre Chavel, Institut d'Optique, Orsay, France
Alan Craig, Air Force Office of Scientific Research (AFOSR)
Karsten Decker, Swiss Scientific Computing Center, Manno, Switzerland
Jack Dennis, Lab. for CS, MIT, Boston, MA
Patrick Dowd, Dept. of ECE, SUNY at Buffalo, Buffalo, NY
Mary Eshaghian, Dept. of CS, NJIT, Newark, NJ
John Feo, Comp. Res. Grp., Lawrence Livermore Nat. Lab., Livermore, CA
Michael Flynn, Department of EE, Stanford University, Stanford, CA
Edward Frietman, Faculty of Applied Physics, Delft U., Delft, The Netherlands
Asher Friesem, Dept. of Electronics, Weizmann Inst., Israel
Kanad Ghose, Dept. of CS, SUNY at Binghamton, Binghamton NY
Allan Gottlieb, Dept. of CS, New York University, New York, NY
Joe Goodman, Department of EE, Stanford University, Stanford, CA
Alan Huang, Terabit Corp., Middletown, NJ
Oscar H. Ibarra, Department of Computer Science, UCSB, CA
Yoshiki Ichioka, Dept. of Applied Physics, Osaka U., Osaka, Japan
Leah Jamieson, School of ECE, Purdue University, West Lafayette, IN
Lennart Johnsson, Dept. of Computer Science, University of Houston, Houston TX
Kenichi Kasahara, Opto-Electronics Basic Res. Lab., NEC Corporation, Japan
Israel Koren, Dept. of ECS, U. of Mass, Amherst, MA
Raymond Kostuk, Dept. of ECE, U. of Arizona, Tucson, AZ
Ashok V. Krishnamoorthy, AT&T Bell Laboratories, Holmdel NJ
Sing Lee, Dept. of EE, UCSD, La Jolla, CA
Steve Levitan, Department of EE, U. of Pittsburgh, Pittsburgh, PA
Adolf Lohmann, Institute of Physics, U. of Erlangen, Erlangen, Germany
Ahmed Louri, Dept. of ECE, U. of Arizona, Tucson, AZ
Miroslaw Malek, Dept. of ECE, U. of Texas at Austin, Austin TX
Rami Melhem, Dept. of CS, University of Pittsburgh, Pittsburgh, PA
J. R. Moulic, IBM T. J. Watson Research Center, Yorktown Heights, NY
Miles Murdocca, Department of CS, Rutgers University, New Brunswick, NJ
John Neff, Opto-elec. Comp. Sys., U. of Colorado, Boulder, CO
Paul Prucnal, Department of EE, Princeton U., Princeton, NJ
Donna Quammen, Dept of CS, George Mason University, USA
John Reif, Department of CS, Duke University, Durham, NC
A. B. Ruighaver, Dept. of CS, U. of Melbourne, Victoria, Australia
A. A. Sawchuk, Dept. of EE, USC, Los-Angeles, CA
Eugen Schenfeld, NEC Research Institute, Princeton, NJ
Charles W. Stirk, Optoelectronic Data Systems, Inc., Boulder, CO
Pearl Wang, Dept. of CS, George Mason University, USA
Les Valiant, Div. of Applied Science, Harvard University, Cambridge, MA
Albert Zomaya, Dept. of EAEE, U. of Western Australia, Western Australia

SESSION CHAIRS
==============

P. Dowd, State University of New York at Buffalo
E. E. E. Frietman, Delft University of Technology, The Netherlands
R. Kostuk, University of Arizona at Tucson
A. Krishnamoorthy, AT&T Bell Laboratories
S. Levitan, University of Pittsburgh
Y. Li, NEC Research Institute
A. Louri, University of Arizona at Tucson
P. Wang, George Mason University

PANEL MODERATORS
================

E. E. E. Frietman, Delft University of Technology, The Netherlands
Y. Li, NEC Research Institute

INVITED SPEAKERS
================
Michael Flynn, Stanford University
G. Fox, Northeast Parallel Architectures Center at Syracuse University
S. L. Johnsson, University of Houston
H. S. Hinton, University of Colorado at Boulder
Alan Huang, Terabit Corp.
H. T. Kung, Harvard University
D. Miller, AT&T Bell Labs.
B. Smith, Tera Computers Corp.

========================================================================
\end{verbatim}
\newpage
\begin{verbatim}
_____________________________________________________________________

                      MPPOI '95 PROGRAM SCHEDULE
_____________________________________________________________________

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
***** INVITED TALKS: 40 Minutes. REGULAR TALKS: 20 Minutes *****
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

========================================
SUNDAY, OCTOBER 22, 1995
========================================

6:00 PM - 7:30 PM
REGISTRATION 

========================================
Monday, October 23, 1995
========================================

7:00 AM - 8:00 AM       
CONTINENTAL BREAKFAST 
________________________________________

7:00 AM - 11:30 AM  AND  2:00 PM - 4:00 PM
CONFERENCE REGISTRATION 
________________________________________

8:00 AM - 8:20 AM
OPENING REMARKS
Eugen Schenfeld, NEC Research Institute

8:20 AM - 10:00 AM
Session I
Chair: R. Kostuk, University of Arizona at Tucson

Hybrid SEED - Massively Parallel Optical Interconnections for Silicon ICs
D. Miller, AT&T Bell Labs. (INVITED)

Design Issues for Through-Wafer Optoelectronic Multicomputer Interconnects
P. May, N. M. Jokerst, D. S. Wills, S. Wilkinson, M. Lee, O. Vendier, S. Bond,
Z. Hou, G. Dagnall, M. A. Brooke, A. Brown, Georgia Institute of Technology

Design of a Terabit Free-Space Photonic Backplane for Parallel Computing
T. H. Szymanski, and  H. S. Hinton, McGill University and University of Colorado

Optical Interconnection Network for Massively Parallel Processors 
Using Beam-Steering Vertical Cavity Surface-Emitting Lasers
L. Fan, and M. C. Wu, University of California at Los Angeles;
H. C. Lee. and P. Grodzinski, Motorola Inc.
________________________________________

10:00 AM - 10:30 AM     
MID-MORNING BREAK
________________________________________

10:30 AM - 1:00 PM
PARALLEL SESSIONS: II AND III
_________________________________________

10:30 AM - 1:00 PM
Session II 
Chair: A. Krishnamoorthy, AT&T Bell Laboratories

The Role of Representation in Optimizing a Computer Architecture
Michael Flynn, Stanford University (INVITED)

An Evaluation of Communication Protocols for Star-Coupled
Multidimensional WDM Networks for Multiprocessors
K. R. Desai, and K. Ghose, State University of New York at Binghamton

Small Depth Beam-Steered Optical Interconnect
M. Murdocca, H. R. Nahata, and Y. Zhou, Rutgers University

Optical Fiber Interconnection System for Massively Parallel Processor Arrays
Y.-M. Zhang, X.-Q. He, G. Zhou, W.-Y. Liu, Y. Wang, 
Z.-P. Yin, and H.-Y. Wang, Tianjin University, P. R. of China

A Case Study for the Implementation of a Stochastic Bit Stream Neuron; 
The Choice Between Electrical and Optical Interconnects
M. A. Hands, W. Peiffer, H. Thienpont, A. Kirk, Vrije University, Belgium;
T. J. Hall, King's College, University of London, UK

Characterization of Massively Parallel Smart Pixels Systems for 
The Example of a Binary Associative Memory
D. Fey, Friedrich-Schiller University, Germany
________________________________________

10:30 AM - 1:00PM
Session III 
Chair: A. Louri, University of Arizona at Tucson

The Challenges Involved in the Design of a 100 Gb/s Internet
Alan Huang, Terabit Corp. (INVITED)

Fault-tolerance in Optically Implemented Multiprocessor Networks
P. Lalwaney, and I. Koren, University of Massachusetts at Amherst

A Speed Cache Coherence Protocol for an Optical
Multi-Access Interconnect Architecture
T. M. Pinkston, and J. Ha, University of Southern California

A Reconfigurable Optical Bus Structure for Shared Memory
Multiprocessors With Improved Performance
S. Ray, and H. Jiang, University of Nebraska-Lincoln

n-Dimensional Processor Arrays with Optical dBuses
G. Liu, and K. Y. Lee, University of Denver;
H. F. Jordan, University of Colorado at Boulder

The Difficulty of Finding Good Embeddings of 
Program Graphs onto the OPAM Architecture
B. Ramamurthy, and M. Krishnamoorthy, Rensselaer Polytechnic Institute
________________________________________

1:00 PM - 2:30 PM
CONFERENCE LUNCH (PROVIDED)
________________________________________

2:30 PM - 4:10 PM
Session IV
Chair: E. E. E. Frietman, Delft University of Technology, The Netherlands

Intelligent Optical Backplanes
H. S. Hinton, University of Colorado at Boulder (INVITED)

Connection Cube and Interleaved Optical Backplane for a Multiprocessor Data Bus
R. K. Kostuk, T. J. Kim, D. Ramsey, T.-H. Oh, and R. Boye
University of Arizona at Tucson

An Efficient 3-D Optical Implementation of Binary de Bruijn 
Networks with Applications to Massively Parallel Computing
A. Louri, and  H. Sung, 
University of Arizona at Tucson

Performance Evaluation of 3D Optoelectronic Computer
Architectures on FFT and Sorting Benchmarks
G. A. Betzos, and P. A. Mitkas, Colorado State University
________________________________________

4:10 PM - 4:30 PM
AFTERNOON BREAK
________________________________________

4:30 PM - 6:30 PM
CONFERENCE PANEL I
OPTICS FOR INTERCONNECTION: INDUSTRY'S INTERESTS and RESPONSIBILITIES
MODERATOR: Y. Li, NEC Research Institute

PANELISTS: R. Chen, University of Texas at Austin; N. Dutta, AT&T Bell Labs; 
N. Henmi, NEC Corp.; Y. S. Liu, General Electric; B. Pecor, Cray Research;
J. Rowlette, AMP Corp.; B. Smith, Tera Computers Corp.
________________________________________

7:00 PM - 8:30 PM
ACQUAINTANCE RECEPTION

Meet some of the MPPOI participants. 
Food and small talk opportunity provided.

========================================
TUESDAY, OCTOBER 24, 1995
========================================

7:00 AM - 8:00 AM
CONTINENTAL BREAKFAST
________________________________________

7:00 AM - 11:30 AM  
CONFERENCE REGISTRATION
________________________________________

8:00 AM - 10:00 AM
Session V
Chair: S. Levitan, University of Pittsburgh

Flow-Controlled ATM Switches for Available Bit Rate Services
H. T. Kung, Harvard University (INVITED)

Construction of Demonstration Parallel Optical Processors based on
CMOS/InGaAs Smart Pixel Technology
A. Walker, M. P. Y. Desmulliez, F. A. P. Tooley, 
D. T. Neilson, J. A. B. Dines, D. A. Baillie, 
S. M. Prince, L. C. Wilkinson, M. R. Taghizadeh, P. Blair, 
J. F. Snowdon, and B. S. Wherrett, Heriot-Watt University, Scotland;
C. Stanley, and F. Pottier, University of Glasgow, Scotland;
I. Underwood, and D. G. Vass, University of Edinburgh, Scotland;
W. Sibbett, and M. H. Dunn, University of St.-Andrews, Scotland.

General Purpose Bi-Directional Optical Backplane Bus
C. Zhao, S. Natarajan, and R. T. Chen, University of Texas at Austin

Efficient Communication Scheme For Distributed Parallel Processor Systems
P. Kohler, and A. Gunzinger, Swiss Federal Institute of Technology, Switzerland

What Limits Capacity and Connectivity in Optical Interconnects
Y. Li, NEC Research Institute
________________________________________

10:00 AM - 10:30 AM      
MID-MORNING BREAK
________________________________________

10:30 AM - 12:30 PM
Session VI
Chair: P. Dowd, State University of New York at Buffalo

Data Partitioning for Load-Balance and Communication Bandwidth Preservation
S. L. Johnsson, University of Houston (INVITED)

Embedding Rings and Meshes in Partitioned Optical Passive Stars Networks
G. Gravenstreter, and R. G. Melhem, University of Pittsburgh

Optical Thyristor Based Subsystems for Digital Parallel Processing:  
Demonstrators and Future Perspectives
H. Thienpont, A. Kirk, and I. Veretennicoff, Vrije University, Belgium;
P. Heremans, B. Knupfer, and G. Borghs, IMEC Corp., Belgium;
M. Kuijk, and R. Vounckx, Vrije University, Belgium

Computer-Aided Design of Free-Space Optoelectronic Interconnection Systems
S. P. Levitan, P. J. Marchand, M. Rempel, D. M. Chiarulli, and F. B. McCormick, 
University of Pittsburgh and University of California at San Diego

Optical Design of a Fault Tolerant Self-Routing Switch for
Massively Parallel Processing Networks
M. Guizani, M. A. Memon, and S. Ghanta, King Fahd University, Saudi Arabia
________________________________________

12:30 PM - 1:30 PM   
LUNCH (ON YOUR OWN)
________________________________________

1:30 PM - 3:50 PM      
PARALLEL SESSIONS: VII and VIII
________________________________________

1:30 PM - 3:50 PM 
Session VII
Chair: P. Wang, George Mason University

Interconnection Networks for Shared Memory Parallel Computers
B. Smith, Tera Computers Corp. (INVITED)

A Comparative Study of One-to-Many WDM Lightwave 
Interconnection Networks for Multiprocessors
H. Bourdin, and A. Ferreira, CNRS - LIP ENS Lyon, France;
K. Marcus, ARTEMIS IMAG, Grenoble, France

Planar Optical Interconnections for 100Gb/s Packet Address Detection
S. H. Song and E.-H. Lee, 
Electronics & Telecommunications Research Institute, Taejon, South Korea

A Pipelined Self-Routing Optical Multichannel Time Slot Permutation Network
R. Kannan, H. F. Jordan, K. Y. Lee, and C. Reed,
University of Denver; University of Colorado at Boulder; 
and The Institute for Defense Analysis

Optical Interconnect Design for a Manufacturable Multicomputer
R. R. Krchnavek, R. D. Chamberlain, T. Barry, V. Malhotra, and Z. Dittia,
Washington University in St. Louis, Missouri

Hypercube Interconnection in TWDM Optical Passive Star Networks
S.-K. Lee, A. D. Oh, and H.-A. Choi, George Washington University
________________________________________

1:30 PM - 3:50 PM 
Session VIII
Chair: Y. Li, NEC Research Institute

From Today's Desktop Gigaflop to Tomorrow's Central Petaflop;
From Grand Challenges to the Information Age;
The Applications Driving Parallel Computing and Their Architecture Implications
G. Fox, Northeast Parallel Architectures Center at Syracuse University (INVITED)

A Fiber-Optic Interconnection Concept for Scalable Massively Parallel Computing
M. Jonsson, K. Nilsson, and B. Svensson,
Halmstad University; and Chalmers University of Technology, Goteborg, Sweden

All-Optical Interconnects for Massively Parallel Processing
C. S. Ih, R. Tian, X. Xia, J. Chao, and Y. Wang, University of Delaware

Predictive Control of Opto-Electronic Reconfigurable 
Interconnection Networks Using Neural Networks
M. F. Sakr, S. P. Levitan, C. L. Giles, B. C. Horne, 
M. Maggini, and D. M. Chiarulli,
University of Pittsburgh; NEC Research Institute; and Firenze University, Italy

The Simultaneous Optical Multiprocessor Exchange Bus
J. Kulick, W. E. Cohen, C. Katsinis, E. Wells, A. Thomsen,
M. Abushagur, R. K. Gaede, R. Lindquist, G. Nordin, and D. Shen;
University of Alabama in Huntsville

On Some Architectural Issues of Optical Hierarchical Ring
Networks for Shared-Memory Multiprocessors
H. Jiang, C. Lam, and V. C. Hamacher, 
University of Nebraska-Lincoln; and Queen's University, Kingston, Canada
________________________________________

3:50 PM - 4:15 PM       
AFTERNOON BREAK
________________________________________




________________________________________

4:15 PM - 6:15 PM       
CONFERENCE PANEL II
OPTO-ELECTRONIC PROCESSING & NETWORKING IN MASSIVELY PARALLEL PROCESSING SYSTEMS
MODERATOR: E. E. E. Frietman, Delft University of Technology, The Netherlands

PANELISTS: C. Jesshope, University of Surrey, Surrey, UK; H. F. Jordan, 
University of Colorado at Boulder; G. D. Khoe, Eindhoven University of 
Technology, Eindhoven, The Netherlands; A. V. Krishnamoorthy, AT&T Bell Labs.; 
I. Koren, University of Massachusetts at Amherst; A. McAulay, Lehigh University;
I. MacDonald, Telecommunications Research Laboratories, Edmonton, Canada; 
M. Murdocca, Rutgers University; A. B. Ruighaver, Melbourne University, 
Australia; J. Sauer, University of Colorado at Boulder; H. Thienpont, Vrije 
Universiteit, Belgium; A. Walker, Heriot-Watt University, Edinburgh, Scotland; 
________________________________________

6:15 PM - 6:30 PM 

CLOSING REMARKS: ANNOUNCING MPPOI '96 AND FUTURE MEETING PLANS
Eugen Schenfeld, NEC Research Institute
________________________________________

6:30 PM - 8:00 PM
CONFERENCE DINNER (PROVIDED)
________________________________________

==============================================================================
\end{verbatim}
\newpage
\begin{verbatim}
                           Registration Form
                               MPPOI'95
                             Menger  Hotel
                           San Antonio, Texas
                          October 23-24, 1995

      TO REGISTER, MAIL OR FAX THIS FORM TO: MPPOI registration,
      IEEE Computer Society, 1730 Massachusetts Av, N.W.,
      Washington DC 20036-1992, USA. Fax: +USA-202-728-0884
      For information, call +USA-202-371-1013 - Sorry, no phone registration.

Name:----------------------------------------------------------------------
       Last                           First                        MI
Company:-------------------------------------------------------------------
Address:-------------------------------------------------------------------
City/State/Zip/Country:----------------------------------------------------
Daytime phone:----------------------- Fax number---------------------------
E-mail address:------------------------------------------------------------
IEEE/ACM/OSA/SPIE Member Number:   ------------------
Do you have any special needs: --------------------------------------------
---------------------------------------------------------------------------
Do not include my mailing address on:
-- Non-society mailing lists         -- Meeting Attendee lists

Please circle the appropriate registration fee:
Advance (before October 2, 1995)         Late (before October 16, 1995)/on site.
  Member $300                              Member $360
  Non-member $375                          Non-member $450
  Full-time student $150                   Full-time student $180

Total enclosed:$ --------------------------------
Please make all checks payable to: IEEE Computer Society. All checks must be in
US dollars drawn on US banks. Credit card charges will appear on statement as
"IEEE Computer Society Registration". Written requests for refunds must be
received by IEEE office before October 2, 1995. Refunds are subject to a $50
processing fee. Methods of payment accepted (payment must accompany form):
-- Personal check               -- Company check        -- Traveler's check
-- American Express             -- Master Card          -- VISA
-- Diners Club                  -- Government purchase order (original)

Credit card number: -------------------------- Expiration date: ------------
Cardholder name   : --------------------------
Signature         : --------------------------

Non-student registration fees include conference attendance, proceedings,
continental breakfast, refreshments at breaks, the conference reception, one
conference lunch and one conference dinner. Student registration fees
***DO NOT*** include the lunch and ***DO NOT*** include the dinner.
===========================================================================
\end{verbatim}
\newpage
\begin{verbatim}
______________________________________________________________________

                     MPPOI'95 HOTEL RESERVATION
                          The Menger Hotel
                         San Antonio, Texas
______________________________________________________________________

   PLEASE MAKE RESERVATIONS WITH THE MENGER HOTEL AS SOON AS POSSIBLE TO 
   GUARANTEE THE $90 RATE (HOTEL PHONE AND FAX NUMBERS ARE GIVEN BELOW). 

 * The special MPPOI'95 group rate of US $90.00 (single or double) is available 
   from October 22 through October 25, 1995. All rates are subject to additional
   local and state taxes.  These rates will be available for reservations made 
   BEFORE September 22, 1995. Please note that the period Sep. to Nov. is the 
   high season in San Antonio and hotels are usually booked in advance. We 
   urge you to make reservations as soon as possible.
   If you wish to stay over a Sat. night (Oct. 21st), the hotel will TRY its 
   best to accommodate you with the same rate. 

 * The MENGER HOTEL CONTACT POINTS: Phone: 1-800-345-9285 (for USA, or Canada)
   Phone: +USA-210-223-4361 (other countries);  Fax:  +USA-210-228-0022   

 * ALTERNATIVE LIST OF HOTELS (RATES AND RANK from AAA Tour Book): 

   IN CASE THE MENGER HOTEL IS FULL, here is a list of other nearby hotels (all 
   within walking distance of the Menger, in the downtown area of San Antonio). 
   These hotels have no arrangement with MPPOI and therefore you should not 
   identify yourself as a member of a group or conference. The arrangement with 
   these hotels is on a "one to one" basis, as with any other business traveler. 
   An early reservation is suggested. Also, it is always a good idea to look for 
   "specials" (i.e., advance paid rates, weekend specials, AAA rates, etc.). Also 
   please note that from the USA you may call the 800 directory (1-800-555-1212) 
   and ask for the 800 number of the hotel chain (such hotels are marked with a 
   '#' mark below); rates and ranking taken from AAA Tour Book 1994:

                                AAA   Typical       Phone            Fax 
                                Rank  Rate ($)     (+USA)           (+USA)
   
   * St. Anthony Hotel           4    106-130   (210)227-4392     none listed
   * Emily Morgan                3       85     (210)225-8486     none listed
   * Crockett Hotel              3     75-105   (210)225-6500     none listed
   # Hyatt Regency               4    119-170   (210)222-1234    (210)227-4925
   * La Mansion del Rio          4    135-220   (210)225-2581    (210)226-1365
   # Holiday Inn Riverwalk       3     95-119   (210)224-2500    (210)223-1302
   # Hilton Palacio del Rio      4    154-196   (210)222-1400    (210)270-0761
   * The Fairmount Hotel         4    145-275   (210)224-8800    (210)224-2767
   # Marriott Riverwalk          4    135-150   (210)224-4555    (210)224-2754
   # La Quinta Motor Inn         3     83-90    (210)222-9181    (210)228-9816
   # Marriott Rivercenter        4      160     (210)223-1000    (210)223-6239
________________________________________________________________________________
\end{verbatim}

\end{document}




        Eugen Schenfeld

From owner-mpi-collcomm@CS.UTK.EDU Sat Oct 28 20:39:10 1995
Return-Path: <owner-mpi-collcomm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id UAA03392; Sat, 28 Oct 1995 20:39:09 -0400
Received: from localhost by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id UAA25095; Sat, 28 Oct 1995 20:37:08 -0400
Received: from zingo by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id UAA25007; Sat, 28 Oct 1995 20:35:30 -0400
Received: by zingo (940816.SGI.8.6.9/YDL1.4-910307.16)
	id UAA20117(zingo); Sat, 28 Oct 1995 20:29:21 -0400
Received:  by iris49 (940816.SGI.8.6.9/cliff's joyful mailer #2)
	id TAA25466(iris49); Sat, 28 Oct 1995 19:58:30 -0400
Date: Sat, 28 Oct 1995 19:58:30 -0400
From: eugen@research.nj.nec.com (Eugen Schenfeld)
Message-Id: <199510282358.TAA25466@iris49>
To: mppoi@research.nj.nec.com
Subject: JPDC Special Issue - CFP



                              CALL FOR PAPERS

                           Special Issue of the
            Journal of Parallel and Distributed Computing (JPDC)

                                    on

              Parallel Computing with Optical Interconnections 

Papers are solicited for a special issue of the  Journal  of  Parallel and  
Distributed Computing (JPDC) to be published in December 1996. 

This special issue will cover different aspects of applying optical technologies 
to  interconnections. The topics  of interest include but are not limited to the 
following:

      -Optical Computational Models
      -Optical Parallel Architectures
      -Optical Interconnection Networks
      -Control and Routing in Optical Networks
      -Packaging and Layout of Optical Interconnections
      -Electro-optical and opto-electronic components
      -Design and Mapping of Parallel/Optical Algorithms
      -Relative Merits of Optical Technologies and Devices
      -Cost/performance Studies in using Optical Interconnects
      -Experimental/Commercial Optical Systems and Applications

Authors should follow the JPDC  manuscript  format as described in the 
Information for Authors at the end of each issue of JPDC.  Five copies of 
complete double-spaced manuscript (maximum 35 double-spaced pages) should be 
sent to either one of the two co-guest editors by December 1, 1995. Authors
will be notified of the final publication decision by May 1, 1996. Only
original, unpublished work will be considered; manuscripts resembling any
previously published work in a journal are unacceptable.


                          Co-Guest Editors

Prof. Mary M. Eshaghian                         Dr. Eugen Schenfeld 
Dept. of Computer & Information Sci.            NEC Research Institute
New Jersey Institute of Technology              4 Independence Way
Newark, NJ 07102 USA                            Princeton, NJ 08540 USA
Tel: (201)596-3244                              Tel: (609)951-2742
Fax: (201)596-5777                              Fax: (609)951-2482
Email: mary@cis.njit.edu                        Email: eugen@research.nj.nec.com 


=================================================================================


        Eugen Schenfeld

From owner-mpi-collcomm@CS.UTK.EDU Thu Jan 25 23:15:07 1996
Return-Path: <owner-mpi-collcomm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id XAA14938; Thu, 25 Jan 1996 23:15:07 -0500
Received: from localhost by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id XAA06024; Thu, 25 Jan 1996 23:15:38 -0500
Received: from zingo by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id XAA05859; Thu, 25 Jan 1996 23:13:09 -0500
Received: by zingo (940816.SGI.8.6.9/YDL1.4-910307.16)
	id XAA08969(zingo); Thu, 25 Jan 1996 23:08:04 -0500
Received:  by iris49 (940816.SGI.8.6.9/cliff's joyful mailer #2)
	id VAA22135(iris49); Thu, 25 Jan 1996 21:57:58 -0500
Date: Thu, 25 Jan 1996 21:57:58 -0500
From: eugen@research.nj.nec.com (Eugen Schenfeld)
Message-Id: <199601260257.VAA22135@iris49>
To: mppoi@research.nj.nec.com
Subject: MPPOI'96 CFP


                                  Call for Papers 

                        The Third International Conference on  

                        MASSIVELY PARALLEL PROCESSING USING 
                        OPTICAL INTERCONNECTIONS (MPPOI'96) 

                               The Westin Maui Hotel 
                                   Maui, Hawaii  

                                October 27-29, 1996  

                                   Sponsored by 

             IEEE CS TCCA (Technical Committee on Computer Architecture)

                                In Cooperation with 

                 ACM Special Interest Group on Architecture (SIGARCH) 
                   The IEEE Lasers and Electro-optics Society (LEOS) 
                        The Optical Society of America (OSA) 
                The International Society for Optical Engineering (SPIE)

The third annual conference on Massively Parallel Processing Architectures
using Optical Interconnections (MPPOI'96) will be held on Oct. 27-29, 1996
in the Westin Maui Hotel, Maui, Hawaii.  The Conference will focus on the potential
for using optical interconnections in massively parallel processing systems, and their
effect on system and algorithm design. Optics offer many benefits for
interconnecting large numbers of processing elements, but may require us to
rethink how we build parallel computer systems and communication networks,
and how we write applications.  Fully exploring the capabilities of optical
interconnection networks requires an interdisciplinary effort.  It is
critical that researchers in all areas of the field are aware of each
other's work and results. The intent of MPPOI is to assemble the leading
researchers and to build towards a synergetic approach to MPP architectures,
optical interconnections, operating systems, and software development. The 
conference will feature invited speakers, followed by several sessions of submitted 
papers, and will conclude with a panel discussion. 

The topics of interest include but are not limited to the following:

- Optical interconnections, Reconfigurable Architectures,
- Embedding and mapping of applications and algorithms,
- Packaging and layout of optical interconnections,
- Electro-optical and opto-electronic components,
- Relative merits of optical technologies (free-space, fibers, wave guides),
- Passive optical elements, Algorithms and applications exploiting optics,
- Data distribution and partitioning,
- Characterizing parallel applications,
- Cost/performance studies. 

Authors are invited to submit manuscripts which demonstrate original 
unpublished research in areas of computer architecture and optical 
interconnections.  Papers submitted must not be under consideration for 
another conference.

SUBMITTING PAPERS
=================
Authors are invited to submit manuscripts which demonstrate original
unpublished research in the above areas. Papers submitted must not be 
under consideration for another conference. Send eight (8) copies of the 
complete paper (not to exceed 15 single-spaced, single-sided pages) to: 

Dr. Eugen Schenfeld, MPPOI'96 Conference, NEC Research Institute,
4 Independence Way, Princeton, NJ 08540, USA, (voice) (609)951-2742,
(fax) (609)951-2482, email: MPPOI@RESEARCH.NJ.NEC.COM. 

Manuscripts must be received by May 1, 1996 
Notification of review decisions will be mailed by July 15, 1996.  
Camera ready papers are due August 30, 1996.  
Fax or electronic submissions WILL NOT BE CONSIDERED. The proceedings will be 
published by the IEEE CS Press and will be available at the conference.


CONFERENCE CHAIR
================

Rami Melhem, University of Pittsburgh 

PROGRAM CO-CHAIRS
=================

Allan Gottlieb, NYU; Yao Li, NEC Research Institute


PUBLICITY and PUBLICATION CHAIR
===============================

Eugen Schenfeld, NEC Research Institute 


PROGRAM COMMITTEE 
=================

T. Ae, Hiroshima University (Japan) 
D. Agrawal, North Carolina State University (USA) 
K. Batcher, Kent State University (USA) 
J. Bristow, Honeywell (USA) 
T. Casavant, University of Iowa, Iowa City (USA) 
P. Chavel, Institut d'Optique (France) 
R. Chen, University of Texas (USA) 
A. Chien, UIUC (USA) 
T. Cloonan, AT&T Bell Labs (USA) 
S. Dickey, Pace University (USA) 
N. Dutta, AT&T Bell Labs (USA)
M. Eshaghian, NJIT, Newark (USA) 
M. Flynn, Stanford University (USA) 
L. Giles, NEC Research Institute (USA) 
C. Georgiou, IBM T. J. Watson Research Center (USA) 
K. Ghose, SUNY at Binghamton (USA) 
J. Goodman, Stanford University (USA) 
J. Goodman, University of Wisconsin (USA) 
M. Goodman, Bellcore (USA) 
J. Grote, USAF Wright Patterson (USA) 
A. Gupta, Stanford University (USA) 
S. Hinton, University of Colorado (USA) 
F. Hsu, Fordham University (USA) 
Y. Ichioka, Osaka University (Japan) 
H. Inoue, Hitachi (Japan) 
L. Johnsson, University of Houston (USA) 
N. Jokerst, Georgia Tech (USA) 
H. Jordan, University of Colorado (USA)
K. Kasahara, NEC Corp. (Japan) 
F. Kiamilev, University of N. Carolina, Charlotte (USA) 
T. Knight, MIT (USA) 
A. Krishnamoorthy, AT&T Bell Labs (USA) 
S. Lee, UCSD (USA) 
K. Li, Princeton University (USA) 
A. Lohmann, University of Erlangen-Nurnberg (Germany)
A. Louri, University of Arizona (USA) 
Y.-D. Lyuu, National Taiwan University (Taiwan) 
T. Maruyama, NEC Corp. (Japan) 
M. Murdocca, Rutgers University (USA) 
J. Neff, University of Colorado (USA) 
L. Ni, Michigan State University (USA) 
A. Nowatzyk, Sun Microsystems (USA) 
Y. Patt, University of Michigan (USA) 
W. Paul, Universitaet des Saarlandes-Saarbruecken (Germany) 
T. Pinkston, USC (USA) 
C. Qiao, SUNY at Buffalo (USA) 
J. Reif, Duke University (USA) 
J. Rowlette, AMP (USA) 
H. J. Siegel, Purdue University (USA) 
S. Sahni, University of Florida (USA) 
A. Smith, UC Berkeley (USA) 
M. Snir, IBM T. J. Watson Research Center (USA) 
G. Sohi, University of Wisconsin (USA) 
Q. Song, Syracuse University (USA) 
T. Sterling, USRA  CESDIS (USA) 
B. Tarjan, Princeton University (USA) 
S. Tomita, Kyoto University, Kyoto (Japan) 
F. Tooley, McGill University (Canada) 
L. Valiant, Harvard University (USA) 
A. Walker, Heriot-Watt University (UK) 
P. Wang, George Mason University (USA)
S. Yokoyama, Hiroshima University (Japan) 
Y. Zhang, Tianjin University (China) 


===============


        Eugen Schenfeld

From owner-mpi-collcomm@CS.UTK.EDU Tue Mar 12 00:04:19 1996
Return-Path: <owner-mpi-collcomm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id AAA28151; Tue, 12 Mar 1996 00:04:19 -0500
Received: from localhost by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id AAA24594; Tue, 12 Mar 1996 00:05:42 -0500
Received: from zingo by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id AAA24478; Tue, 12 Mar 1996 00:04:37 -0500
Received: by zingo (940816.SGI.8.6.9/YDL1.4-910307.16)
	id AAA07468(zingo); Tue, 12 Mar 1996 00:03:38 -0500
Received:  by iris49 (940816.SGI.8.6.9/cliff's joyful mailer #2)
	id WAA14014(iris49); Mon, 11 Mar 1996 22:57:40 -0500
Date: Mon, 11 Mar 1996 22:57:40 -0500
From: eugen@research.nj.nec.com (Eugen Schenfeld)
Message-Id: <199603120357.WAA14014@iris49>
To: mppoi@research.nj.nec.com
Subject: MPPOI'96 CFP - May 1, '96


                                  Call for Papers 

                        The Third International Conference on  

                        MASSIVELY PARALLEL PROCESSING USING 
                        OPTICAL INTERCONNECTIONS (MPPOI'96) 

                               The Westin Maui Hotel 
                                   Maui, Hawaii  

                                October 27-29, 1996  

                                   Sponsored by 

             IEEE CS TCCA (Technical Committee on Computer Architecture)

                                In Cooperation with 

                 ACM Special Interest Group on Architecture (SIGARCH) 
                   The IEEE Lasers and Electro-optics Society (LEOS) 
                        The Optical Society of America (OSA) 
                The International Society for Optical Engineering (SPIE)

The third annual conference on Massively Parallel Processing Architectures
using Optical Interconnections (MPPOI'96) will be held on Oct. 27-29, 1996
in the Westin Maui Hotel, Maui, Hawaii. The Conference will focus on the 
potential for using optical interconnections in massively parallel processing 
systems, and their effect on system and algorithm design. Optics offer many 
benefits for interconnecting large numbers of processing elements, but may 
require us to rethink how we build parallel computer systems and communication 
networks, and how we write applications.  Fully exploring the capabilities of 
optical interconnection networks requires an interdisciplinary effort.  It is
critical that researchers in all areas of the field are aware of each other's 
work and results. The intent of MPPOI is to assemble the leading researchers 
and to build towards a synergetic approach to MPP architectures, optical 
interconnections, operating systems, and software development. The conference 
will feature invited speakers, followed by several sessions of submitted 
papers, and will conclude with a panel discussion. 

The topics of interest include but are not limited to the following:

- Optical interconnections, Reconfigurable Architectures,
- Embedding and mapping of applications and algorithms,
- Packaging and layout of optical interconnections,
- Electro-optical and opto-electronic components,
- Relative merits of optical technologies (free-space, fibers, wave guides),
- Passive optical elements, Algorithms and applications exploiting optics,
- Data distribution and partitioning,
- Characterizing parallel applications,
- Cost/performance studies. 

Authors are invited to submit manuscripts which demonstrate original 
unpublished research in areas of computer architecture and optical 
interconnections.  Papers submitted must not be under consideration for 
another conference.

SUBMITTING PAPERS
=================
Authors are invited to submit manuscripts which demonstrate original
unpublished research in the above areas. Papers submitted must not be 
under consideration for another conference. Send eight (8) copies of the 
complete paper (not to exceed 15 single-spaced, single-sided pages) to: 

Dr. Eugen Schenfeld, MPPOI'96 Conference, NEC Research Institute,
4 Independence Way, Princeton, NJ 08540, USA, (voice) (609)951-2742,
(fax) (609)951-2482, email: MPPOI@RESEARCH.NJ.NEC.COM. 

 ===========================================
 MANUSCRIPTS MUST BE RECEIVED BY MAY 1, 1996 
 ===========================================

Notification of review decisions will be mailed by July 15, 1996.  
Camera ready papers are due August 30, 1996.  
Fax or electronic submissions WILL NOT BE CONSIDERED. The proceedings will be 
published by the IEEE CS Press and will be available at the conference.


CONFERENCE CHAIR
================

Rami Melhem, University of Pittsburgh 

PROGRAM CO-CHAIRS
=================

Allan Gottlieb, NYU; Yao Li, NEC Research Institute


PUBLICITY and PUBLICATION CHAIR
===============================

Eugen Schenfeld, NEC Research Institute 


PROGRAM COMMITTEE 
=================

T. Ae, Hiroshima University (Japan) 
D. Agrawal, North Carolina State University (USA) 
K. Batcher, Kent State University (USA) 
J. Bristow, Honeywell (USA) 
T. Casavant, University of Iowa, Iowa City (USA) 
P. Chavel, Institut d'Optique (France) 
R. Chen, University of Texas (USA) 
A. Chien, UIUC (USA) 
T. Cloonan, AT&T Bell Labs (USA) 
S. Dickey, Pace University (USA) 
N. Dutta, AT&T Bell Labs (USA)
M. Eshaghian, NJIT, Newark (USA) 
M. Flynn, Stanford University (USA) 
L. Giles, NEC Research Institute (USA) 
C. Georgiou, IBM T. J. Watson Research Center (USA) 
K. Ghose, SUNY at Binghamton (USA) 
J. Goodman, Stanford University (USA) 
J. Goodman, University of Wisconsin (USA) 
M. Goodman, Bellcore (USA) 
J. Grote, USAF Wright Patterson (USA) 
A. Gupta, Stanford University (USA) 
S. Hinton, University of Colorado (USA) 
F. Hsu, Fordham University (USA) 
Y. Ichioka, Osaka University (Japan) 
H. Inoue, Hitachi (Japan) 
K. Jenkins, University of Southern California (USA)
L. Johnsson, University of Houston (USA) 
N. Jokerst, Georgia Tech (USA) 
H. Jordan, University of Colorado (USA)
K. Kasahara, NEC Corp. (Japan) 
F. Kiamilev, University of N. Carolina, Charlotte (USA) 
T. Knight, MIT (USA) 
R. Kostuk, University of Arizona at Tucson (USA) 
A. Krishnamoorthy, AT&T Bell Labs (USA) 
S. Lee, UCSD (USA) 
K. Li, Princeton University (USA) 
A. Lohmann, University of Erlangen-Nurnberg (Germany)
A. Louri, University of Arizona (USA) 
Y.-D. Lyuu, National Taiwan University (Taiwan) 
T. Maruyama, NEC Corp. (Japan) 
M. Murdocca, Rutgers University (USA) 
J. Neff, University of Colorado (USA) 
L. Ni, Michigan State University (USA) 
A. Nowatzyk, Sun Microsystems (USA) 
Y. Patt, University of Michigan (USA) 
W. Paul, Universitaet des Saarlandes-Saarbruecken (Germany) 
B. Pecor, Cray (USA)
T. Pinkston, USC (USA) 
C. Qiao, SUNY at Buffalo (USA) 
J. Reif, Duke University (USA) 
J. Rowlette, AMP (USA) 
H. J. Siegel, Purdue University (USA) 
S. Sahni, University of Florida (USA) 
A. Smith, UC Berkeley (USA) 
M. Snir, IBM T. J. Watson Research Center (USA) 
G. Sohi, University of Wisconsin (USA) 
Q. Song, Syracuse University (USA) 
T. Sterling, USRA  CESDIS (USA) 
B. Tarjan, Princeton University (USA) 
S. Tomita, Kyoto University, Kyoto (Japan) 
F. Tooley, McGill University (Canada) 
L. Valiant, Harvard University (USA) 
O. Wada, Fujitsu Corp. (Japan) 
A. Walker, Heriot-Watt University (UK) 
P. Wang, George Mason University (USA)
S. Yokoyama, Hiroshima University (Japan) 
Y. Zhang, Tianjin University (China) 

=============



        Eugen Schenfeld

From owner-mpi-collcomm@CS.UTK.EDU Mon Mar 18 20:34:23 1996
Return-Path: <owner-mpi-collcomm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id UAA19230; Mon, 18 Mar 1996 20:34:23 -0500
Received: from localhost by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id UAA24753; Mon, 18 Mar 1996 20:35:40 -0500
Received: from zingo by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id UAA24626; Mon, 18 Mar 1996 20:34:29 -0500
Received: by zingo (940816.SGI.8.6.9/YDL1.4-910307.16)
	id UAA21237(zingo); Mon, 18 Mar 1996 20:33:09 -0500
Received:  by iris49 (940816.SGI.8.6.9/cliff's joyful mailer #2)
	id TAA05488(iris49); Mon, 18 Mar 1996 19:17:08 -0500
Date: Mon, 18 Mar 1996 19:17:08 -0500
From: eugen@research.nj.nec.com (Eugen Schenfeld)
Message-Id: <199603190017.TAA05488@iris49>
To: optics@research.nj.nec.com
Subject: Research Position Available


Research Position at the NEC Research Institute, Princeton NJ
=============================================================

The NEC Research Institute in Princeton, NJ has an opening for a Post-Doc 
position, initially for a one-year term, in the area of high-speed 
opto-electronic circuits and systems.  

We are looking for a candidate who will contribute to the design, implementation 
and experimental evaluation of a free-space interconnection network for a 
parallel processing architecture. Candidates should have knowledge of related 
topics, including the setup of experiments and measuring instruments, and the 
design of high-speed electronic circuits for opto-electronic communication.  
The successful candidate should have practical lab experience. A Ph.D. in EE 
is required. NEC is an equal opportunity employer.

Interested applicants should send (by fax, email or regular mail) a 
resume, copies of a few of their recent papers, and the names, phone numbers 
and email addresses of three people who can recommend them to:

Dr. Eugen Schenfeld
NEC Research Institute
4 Independence Way
Princeton, NJ  08540
Phone: 609-951-2742
FAX: 609-951-2482
email: eugen@research.nj.nec.com



        Eugen Schenfeld

From owner-mpi-collcomm@CS.UTK.EDU Sun Apr 14 18:25:50 1996
Return-Path: <owner-mpi-collcomm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id SAA09195; Sun, 14 Apr 1996 18:25:50 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id SAA21120; Sun, 14 Apr 1996 18:26:51 -0400
Received: from zingo.nj.nec.com (zingo.nj.nec.com [138.15.150.106]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id SAA20981; Sun, 14 Apr 1996 18:25:20 -0400
Received: from iris49 (iris49 [138.15.150.129]) by zingo.nj.nec.com (8.7.4/8.7.3) with SMTP id SAA03806; Sun, 14 Apr 1996 18:08:47 -0400 (EDT)
Received:  by iris49 (940816.SGI.8.6.9/cliff's joyful mailer #2)
	id RAA03839(iris49); Sun, 14 Apr 1996 17:12:19 -0400
Date: Sun, 14 Apr 1996 17:12:19 -0400
From: eugen@research.nj.nec.com (Eugen Schenfeld)
Message-Id: <199604142112.RAA03839@iris49>
To: mppoi@research.nj.nec.com
Subject: CFP: MPPOI'96-Deadline 5/1/96


PLEASE NOTE:
The deadline for submitting papers to MPPOI'96 is May 1st, 1996
===============================================================


                                  Call for Papers 

                        The Third International Conference on  

                        MASSIVELY PARALLEL PROCESSING USING 
                        OPTICAL INTERCONNECTIONS (MPPOI'96) 

                               The Westin Maui Hotel 
                                   Maui, Hawaii  

                                October 27-29, 1996  

                                   Sponsored by 

             IEEE CS TCCA (Technical Committee on Computer Architecture)

                                In Cooperation with 

                 ACM Special Interest Group on Architecture (SIGARCH) 
                   The IEEE Lasers and Electro-optics Society (LEOS) 
                        The Optical Society of America (OSA) 
                The International Society for Optical Engineering (SPIE)

The third annual conference on Massively Parallel Processing Architectures
using Optical Interconnections (MPPOI'96) will be held on Oct. 27-29, 1996
in the Westin Maui Hotel, Maui, Hawaii.  The Conference will focus on the 
potential for using optical interconnections in massively parallel processing 
systems, and their effect on system and algorithm design. Optics offer many 
benefits for interconnecting large numbers of processing elements, but may 
require us to rethink how we build parallel computer systems and communication 
networks, and how we write applications.  Fully exploring the capabilities of 
optical interconnection networks requires an interdisciplinary effort.  It is
critical that researchers in all areas of the field are aware of each
other's work and results. The intent of MPPOI is to assemble the leading
researchers and to build towards a synergetic approach to MPP architectures,
optical interconnections, operating systems, and software development. The 
conference will feature invited speakers, followed by several sessions of 
submitted papers, and will conclude with a panel discussion. 

The topics of interest include but are not limited to the following:

- Optical interconnections, Reconfigurable Architectures,
- Embedding and mapping of applications and algorithms,
- Packaging and layout of optical interconnections,
- Electro-optical, and opto-electronic components,
- Relative merits of optical technologies (free-space, fibers, wave guides),
- Passive optical elements, Algorithms and applications exploiting optical interconnections,
- Data distribution and partitioning,
- Characterizing parallel applications,
- Cost/performance studies. 

Authors are invited to submit manuscripts which demonstrate original 
unpublished research in areas of computer architecture and optical 
interconnections.  Papers submitted must not be under consideration for 
another conference.

SUBMITTING PAPERS
=================
Send eight (8) copies of the 
complete paper (not to exceed 15 single spaced, single sided pages) to: 

Dr. Eugen Schenfeld, MPPOI'96 Conference, NEC Research Institute,
4 Independence Way, Princeton, NJ 08540, USA, (voice) (609)951-2742,
(fax) (609)951-2482, email: MPPOI@RESEARCH.NJ.NEC.COM. 

Manuscripts must be received by May 1, 1996 
Notification of review decisions will be mailed by July 15, 1996.  
Camera ready papers are due August 30, 1996.  
Fax or electronic submissions WILL NOT BE CONSIDERED. The proceedings will be 
published by the IEEE CS Press and will be available at the conference.


CONFERENCE CHAIR
================

Rami Melhem, University of Pittsburgh 

PROGRAM CO-CHAIRS
=================

Allan Gottlieb, NYU; Yao Li, NEC Research Institute


PUBLICITY and PUBLICATION CHAIR
===============================

Eugen Schenfeld, NEC Research Institute 


PROGRAM COMMITTEE 
=================

T. Ae, Hiroshima University (Japan) 
D. Agrawal, North Carolina State University (USA) 
K. Batcher, Kent State University (USA) 
J. Bristow, Honeywell (USA) 
T. Casavant, University of Iowa, Iowa City (USA) 
P. Chavel, Institut d'Optique (France) 
R. Chen, University of Texas (USA) 
A. Chien, UIUC (USA) 
T. Cloonan, AT&T Bell Labs (USA) 
S. Dickey, Pace University (USA) 
N. Dutta, AT&T Bell Labs (USA)
M. Eshaghian, NJIT (USA) 
M. Flynn, Stanford University (USA) 
L. Giles, NEC Research Institute (USA) 
C. Georgiou, IBM T. J. Watson Research Center (USA) 
K. Ghose, SUNY at Binghamton (USA) 
J. Goodman, Stanford University (USA) 
J. Goodman, University of Wisconsin (USA) 
M. Goodman, Bellcore (USA) 
J. Grote, USAF Wright Patterson (USA) 
A. Gupta, Stanford University (USA) 
S. Hinton, University of Colorado (USA) 
F. Hsu, Fordham University (USA) 
Y. Ichioka, Osaka University (Japan) 
H. Inoue, Hitachi (Japan) 
K. Jenkins, University of Southern California (USA)
L. Johnsson, University of Houston, (USA) 
N. Jokerst, Georgia Tech., (USA) 
H. Jordan, University of Colorado (USA)
K. Kasahara, NEC Corp. (Japan) 
F. Kiamilev, University of N. Carolina, Charlotte (USA) 
T. Knight, MIT (USA) 
R. Kostuk, University of Arizona at Tucson (USA) 
A. Krishnamoorthy, AT&T Bell Labs (USA) 
S. Lee, UCSD (USA) 
K. Li, Princeton University (USA) 
A. Lohmann, University of Erlangen-Nurnberg (Germany)
A. Louri, University of Arizona (USA) 
Y.-D. Lyuu, National Taiwan University (Taiwan) 
T. Maruyama, NEC Corp. (Japan) 
M. Murdocca, Rutgers University (USA) 
J. Neff, University of Colorado (USA) 
L. Ni, Michigan State University (USA) 
A. Nowatzyk, Sun Microsystems (USA) 
Y. Patt, University of Michigan (USA) 
W. Paul, Universitaet des Saarlandes-Saarbruecken (Germany) 
B. Pecor, Cray (USA)
T. Pinkston, USC (USA) 
C. Qiao, SUNY at Buffalo (USA) 
J. Reif, Duke University (USA) 
J. Rowlette, AMP (USA) 
H. J. Siegel, Purdue University (USA) 
S. Sahni, University of Florida (USA) 
A. Smith, UC Berkeley (USA) 
M. Snir, IBM T. J. Watson Research Center (USA) 
G. Sohi, University of Wisconsin (USA) 
Q. Song, Syracuse University (USA) 
T. Sterling, USRA  CESDIS (USA) 
B. Tarjan, Princeton University (USA) 
S. Tomita, Kyoto University (Japan) 
F. Tooley, McGill University (Canada) 
L. Valiant, Harvard University (USA) 
O. Wada, Fujitsu Corp. (Japan) 
A. Walker, Heriot-Watt University (UK) 
P. Wang, George Mason University (USA)
S. Yokoyama, Hiroshima University (Japan) 
Y. Zhang, Tianjin University (China) 




        Eugen Schenfeld

From owner-mpi-collcomm@CS.UTK.EDU Tue Aug 13 18:34:38 1996
Return-Path: <owner-mpi-collcomm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id SAA07427; Tue, 13 Aug 1996 18:34:38 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id SAA22199; Tue, 13 Aug 1996 18:36:20 -0400
Received: from zingo.nj.nec.com (zingo.nj.nec.com [138.15.150.106]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id SAA22087; Tue, 13 Aug 1996 18:35:18 -0400
Received: from iris49 (iris49 [138.15.150.129]) by zingo.nj.nec.com (8.7.4/8.7.3) with SMTP id SAA00770; Tue, 13 Aug 1996 18:33:40 -0400 (EDT)
Received:  by iris49 (940816.SGI.8.6.9/cliff's joyful mailer #2)
	id RAA02048(iris49); Tue, 13 Aug 1996 17:13:17 -0400
Date: Tue, 13 Aug 1996 17:13:17 -0400
From: eugen@research.nj.nec.com (Eugen Schenfeld)
Message-Id: <199608132113.RAA02048@iris49>
To: mppoi@research.nj.nec.com
Subject: MPPOI'96 Advance Program: Maui, Oct. 27-29, '96

\documentstyle[fullpage]{article}

\begin{document}

\begin{verbatim}
==========================================================================
                 The Third International Conference on
        MASSIVELY PARALLEL PROCESSING USING OPTICAL INTERCONNECTIONS
===========================================================================


                         The Westin Maui Hotel
                             Maui, Hawaii
                          October 27-29, 1996

                             SPONSORED BY:
          IEEE Technical Committee on Computer Architecture (TCCA)

                          IN COOPERATION WITH:
           ACM Special Interest Group on Architecture (SIGARCH)
          The International Society for Optical Engineering (SPIE)
            The IEEE Lasers and Electro-optics Society (LEOS)
                   The Optical Society of America (OSA)

                        ADDITIONAL SUPPORT PROVIDED BY:
                     NSF - The National Science Foundation 

______________________________________________________________________

PLEASE NOTE:
===========

THIS IS A PRELIMINARY MAILING INTENDED FOR TRIP PLANNING. MORE DETAILED
INFORMATION WILL BE AVAILABLE LATER, INCLUDING INFORMATION ABOUT MAUI,
GETTING THERE (AIRPORTS), ALTERNATIVE HOTEL RESERVATIONS, AND OTHER USEFUL
INFORMATION. THE CURRENT DOCUMENT LISTS THE ADVANCE PROGRAM, REGISTRATION
FORM, AND THE WESTIN MAUI HOTEL INFORMATION.

FOR MORE INFORMATION PLEASE CONTACT: mppoi@research.nj.nec.com
or fax to Dr. Eugen Schenfeld, +USA-609-951-2482

-----------------------------------------------------------------------

The third annual conference on Massively Parallel Processing Architectures
using Optical Interconnections (MPPOI'96) will be held on Oct. 27-29, 1996
in the Westin Maui Hotel, Maui, Hawaii.  The Conference will focus on the
potential for using optical interconnections in massively parallel processing
systems, and their effect on system and algorithm design. Optics offer many
benefits for interconnecting large numbers of processing elements, but may
require us to rethink how we build parallel computer systems and communication
networks, and how we write applications.  Fully exploring the capabilities of
optical interconnection networks requires an interdisciplinary effort.  It is
critical that researchers in all areas of the field are aware of each
other's work and results. The intent of MPPOI is to assemble the leading
researchers and to build towards a synergetic approach to MPP architectures,
optical interconnections, operating systems, and software development. The
conference will feature invited speakers, followed by several sessions of
submitted papers, and will conclude with a panel discussion.

The topics of interest include but are not limited to the following:

- Optical interconnections, Reconfigurable Architectures,
- Embedding and mapping of applications and algorithms,
- Packaging and layout of optical interconnections,
- Electro-optical, and opto-electronic components,
- Relative merits of optical technologies (free-space, fibers, wave guides),
- Passive optical elements, Algorithms and applications exploiting optical interconnections,
- Data distribution and partitioning,
- Characterizing parallel applications,
- Cost/performance studies.


CONFERENCE CHAIR
================

Rami Melhem, University of Pittsburgh

PROGRAM CO-CHAIRS
=================

Allan Gottlieb, NYU; Yao Li, NEC Research Institute


PUBLICITY and PUBLICATION CHAIR
===============================

Eugen Schenfeld, NEC Research Institute


PROGRAM COMMITTEE
=================

T. Ae, Hiroshima University (Japan)
D. Agrawal, North Carolina State University (USA)
K. Batcher, Kent State University (USA)
J. Bristow, Honeywell (USA)
T. Casavant, University of Iowa, Iowa City (USA)
P. Chavel, Institut d'Optique (France)
R. Chen, University of Texas (USA)
A. Chien, UIUC (USA)
T. Cloonan, AT&T Bell Labs (USA)
S. Dickey, Pace University (USA)
N. Dutta, AT&T Bell Labs (USA)
M. Eshaghian, NJIT (USA)
M. Flynn, Stanford University (USA)
L. Giles, NEC Research Institute (USA)
C. Georgiou, IBM T. J. Watson Research Center (USA)
K. Ghose, SUNY at Binghamton (USA)
J. Goodman, Stanford University (USA)
J. Goodman, University of Wisconsin (USA)
M. Goodman, Bellcore (USA)
J. Grote, USAF Wright Patterson (USA)
A. Gupta, Stanford University (USA)
S. Hinton, University of Colorado (USA)
F. Hsu, Fordham University (USA)
Y. Ichioka, Osaka University (Japan)
H. Inoue, Hitachi (Japan)
K. Jenkins, University of Southern California (USA)
L. Johnsson, University of Houston, (USA)
N. Jokerst, Georgia Tech., (USA)
H. Jordan, University of Colorado (USA)
K. Kasahara, NEC Corp. (Japan)
F. Kiamilev, University of N. Carolina, Charlotte (USA)
T. Knight, MIT (USA)
R. Kostuk, University of Arizona at Tucson (USA)
A. Krishnamoorthy, AT&T Bell Labs (USA)
S. Lee, UCSD (USA)
K. Li, Princeton University (USA)
A. Lohmann, University of Erlangen-Nurnberg (Germany)
A. Louri, University of Arizona (USA)
Y.-D. Lyuu, National Taiwan University (Taiwan)
T. Maruyama, NEC Corp. (Japan)
M. Murdocca, Rutgers University (USA)
J. Neff, University of Colorado (USA)
L. Ni, Michigan State University (USA)
A. Nowatzyk, Sun Microsystems (USA)
Y. Patt, University of Michigan (USA)
W. Paul, Universitaet des Saarlandes-Saarbruecken (Germany)
B. Pecor, Cray (USA)
T. Pinkston, USC (USA)
C. Qiao, SUNY at Buffalo (USA)
J. Reif, Duke University (USA)
J. Rowlette, AMP (USA)
H. J. Siegel, Purdue University (USA)
S. Sahni, University of Florida (USA)
A. Smith, UC Berkeley (USA)
M. Snir, IBM T. J. Watson Research Center (USA)
G. Sohi, University of Wisconsin (USA)
Q. Song, Syracuse University (USA)
T. Sterling, USRA  CESDIS (USA)
B. Tarjan, Princeton University (USA)
S. Tomita, Kyoto University (Japan)
F. Tooley, McGill University (Canada)
L. Valiant, Harvard University (USA)
O. Wada, Fujitsu Corp. (Japan)
A. Walker, Heriot-Watt University (UK)
P. Wang, George Mason University (USA)
S. Yokoyama, Hiroshima University (Japan)
Y. Zhang, Tianjin University (China)

STEERING COMMITTEE
=================

J. Goodman, Stanford University
L. Johnsson, University of Houston
S. Lee, University of California at San Diego
R. Melhem, University of Pittsburgh
E. Schenfeld, NEC Research Institute (Chair)
P. Wang, George Mason University

========================================================================
\end{verbatim}
\newpage
\begin{verbatim}


CUSTOMS/PASSPORTS:  Attendees who are not US nationals are advised to check
with a travel agent and with a US consulate on the visa and passport
requirements for entering the United States, as well as on US Customs
regulations.

================================
****** NSF TRAVEL SUPPORT ******
================================

The National Science Foundation (NSF) is considering travel support for
minority and female faculty members as well as for graduate students. This
travel award is pending final approval by the NSF and would be available to
qualified authors presenting papers at the MPPOI'96 conference. For details
on the travel support and to obtain a Request Form, please contact (by email,
fax, or phone) the Conference Chair at the above address.


========================================================================
                         MPPOI '96 ADVANCE PROGRAM
========================================================================

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
***** INVITED TALKS: 40 Minutes. REGULAR TALKS: 20 Minutes *****
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

========================================
Saturday, October 26, 1996
========================================

8:00 PM - 9:30 PM
REGISTRATION 

========================================
Sunday, October 27, 1996
========================================

7:00 AM - 8:00 AM       
CONTINENTAL BREAKFAST 
________________________________________

7:00 AM - 11:30 AM  AND  2:00 PM - 4:00 PM
CONFERENCE REGISTRATION 
________________________________________

8:00 AM - 8:15 AM
OPENING REMARKS - WELCOME
R. Melhem, University of Pittsburgh

8:15 AM - 8:30 AM
TECHNICAL PROGRAM OVERVIEW
A. Gottlieb, NYU and Yao Li, NEC Research Institute
________________________________________

8:30 AM - 10:10 AM
Session I - Comparative Studies for Optical Interconnects
Chair: K. Kasahara, NEC Corp. Japan

Optical Geometrical Transformations Used for Parallel Communication
A. W. Lohmann, Erlangen University (INVITED)

Towards an Optimal Foundation Architecture for Optoelectronic Computing 
H. M. Ozaktas, Bilkent University

Fundamental Advantages of Free-space Optical Interconnects
M. W. Haney, and M. P. Christensen, George Mason University

A Comparative Study of Cost Effective Multiplexing
Approaches in Optical Networks
C. Qiao, and Y. Mei, SUNY at Buffalo

________________________________________

10:10 AM - 10:30 AM
MID-MORNING BREAK
________________________________________

10:30 AM - 12:10 PM
Session II - Interconnection Networks and System Architectures
Chair: T. M. Pinkston, University of Southern California

Scalable Parallel Systems: Past, Present and Future (from an IBM perspective)
M. Snir, IBM T. J. Watson Research Center (INVITED)

Design of a Parallel Photonic FFT Processor
R. G. Rozier, F. E. Kiamilev, University of North Carolina 
at Charlotte, and A. V. Krishnamoorthy, Lucent Technologies

SIMPil:  An OE Integrated SIMD Architecture for
Focal Plane Processing Applications
H. H. Cat, A. Gentile, J. C. Eble, M. Lee, O. Vendier,
Y. J. Joo, D. S. Wills, M. Brooke, N. M. Jokerst, 
and A. S. Brown, Georgia Institute of Technology, and
R. Leavitt, Army Research Laboratory

Design of a 64-bit microprocessor core IC for hybrid
CMOS-SEED technology
F. E. Kiamilev, J. S. Lambirth, and R. G. Rozier,
University of North Carolina at Charlotte, and
A. V. Krishnamoorthy, Lucent Technologies

________________________________________

12:10 PM - 2:00 PM
LUNCH BREAK (ON YOUR OWN)
________________________________________

2:00 PM - 3:40 PM
Session III - WDM in MPP Systems 
Chair: P. Prucnal, Princeton University

High-performance Parallel Processors based on Star-coupled 
WDM Optical Interconnects
A. J. De Groot, R. J. Deri, R. E. Haigh, F. G. Patterson, and
S. P. DiJaili, Lawrence Livermore National Laboratory

Dynamic Alignment of Pulses in Bit-Parallel Wavelength
Links Using Shepherd Pulse in Nonlinear Fibers for
Massively Parallel Processing Computer Networks
L. Bergman, and C. Yeh, California Institute of Technology

Planar Diffraction Grating for Board-Level WDM Applications
R. A. Livingston, and R. R. Krchnavek, Washington University 

Time-Deterministic WDM Star Network for Massively
Parallel Computing in Radar Systems
M. Jonsson, A. Ahlander, and B. Svensson, Halmstad University,
M. Taveniku, and B. Svensson, Chalmers University of Technology,
and M. Taveniku, Ericsson Microwave Systems AB

The AMOEBA chip:  an opto-electronic switch for
multiprocessor networking using dense-WDM
A. V. Krishnamoorthy, J. E. Ford, K. W. Goossen, J. A. Walker,
S. P. Hui, J. E. Cunningham, W. Y. Jan, T. K. Woodward, M. C. Nuss,
R. G. Rozier, and D. A. B. Miller,  Lucent Technologies, and
F. E. Kiamilev, University of North Carolina

________________________________________

3:40 PM - 4:00 PM
AFTERNOON BREAK
________________________________________

4:00 PM - 5:40 PM
CONFERENCE PANEL I - Intra-System Optical Interconnects: 
Performance, Cost, Functionality - Pick Any Two
MODERATOR: A. G. Nowatzyk, Sun Microsystems
PANELISTS: H. Davidson, Sun Microsystems, R. Newhall, Silicon Graphics,
P. Prucnal, Princeton University, J. Sauer, University of Colorado
at Boulder, M. Snir, IBM T. J. Watson Research Center 

________________________________________

6:00 PM - 7:30 PM
GET ACQUAINTED RECEPTION

Meet some of the MPPOI participants
Food, booze, and small-talk opportunities provided

========================================
Monday, October 28, 1996
========================================

7:30 AM - 8:30 AM
CONTINENTAL BREAKFAST
________________________________________

7:30 AM - 11:30 AM  AND  2:00 PM - 4:00 PM
CONFERENCE REGISTRATION 
________________________________________

8:30 AM - 10:10 AM
Session IV - Scalable Interconnection Networks
Chair: A. Nowatzyk, Sun Microsystems

Exploiting Optical Interconnects to Eliminate Serial Bottlenecks
J. Goodman, University of Wisconsin-Madison (INVITED)

Scalable Network Architectures Using the Optical Transpose
Interconnection System (OTIS)
F. Zane, P. Marchand, R. Paturi, and S. Esener,
University of California at San Diego

A Scalable Recirculating Shuffle Network with Deflection Routing
S. P. Monacos, California Institute of Technology, and
A. A. Sawchuk, University of Southern California

Improved embeddings in POPS networks through stack-graph models
P. Berthome, Laboratoire LIP - ENS Lyon, and
A. Ferreira, CNRS Carleton University
________________________________________

10:10 AM - 10:30 AM
MID-MORNING BREAK
________________________________________

10:30 AM - 12:10 PM
Session V - Optical Networks: Architecture Issues 
Chair: T. H. Szymanski, McGill University

Optically Interconnected Electronics - Challenges and Choices
F. Tooley, McGill University (INVITED)

Optoelectronic Stochastic Processor Array:  Demonstration
of Video Rate Simulated Annealing Noise Cleaning Operation
P. Chavel, P. Lalanne, J.-C. Rodier, Institut d'Optique Orsay

High Throughput Optical Algorithms for the FFT and
sorting via Data Packing
K. Bergman, P. Prucnal, C. Read, Princeton University,
G. Burdge, University of Maryland, D. Carlson, N. Coletti, and
C. Reed, Institute for Defense Analyses, H. Jordan, and
D. Straub, University of Colorado at Boulder, R. Kannan, and
K. Lee, University of Denver, and P. Merkey, USRA CESDIS

Bit-Parallel Completely Connected Optoelectronic
Switching Networks for Massively Parallel Processing:
Principle and Optical Architecture
V. B. Fyodorov, Russian Academy of Sciences Moscow

________________________________________

12:10 PM - 2:00 PM
CONFERENCE LUNCH (PROVIDED)
________________________________________

2:00 PM - 3:40 PM
Session VI - Guided-Wave Components for Optical Interconnects
Chair: J. Bristow, Honeywell Technology Center

Flexible Optical Backplane Interconnects
M. A. Shahid and W. R. Holland, Bell Laboratories, 
Lucent Technologies, Inc. (INVITED)

1-GHz Clock Signal Distribution for Multi-processor Super Computers
S. Tang, R. R. Chen, Radiant Research Inc., T. Li, F. Li, 
M. Dubinovsky, R. T. Chen, University of Texas at Austin,
and R. Wickman, Cray Research Inc. 

Low-Loss High-Thermal-Stability Polymer Interconnects
for Low-Cost High-Performance Massively Parallel Processing
L. Eldada, C. Xu, K. M. T. Stengel, L. W. Shacklette,
R. A. Norwood, and J. T. Yardley, AlliedSignal Inc. 

Two-dimensional parallel optical data link:  Experiment
K. Kitayama, and M. Nakamura, Communication Research Laboratory
of the Ministry of Posts and Telecommunications Japan,
Y. Igasaki, Hamamatsu Photonics K.K., and K. Kaneda, Fujikura Ltd.

________________________________________

3:40 PM - 4:00 PM
AFTERNOON BREAK
________________________________________

4:00 PM - 5:40 PM
CONFERENCE PANEL II -  The Roles of University and Industry in 
Developing Optical Interconnect Systems
MODERATOR: R. K. Kostuk, University of Arizona at Tucson
PANELISTS: J. Bristow, Honeywell Technology Center, J. W. Goodman, 
Stanford University, M. Haney, George Mason University, 
S. Lee, University of California at San Diego, 
B. R. Pecor, Cray Research, and J. R. Rowlette, Amp Inc.

========================================
Tuesday, October 29, 1996
========================================

7:30 AM - 8:30 AM
CONTINENTAL BREAKFAST
________________________________________

7:30 AM - 11:30 AM  AND  2:00 PM - 4:00 PM
CONFERENCE REGISTRATION
________________________________________

8:30 AM - 10:10 AM
Session VII - Multiprocessor Networks and Systems
Chair: B. R. Pecor, Cray Research

Network of PCs as High-Performance Servers
K. Li, Princeton University (INVITED)

Design of an Efficient Shared Memory Architecture Using Hybrid 
Opto-Electronic VLSI Circuits and Space Invariant Optical Busses
P. Lukowicz, University of Karlsruhe

A Novel Interconnection Network using Semiconductor
Optical Amplifier Gate Switches for Shared Memory Multiprocessors
Y. Maeno, Y. Suemura, and N. Henmi, NEC Optoelectronics Research 
Laboratories Japan

Hierarchical Optical Ring Interconnection (HORN): A WDM-based Scalable 
Interconnection-Network for  Multiprocessors and Multicomputers
A. Louri and R. Gupta, University of Arizona at Tucson

________________________________________

10:10 AM - 10:30 AM
MID-MORNING BREAK
________________________________________

10:30 AM - 12:10 PM
Session VIII - Performance Evaluation, Modeling and Devices
Chair: J. Grote, WL/AADO at the Wright-Patterson Air Force Base

OPTOBUS I:  Performance of a 4 Gb/s Optical Interconnect
D. B. Schwartz, C. K. Y. Chun, J. Grula, S. Planer, G. Raskin, and
S. Shook, Motorola Inc.

Performance Modeling of Optical Interconnection
Technologies for Massively Parallel Processing System
J. L. Cruz-Rivera, W. S. Lacy, D. S. Wills, T. K. Gaylord,
and E. N. Glytsis, Georgia Institute of Technology

Basic Considerations of Improving Communication
Performances  for Parallel Multi-Processor System (PMPS) with
Optical Interconnection Network
Y.-M. Zhang, W.-Y. Liu, G. Zhou, H. Zhang, X.-Q. Hem and F. Hua,
Tianjin University China 

VCSEL/CMOS Smart Pixel Arrays for Free-space Optical Interconnects
J. Neff, C. Chen, T. McLaren, C.-C. Mao, A. Fedor, W. Berseth,
and Y. C. Lee, University of Colorado at Boulder

A Compact Fractal Hexagonal 36 by 36 Self-Routing Switch using
Polarization Controlled VCSEL Array Holographically Interconnected
B. Piernas, and P. Cambon, Institut Superieur d'Electronique
de Bretagne (ISEB), and L. Plouzennec, Ecole Nationale
Superieure de Telecommunications de Bretagne

________________________________________

12:10 PM - 2:00 PM
LUNCH BREAK (ON YOUR OWN)
________________________________________

2:00 PM - 3:40 PM
Session IX - Optical Backplanes
Chair: F. E. Kiamilev, University of North Carolina

Optical interconnection technologies based on VCSELs and smart pixels
T. Kurokawa,  NTT Optoelectronics Labs (INVITED)

A Multistage Optical Backplane Demonstration System
D. V. Plant, B. Robertson, M. H. Ayliffe, G. C. Boisset, D. Kabak,
R. Iyer, Y. S. Liu, D. R. Rolston, M. Venditti, and T. H. Szymanski,
McGill University, H. S. Hinton, and D. J. Goodwill, University of
Colorado at Boulder, W. M. Robertson, Middle Tennessee State University,
and M. R. Taghizadeh, Heriot-Watt University

Hybrid optoelectronic backplane bus for multiprocessor-based
computing systems
C. Zhao, and R. T. Chen, University of Texas at Austin

Reconfigurable Computing with Optical Backplanes
T. H. Szymanski, and B. Supmonchai, McGill University

________________________________________

3:40 PM - 4:00 PM
AFTERNOON BREAK
________________________________________


4:00 PM - 5:40 PM
Session X - Optical Interconnection Technology
Chair: A. V. Krishnamoorthy, Lucent Technologies, Inc.

Optimal Transmission Schedule in WDM Broadcast-and-
Select Networks with Multiple Transmitters and Receivers
S.-K. Lee, and H.-A. Choi, George Washington University, and
A. D. Oh, Uiduk University

Single Chip 8x8 Optical Interconnect Using
Micromachined Free-Space Micro-Optical Bench Technology
L. Fan, S. S. Lee, and M. C. Wu, University of California
at Los-Angeles, and H. C. Lee, and P. Grodzinski, Motorola Inc.
Phoenix Corporate Research Laboratories

A 3D optoelectronic parallel processor for smart pixel
processing units
D. Fey, A. Kurschat, B. Kasche, W. Erhard, Friedrich-Schiller
Universitat Jena

Vertical Cavity X-Modulators for Reconfigurable Optical
Interconnection and Routing
J. S. Powell, M. Morf, J. S. Harris, Jr., Stanford University, and
J. A. Trezza, Sanders Lockheed Martin Corp.

Demonstration of parallel optical data input for arrays of
PnpN optical thyristors
A. Kirk, H. Thienpont, V. Baukens, N. Debaes, A. Goulet,
M. Kuijk, G. Borghs, R. Vounckx, I. Veretennicoff, Vrije Universiteit
Brussel, and P. Heremans, IMEC

________________________________________

5:40 PM - 6:00 PM

CLOSING REMARKS: MPPOI '97 
J. Goodman, Stanford University 
________________________________________

6:20 PM - 8:00 PM
CONFERENCE DINNER (PROVIDED)
________________________________________

==============================================================================
\end{verbatim}
\newpage
\begin{verbatim}
                           Registration Form
                              MPPOI '96
                         The Westin Maui Hotel
                             Maui, Hawaii
                          October 27-29, 1996

      TO REGISTER, MAIL OR FAX THIS FORM TO: MPPOI registration, IEEE 
      Computer Society, 1730 Massachusetts Ave., N.W., Washington DC 20036-1992, 
      USA. Fax: +USA-202-728-0884. For information, call +USA-202-371-1013. 
      Sorry, no phone registration. No registrations will be accepted at IEEE 
      Computer Society Headquarters after 5:00pm on October 7; after that date, 
      registration must be processed on-site. Registration forms without 
      payment will not be accepted.

Name:----------------------------------------------------------------------
       Last                           First                        MI
Company:-------------------------------------------------------------------
Address:-------------------------------------------------------------------
City/State/Zip/Country:----------------------------------------------------
Daytime phone:----------------------- Fax number---------------------------
E-mail address:------------------------------------------------------------
IEEE/ACM/OSA/SPIE Member Number:   ------------------
Do you have any special needs: --------------------------------------------
---------------------------------------------------------------------------
Do not include my mailing address on:
-- Non-society mailing lists         -- Meeting Attendee lists

Please circle the appropriate registration fee:
Advance (before September 30, 1996)        Late (before October 7, 1996)/on site.
  Member $300                              Member $360
  Non-member $375                          Non-member $450
  Full-time student $150                   Full-time student $180

SOCIAL EVENTS: The following tickets are for sale in advance. These tickets will
               be subject to higher fees when purchased on site:
               ___Lunch  $25         ___Reception $20          ___Dinner  $50

Total enclosed:$ ________________________________
Please make all checks payable to: IEEE Computer Society. All checks must be in
US dollars drawn on US banks. Credit card charges will appear on statement as
"IEEE Computer Society Registration". Written requests for refunds must be
received by the IEEE office before October 7, 1996. Refunds are subject to a $50
processing fee. Methods of payment accepted (payment must accompany form):
-- Personal check               -- Company check        -- Traveler's check
-- American Express             -- Master Card          -- VISA
-- Diners Club                  -- Government purchase order (original)

Credit card number: -------------------------- Expiration date: ------------
Cardholder name   : --------------------------
Signature         : --------------------------

Non-student registration fees include conference attendance, proceedings,
continental breakfast, refreshments at breaks, the conference reception, one
conference lunch, and one conference dinner. Student registration fees include
the above **EXCEPT** they ***DO NOT*** include the lunch or the dinner.
===========================================================================
\end{verbatim}
\newpage
\begin{verbatim}
______________________________________________________________________

                     MPPOI '96 HOTEL RESERVATION
                        The Westin Maui Hotel
                            Maui, Hawaii
                         October 27-29, 1996

______________________________________________________________________

   PLEASE MAKE RESERVATIONS WITH THE WESTIN MAUI HOTEL AS SOON AS POSSIBLE TO
   GUARANTEE THE SPECIAL RATE (HOTEL PHONE AND FAX NUMBERS ARE GIVEN BELOW).

 * The special MPPOI '96 group rate of US $140.00 (single or double) is available
   from October 23 through November 1, 1996. All rates are subject to additional
   local and state taxes.  These rates will be available for reservations made
   BEFORE September 22, 1996. 

 * The Westin Maui Hotel: Phone: 1-808-526-4111 Fax: 1-808-661-5764
   In USA and Canada: 1-800-228-3000

 * You should ask for the "IEEE 3rd International Conference MPPOI"

The above is a special rate. Only a limited number of rooms are available
at this rate. There are other hotels in the Kaanapali Beach area of Maui.
You should check with your travel agent for prices and availability.

\end{verbatim}

\end{document}






        Eugen Schenfeld

From owner-mpi-collcomm@CS.UTK.EDU Mon Sep  2 22:12:11 1996
Return-Path: <owner-mpi-collcomm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id WAA08495; Mon, 2 Sep 1996 22:12:10 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id WAA28814; Mon, 2 Sep 1996 22:14:24 -0400
Received: from zingo.nj.nec.com (zingo.nj.nec.com [138.15.150.106]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id WAA28727; Mon, 2 Sep 1996 22:13:16 -0400
Received: from iris49 (iris49 [138.15.150.129]) by zingo.nj.nec.com (8.7.4/8.7.3) with SMTP id WAA23358; Mon, 2 Sep 1996 22:11:26 -0400 (EDT)
Received:  by iris49 (940816.SGI.8.6.9/cliff's joyful mailer #2)
	id VAA01005(iris49); Mon, 2 Sep 1996 21:36:06 -0400
Date: Mon, 2 Sep 1996 21:36:06 -0400
From: eugen@research.nj.nec.com (Eugen Schenfeld)
Message-Id: <199609030136.VAA01005@iris49>
To: mppoi@research.nj.nec.com
Subject: MPPOI'96 - Maui, Oct. 27-29, '96


===============================================================================
==========                                                          ===========
            PLEASE NOTE:  Hotel rooms are limited at the special conference
            ===========   rate. We suggest you make your reservation early
                          to be sure of getting a room.

            Room Sharing: please email to Ms. Sue Bredhoff, at:
                          sue@research.nj.nec.com with your preference
                          (non-smoking/smoking, male/female etc.) and we
                          will do our best to accommodate you.

            Conference Registration: Please register early (BEFORE September 30)
                          for the conference to avoid paying the higher fees and
                          to help us prepare better. AFTER Oct. 7, only on-site 
                          registration will be accepted. 
==========                                                          ===========
===============================================================================


\documentstyle[fullpage]{article}

\begin{document}

\begin{verbatim}
==========================================================================
                 The Third International Conference on
        MASSIVELY PARALLEL PROCESSING USING OPTICAL INTERCONNECTIONS
===========================================================================


                         The Westin Maui Hotel
                             Maui, Hawaii
                          October 27-29, 1996

                             SPONSORED BY:
          IEEE Technical Committee on Computer Architecture (TCCA)

                          IN COOPERATION WITH:
           ACM Special Interest Group on Architecture (SIGARCH)
          The International Society for Optical Engineering (SPIE)
            The IEEE Lasers and Electro-optics Society (LEOS)
                   The Optical Society of America (OSA)

                        ADDITIONAL SUPPORT PROVIDED BY:
                     NSF - The National Science Foundation 

______________________________________________________________________

PLEASE NOTE:
===========

THIS IS A PRELIMINARY MAILING INTENDED FOR TRIP PLANNING. MORE DETAILED
INFORMATION WILL BE AVAILABLE LATER, INCLUDING INFORMATION ABOUT MAUI,
GETTING THERE (AIRPORTS), ALTERNATIVE HOTEL RESERVATIONS, AND OTHER USEFUL
INFORMATION. THE CURRENT DOCUMENT LISTS THE ADVANCE PROGRAM, REGISTRATION
FORM, AND THE WESTIN MAUI HOTEL INFORMATION.

FOR MORE INFORMATION PLEASE CONTACT: mppoi@research.nj.nec.com
or fax to Dr. Eugen Schenfeld, +USA-609-951-2482

-----------------------------------------------------------------------

The third annual conference on Massively Parallel Processing Architectures
using Optical Interconnections (MPPOI'96) will be held on Oct. 27-29, 1996
in the Westin Maui Hotel, Maui, Hawaii.  The Conference will focus on the
potential for using optical interconnections in massively parallel processing
systems, and their effect on system and algorithm design. Optics offer many
benefits for interconnecting large numbers of processing elements, but may
require us to rethink how we build parallel computer systems and communication
networks, and how we write applications.  Fully exploring the capabilities of
optical interconnection networks requires an interdisciplinary effort.  It is
critical that researchers in all areas of the field are aware of each
other's work and results. The intent of MPPOI is to assemble the leading
researchers and to build towards a synergistic approach to MPP architectures,
optical interconnections, operating systems, and software development. The
conference will feature invited speakers, followed by several sessions of
submitted papers, and will conclude with a panel discussion.

The topics of interest include but are not limited to the following:

- Optical interconnections, Reconfigurable Architectures,
- Embedding and mapping of applications and algorithms,
- Packaging and layout of optical interconnections,
- Electro-optical, and opto-electronic components,
- Relative merits of optical technologies (free-space, fibers, wave guides),
- Passive optical elements,
- Algorithms and applications exploiting optical interconnections,
- Data distribution and partitioning,
- Characterizing parallel applications,
- Cost/performance studies.


CONFERENCE CHAIR
================

Rami Melhem, University of Pittsburgh

PROGRAM CO-CHAIRS
=================

Allan Gottlieb, NYU; Yao Li, NEC Research Institute


PUBLICITY and PUBLICATION CHAIR
===============================

Eugen Schenfeld, NEC Research Institute


PROGRAM COMMITTEE
=================

T. Ae, Hiroshima University (Japan)
D. Agrawal, North Carolina State University (USA)
K. Batcher, Kent State University (USA)
J. Bristow, Honeywell (USA)
T. Casavant, University of Iowa, Iowa City (USA)
P. Chavel, Institut d'Optique (France)
R. Chen, University of Texas (USA)
A. Chien, UIUC (USA)
T. Cloonan, AT&T Bell Labs (USA)
S. Dickey, Pace University (USA)
N. Dutta, AT&T Bell Labs (USA)
M. Eshaghian, NJIT (USA)
M. Flynn, Stanford University (USA)
L. Giles, NEC Research Institute (USA)
C. Georgiou, IBM T. J. Watson Research Center (USA)
K. Ghose, SUNY at Binghamton (USA)
J. Goodman, Stanford University (USA)
J. Goodman, University of Wisconsin (USA)
M. Goodman, Bellcore (USA)
J. Grote, USAF Wright Patterson (USA)
A. Gupta, Stanford University (USA)
S. Hinton, University of Colorado (USA)
F. Hsu, Fordham University (USA)
Y. Ichioka, Osaka University (Japan)
H. Inoue, Hitachi (Japan)
K. Jenkins, University of Southern California (USA)
L. Johnsson, University of Houston (USA)
N. Jokerst, Georgia Tech. (USA)
H. Jordan, University of Colorado (USA)
K. Kasahara, NEC Corp. (Japan)
F. Kiamilev, University of N. Carolina, Charlotte (USA)
T. Knight, MIT (USA)
R. Kostuk, University of Arizona at Tucson (USA)
A. Krishnamoorthy, AT&T Bell Labs (USA)
S. Lee, UCSD (USA)
K. Li, Princeton University (USA)
A. Lohmann, University of Erlangen-Nurnberg (Germany)
A. Louri, University of Arizona (USA)
Y.-D. Lyuu, National Taiwan University (Taiwan)
T. Maruyama, NEC Corp. (Japan)
M. Murdocca, Rutgers University (USA)
J. Neff, University of Colorado (USA)
L. Ni, Michigan State University (USA)
A. Nowatzyk, Sun Microsystems (USA)
Y. Patt, University of Michigan (USA)
W. Paul, Universitaet des Saarlandes-Saarbruecken (Germany)
B. Pecor, Cray (USA)
T. Pinkston, USC (USA)
C. Qiao, SUNY at Buffalo (USA)
J. Reif, Duke University (USA)
J. Rowlette, AMP (USA)
H. J. Siegel, Purdue University (USA)
S. Sahni, University of Florida (USA)
A. Smith, UC Berkeley (USA)
M. Snir, IBM T. J. Watson Research Center (USA)
G. Sohi, University of Wisconsin (USA)
Q. Song, Syracuse University (USA)
T. Sterling, USRA CESDIS (USA)
B. Tarjan, Princeton University (USA)
S. Tomita, Kyoto University (Japan)
F. Tooley, McGill University (Canada)
L. Valiant, Harvard University (USA)
O. Wada, Fujitsu Corp. (Japan)
A. Walker, Heriot-Watt University (UK)
P. Wang, George Mason University (USA)
S. Yokoyama, Hiroshima University (Japan)
Y. Zhang, Tianjin University (China)

STEERING COMMITTEE
==================

J. Goodman, Stanford University
L. Johnsson, University of Houston
S. Lee, University of California at San Diego
R. Melhem, University of Pittsburgh
E. Schenfeld, NEC Research Institute (Chair)
P. Wang, George Mason University

========================================================================
\end{verbatim}
\newpage
\begin{verbatim}


CUSTOMS/PASSPORTS:  Attendees who are not US nationals are advised to check
with a travel agent and with a US consulate regarding the visa and passport
requirements for entering the United States, as well as US Customs
regulations.

================================
****** NSF TRAVEL SUPPORT ******
================================

The National Science Foundation (NSF) is considering travel support for
minority and female faculty members as well as for graduate students. This
travel award is pending final approval by the NSF and would be available to
qualified authors presenting papers at the MPPOI'96 conference. For details
on the travel support and to obtain a Request Form, please contact (by email,
fax, or phone) the Conference Chair at the above address.


========================================================================
                         MPPOI '96 ADVANCE PROGRAM
========================================================================

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
***** INVITED TALKS: 40 Minutes. REGULAR TALKS: 20 Minutes *****
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

========================================
Saturday, October 26, 1996
========================================

8:00 PM - 9:30 PM
REGISTRATION 

========================================
Sunday, October 27, 1996
========================================

7:00 AM - 8:00 AM       
CONTINENTAL BREAKFAST 
________________________________________

7:00 AM - 11:30 AM  AND  2:00 PM - 4:00 PM
CONFERENCE REGISTRATION 
________________________________________

8:00 AM - 8:15 AM
OPENING REMARKS - WELCOME
R. Melhem, University of Pittsburgh

8:15 AM - 8:30 AM
TECHNICAL PROGRAM OVERVIEW
A. Gottlieb, NYU and Yao Li, NEC Research Institute
________________________________________

8:30 AM - 10:10 AM
Session I - Comparative Studies for Optical Interconnects
Chair: K. Kasahara, NEC Corp. Japan

Optical Geometrical Transformations Used for Parallel Communication
A. W. Lohmann, Erlangen University (INVITED)

Towards an Optimal Foundation Architecture for Optoelectronic Computing 
H. M. Ozaktas, Bilkent University

Fundamental Advantages of Free-space Optical Interconnects
M. W. Haney, and M. P. Christensen, George Mason University

A Comparative Study of Cost Effective Multiplexing
Approaches in Optical Networks
C. Qiao, and Y. Mei, SUNY at Buffalo

________________________________________

10:10 AM - 10:30 AM
MID-MORNING BREAK
________________________________________

10:30 AM - 12:10 PM
Session II - Interconnection Networks and System Architectures
Chair: T. M. Pinkston, University of Southern California

Scalable Parallel Systems: Past, Present and Future (from an IBM perspective)
M. Snir, IBM T. J. Watson Research Center (INVITED)

Design of a Parallel Photonic FFT Processor
R. G. Rozier, F. E. Kiamilev, University of North Carolina 
at Charlotte, and A. V. Krishnamoorthy, Lucent Technologies

SIMPil:  An OE Integrated SIMD Architecture for
Focal Plane Processing Applications
H. H. Cat, A. Gentile, J. C. Eble, M. Lee, O. Vendier,
Y. J. Joo, D. S. Wills, M. Brooke, N. M. Jokerst, 
and A. S. Brown, Georgia Institute of Technology, and
R. Leavitt, Army Research Laboratory

Design of a 64-bit microprocessor core IC for hybrid
CMOS-SEED technology
F. E. Kiamilev, J. S. Lambirth, and R. G. Rozier,
University of North Carolina at Charlotte, and
A. V. Krishnamoorthy, Lucent Technologies

________________________________________

12:10 PM - 2:00 PM
LUNCH BREAK (ON YOUR OWN)
________________________________________

2:00 PM - 3:40 PM
Session III - WDM in MPP Systems 
Chair: P. Prucnal, Princeton University

High-performance Parallel Processors based on Star-coupled 
WDM Optical Interconnects
A. J. De Groot, R. J. Deri, R. E. Haigh, F. G. Patterson, and
S. P. DiJaili, Lawrence Livermore National Laboratory

Dynamic Alignment of Pulses in Bit-Parallel Wavelength
Links Using Shepherd Pulse in Nonlinear Fibers for
Massively Parallel Processing Computer Networks
L. Bergman, and C. Yeh, California Institute of Technology

Planar Diffraction Grating for Board-Level WDM Applications
R. A. Livingston, and R. R. Krchnavek, Washington University 

Time-Deterministic WDM Star Network for Massively
Parallel Computing in Radar Systems
M. Jonsson, A. Ahlander, and B. Svensson, Halmstad University,
M. Taveniku, and B. Svensson, Chalmers University of Technology,
and M. Taveniku, Ericsson Microwave Systems AB

The AMOEBA chip:  an opto-electronic switch for
multiprocessor networking using dense-WDM
A. V. Krishnamoorthy, J. E. Ford, K. W. Goossen, J. A. Walker,
S. P. Hui, J. E. Cunningham, W. Y. Jan, T. K. Woodward, M. C. Nuss,
R. G. Rozier, and D. A. B. Miller,  Lucent Technologies, and
F. E. Kiamilev, University of North Carolina

________________________________________

3:40 PM - 4:00 PM
AFTERNOON BREAK
________________________________________

4:00 PM - 5:40 PM
CONFERENCE PANEL I - Intra-System Optical Interconnects: 
Performance, Cost, Functionality - Pick Any Two
MODERATOR: A. G. Nowatzyk, Sun Microsystems
PANELISTS: H. Davidson, Sun Microsystems, R. Newhall, Silicon Graphics,
P. Prucnal, Princeton University, J. Sauer, University of Colorado
at Boulder, M. Snir, IBM T. J. Watson Research Center 

________________________________________

6:00 PM - 7:30 PM
GET-ACQUAINTED RECEPTION

Meet some of the MPPOI participants
Food, booze, and small-talk opportunities provided

========================================
Monday, October 28, 1996
========================================

7:30 AM - 8:30 AM
CONTINENTAL BREAKFAST
________________________________________

7:30 AM - 11:30 AM  AND  2:00 PM - 4:00 PM
CONFERENCE REGISTRATION 
________________________________________

8:30 AM - 10:10 AM
Session IV - Scalable Interconnection Networks
Chair: A. Nowatzyk, Sun Microsystems

Exploiting Optical Interconnects to Eliminate Serial Bottlenecks
J. Goodman, University of Wisconsin-Madison (INVITED)

Scalable Network Architectures Using the Optical Transpose
Interconnection System (OTIS)
F. Zane, P. Marchand, R. Paturi, and S. Esener,
University of California at San Diego

A Scalable Recirculating Shuffle Network with Deflection Routing
S. P. Monacos, California Institute of Technology, and
A. A. Sawchuk, University of Southern California

Improved embeddings in POPS networks through stack-graph models
P. Berthome, Laboratoire LIP - EBS Lyon, and
A. Ferreira, CNRS Carleton University
________________________________________

10:10 AM - 10:30 AM
MID-MORNING BREAK
________________________________________

10:30 AM - 12:30 PM
Session V - Optical Networks: Architecture Issues 
Chair: T. H. Szymanski, McGill University

Optically Interconnected Electronics - Challenges and Choices
F. Tooley, McGill University (INVITED)

A Smart-Pixel Parallel Optoelectronic Computing System with
Free-Space Dynamic Interconnections
N. McArdle, M. Naruse, T. Komuro, H. Sakaida, M. Ishikawa,
Tokyo University, Y. Kobayashi, and H. Toyoda, Hamamatsu 
Photonics K.K. Japan

Optoelectronic Stochastic Processor Array:  Demonstration
of Video Rate Simulated Annealing Noise Cleaning Operation
P. Chavel, P. Lalanne, J.-C. Rodier, Institut d'Optique Orsay

High Throughput Optical Algorithms for the FFT and
sorting via Data Packing
K. Bergman, P. Prucnal, C. Read, Princeton University,
G. Burdge, University of Maryland, D. Carlson, N. Coletti, and
C. Reed, Institute for Defense Analyses, H. Jordan, and
D. Straub, University of Colorado at Boulder, R. Kannan, and
K. Lee, University of Denver, and P. Merkey, USRA CESDIS

Bit-Parallel Completely Connected Optoelectronic
Switching Networks for Massively Parallel Processing:
Principle and Optical Architecture
V. B. Fyodorov, Russian Academy of Sciences Moscow

________________________________________

12:30 PM - 2:00 PM
CONFERENCE LUNCH (PROVIDED)
________________________________________

2:00 PM - 3:40 PM
Session VI - Guided-Wave Components for Optical Interconnects
Chair: J. Bristow, Honeywell Technology Center

Flexible Optical Backplane Interconnects
M. A. Shahid and W. R. Holland, Bell Laboratories, 
Lucent Technologies, Inc. (INVITED)

1-GHz Clock Signal Distribution for Multi-processor Super Computers
S. Tang, R. R. Chen, Radiant Research Inc., T. Li, F. Li, 
M. Dubinovsky, R. T. Chen, University of Texas at Austin,
and R. Wickman, Cray Research Inc. 

Low-Loss High-Thermal-Stability Polymer Interconnects
for Low-Cost High-Performance Massively Parallel Processing
L. Eldada, C. Xu, K. M. T. Stengel, L. W. Shacklette,
R. A. Norwood, and J. T. Yardley, AlliedSignal Inc. 

Two-dimensional parallel optical data link:  Experiment
K. Kitayama, and M. Nakamura, Communication Research Laboratory
of the Ministry of Posts and Telecommunications Japan,
Y. Igasaki, Hamamatsu Photonics K.K., and K. Kaneda, Fujikura Ltd.

________________________________________

3:40 PM - 4:00 PM
AFTERNOON BREAK
________________________________________

4:00 PM - 5:40 PM
CONFERENCE PANEL II -  The Roles of University and Industry in 
Developing Optical Interconnect Systems
MODERATOR: R. K. Kostuk, University of Arizona at Tucson
PANELISTS: J. Bristow, Honeywell Technology Center, J. W. Goodman, 
Stanford University, M. Haney, George Mason University, 
S. Lee, University of California at San Diego, 
B. R. Pecor, Cray Research, and J. R. Rowlette, Amp Inc.

========================================
Tuesday, October 29, 1996
========================================

7:30 AM - 8:30 AM
CONTINENTAL BREAKFAST
________________________________________

7:30 AM - 11:30 AM  AND  2:00 PM - 4:00 PM
CONFERENCE REGISTRATION
________________________________________

8:30 AM - 10:10 AM
Session VII - Multiprocessor Networks and Systems
Chair: B. R. Pecor, Cray Research

Network of PCs as High-Performance Servers
K. Li, Princeton University (INVITED)

Design of an Efficient Shared Memory Architecture Using Hybrid 
Opto-Electronic VLSI Circuits and Space Invariant Optical Busses
P. Lukowicz, University of Karlsruhe

A Novel Interconnection Network using Semiconductor
Optical Amplifier Gate Switches for Shared Memory Multiprocessors
Y. Maeno, Y. Suemura, and N. Henmi, NEC Optoelectronics Research 
Laboratories Japan

Hierarchical Optical Ring Interconnection (HORN): A WDM-based Scalable 
Interconnection-Network for  Multiprocessors and Multicomputers
A. Louri and R. Gupta, University of Arizona at Tucson

________________________________________

10:10 AM - 10:30 AM
MID-MORNING BREAK
________________________________________

10:30 AM - 12:10 PM
Session VIII - Performance Evaluation, Modeling and Devices
Chair: J. Grote, WL/AADO at the Wright-Patterson Air Force Base

OPTOBUS I:  Performance of a 4 Gb/s Optical Interconnect
D. B. Schwartz, C. K. Y. Chun, J. Grula, S. Planer, G. Raskin, and
S. Shook, Motorola Inc.

Performance Modeling of Optical Interconnection
Technologies for Massively Parallel Processing System
J. L. Cruz-Rivera, W. S. Lacy, D. S. Wills, T. K. Gaylord,
and E. N. Glytsis, Georgia Institute of Technology

Basic Considerations of Improving Communication
Performances  for Parallel Multi-Processor System (PMPS) with
Optical Interconnection Network
Y.-M. Zhang, W.-Y. Liu, G. Zhou, H. Zhang, X.-Q. Hem and F. Hua,
Tianjin University China 

VCSEL/CMOS Smart Pixel Arrays for Free-space Optical Interconnects
J. Neff, C. Chen, T. McLaren, C.-C. Mao, A. Fedor, W. Berseth,
and Y. C. Lee, University of Colorado at Boulder

A Compact Fractal Hexagonal 36 by 36 Self-Routing Switch using
Polarization Controlled VCSEL Array Holographically Interconnected
B. Piernas, and P. Cambon, Institute Superieur d'Electronique
de Bretagne (ISEB), and L. Plouzennnec, Ecole Nationale
Superieure de Telecommunications de Bretagne

________________________________________

12:10 PM - 2:00 PM
LUNCH BREAK (ON YOUR OWN)
________________________________________

2:00 PM - 3:40 PM
Session IX - Optical Backplanes
Chair: F. E. Kiamilev, University of North Carolina

Optical interconnection technologies based on VCSELs and smart pixels
T. Kurokawa,  NTT Optoelectronics Labs (INVITED)

A Multistage Optical Backplane Demonstration System
D. V. Plant, B. Robertson, M. H. Ayliffe, G. C. Boisset, D. Kabak,
R. Iyer, Y. S. Liu, D. R. Rolston, M. Venditti, and T. H. Szymanski,
McGill University, H. S. Hinton, and D. J. Goodwill, University of
Colorado at Boulder, W. M. Robertson, Middle Tennessee State University,
and M. R. Taghizadeh, Heriot-Watt University

Hybrid optoelectronic backplane bus for multiprocessor-based
computing systems
C. Zhao, and R. T. Chen, University of Texas at Austin

Reconfigurable Computing with Optical Backplanes
T. H. Szymanski, and B. Supmonchai, McGill University

________________________________________

3:40 PM - 4:00 PM
AFTERNOON BREAK
________________________________________


4:00 PM - 5:40 PM
Session X - Optical Interconnection Technology
Chair: A. V. Krishnamoorthy, Lucent Technologies, Inc.

Optimal Transmission Schedule in WDM Broadcast-and-
Select Networks with Multiple Transmitters and Receivers
S.-K. Lee, and H.-A. Choi, George Washington University, and
A. D. Oh, Uiduk University

Single Chip 8x8 Optical Interconnect Using
Micromachined Free-Space Micro-Optical Bench Technology
L. Fan, S. S. Lee, and M. C. Wu, University of California
at Los-Angeles, and H. C. Lee, and P. Grodzinski, Motorola Inc.
Phoenix Corporate Research Laboratories

A 3D optoelectronic parallel processor for smart pixel
processing units
D. Fey, A. Kurschat, B. Kasche, W. Erhard, Friedrich-Schiller
Universitat Jena

Vertical Cavity X-Modulators for Reconfigurable Optical
Interconnection and Routing
J. S. Powell, M. Morf, J. S. Harris, Jr., Stanford University, and
J. A. Trezza, Sanders Lockheed Martin Corp.

Demonstration of parallel optical data input for arrays of
PnpN optical thyristors
A. Kirk, H. Thienpont, V. Baukens, N. Debaes, A. Goulet,
M. Kuijk, G. Borghs, R. Vounckx, I. Veretennicoff, Vrije Universiteit
Brussel, and P. Heremans, IMEC

________________________________________

5:40 PM - 6:00 PM

CLOSING REMARKS: MPPOI '97 
J. Goodman, Stanford University 
________________________________________

6:20 PM - 8:00 PM
CONFERENCE DINNER (PROVIDED)
________________________________________

==============================================================================
\end{verbatim}
\newpage
\begin{verbatim}
                           Registration Form
                              MPPOI '96
                         The Westin Maui Hotel
                             Maui, Hawaii
                          October 27-29, 1996

TO REGISTER, MAIL OR FAX THIS FORM TO: MPPOI Registration, IEEE 
Computer Society, 1730 Massachusetts Ave., N.W., Washington DC 20036-1992, 
USA. Fax: +USA-202-728-0884. For information, call +USA-202-371-1013 - 
sorry, no phone registration. No registrations will be accepted at IEEE 
Computer Society Headquarters after 5:00 PM on October 7; after that date,
registration must be processed on-site. Registration forms without payment
will not be accepted.

Name:----------------------------------------------------------------------
       Last                           First                        MI
Company:-------------------------------------------------------------------
Address:-------------------------------------------------------------------
City/State/Zip/Country:----------------------------------------------------
Daytime phone:----------------------- Fax number:---------------------------
E-mail address:------------------------------------------------------------
IEEE/ACM/OSA/SPIE Member Number:   ------------------
Do you have any special needs: --------------------------------------------
---------------------------------------------------------------------------
Do not include my mailing address on:
-- Non-society mailing lists         -- Meeting Attendee lists

Please circle the appropriate registration fee:
Advance (before September 30, 1996)        Late (before October 7, 1996)/on site.
  Member $300                              Member $360
  Non-member $375                          Non-member $450
  Full-time student $150                   Full-time student $180

SOCIAL EVENTS: EXTRA tickets (for spouse, etc.) are for sale in advance. These 
tickets will be subject to higher fees when purchased on site (please note: if 
you register as a non-student you do not need to buy these for yourself):
               ___Lunch  $25         ___Reception $20          ___Dinner  $50

Total enclosed:$ ________________________________
Please make all checks payable to: IEEE Computer Society. All checks must be in
US dollars drawn on US banks. Credit card charges will appear on statement as
"IEEE Computer Society Registration". Written requests for refunds must be
received by the IEEE office before October 7, 1996. Refunds are subject to a
$50 processing fee. Methods of payment accepted (payment must accompany form):
-- Personal check               -- Company check        -- Traveler's check
-- American Express             -- Master Card          -- VISA
-- Diners Club                  -- Government purchase order (original)

Credit card number: -------------------------- Expiration date: ------------
Cardholder name   : --------------------------
Signature         : --------------------------

Non-student registration fees include conference attendance, proceedings,
continental breakfast, refreshments at breaks, the conference reception, one
conference lunch, and one conference dinner. Student registration fees include
the above **EXCEPT** that they ***DO NOT*** include the lunch or the dinner.
===========================================================================
\end{verbatim}
\newpage
\begin{verbatim}
______________________________________________________________________

                     MPPOI '96 HOTEL RESERVATION
                        The Westin Maui Hotel
                            Maui, Hawaii
                         October 27-29, 1996

______________________________________________________________________

   PLEASE MAKE RESERVATIONS WITH THE WESTIN MAUI HOTEL AS SOON AS POSSIBLE TO
   GUARANTEE THE SPECIAL RATE (HOTEL PHONE AND FAX NUMBERS ARE GIVEN BELOW).

* The special MPPOI '96 group rate of US $140.00 (single or double) is available
  from October 23 through November 1, 1996. All rates are subject to additional
  local and state taxes, and a $5 resort fee. These rates will be available for
  reservations made BEFORE September 22, 1996. 

* The Westin Maui Hotel: Phone: 1-808-526-4111 Fax: 1-808-661-5764
  In USA and Canada: 1-800-228-3000

* You should ask for the "IEEE 3rd International Conference MPPOI" rate.

The above is a special rate. Only a limited number of rooms are available
at this rate. There are other hotels in the Kaanapali Beach area in Maui.
You should check with your travel agent for prices and availability.

\end{verbatim}

\end{document}






        Eugen Schenfeld

