NA Digest Thursday, October 8, 1987 Volume 87 : Issue 72

This week's Editor: Cleve Moler

Today's Topics:

     Query on Topological Closure
     Underflow in EISPACK
     Workshop on Parallel Computing
     Parallel Programming Class

----------------------------------------------------------------------


Date: Mon, 28 Sep 87 20:07:24 EDT
From: Henry Wolkowicz <hplabs!hwolkowicz%water.waterloo.edu@relay.cs.net>
To: na.moler@score.stanford.edu
Subject: Query on Topological Closure

I am interested in references to the following problem: given two
sets C and D in a separated topological vector space X, when is the
sum C+D closed?  In particular, I am interested in the case when C
and D are closed cones.  I am aware of several results; e.g., under
local compactness of one of the sets, a sufficient condition is that
the intersection of the 'recession cones' of C and -D is {0}.  Are
there any characterizations of closedness of the sum, in finite or
infinite dimensions?  Note that even the sum of two closed subspaces
of a Hilbert space need not be closed.


------------------------------

Date: Mon 5 Oct 87 20:46:14-PDT
From: Cleve Moler <na.moler@score.stanford.edu>
Subject: Underflow in EISPACK
To: na@Score.Stanford.EDU

UNDERFLOW IN EISPACK

by Eric Grosse, Bell Labs, Murray Hill, NJ
and Cleve Moler, Dana Computer, Sunnyvale, CA

We recently came across an interesting case where EISPACK fails
to give the correct eigenvalues for what appears to be an easy
matrix. The difficulties can be traced to floating point underflow.
They are most insidious in double precision arithmetic on the VAX [*]
where the "D" floating point format has an unfortunately small exponent
range. However, a scaled version of the example can fail on any machine,
including ones which fully conform to the IEEE floating point standard.
We recommend a simple change to the EISPACK top level routine "RS"
which should protect most users from the problem.

The example is due to Guenter Ziegler of the University of Augsburg
in West Germany and Andrew Odlyzko of AT&T Bell Laboratories. They
were investigating a question raised by Amir Dembo of Brown University
regarding the distribution of rank in real symmetric Hankel
matrices whose elements are +1 and -1. (A Hankel matrix is constant
along each anti-diagonal, but that's irrelevant for what concerns us
here.) One of their matrices is 9-by-9:

     -1  1  1 -1 -1  1  1 -1 -1
      1  1 -1 -1  1  1 -1 -1  1
      1 -1 -1  1  1 -1 -1  1  1
     -1 -1  1  1 -1 -1  1  1 -1
     -1  1  1 -1 -1  1  1 -1 -1
      1  1 -1 -1  1  1 -1 -1  1
      1 -1 -1  1  1 -1 -1  1 -1
     -1 -1  1  1 -1 -1  1 -1  1
     -1  1  1 -1 -1  1 -1  1  1

It is not obvious, but this matrix happens to have four eigenvalues
equal to zero, and hence its rank is five. Of the many possible
ways to compute the rank of such matrices, Ziegler and Odlyzko chose
for convenience to use the EISPACK routine RS (for Real Symmetric) and
count the number of negligible computed eigenvalues. For this example,
running on a VAX in D format double precision, EISPACK incorrectly
claimed there were five eigenvalues on the order of roundoff error.
The same program, running on almost any other computer, would produce
the correct answer, which is only four negligible eigenvalues.
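
For concreteness, here is a short driver in the spirit of that
computation (our sketch, not the program Ziegler and Odlyzko ran).
It assumes the standard EISPACK calling sequence
CALL RS(NM,N,A,W,MATZ,Z,FV1,FV2,IERR), must be linked with EISPACK,
and the roundoff-level tolerance is an arbitrary choice of ours.

      PROGRAM RANK9
C     SKETCH, NOT CODE FROM THE ARTICLE.  BUILDS THE 9-BY-9 HANKEL
C     MATRIX FROM ITS 17 ANTI-DIAGONAL VALUES, CALLS EISPACK RS WITH
C     MATZ = 0 (EIGENVALUES ONLY, SO RS CALLS TQLRAT), AND COUNTS
C     THE COMPUTED EIGENVALUES NEGLIGIBLE RELATIVE TO THE LARGEST.
      INTEGER NM, N
      PARAMETER (NM = 9, N = 9)
      DOUBLE PRECISION A(NM,N), W(N), Z(NM,N), FV1(N), FV2(N)
      DOUBLE PRECISION H(17), TOL
      INTEGER I, J, IERR, NZERO
C     ANTI-DIAGONAL VALUES:  ROW 1 GIVES H(1)..H(9), COLUMN 9
C     GIVES H(9)..H(17).
      DATA H / -1.0D0,  1.0D0,  1.0D0, -1.0D0, -1.0D0,  1.0D0,
     *          1.0D0, -1.0D0, -1.0D0,  1.0D0,  1.0D0, -1.0D0,
     *         -1.0D0,  1.0D0, -1.0D0,  1.0D0,  1.0D0 /
C     A HANKEL MATRIX IS CONSTANT ALONG ANTI-DIAGONALS, SO
C     A(I,J) = H(I+J-1).
      DO 20 J = 1, N
         DO 10 I = 1, N
            A(I,J) = H(I+J-1)
   10    CONTINUE
   20 CONTINUE
      CALL RS(NM, N, A, W, 0, Z, FV1, FV2, IERR)
      IF (IERR .NE. 0) WRITE (6,*) 'RS FAILED, IERR = ', IERR
C     THE EIGENVALUES IN W ARE IN ASCENDING ORDER.  DECLARE ONE
C     NEGLIGIBLE IF TINY RELATIVE TO THE LARGEST MAGNITUDE; THE
C     TOLERANCE IS OUR CHOICE FOR ILLUSTRATION.
      TOL = 1.0D-10 * MAX(ABS(W(1)), ABS(W(N)))
      NZERO = 0
      DO 30 I = 1, N
         IF (ABS(W(I)) .LE. TOL) NZERO = NZERO + 1
   30 CONTINUE
      WRITE (6,*) 'NEGLIGIBLE EIGENVALUES: ', NZERO
      STOP
      END

On a VAX in D format this driver reports five negligible
eigenvalues; on most other machines, or with the TQL1 substitution
described below, it reports the correct count of four.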

The problem turns out to be a catastrophic underflow in the EISPACK
routine TQLRAT. This is a square-root-free variant of the QR algorithm for
finding eigenvalues of a symmetric tridiagonal matrix. It operates on
the squares of off-diagonal elements. On the VAX, the square of
double precision roundoff error is roughly 10^(-34) and the underflow
limit is only 10^(-38). There is not enough room between those two
numbers for TQLRAT to operate properly. On other computers, similar
difficulties will occur if the example is scaled by a factor on the
order of the square root of the underflow limit. For IEEE machines,
the scale factor would have to be about 10^(-150), so such examples are
much less likely in practice, but TQLRAT might not properly handle
any which do turn up.
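
The failure mode is easy to see in isolation.  The fragment below is
an illustration of ours, not code from TQLRAT: it squares a quantity
lying below the square root of the underflow limit, which is exactly
what happens to a tiny off-diagonal element inside TQLRAT.

      PROGRAM FLUSH
C     ILLUSTRATION ONLY, NOT CODE FROM TQLRAT.
      DOUBLE PRECISION E, ESQ
C     A VALUE BELOW THE SQUARE ROOT OF THE VAX D UNDERFLOW LIMIT:
C     REPRESENTABLE ITSELF, BUT ITS SQUARE IS NOT.
      E = 1.0D-20
      ESQ = E*E
C     ON A VAX IN D FORMAT (UNDERFLOW LIMIT NEAR 1.0D-38) ESQ IS
C     FLUSHED TO ZERO, SO AN ALGORITHM WORKING WITH SQUARES SEES A
C     ZERO WHERE THE TRUE QUANTITY IS MERELY SMALL.  AN IEEE MACHINE
C     IN DOUBLE PRECISION PRINTS 1.0D-40 HERE.
      WRITE (6,*) 'E = ', E, '  E**2 = ', ESQ
      STOP
      END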

The easiest solution is to replace

CALL TQLRAT(N,W,FV2,IERR)

in EISPACK routine RS by

CALL TQL1(N,W,FV1,IERR).

Since TQL1 does not work with the squares of the tridiagonal elements,
it is much less prone to underflow trouble. No change is needed in
the case when eigenvectors are being computed, since RS then calls TQL2
rather than TQLRAT.

An alternate solution, an improved version of TQLRAT, is available from
the authors. But its range of applicability is still limited to a smaller
portion of the floating point exponent range than TQL1 and TQL2.

Ironically, advances in floating point hardware make the need for
square-root-free algorithms less pressing. On one recent chip,
the built-in square root is even slightly faster than division!

[*] VAX is a trademark of Digital Equipment Corporation.

------------------------------

Date: Tue, 6 Oct 87 10:31 EST
From: Dan Warner <WARNER@prism.clemson.edu>
Subject: Workshop on Parallel Computing
To: NA@SCORE.STANFORD.EDU

Clemson University
presents a

WORKSHOP
on
PARALLEL COMPUTING*

This workshop is designed for researchers who are familiar with
traditional scientific computing but who are not up-to-speed with
the recent developments in parallel computing. The workshop will
be largely tutorial and will be focused on providing a firm
foundation in both architecture and algorithms. Particular
emphasis will be placed on the ascendant hypercube architecture.
After completing this workshop, attendees should be well prepared
to assess the significance and applicability of current work in
this rapidly evolving area.

Date and Time:
The workshop will start at 8:00 am Wednesday, November 18, and
will run until noon on Thursday, November 19. "Hands on" demos of
the FPS T-20 hypercube will be available Thursday afternoon.

Key Topics:
The Evolution of Parallel Architectures
Measures of Parallel Performance
Optimal Communications in Hypercubes
Developments in Parallel Languages
Computational Fluid Dynamics
Multigrid on Hypercubes

Principal Lecturers:
Dr. Paul O. Frederickson, Los Alamos Scientific Laboratory
Dr. Michael W. George, Aeropropulsion Methods, Northrop Corp.
Dr. Roy P. Pargas, Computer Science, Clemson University
Dr. Dennis E. Stevenson, Computer Science, Clemson University
Dr. Daniel D. Warner, Mathematical Sciences, Clemson University

Accommodations are available at the Ramada Inn in Clemson (803)
654-7501. Mention the Workshop on Parallel Computing to obtain
the conference rate.

Attendance will be limited and advance registration is required.
The registration fee is $25.

Write to:
Department of Mathematical Sciences
Attn: Workshop on Parallel Computing
Clemson University
Clemson, SC 29634-1907

Phone:
Kay Powers (803) 656-2883 or Dept. Office (803) 656-3434

*This workshop is being funded by the Office of Naval Research
through the University Research Initiative Program, Contract No.
N00014-86-K-0693.

------------------------------

Date: Wed, 7 Oct 87 20:39:46 cdt
From: Rick Stevens <stevens@anl-mcs.ARPA>
To: na@score.stanford.edu
Subject: Parallel Programming Class

Argonne National Laboratory has set up an Advanced Computing
Research Facility (ACRF) for the study of parallel computing. It
currently features an 8-processor Alliant FX/8, a 20-processor
Encore Multimax, a 12-processor Sequent Balance 21000, and a
32-processor Intel iPSC hypercube machine. Local projects utilizing
the ACRF include investigations in parallel logic programming and
parallel linear algebra and the development of portable parallel
programming methodologies.

To encourage use of the ACRF as a national user facility,
Argonne is sponsoring various classes to familiarize potential
users with the ACRF multiprocessors and with parallel programming
in general. The next classes will be held on November 11-13, 1987,
and January 13-15, 1988. The first two days will cover parallel
programming on the Alliant, Encore, Sequent, and Intel computers;
the third day will be devoted to consideration of each attendee's
particular project. Fortran will be emphasized as the primary
programming language, although C will also be discussed.
This will be a hands-on class; at its completion participants will
have written and run a number of programs on each machine, and should
be familiar with the ACRF environment.

Those interested in the classes should contact

Teri Huml
Mathematics and Computer Science Division
Argonne National Laboratory
Argonne, IL 60439-4844

(312) 972-7163
huml@anl.mcs.arpa

There will be a $25.00 charge for this class; no financial support
for attendees is available.

------------------------------

End of NA Digest

**************************