TeraBit October 1994

Table of Contents


What's New on NSCEE's Web


Desktop Publishing at NSCEE

Mathew Ronshaugen, staff

NSCEE maintains a broad suite of utilities for desktop publishing.

Hardware

With this assortment of hardware and software, NSCEE can integrate text and image data in many formats for publication or presentation.


Internet Digital Library of USEPA's NALC Data Sets

Dr. Ding Yuan, Research Scientist



Over the past two decades, US Landsat satellites have collected a large number of images covering the surface of the Earth. In light of the potential environmental applications of this historical image data, the U.S. Environmental Protection Agency started the North American Landscape Characterization (NALC) Landsat Pathfinder project in 1992 to compile a complete Landsat MSS dataset for the U.S., Mexico, and the Central American countries. The complete NALC collection consists of 803 triplicates; each triplicate consists of three co-registered and geometrically rectified Landsat MSS scenes acquired in the early 1970s, mid-1980s, and early 1990s, respectively.

The availability of these datasets creates great potential for the global change research community as well as for regional land-use planners. For instance, land cover change maps can be derived by comparing the scenes acquired in the 1970s, 1980s, and 1990s. Since the historical images are already georeferenced and geometrically rectified, the comparison can be performed either pixel by pixel or scene by scene, and the rates of land degradation, deforestation, urbanization, and other statistics can be estimated. These statistics are useful for regional land management as well as for global change research.
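As a rough illustration of the pixel-by-pixel approach, the following C sketch counts the pixels that differ between two co-registered scenes. The file names, the raw 8-bit file format, and the change threshold are assumptions made for the sketch, not NALC conventions.

    /* Sketch: pixel-by-pixel change detection between two co-registered
     * scenes, assuming raw 8-bit image files of equal size.  File names
     * and the threshold below are illustrative assumptions. */
    #include <stdio.h>
    #include <stdlib.h>

    #define THRESHOLD 20    /* assumed brightness difference marking a change */

    int main(void)
    {
        FILE *f70 = fopen("scene_1970s.raw", "rb");  /* hypothetical 1970s scene */
        FILE *f90 = fopen("scene_1990s.raw", "rb");  /* hypothetical 1990s scene */
        long changed = 0, total = 0;
        int a, b;

        if (!f70 || !f90) {
            perror("fopen");
            return 1;
        }
        while ((a = fgetc(f70)) != EOF && (b = fgetc(f90)) != EOF) {
            if (abs(a - b) > THRESHOLD)   /* pixel differs between decades */
                changed++;
            total++;
        }
        printf("changed pixels: %ld of %ld (%.2f%%)\n",
               changed, total, 100.0 * changed / total);
        fclose(f70);
        fclose(f90);
        return 0;
    }

In practice the NALC scenes carry georeferencing information and multiple spectral bands, so a real change analysis would operate on the rectified multiband data rather than on raw byte streams.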



To promote the use of the NALC data sets for scientific and environmental research, and to facilitate the distribution of the results of research conducted at the EPA, the U.S. Environmental Protection Agency is currently sponsoring an experimental project at NSCEE on Internet distribution of the NALC data set. The project involves database management, data storage management, information completion and enhancement, Internet access and security, Internet user interface development, and data compression and transmission. NSCEE has so far received about 180 NALC triplicates from the USEPA and is actively working on the database design, data compression, and the user interface. We expect to have a prototype Internet site for this dataset in the near future.

Figure 1 shows the intensity image for MSS path 43 row 33 obtained in 1990. Figure 2 shows the DEM image for the same region.

(Images were prepared by Prasad Sadhu).


The REECO/PVM Decomposition Explained

Matt Au, Visualization Specialist

In the previous article we examined using PVM to speed up the execution of a computationally complex program being developed by David Cawlfield and Thomas Lindstrom of REECO, and George Miel of NSCEE. The difficult part is computing the upper half of a symmetric matrix of coefficients needed to solve an integral equation. The method for computing these coefficients is embarrassingly parallel: to compute the coefficient for a particular element, one needs to know only the element's position within the matrix and a constant called gamma. The decomposition of the REECO problem was therefore quite straightforward: assign a matrix element to a processor and collect the answer when it finishes.
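As a minimal sketch of that decomposition in C with PVM 3 (the message tags and helper routines below are illustrative, not the actual REECO/PVM code), the main could hand out and collect elements roughly as follows:

    /* Sketch: hand one matrix element (encoded as a single integer index)
     * to a solver, and collect the coefficient from whichever solver
     * finishes.  Tags are illustrative assumptions. */
    #include "pvm3.h"

    #define TAG_WORK   1
    #define TAG_RESULT 2

    void assign_element(int solver_tid, int index)
    {
        pvm_initsend(PvmDataDefault);
        pvm_pkint(&index, 1, 1);      /* the element's position in the matrix */
        pvm_send(solver_tid, TAG_WORK);
    }

    void collect_element(int *index, double *coeff)
    {
        pvm_recv(-1, TAG_RESULT);     /* answer from any solver */
        pvm_upkint(index, 1, 1);
        pvm_upkdouble(coeff, 1, 1);
    }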

The design of the REECO/PVM code is quite simple. There is one main process, which keeps track of:

  1. parts of the matrix waiting to be solved,
  2. parts currently being solved,
  3. parts that have been solved,
  4. machine assignments to solve specific elements,
  5. CPU time to solve an element, and
  6. real time (or wall clock time) elapsed since the start of the program.

There are multiple solver processes, which do nothing more than:

  1. compute the coefficient for a given matrix element,
  2. record CPU time elapsed, and
  3. report results back to the main process.

With PVM, the less time spent in message passing, the better. The REECO/PVM implementation achieves very low communication overhead in part through its initialization phase: each solver is asked to create a look-up table of matrix elements, so that later a single integer can be sent to a solver and used to look up the row and column of a particular matrix element. Also during initialization, gamma is read in by the main and sent to each solver.
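A solver under this layout might look roughly like the sketch below. The message tags, the array sizes, and the compute_coefficient() routine are placeholders rather than the actual REECO code; only the overall structure (build the look-up table, receive gamma, then loop on single-integer work messages) follows the description above.

    /* Sketch of a solver process under an assumed PVM 3 master/worker layout. */
    #include <time.h>
    #include "pvm3.h"

    #define TAG_INIT     0
    #define TAG_WORK     1
    #define TAG_RESULT   2
    #define MAX_ELEMENTS 100000     /* illustrative upper bound */

    extern void   build_lookup_table(int n, int *row_of, int *col_of);
    extern double compute_coefficient(int row, int col, double gamma);

    int row_of[MAX_ELEMENTS], col_of[MAX_ELEMENTS];

    int main(void)
    {
        int parent = pvm_parent();     /* tid of the main process */
        int n, index;
        double gamma, value, cpu;
        clock_t t0;

        pvm_recv(parent, TAG_INIT);    /* initialization message          */
        pvm_upkint(&n, 1, 1);          /* matrix order                    */
        pvm_upkdouble(&gamma, 1, 1);   /* the constant gamma              */
        build_lookup_table(n, row_of, col_of);

        for (;;) {
            pvm_recv(parent, TAG_WORK);
            pvm_upkint(&index, 1, 1);  /* one integer per matrix element  */
            if (index < 0)             /* assumed "no more work" sentinel */
                break;

            t0 = clock();
            value = compute_coefficient(row_of[index], col_of[index], gamma);
            cpu = (double)(clock() - t0) / CLOCKS_PER_SEC;

            pvm_initsend(PvmDataDefault);
            pvm_pkint(&index, 1, 1);
            pvm_pkdouble(&value, 1, 1);
            pvm_pkdouble(&cpu, 1, 1);  /* record CPU time elapsed          */
            pvm_send(parent, TAG_RESULT);
        }
        pvm_exit();
        return 0;
    }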

The look-up table is generated for the upper-triangular matrix, with the main diagonal appearing at the top of the table, the next diagonal following it, and so on until the table is full. This ordering helps balance the load by minimizing the wait time for computing a particular element.
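A sketch of that ordering, assuming an n-by-n upper-triangular matrix and the illustrative row_of/col_of arrays from the solver sketch above:

    /* Sketch: list the elements of an n-by-n upper-triangular matrix in
     * diagonal order: main diagonal first, then each following diagonal,
     * until all n*(n+1)/2 elements are in the table.  Index k is the
     * single integer the main sends to a solver. */
    void build_lookup_table(int n, int *row_of, int *col_of)
    {
        int d, i, k = 0;
        for (d = 0; d < n; d++)          /* d = 0 is the main diagonal */
            for (i = 0; i + d < n; i++) {
                row_of[k] = i;           /* element (i, i + d)         */
                col_of[k] = i + d;
                k++;
            }
    }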

Fault tolerance is always a concern when dealing with parallel code. Though PVM has mechanisms to sense when a solver is no longer available and to dynamically reconfigure the virtual machine, much work must be done on the programmer's part to ensure that work completed up to the point of a failure is preserved.

With the REECO/PVM code, unexpected divide-by-zero errors were caught on the Convex C220 because that hardware does not support the full IEEE 754 standard. During one of the runs, one of the machines was rebooted, causing the results for two matrix elements to be lost.

The main program deals with these errors by logging all results to a checkpoint file and by continuing to hand out work to the available solvers. Once all of the work has been given out, the main waits up to 15 minutes for the solvers to complete. Whenever a solver returns a result, the main resets the timer and waits another 15 minutes. Should the time-out limit be reached, the main prints a message naming the machine that did not return results, tells all solvers to quit, and exits. It is then the user's responsibility to restart the computation from the checkpoint file; on restart, the main determines what has not yet been computed and hands out only those elements to the solvers. If the machine running the main, or the main program itself, fails, the computation can likewise be restarted from the checkpoint. There is no time-out mechanism for the solvers; this will be addressed in future work.
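A sketch of the main's receive loop using PVM 3's timed receive is shown below. The checkpoint routine, tags, and bookkeeping functions are illustrative; only the 15-minute limit comes from the description above.

    /* Sketch of the main process's result collection with a time-out. */
    #include <stdio.h>
    #include <sys/time.h>
    #include "pvm3.h"

    #define TAG_RESULT 2

    extern void log_to_checkpoint(int index, double value);  /* hypothetical */
    extern int  work_remaining(void);                        /* hypothetical */

    void collect_results(void)
    {
        struct timeval limit = { 15 * 60, 0 };   /* 15-minute time-out */
        int index, bufid;
        double value, cpu;

        while (work_remaining()) {
            /* each call waits up to 15 minutes, so a result "resets" the timer */
            bufid = pvm_trecv(-1, TAG_RESULT, &limit);
            if (bufid <= 0) {
                /* time-out (or error): report, tell solvers to quit, exit;
                 * the user restarts later from the checkpoint file */
                fprintf(stderr, "time-out waiting for a solver result\n");
                return;
            }
            pvm_upkint(&index, 1, 1);
            pvm_upkdouble(&value, 1, 1);
            pvm_upkdouble(&cpu, 1, 1);
            log_to_checkpoint(index, value);     /* preserve completed work */
        }
    }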

The overall design of the REECO/PVM code is:

  1. to have a main program control multiple solvers,
  2. to minimize communications, and
  3. to address fault tolerance through the use of checkpoints and time-out thresholds.

In the next article we will discuss code portability issues and PVM debugging strategies.


Standard Benchmark Tests

Richard Marciano, Computational Scientist

In the previous issue (September 1994) we presented criteria with which to compare and contrast the functionality of benchmarking programs. This month we present a suite of the most commonly encountered benchmarks in high-performance computing.

Benchmarks seem to be useful first-step estimates, but when executing the same benchmark on different machines you will eventually need to address a number of questions about how to interpret and compare the results.

Finally, we will mention the very recent HINT benchmark (Hierarchical INTegration) from John Gustafson and Quinn Snell at the Ames Laboratory in Ames, Iowa (authors of the earlier SLALOM benchmark). Its measure of performance is the QUIPS (Quality Improvements Per Second). HINT tries to remove the need for measures such as Mflop/s, MIPS, or Mbyte/s. It reveals memory bandwidth and memory regimes, and it scales with memory size and with increasing numbers of processors, which lets it compare everything from PCs to the largest supercomputers. This is an interesting new approach: QUIPS will reveal, for example, that many RISC workstations depend heavily on data residing in primary or secondary cache, and that their performance drops drastically on large applications that do not cache well.

Stay tuned for more detailed information on specific benchmark suites!

For questions or suggestions on benchmarking, please contact Richard Marciano at (702) 895-4000 or marciano@nye.nscee.edu.
