SCENE July 1992

Table of Contents

Supercomputing at UNLV

Until July 1990, the faculty, staff, and students conducting much of UNLV's basic and applied research were operating under a severe handicap: trying to solve space-age problems with antiquated computers. In 1989, the Congress of the United States appropriated $10,000,000 for the purchase of a supercomputer system to be operated by UNLV. The National Supercomputing Center for Energy and the Environment (NSCEE) was established when a Cray Y-MP 2/216 and ancillary equipment were procured and installed with the appropriated funding; the system became operationally available in July 1990.

The figure above shows the total cumulative Cray YMP-2/216 utilization since January 1, 1992. One system billing unit (SBU) is approximately equivalent to 1 operating hour per CPU on the system. The Cray YMP-2/216 has 2 CPUs. There are approximately a total of 13,000 SBUs available to users per operating year of the system.

With the Cray Y-MP supercomputer, UNLV has become a part of a national and state initiative to establish our competitiveness and leadership in science, engineering, and technological advancement and economic development. With a powerful supercomputer we are able to theorize, analyze, simulate, and design with extreme detail to reach scientific and engineering frontiers never reached before. A supercomputer amplifies the capabilities of our researchers within the state of Nevada by many orders of magnitude. Computational problems that were not even conceivable before now have become routine practice.

Table 1: Performance of Various Computers
Using Standard Linear Equations Software (as of March 31, 1992).

Machine    Best-Effort MFLOPS    Percent of a Cray Y-MP 2/216
Macintosh II 0.0064 0.001%
IBM-PC/AT w/80287 0.012 0.002%
Apple Mac SE/30 0.1 0.017%
VAX 11/780 FPA 0.11 0.018%
Sun-3/280, 20 MHz 68881 0.11 0.018%
Apple Macintosh IIsi 0.12 0.020%
IBM PS/2-70 (20 MHz) 0.15 0.025%
PC Craft 2400/25MHz w/80387 0.17 0.028%
Apple Macintosh IIfx 0.41 0.068%
Sun-3/280 + FPA 0.46 0.076%
DEC VAX 6000/410 1.5 0.248%
NeXTCube 1.4 0.232%
Compaq Deskpro 486 w/487 1.4 0.232%
Sun SparcStation 1 1.4 0.232%
Sun SparcStation 1+ 1.8 0.298%
Sun 4/490 3.6 0.596%
Sun SparcStation 2 4.0 0.662%
Sun 4/600 MP (1 proc) 4.3 0.712%
SGI 4D/220 5.9 0.977%
IBM RS/6000-320 38.0 6.291%
Convex C-210 (1 processor) 44.0 7.285%
IBM-3090/180E 71.0 11.755%
Convex C-220 (2 processors) 87.0 14.404%
Cray YMP-2/216 (1 processor) 324.0 53.642%
Cray YMP-2/216 (2 processors) 604.0 100.000%

How Fast Is a Supercomputer?

The fundamental unit of performance measurement in scientific and engineering supercomputing is MFLOPS (millions of floating-point operations per second). To put our Cray Y-MP supercomputer in perspective, we can compare its computational power with that of other computing resources to which we may be more accustomed. Table 1 summarizes the best-effort performance of some of the computers we use today in Nevada.

The CPU speed is not the only advantage that supercomputers offer. The Cray Y-MP also offers significantly faster memory, disk, and input/output channels. The scientific and engineering software environment on the Cray is also very rich and complete. Thousands of application software programs in engineering, sciences, arts, business, economics, geosciences, data base management, graphics and visualization, and real time computing have already been developed for the Cray Y-MP system.

However, supercomputing is more than just running a dusty FORTRAN program on another (faster) computer. Supercomputers are extremely fast and somewhat complex machines, and their effective use requires education and training. The key to performance gain in supercomputing is vectorization, which is a form of parallelism; in fact, most of the cost of a supercomputer is in its "vector" hardware, so it is important to take advantage of that hardware to achieve the highest possible performance for a given application. Fortunately, the FORTRAN and C compilers do most of the work, but the code developer must design numerical algorithms that allow the hardware to be exploited. For most users all of this is transparent, and what they will see is significantly faster turnaround. Most of the application programs already take advantage of the special features of the Cray hardware.

Impact of Having a Supercomputer

Figure 1 shows the research performance enhancement achievable with a supercomputer. Computing tasks that previously were not feasible, or even conceivable, become routine with the supercomputer.

Figure 1: The Supercomputer Impact

The Cray Y-MP supercomputer puts UNLV and the state of Nevada in the elite class of national laboratories, high technology commercial industry, and leading universities that use supercomputing to advance their science and technology. The timing is right and we can make a significant impact. In fact, we have already made an impact by acquiring a Cray Y-MP supercomputer in Nevada. Now the rest is in the hands of our researchers to advance the frontiers of science and engineering. As educators, we also need to learn and teach this new technology and use it in the classroom.

A prime mission of the Supercomputer Center is user education. A number of short courses on the use of the supercomputer are planned and will be offered at regular intervals throughout the year. In addition to the courses, two full-time staff members will assist users of the facility with vectorization issues and any other problems that researchers may have.

Who Uses the Supercomputer?

During the last eight months, approximately 200 research projects, two undergraduate courses (CSC 130 and MAT 466), and a graduate course (CHE 795) have been supported by NSCEE computers. Table 2 shows the various organizations and academic departments that are actively using the NSCEE computers for research and education. From January 1, 1992 through May 27, 1992, the Cray Y-MP system logged 4,375 system hours, which is approximately 60% of the total capacity of the system for that period.

A graduate course entitled "Supercomputing" will be offered this fall. A course description can be obtained by calling 702-597-4153.

How Does One Gain Access to the Supercomputer?

There are four ways to obtain access to NSCEE computers and the Cray YMP supercomputer:

  1. Call 702-597-4153 and request a start-up grant application form. An application form will be mailed (or faxed) to you.

  2. Fax a request for application to 702-597-4156. An application form will be faxed (or mailed) to you.

  3. If you have a personal computer with a modem, dial 702-597-4154 (8 data bits, no parity, 1 stop bit, at 300, 1200, 2400, or 9600 baud); press the return key a few times; at the %nscee> prompt, type telnet; when asked to log in, type newuser; and respond to the questions asked. An account will be established for you and the password information will be mailed to you.

  4. If you have access to the Internet (i.e., your computer is networked), type telnet; when asked to log in, type newuser; and respond to the questions asked. An account will be established for you and the password information will be mailed to you.

Methods 3 and 4 are the fastest ways to obtain an account.

Table 2: NSCEE Users (as of March 31, 1992)

Categories: UCCSN; University of Nevada, Reno; Government Affiliates; Industry

To: All Scientists and Engineers Utilizing Supercomputing in Analysis and Simulation

by Hugh Patrick, Cray Research, Inc.

After receiving numerous calls inquiring about the validity of rumors and published articles being spread by representatives of IBM, I have decided to write this letter in an attempt to clarify misconceptions arising from the purchase last year of several IBM workstations by a research group at Lawrence Livermore National Laboratory (LLNL). Typical of these articles is one entitled "David Gelernter's Romance with Linda" by Mr. John Markoff, which appeared on January 19 in the New York Times.

"Last fall computer scientists at LLNL in California unplugged a Cray supercomputer and replaced it with 14 IBM RS/6000 workstations, each the size of an orange crate, and wired them together. When the scientists turned on the new network, which cost about $1 million, they discovered that it was just as powerful as the old Cray X-MP, which had cost $20 million several years before. And the scientists believe that for solving some types of scientific problems, the network can easily be made as powerful as the latest model Cray, the Y-MP C-90 which costs $20 million to $30 million."

The truth, however, differs substantially from the article. I have spoken with analysts at LLNL and others familiar with this situation and have obtained the following facts:

Until last spring, the referenced unclassified research group at LLNL was allowed access to a CRAY X-MP system belonging to a classified division. When the owners informed them that they needed to reclaim the system for an upcoming project, the group was forced to seek computational resources elsewhere. The CRAY X-MP system that they had been using continues to be used 24 hours a day, seven days a week by the classified division. There are no plans to replace this, or any installed Cray system, with any number of workstations.

The group had a budget of approximately $1 million with which to obtain replacement computational resources. This group was persuaded by the local "killer workstation" evangelist to spend the entire amount on workstations. A competition was conducted using scalar codes and the IBM RS/6000 was chosen.

The "Linda" software discussed in this article is not even used on these workstations. In fact, although they are connected by a network and do share files, no applications have yet been run by this set of workstations in a multiprocessing mode (one copy of a code executing on more than one CPU simultaneously). They are used as individual workstations and therefore the assertion that the "new network is just as powerful as the old CRAY X-MP" is analogous to saying, "this new group of 14 Cessna airplanes is just as powerful as a Boeing 757." True, all of them will fly and carry passengers, but the 757 will carry many more passengers and, for all but very short trips, will get them to their destination much sooner.

In the CRAY Y-MP EL system, Cray Research today offers the processing capability equivalent to a CRAY X-MP CPU for approximately $1 million.

Despite the claims of the article, three months ago LLNL placed an order for the largest CRAY Y-MP C-90 system offered today by Cray Research (16 CPUs, 2,048 MBytes of common memory).

Since the IBM RS/6000 is a highly regarded product in its own right in comparison with other scientific workstations, it is unfortunate and very disappointing that IBM's marketing force continues to perpetuate the falsehoods contained in articles such as the one mentioned above. A supercomputer is not a workstation; a workstation (or even a group of workstations) is not a supercomputer. They have been designed for completely different tasks and, when used effectively, can complement one another and enhance the abilities of the scientist or engineer who is conducting research which involves computational analysis or simulation.

It has been the privilege of Cray Research for the last 20 years to provide supercomputer systems to the world's premier researchers, and it is our operating objective to continue to design and build the fastest and most versatile, general-purpose supercomputers available at any point in time.

As you continue in your research on the cutting edge of technology and understanding, we plan to continue to provide the cutting edge computational tools to assist you.

New Hardware Acquired

Sun SPARCserver 690MP

by Michael Ekedahl, Senior System Analyst

The Sun SPARCserver 4/490 will be upgraded to a Sun SPARCserver 690MP sometime during July 1992.

Hardware changes include the addition of 12 gigabytes of disk and three additional disk controllers. System memory (RAM) is being increased from 32 megabytes to 256 megabytes, and the MP will have four CPUs. Two 5-gigabyte 8mm tape drives are also being added. It is estimated that these improvements will increase overall system throughput by a factor of 10.

NSCEE workstations will also benefit from the installation of an NFS accelerator. NFS throughput should increase by a factor of four.

Software changes include an operating system upgrade from Sun OS 4.1.1 to Sun OS 4.1.2MP. File systems will be relocated to balance system load across the four available disk controllers. Additionally, the Sun Online DiskSuite was purchased to provide disk striping, disk mirroring, and support for large file systems of up to one terabyte.

NSCEE will attempt to minimize downtime and user impact. However, all users should be aware of the following:

If you have specific concerns about the upgrade, mail questions to

New Software Acquired

SAS Software available on the Sun SPARCserver 690MP

by Michael Ekedahl, Senior System Analyst

The NSCEE has acquired the SAS software system for the Sun SPARCserver 690MP. Historically, SAS has been considered a statistical software package. However, recent software developments coupled with the NSCEE's ability to purchase all SAS modules make SAS a complete applications system.

Base SAS is the foundation of the SAS system. It provides the routines to access, manage, analyze, and present data. Release 6.06 of the SAS System supports indexing and data set compression to improve performance and reduce disk storage.

SAS/STAT is integrated into the SAS software system and provides a broad range of statistical capabilities including:

SAS/GRAPH performs the presentation and information graphics function within the SAS system. Using SAS/GRAPH, it is possible to create bar, pie, and 3-D block charts. Using the SAS/GRAPH map data sets, it is also possible to project graphical information onto map data. Contour plots and 3-D graphics are also supported.

SAS/ASSIST can be used to access the SAS system through a menu-driven user interface. Additionally, SAS/FSP supports full-screen data entry, editing, and querying.

Econometric and time series analysis are supported by the SAS/ETS software. The FORECAST procedure uses trend extrapolation to forecast univariate time series. Also, seasonal adjustment is supported using the standard U.S. Bureau of Census X-11 Method. Other modules include SAS/OR for project management, decision support and mathematical programming, SAS/IML, an interactive matrix language for advanced mathematical and engineering applications, and SAS/AF to create user friendly front ends to SAS software applications.

SAS, SAS/STAT, SAS/GRAPH, SAS/ASSIST, SAS/FSP, SAS/ETS, SAS/OR, and SAS/IML are registered trademarks of SAS Institute Inc., Cary, NC, USA. Questions about SAS should be directed to

MSC/NASTRAN V66B Software Now Available on the Cray Y-MP

by Sam West, Cray Research, Inc.

MSC/NASTRAN is a large scale, general purpose digital computer program which solves a wide variety of engineering problems by the finite element method. MSC/NASTRAN, a version of the NASTRAN general purpose structural analysis program, has been developed and is maintained by the MacNeal-Schwendler Corporation (MSC). NASTRAN is a registered trademark of the National Aeronautics and Space Administration (NASA).

MSC/NASTRAN is fully documented in a collection of manuals published by MacNeal-Schwendler Corporation. This manual set consists of a two volume User's Manual, a two volume Application Manual, a Demonstration Problem Manual, as well as a variety of other manuals. Additionally, a short man(1) page is provided that describes options that can be supplied to the `nastran' command.

On clark, MSC/NASTRAN is invoked with the `nastran' command; e.g., if `file.dat' is an MSC/NASTRAN input file in the user's current working directory, then:

% nastran file

would invoke the MSC/NASTRAN program with `file.dat' as input.

The following example uses a file delivered in MSC/NASTRAN's Test Program Library (TPL) and demonstrates several nastran options:

% nastran jid=am761 mem=1024k aft=23:30 prt=no

This command will execute MSC/NASTRAN with the input file am761.dat, after 11:30 p.m. of the current day, with memory utilization limited to 1024K (1024K was selected by evaluating, for a three-dimensional problem with 210 grid points, the equation in Section 7.3.2 of the Application Manual). Specifying prt=no causes the files am761.f04, am761.f06, and am761.log, which are created during the MSC/NASTRAN execution, to remain in the user's directory rather than being printed. The .f04 file contains execution summary messages, the .f06 file contains the MSC/NASTRAN output, and the .log file contains the system log messages. The contents of these files are discussed further in the Application Manual.

The files that comprise the Test Program Library are located in /msc/n66b/tpl and are publicly readable. The data files that are discussed in the Demonstration Problem Manual reside in /msc/n66b/demo and are publicly readable.

For further information, please see the MSC/NASTRAN documentation located in room TBE A-308 at NSCEE. Also see the nastran man(1) page for quick descriptions of the options to the `nastran' command. Questions about MSC/NASTRAN should be directed to

xnetlib(3.0) - an X interface to netlib

by Sushart Kumar Pijari, Graduate Research Assistant

A new X Windows-based version of netlib, called xnetlib, recently developed at the University of Tennessee and Oak Ridge National Laboratory, is now available. Unlike netlib, which uses electronic mail to process requests for software, xnetlib uses an X Window graphical user interface and a socket-based connection between the user's machine and the xnetlib server machine to process software requests. Xnetlib is available to anyone who has access to the TCP/IP Internet. Using xnetlib, one can obtain the software available at netlib directly through the netlib server at Oak Ridge National Laboratory. Most development of xnetlib has been done on an IBM RS/6000 running the X11R5 server and libraries. It has been tested on Sun, DECstation 5000, and SGI 4D/25 (running IRIX 3.3.3 with X11R4) machines, as well as on Sequent, NeXT Dimension (with CoExist), HP9000, and Convex systems. For further information on xnetlib, see the xnetlib man page or read the TUTORIAL.

Scientific Visualization and Graphics Software


by Frederick J. Haab, Graduate Research Assistant

Described as "an integrated software development environment for information processing and data visualization," KHOROS provides an extremely high-level graphics interface for quickly completing signal and image processing tasks and for easily plotting two- and three-dimensional data sets or user-defined functions. A high-level visual language allows development of user-defined programs. The internal image format, VIFF, is accompanied by several conversion programs for importing and exporting data. KHOROS also provides many application-specific data display and processing libraries, including image processing, digital signal processing, numerical analysis, and display of graphics and images. Also included is a set of interactive X Windows-based programs for colormap manipulation, animation, plotting, warping of image data, and surface visualization. A set of meta-system calls allows distributed computing and efficient data transport; users may select computing locations using the visual language.

Edge extraction using KHOROS: the original image and the extracted edges XOR'ed together.

3-D Plotting by KHOROS

Created by xprism3, a KHOROS utility

KHOROS is available on the Cray Y-MP and will soon be available on the NSCEE Silicon Graphics workstation. Example data is available. Users can direct questions to

NCSA Polyview and Isovis

by Frederick J. Haab, Graduate Research Assistant

NCSA Polyview displays an HDF Vset file of polygons or points (information on this format is available in the documentation) as a two- or three-dimensional interactive image with optional annotation, which may be written to a raster image file. It allows users to: change the display projection; draw data as points, lines, or polygons; choose constant or Gouraud-shaded polygons; load and manipulate the color map; animate a series of data sets; and view a fly-by of the data using a script file. The program is intended for researchers and engineers working with polygonal data; it assists in the analysis of simulation results and in presenting them to others. Just arrived is Polyview 3.0, a beta release, which improves the user interface. Sample data is available.

NCSA Isovis (Isosurface Visualizer) is a non-interactive batch utility that allows users to easily create three-dimensional animations of time-dependent data. Special features include: control of geometry orientation, with independent x, y, z translation, rotation, and scaling; control of the light source (location and color); specification of material properties; saving of hardware-rendered images to disk for three-dimensional rendered animation sequences (including full hidden-surface removal); and saving of polygonal isosurfaces in either NCSA HDF Vset format or ASCII format. Sample data is available.

Both Polyview and Isovis are now available to NSCEE Silicon Graphics Workstation users. Questions about either Polyview or Isovis can be directed to


MPGS (MultiPurpose Graphic System)

by Sam West, Cray Research, Inc.

MPGS (MultiPurpose Graphic System) is an application for visualizing analyses from many scientific and engineering applications including finite element, finite difference, fluid dynamic, chemistry, and combustion codes. It is distributed between a CRI system running UNICOS and various graphics workstations (including SGI and IBM RS/6000) across a TCP/IP network. The various tasks are distributed transparently to the user to maximize productivity by leveraging the strengths of both the CRI system and the graphics workstation for large analysis problems. This distribution of tasks makes it practical to do many kinds of visual analyses interactively that are too unproductive otherwise.

MPGS performs transformations, hidden-surface and hidden-line removal, contouring, vector and particle-trace display, and clipping, and also contains a false color map. Transient data can be used to produce time-dependent visualizations. There is extensive animation capability with easy connections to video. Online help is available and the user interface is MOTIF-based.

MPGS is a product of the Industry, Science & Technology group of Cray Research, Inc.

For further information on MPGS please see the MPGS Reference Manual, CRI Publication APR-5525, and the MPGS User and Command Language Reference Manual, CRI Publication APR-5527. These manuals are available in room TBE A-308 at NSCEE. Questions can be directed to:

ORACLE Database Management System available on the Sun MP

by Michael Ekedahl, Senior System Analyst

The ORACLE Database Management System has been installed at the NSCEE and is currently being tested and configured. We expect the software to be available for production use during August 1992.

Included in the NSCEE ORACLE installation are SQL*Forms 3.0 and SQL*ReportWriter. These tools allow the creation of database applications without writing procedural programs in C or FORTRAN.

For users requiring the additional capabilities of a procedural programming language, the Pro*C preprocessor provides access to the ORACLE Database Management System, SQL*Forms 3.0, and SQL*ReportWriter without giving up the power and control provided by the C programming language.

For more information about ORACLE or to obtain ORACLE access, send mail to

Newuser Available on Nye

by Katharine Macke, Graduate Student Assistant

A software application called `newuser' has been installed on nye. Newuser will allow a person to request a start-up account on the Sun 4/490 and Cray YMP-2 systems using electronic mail as opposed to coming into the center or setting up an account via the regular mail.

Users on the local system can simply type newuser:

% login: newuser

Users not on the local system must first telnet to nye:

% login: newuser

Newuser will ask several questions including name, address, college or other affiliation, e-mail address, telephone number, project title, and a brief description of the project and why one specifically needs to use the Cray. Project title and description are required.

The request is then turned over to the Director of NSCEE for approval. Upon approval, NSCEE will mail the new user a security agreement, which must be signed and returned. Once it is received, the account is opened and made ready for use.

Research Reports

Water Flow In the Vessels of Plants

by Paul J. Schulte & Arthur L. Cattle Jr.
Department of Biological Science
University of Nevada, Las Vegas

The ability of plants to conduct water from the soil to the leaves is an important research topic in biology. Plants have evolved a variety of specialized cells with diverse internal structures that function in this transport of water. The relationship between the structure of water transport cells and their ability to transport water has been the subject of numerous studies. Most, however, have been limited by the lack of suitable mathematical models for fluid flow through the intricate obstructions found in many types of conducting cells. The current generation of supercomputers and software systems for fluid dynamical analysis have made feasible a numerical simulation approach to studying the flow of water through plant cells.

One particular type of plant cell that is specialized in water conduction is called a vessel. More correctly, these cells are called vessel members because the vessels, which may be over a meter in length, are made up of individual cells roughly one mm long. For many plant species, the junction between each vessel member involves a structure called a perforation plate that appears as a series of parallel bars crossing the cell. A central question in studies of plant water transport and in the evolution of water conducting cells is: Are these perforation plates significant obstructions to water flow? The physical characteristics of these perforation plates are highly variable within the plant kingdom. How is flow affected by

  1. the number of bars
  2. the thickness, width, and height of the bars, and
  3. the angle (and length) of this plate with respect to the axis of the vessel?

Figure 1. Finite element mesh of the model for a plant vessel (left)
and for the perforation plate (right) which is inserted into the vessel.

This research project was designed to apply the Fluid Dynamics Analysis Package (FIDAP, Fluid Dynamics International, Inc.) to the above questions. The FIDAP program uses the finite element method to solve the relevant partial differential equations for, in our case, steady-state flow of an incompressible fluid. A special format for constructing the finite element mesh for an actual plant vessel with a perforation plate was developed (Figure 1). This format enabled us to construct the plate separately from the remainder of the cell and then place it within the cell. We can construct plates of different thickness and at different angles relative to the vessel axis.

Figure 2. Velocity vector plot for water flow through the pores of the perforation plate. The modeled cell is 30 µm in diameter.
Figure 3. Velocity contour plot in the plane of the perforation plate.

The solutions to this model show that water is accelerated through the pores of the plate (Figure 2). Viewed within the plane of the perforation plate, each pore has its own velocity peak and an independent profile (Figure 3). For the central pore the peak velocity is at the center, but it is offset away from the vessel wall for the other pores in the plate. The overall effect of the plate on hydraulic resistance to water flow through the vessel was estimated from the pressure gradient along the axis of the vessel. Assuming a peak water flow rate of 3 mm/s, the pressure drop along a typical vessel member for this species (40 µm diameter, 810 µm length) would be about 33 Pa, of which 8 Pa can be attributed to the perforation plate. Thus, results to date suggest that perforation plates may account for up to 25% of the total resistance to water flow along a plant vessel. This conclusion is of prime interest to plant physiologists and plant anatomists, who have been considering the relative significance of these perforation plates in obstructing the flow of water through the water-conducting tissues of plants. Future simulations with this model will include testing the effect of the perforation plate on flow at several different plate angles and for plates of different thickness. Such simulations will help us to understand the role of these structures in the hydraulic resistance of plant cells.

Topical Reports

IMSL Source

by David A. Ence, Student Assistant

IMSLX is a utility that enables one to obtain a copy of the source code of a desired IMSL library routine. There are two options that can be used with IMSLX, `t' and `p'.

When using the `t' option, one must specify a library name; the library names are imsl_sfun.a, imsl_math.a, and imsl_stat.a. When using the `p' option, one must specify first the desired library and then the desired function.


i) imslx p imsl_math.a z9orc.f

This places a copy of the source for the function z9orc.f, from the math library, in the current working directory.

ii) imslx t imsl_stat.a

This gives a listing of the functions in the imsl_stat.a library.

For more information type man imslx.

COS to UNICOS Migration Tools

by Sam West, Cray Research, Inc.

Migration Tools 6.0 has been installed on clark. The Migration Tools are a collection of programs and libraries that aid in the re-targeting of user codes and data from Cray's COS operating system to UNICOS. Version 6.0 is the latest revision of that collection.

The Migration Tools consist of, among other things, the following items:

For further information about the Migration Tools, please see the User's Guide for COS-to-UNICOS Migration, CRI publication SG-7030, which is available in room TBE A-308 at NSCEE.


Perftrace: Focusing the Hardware Performance Monitor

by Sam West, Cray Research, Inc.

In the last issue of SCENE, we discussed the hardware performance monitor and the hpm(1) command. That discussion detailed some aspects of the hardware circuitry that comprise the hardware performance monitor and some of the limitations imposed on the operation of the hpm(1) command. To refresh your memory, the limitations on hpm(1) are:

This month, we will discuss another tool, perftrace, that allows the savvy user to overcome the second of those limitations and focus HPM attention on an individual program unit.

perftrace comes in two parts:

perftrace operates by using the flowtrace option when compiling a user's program to interpose flowtrace calls before and after user subroutine calls. The user's program is then linked with the perftrace library (/lib/libperf.a) instead of the flowtrace routines (please see the flowtrace man page for further information). When the user's program is executed, the perftrace library causes raw data to be written to a file in the user's current working directory; that file is then analyzed using the perfview(1) command.

cf77 example:

$ cf77 -F -l perf myprog.f   # compile with Flowtrace on and link the perftrace library
$ ./a.out   # execute the program with the default HPM group (group 0)
$ perfview -LBcM > perfview.out   # analyze and report the raw data

Alternatively, the following set of commands:

	$ env PERF_GROUP=0 PERF_DATA=group0.raw ./a.out 
	$ env PERF_GROUP=3 PERF_DATA=group3.raw ./a.out 
	$ cat group[03].raw | perfview -LBcM - > perfview.out

will gather data from HPM groups 0 and 3 for your program's execution, concatenate that data and produce a cumulative report for both datasets.

To limit data gathering and reporting to a single program unit, e.g. sub.f, within myprog, the following could be used:

$ cf77 -c myprog.f   # this line produces myprog.o
$ cf77 -F -l perf sub.f myprog.o   # only the subroutine(s) contained in the file sub.f will generate perftrace data.
$ ./a.out
$ perfview -LBcM > perfview.out

As implied in the above examples, there are a number of environment variables that may be manipulated when invoking a program that has been compiled with flowtrace on and linked with the perftrace library. These variables may be set before invoking your program (see sh(1) and csh(1)) or the env(1) command may be used, as demonstrated above, to change the values of selected variables for the execution of your program only.

For further information on perftrace and perfview(1) please see the appropriate chapter in the UNICOS Performance Utilities Reference Manual, CRI publication SR-2040.

General Information

Dialing-In via Modem

For users with terminals, IBM PCs, Apple computers, and other microcomputers, connection to the supercomputing center can be accomplished with a modem over a telephone line or through the campus network. The modem and communications software must be set for no parity, 8 bits per character, 1 stop bit, and 1200, 2400, or 9600 baud.

To access the NSCEE Center, you initially dial in to one of our modems.

The dial-in phone numbers are given below:

597-4154 (300 - 9600 baud modems)
597-4155 (1200 or 2400 baud modems)

When your modem responds with CONNECT 1200, CONNECT 2400, or CONNECT 9600, slowly hit the [enter] key a few times.

You will soon be connected and receive the prompt:


At this point you will type in the command:

	rlogin hostname

to access the systems on the NSCEE Internet. The host names are given below.

	Example: rlogin

The following list contains the desired host names for the computers in the Center and their IP numbers. All would fall under the domain name of ""

Computer Host and Domain Name IP Address

Cray Y-MP 2/216 or
SUN 4/490 or
