MGRID: Michigan Grid Research and Infrastructure Development
References in periodicals archive:
For HYDRO2D, MGRID, and TOMCATV 100% of misses are temporal.
Figures 12(a)-(d) divide the intranest misses of APSI (25%), TOMCATV (27%), MGRID (35%), and TURB3D (35%) into self and group, conflict and capacity misses.(2) Figures 13 and 14 divide internest and program misses into self and group, capacity and conflict misses, again plotting the fraction of misses against their distance.
In line with Assertion 3, MGRID's intranest misses are 100% capacity misses, and most are at a distance of 2^16 references (remember the cache size is 32K = 2^12 words, and many of these references have locality).
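The capacity-miss classification used above can be sketched with the classical fully associative LRU stack model: a reference whose reuse (stack) distance is at least the cache capacity misses even under full associativity, so it counts as a capacity miss. The following Python helper (miss_breakdown is a hypothetical name, not from the cited paper) is a minimal sketch of that model, assuming a trace of cache-line identifiers:

```python
from collections import OrderedDict

def miss_breakdown(trace, cache_lines):
    """Classify references under a fully associative LRU model.

    Reuse distance = number of distinct lines touched since the last
    access to this line. A previously seen line with distance >=
    cache_lines is a capacity miss; a never-seen line is a cold
    (compulsory) miss; everything else hits in this idealized cache.
    """
    stack = OrderedDict()          # insertion/move order: last = MRU
    cold = capacity = hits = 0
    for line in trace:
        if line in stack:
            dist = len(stack) - list(stack).index(line) - 1
            stack.move_to_end(line)
            if dist >= cache_lines:
                capacity += 1      # misses even with full associativity
            else:
                hits += 1
        else:
            cold += 1              # first touch: compulsory miss
            stack[line] = None
    return cold, capacity, hits
```

With trace [0, 1, 2, 0] and a 2-line cache, the return access to line 0 has reuse distance 2 and is a capacity miss; with a 4-line cache the same access hits. Conflict misses would then be the real (set-associative) cache's misses not accounted for by this model.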
(These results echo the high amount of intranest reuse shown in Figure 3.) Ninety-eight percent of the replacing lines in SWIM and 75% in MGRID are reused extensively both temporally and spatially.
In SWIM, MGRID, and TOMCATV, three nests contribute between 80% and 99% of the overall misses, while most other codes exhibit a more regular distribution.
Only four programs contained significant numbers (25% to 35%) of intranest misses: APSI, TOMCATV, MGRID, and TURB3D.
SU2COR and MGRID have around 15% self-temporal misses.
Prediction Accuracy of FP Loads [%]

  Benchmark    Stride   Last-Value   Register File
  tomcatv#2     22.95         6.32            0.16
  swim#1        86.21        82.78            0.00
  swim#2        18.03        26.09            1.57
  su2cor#2      38.87        39.44           21.22
  hydro2d#      88.72        89.63           46.56
  mgrid         18.81        18.33            4.71
  Average       45.59        43.76           12.37

Prediction Accuracy of FP Computation Instructions [%]

  Benchmark    Stride   Last-Value   Register File
  tomcatv#2     21.88        15.08            2.13
  swim#1        23.15        19.88            1.79
  swim#2        15.54        21.38            0.16
  su2cor#2      16.36        16.63            9.99
  hydro2d#      89.68        89.89           42.79
  mgrid          7.11         6.87            4.04
  Average       28.95        28.28           10.15

When the prediction accuracy is measured for ALU instructions in the floating-point benchmarks, it reveals several more interesting results, as illustrated by Table V.
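The stride and last-value schemes compared in the table can be sketched in a few lines. This is a minimal, hypothetical model (class and method names are illustrative, not from the cited paper): each predictor is indexed by instruction address, a last-value predictor replays the previous result, and a stride predictor adds the last observed difference to the previous result.

```python
class LastValuePredictor:
    """Predict that an instruction produces the same value as last time."""
    def __init__(self):
        self.last = {}                 # pc -> last produced value

    def predict_and_update(self, pc, actual):
        pred = self.last.get(pc)       # None on first encounter
        self.last[pc] = actual
        return pred == actual          # was the prediction correct?

class StridePredictor:
    """Predict last value plus the last observed stride (difference)."""
    def __init__(self):
        self.last = {}                 # pc -> last produced value
        self.stride = {}               # pc -> last observed stride

    def predict_and_update(self, pc, actual):
        pred = None
        if pc in self.last:
            pred = self.last[pc] + self.stride.get(pc, 0)
            self.stride[pc] = actual - self.last[pc]
        self.last[pc] = actual
        return pred == actual
```

For a load that walks an array with constant stride (say values 10, 14, 18, 22 at one PC), the stride predictor is correct from the third dynamic instance onward, while the last-value predictor never is; for a value that repeats, both succeed. This is the behavior the table quantifies across the FP benchmarks.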
  Spec95 Integer             Spec95 Floating Point
  Benchmark       [%]        Benchmark       [%]
  go             8.83        tomcatv#1      13.93
  m88ksim       16.42        tomcatv#2      55.14
  gcc1          12.79        swim#1          8.98
  gcc2          15.44        swim#2         65.90
  compress95     6.52        su2cor#1        9.35
  li            15.22        su2cor#2       27.74
  ijpeg         36.37        hydro2d#1      16.28
  perl1          7.57        hydro2d#2      15.09
  perl2         15.27        mgrid          51.40
  vortex        30.34        average#1      12.14
  average       16.48        average#2      43.06

This table, however, may lead the reader to an incorrect conclusion about the effectiveness of the stride predictor in exploiting nonzero strides and its significance to the expected ILP improvement.
It indicates that in some benchmarks, such as m88ksim, li, and perl, the contribution of load value prediction is significant, while in others, such as compress, vortex, and mgrid, it is barely noticeable.
In the floating-point benchmarks swim and mgrid, all of the value predictors achieve similar ILP.
The stride predictor increases the ILP of swim (in the computation phase) from 47 to 104, and in the benchmark mgrid it increases the ILP from 53 to 73.