ALVINN

Acronym: ALVINN
Definition: Autonomous Land Vehicle in a Neural Network (Robotics Institute at Carnegie Mellon University; Pittsburgh, PA)
References in periodicals archive
Dean, "ALVINN: an autonomous land vehicle in a neural network", Advances in Neural Information Processing Systems, vol.
alvinn, nasa7, tomcatv, doduc, and fpppp are the programs with high register pressure, and here we see the limitations of the all-or-nothing approach to spilling that is the foundation of Chaitin-style register allocation.
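The all-or-nothing spill decision the excerpt refers to can be sketched in a few lines. This is a minimal, hypothetical illustration of the Chaitin-style simplify/spill loop, not the allocator measured in the excerpt; the interference graph, function name, and spill heuristic here are invented for the example.

```python
# Hypothetical sketch of Chaitin-style graph coloring with all-or-nothing
# spilling: when no node can be simplified, an ENTIRE live range is spilled;
# there is no partial spill or live-range splitting.

def chaitin_allocate(interference, k):
    """Color an interference graph with k registers, spilling whole nodes."""
    graph = {n: set(neigh) for n, neigh in interference.items()}
    work = {n: set(neigh) for n, neigh in interference.items()}
    stack, spilled = [], []
    while work:
        # Simplify: any node with degree < k is trivially colorable.
        node = next((n for n in work if len(work[n]) < k), None)
        if node is None:
            # Spill: pick the highest-degree node and drop its whole range.
            node = max(work, key=lambda n: len(work[n]))
            spilled.append(node)
        else:
            stack.append(node)
        for m in work[node]:
            work[m].discard(node)
        del work[node]
    # Select: pop nodes and assign the lowest color unused by any neighbor.
    colors = {}
    for n in reversed(stack):
        used = {colors[m] for m in graph[n] if m in colors}
        colors[n] = min(c for c in range(k) if c not in used)
    return colors, spilled
```

On a 4-clique with k = 3, for example, the allocator must spill exactly one node in full; the remaining three nodes then color with three distinct registers. High-register-pressure programs like those named above force this whole-range decision often, which is the limitation the excerpt points out.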
Examples are alvinn, espresso, nasa7, and tomcatv (Figure 15).
In general, alvinn, espresso, gcc, and doduc prefer a small size of the base region.
The code fragment of Figure 22 is taken from the function update_weights of alvinn. Assume there are three registers.
Time in seconds (ratio to graph coloring)

Benchmark    Usage counts      Linear scan
espresso       21.3  (6.26)     4.0 (1.18)
compress      131.7  (3.42)    43.1 (1.12)
li             13.7  (2.80)     5.4 (1.10)
alvinn         26.8  (1.15)    24.8 (1.06)
tomcatv       263.9  (4.62)    60.5 (1.06)
swim          273.6  (6.66)    44.6 (1.09)
fpppp        1039.7 (11.64)    90.8 (1.02)
wc             18.7  (4.67)     5.7 (1.43)
sort            9.8  (2.97)     3.5 (1.06)

Time in seconds (ratio to graph coloring)

Benchmark    Graph coloring    Binpacking
espresso        3.4 (1.00)      4.0 (1.18)
compress       38.5 (1.00)     42.9 (1.11)
li              4.9 (1.00)      5.1 (1.04)
alvinn         23.3 (1.00)     24.8 (1.06)
tomcatv        57.1 (1.00)     59.7 (1.05)
swim           41.1 (1.00)     44.5 (1.08)
fpppp          89.3 (1.00)     87.8 (0.98)
wc              4.0 (1.00)      4.3 (1.07)
sort            3.3 (1.00)      3.3 (1.00)

The measurements in Figure 8 and Table II indicate that linear scan makes a fair performance tradeoff.
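The linear-scan allocator compared in these measurements can be sketched briefly. This follows the classic one-pass formulation (sorted live intervals, an active list, spill the interval ending furthest in the future); the interval data and register count below are invented for illustration and are not drawn from the benchmark numbers above.

```python
# Hedged sketch of linear-scan register allocation: one pass over live
# intervals sorted by start point. The (start, end, name) intervals used in
# the example are hypothetical.

def linear_scan(intervals, num_regs):
    """Assign registers to (start, end, name) live intervals in one pass."""
    intervals = sorted(intervals, key=lambda iv: iv[0])
    free = list(range(num_regs))
    active = []                      # (end, name, reg), kept sorted by end
    assignment, spills = {}, []
    for start, end, name in intervals:
        # Expire intervals that ended before the current one starts.
        while active and active[0][0] <= start:
            _, _, reg = active.pop(0)
            free.append(reg)
        if free:
            reg = free.pop(0)
            assignment[name] = reg
            active.append((end, name, reg))
        else:
            # Spill whichever live interval ends furthest in the future.
            sp_end, sp_name, sp_reg = active[-1]
            if sp_end > end:
                spills.append(sp_name)
                del assignment[sp_name]
                assignment[name] = sp_reg
                active[-1] = (end, name, sp_reg)
            else:
                spills.append(name)
        active.sort()
    return assignment, spills
```

With two registers and four overlapping intervals a(0,10), b(1,4), c(2,6), d(3,9), the pass spills a in favor of c (a's end point is furthest away) and then spills d outright, leaving b and c in registers. The single sorted pass, with no interference graph to build or color, is where the speedups in the table come from.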
Time in seconds (ratio to graph coloring)

Benchmark    SCC-based analysis    Full liveness analysis
espresso       22.7 (6.68)           4.0 (1.18)
compress      134.4 (3.49)          43.1 (1.12)
li             14.2 (2.90)           5.4 (1.10)
alvinn         40.2 (1.73)          24.8 (1.06)
tomcatv       290.8 (5.09)          60.5 (1.06)
swim          303.5 (7.38)          44.6 (1.09)
fpppp         484.7 (5.43)          90.8 (1.02)
wc             23.2 (5.80)           5.7 (1.43)
sort           10.6 (3.21)           3.5 (1.06)

6.2 Numbering Heuristics
Time in seconds

Benchmark    Depth-first    Linear (layout)
espresso       4.0            4.0
compress      43.3           43.6
li             5.3            5.5
alvinn        24.9           25.0
tomcatv       60.9           60.4
swim          44.8           44.4
fpppp         90.8           91.1
wc             5.7            5.8
sort           3.5            3.6

6.3 Spilling Heuristics
We first illustrate how the CMEs are analyzed to drive this optimization with a loop nest from the alvinn program, a SPECfp benchmark.
5.1.1 Padding for a Loop Nest in alvinn. When we run our equation generator on the loop nests, it generates a collection of CMEs summarizing the memory behavior of each nest.
For example, in Figure 4, when using the ESP training set from alvinn to predict the branches in ear (ea), ear has an 8% miss rate.
In Figure 4, the C program training sets that provided the worst prediction came from li, wdiff, and alvinn. In Figure 5, the Fortran programs that provided the worst training set for prediction are not as concentrated as in the C programs, and they include mdljsp2, SDS, fpppp, swm256, and tomcatv.