(1) Fielding and Anderson (1983). Read through the article and, in a personal journal
entry, respond to the following:
a. According to Fielding and Anderson, (i) what are the three types of statistics
available from “Section 15” data, and (ii) what are the three types of
measures that can be calculated from them, and what does each type measure?
b. Based on your experience from NTD and the class discussions, is the
performance-evaluation method the authors suggest still feasible? Is the
categorization of statistics (i.e., data collected), measures, and performance
components (or concepts) still representative of the NTD data collected and
measures reported?
Reference: Fielding, G. J., & Anderson, S. C. (1983). Public transit performance
evaluation: Application to Section 15 data. Transportation Research Record, 947.

Transportation Research Record 947
Public Transit Performance Evaluation: Application to Section 15 Data
GORDON J. FIELDING AND SHIRLEY C. ANDERSON
Performance indicators are quantitative measures that enable managers and
policymakers to monitor the current position of an agency and outline strategies to improve performance. Because public services have many different
dimensions of performance, a large number of performance indicators are
normally used. In this paper a conceptual model is used to help select a few
performance indicators that represent all the important performance concepts. Data were obtained from a national sample of 311 urban bus transit
systems in the first year that data were reported under Section 15 of the
Urban Mass Transportation Act of 1964, as amended. The steps in the
performance-evaluation procedure involve defining a conceptual model of
performance and designing a balanced set of performance indicators that
represent all performance concepts. Factor analysis is then used to select
the indicators that best represent all dimensions of performance. This
small, representative set of performance indicators is used to analyze performance and to establish peer-group rankings.
The results of applying a performance-evaluation procedure for publicly owned
enterprises to a national sample of 311 urban bus transit systems are presented
in this paper. The research was sponsored by UMTA to test the usefulness for
performance analysis of a new data bank resulting from the Section 15 reporting
requirements (1). Section 15 of the Urban Mass Transportation Act of 1964, as
amended, has improved the comparability and coverage of transit statistics by
requiring a uniform set of statistics from all urban transit applicants for
operating assistance. The first year of statistics reported under Section 15
[fiscal year (FY) 1978-1979] has been used in this study (2).

Federal and state government sponsors of local public services often attempt to
evaluate the relative performance of service agencies to account for the use of
public subsidy funds and to promote efficient and effective service delivery.
Performance measurement is also important to management because performance
indicators are the quantitative measures that enable managers and policymakers
to determine the current position of an agency and outline strategies to
improve performance.

But public services typically have many different dimensions of performance,
giving rise to large numbers of performance indicators. In this paper a
conceptual model is used to help select a few performance indicators that
represent all the important performance concepts. This method reduces both
data collection and analysis requirements.
PERFORMANCE-EVALUATION METHOD

The object of any peer-group performance-evaluation process, such as the one to
be described, is to select from that group the systems that have extremely high
or extremely low performance. But two significant problems associated with
performance evaluation must be considered. The first is the methodological
problem of devising a complete and workable model of performance by
categorizing performance objectives into concepts and using uniform,
quantifiable measures of each concept.

Evaluation of transit performance and the development of performance indicators
is not new. In 1958 the National Committee on Urban Transportation specified
service standards, objectives, and measurement techniques (3). Many of the
measures and standards used by transit today were documented in this study.
Because of the limited availability of transit statistics, early applications
of performance evaluation relied on regional data. Adaptation of the
theoretical work on performance evaluation to transit in California was
accomplished by Fielding et al. (4) in 1977. The performance concepts developed
by Fielding et al. are currently used by California, Florida, Iowa, Michigan,
and Pennsylvania to develop performance monitoring and reporting requirements
(5). The Fielding conceptual model has been used in this study. The 12
performance concepts selected are given in Table 1 as the group headings of 60
performance measures.

A second problem associated with the use of performance indicators is the
amount of data that needs to be collected. If many indicators are desired, then
much data is required, and the output of the indicators is confusing and time
consuming to analyze. In this paper current Section 15 data are used to analyze
performance by finding a small, representative set from the 60 performance
measures given in Table 1. By using factor analysis on a set of performance
indicators that are numerically balanced across the different performance
concepts, an optimal number of independent dimensions of performance can be
determined. The most representative performance measures for each factor
dimension will constitute a small set that covers the dimensions of the much
larger set.

Factor analysis is a general method for identifying and analyzing patterns of
variation in a data set. In this method linear combinations of the variables in
the data set (called factors) are computed, which are then used to (a)
summarize the variance in the original variables, and (b) organize the original
variables into subgroups. These factors, which are uncorrelated with one
another, can be used as a reduced set of summary performance indicators (6).
Alternatively, as done in this paper, the factors can be used to identify a
reduced set of performance indicators (those most strongly correlated with each
of the factors), whose standardized values can then be used to rank the
performance of the systems.

The following alternate approaches have been explored in previous research (2)
to solve the data problem and to rank transit systems by performance:

1. Use of many performance indicators and a simple method of analyzing the
averages and totals of the indicators;
2. Use of all the performance measures in factor analysis, but a reduction of
the amount of output to be analyzed to the sum of the factor scores; and
3. Use of a conceptual model and factor analysis to select a few performance
indicators that represent all the important performance concepts; both data
collection and analysis requirements are reduced by this method.
Table 1. Performance measures by concept.

Cost-efficiency measures

I. Labor efficiency
  Vehicle hours per employee (TVH/EMP)
  Revenue vehicle hours per operating employee hour (RVH/OEMP)
  Vehicle miles per employee (TVM/EMP)
  Peak vehicles per executive, professional, and supervisory employees (PVEH/ADM)
  Peak vehicles per operating personnel (PVEH/OP)
  Peak vehicles per maintenance, support, and servicing personnel (PVEH/MNT)

II. Vehicle efficiency
  Vehicle hours per active vehicle (TVH/AVEH)
  Vehicle hours per peak vehicle requirement (TVH/PVEH)
  Vehicle miles per active vehicle (TVM/AVEH)
  Vehicle miles per peak vehicle requirement (TVM/PVEH)
  Revenue vehicle miles per vehicle mile (RVM/TVM)
  Revenue capacity miles per vehicle mile (RCM/TVM) [a]

III. Fuel efficiency
  Revenue vehicle miles per gallon diesel (RVM/FUEL)
  Vehicle miles (bus) per gallon diesel (TVM/FUEL)
  Revenue capacity miles (bus) per gallon diesel (RCM/FUEL) [a]

IV. Maintenance efficiency
  Total vehicles per maintenance expense (TVEH/MEXP)
  Vehicle miles per maintenance employee (TVM/MNT)
  1,000,000 vehicle miles per roadcall (TVM/RCAL)

V. Output per dollar cost
  Revenue vehicle hours per operating expense (RVH/OEXP)
  Vehicle miles per operating expense (TVM/OEXP)
  Revenue capacity miles per operating expense (RCM/OEXP) [a]
  Revenue vehicle hours per total labor and fringe expenses (RVH/TWG)
  Revenue vehicle hours per operations labor and fringe expenses (RVH/OWAG)
  Revenue vehicle hours per vehicle maintenance labor and fringe expenses (RVH/VMWG)
  Revenue vehicle hours per administrative labor and fringe expenses (RVH/ADWG)

Service-effectiveness measures

VI. Utilization of service
  Passenger trips per revenue vehicle hour (TPAS/RVH)
  Passenger trips per revenue vehicle mile (TPAS/RVM)
  Passenger trips per peak vehicle (TPAS/PVH)
  Passenger miles per vehicle capacity mile (PASM/RCM) [a]
  Passenger miles per passenger (PASM/TPS) [a]

VII. Social effectiveness
  Revenue vehicle hours per service area population (RVH/POP)
  Passengers per service area population (TPAS/POP)
  Passengers per elderly population (TPAS/ELD)
  Passengers per automobileless population (TPAS/AUT)
  Frequency of service (FREQ) [a]

VIII. Operating safety
  1,000,000 vehicle miles per accident (TVM/ACC)
  Revenue vehicle hours per accident (RVH/ACC)

IX. Revenue generation
  Passenger revenue per peak vehicle (REV/PVEH)
  Passenger revenue per revenue vehicle hour (REV/RVH)
  Operating revenue per revenue vehicle hour (TREV/RVH)
  Passenger revenue per passenger (REV/TPAS)
  Passenger revenue per vehicle capacity mile (REV/RCM) [a]

X. Public assistance
  Revenue vehicle hours per local capital and operating assistance (RVH/LSUB) [a]
  Revenue vehicle hours per state capital and operating assistance (RVH/SSUB) [a]
  Revenue vehicle hours per total operating assistance (RVH/OSUB)
  Revenue vehicle hours per total capital and operating assistance (RVH/TSUB)
  Passengers per local operating assistance (TPAS/LOA) [a]
  Passengers per total capital and operating assistance (PAS/TSUB)
  Passenger revenue per total capital and operating assistance (REV/TSUB)
  Urban area population per total operating assistance (POP/OSUB)
  Urban area population per total capital and operating assistance (POP/TSUB)
  Passenger revenue per total operating assistance (REV/OSUB)
  Passengers per total operating assistance (PAS/OSUB)

Cost-effectiveness measures

XI. Service consumption per expense
  Passengers per operating expense (PAS/OEXP)
  Passenger miles per operating expense (PASM/OEX) [a]
  Passengers per total labor and fringe benefits (PAS/TWAG)
  Passengers per gallon diesel fuel (PAS/FUEL)
  Passenger miles per total expense (PASM/TEX) [a]

XII. Revenue generation per expense
  Ratio of operating revenue to operating expense (REV/OEXP)
  Ratio of total revenue to total expense (REV/TEX)

Note: Definitions for the statistics are provided in the Urban Mass Transportation Industry Uniform System of Accounts and Records and Reporting System, January 1977, Volume II.
[a] Dropped because of missing values or inconsistent data.
The results of the three approaches to performance ranking were compared by
using historical data on 57 U.S. bus transit systems. A Wilcoxon matched-pairs
signed-rank test indicated that all three rankings were essentially equivalent
(at the 0.05 level of significance). It was also determined that the sum of
individual factor scores (number 2) was slightly less accurate in representing
the total set of indicators than a small set of one indicator per performance
concept (number 3). Thus in this study the method of factor analysis and the
selection of a small set of indicators to represent all dimensions of transit
performance were chosen.
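The equivalence check described here is a standard paired nonparametric test. As an illustration only (the 57-system rankings are not reproduced in this preview, so the numbers below are hypothetical placeholders), a comparison of two rankings of the same systems could be run with SciPy:

```python
# Hypothetical illustration: compare two peer-group rankings of the same ten
# systems with a Wilcoxon matched-pairs signed-rank test. scipy.stats.wilcoxon
# pairs the observations by position and tests whether the median difference
# between the paired ranks is zero.
from scipy.stats import wilcoxon

ranking_full_indicator_set = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]   # placeholder ranks
ranking_reduced_set        = [2, 1, 3, 5, 4, 7, 6, 9, 8, 10]   # placeholder ranks

statistic, p_value = wilcoxon(ranking_full_indicator_set, ranking_reduced_set)

# A p-value above 0.05 would mean the two rankings cannot be distinguished at
# the 5 percent level, which is the sense in which the authors call their
# three approaches "essentially equivalent."
print(statistic, p_value)
```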
APPLICATION TO SECTION 15 DATA SET
The steps in the performance-evaluation procedure are as follows:

1. Define a conceptual model of performance for classifying the individual
performance measures, as in Table 1;
2. Balance the number of performance indicators representing each concept in
accordance with a desired conceptual weighting scheme;
3. Use factor analysis to find the set of factors that represent all the
different orthogonal dimensions in the performance-concept space;
4. Select variables that have a high correlation with each independent factor
to represent each performance dimension; and
5. Use the small representative set as performance indicators to analyze
performance and to establish peer-group rankings (see the sketch that follows
the list).
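A minimal sketch of steps 3-5 is given below. This is an illustration added for this reading, not the authors' original computation: it assumes a pandas DataFrame named measures with one row per transit system and one column per performance indicator, uses scikit-learn's FactorAnalysis with varimax rotation, applies the eigenvalue-greater-than-1 rule mentioned later in the paper to choose the number of factors, and ranks systems on the mean standardized value of the retained indicators (the paper reports peer-group rankings but does not spell out this exact arithmetic).

```python
# Sketch of steps 3-5 under the assumptions stated above. `measures` is a
# hypothetical pandas DataFrame: rows = transit systems, columns = indicators.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

def rank_by_representative_indicators(measures: pd.DataFrame) -> pd.DataFrame:
    # Standardize every indicator so that scales are comparable (z-scores).
    z = (measures - measures.mean()) / measures.std(ddof=0)

    # Step 3: choose the number of factors with the eigenvalue > 1 rule on the
    # correlation matrix, then fit a varimax-rotated factor model.
    eigenvalues = np.linalg.eigvalsh(np.corrcoef(z.values, rowvar=False))
    n_factors = max(1, int((eigenvalues > 1.0).sum()))
    fa = FactorAnalysis(n_components=n_factors, rotation="varimax").fit(z.values)
    loadings = pd.DataFrame(fa.components_.T, index=measures.columns)

    # Step 4: keep, for each factor, the indicator with the largest absolute
    # loading (duplicates collapse to a single entry).
    representative = list(dict.fromkeys(loadings.abs().idxmax(axis=0)))

    # Step 5: score each system as the mean standardized value of the retained
    # indicators and rank the systems on that score.
    score = z[representative].mean(axis=1)
    return pd.DataFrame({"score": score, "rank": score.rank(ascending=False)})
```

Applied to a balanced indicator set such as the 32 variables described below, a routine like this would return one representative indicator per factor dimension and a peer-group rank for each system.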
Candidate Performance Measures
As shown in Figure 1, three types of statistics are available from Section 15
data and census reports to represent transit performance concepts: service
input, service output, and service consumption statistics. Together they can be
used to monitor both the costs of producing service and its utilization. The
three categories of statistics yield three types of performance measures: cost
efficiency, service effectiveness, and cost-effectiveness. Cost efficiency
measures the resources expended to produce transit service (e.g., labor cost
per hour); service effectiveness measures the extent to which the service
provided is used (e.g., passengers per hour); and cost-effectiveness measures
the service used against the resources expended (e.g., passengers per dollar of
operating cost).
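The relationship among the three statistic types and the three measure families is just a set of ratios. The following fragment is an illustration added for this reading (the field names are hypothetical, not Section 15 account codes); it computes one example measure from each family for a single system-year.

```python
# Illustration only: one example measure from each family, built from
# hypothetical service-input, service-output, and service-consumption figures.
from dataclasses import dataclass

@dataclass
class SystemYear:
    operating_expense: float      # service input (dollars)
    revenue_vehicle_hours: float  # service output
    passengers: float             # service consumption

def cost_efficiency(s: SystemYear) -> float:
    # Output produced per unit of input, e.g. revenue vehicle hours per dollar.
    return s.revenue_vehicle_hours / s.operating_expense

def service_effectiveness(s: SystemYear) -> float:
    # Consumption of the output provided, e.g. passengers per revenue vehicle hour.
    return s.passengers / s.revenue_vehicle_hours

def cost_effectiveness(s: SystemYear) -> float:
    # Consumption per unit of input, e.g. passengers per dollar of operating expense.
    return s.passengers / s.operating_expense
```

Every measure in Table 1 is a ratio of this general kind; only the numerator and denominator statistics change.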
A wide range of transit performance measures is possible. As illustrated in
Table 1, 60 performance measures are listed and grouped into 12 concepts that
can be calculated by using Section 15 data. There are other performance
measures, but a sufficient number have been listed to demonstrate their use in
transit analysis.
In selecting performance measures, consideration was given to the completeness
and reliability of the data. Financial statistics are the most reliable;
passenger statistics are the least reliable, particularly passenger miles of
travel.

Census data were added to calculate the population and automobile-ownership
measures for social effectiveness and for the public-assistance performance
ratios in Table 1. All demographic variables were taken from the County and
City Data Book, 1972; the Rand McNally Commercial Atlas and Marketing Guide,
1980; or the UMTA Transit Directory. The population figure used for each bus
system is the total urbanized-area population (where it could be obtained);
otherwise the most relevant city population was used.

Controllability was another consideration in selecting performance measures. It
is advantageous if performance indicators reflect those aspects that are under
the control of transit managers. Generally, system assets (fixed facilities)
and system environment (the service area and its characteristics) are more or
less fixed and are not under operator control in the short run, whereas service
input and output can be controlled to a greater degree. Service consumption
(demand) is more difficult to control because demand for transit depends on the
system environment as well as on disposable income, fares, and the level and
quality of service.

Although the performance measures were limited by availability, reliability,
and controllability, the list of feasible measures is far more than transit
managers can use when analyzing transit performance. Parts of a transit
organization may use many individual indicators, but a smaller, more
representative set is required for system management.

From the list of 60 performance indicators, 12 had to be deleted because of
missing data or measurement error. (Deleted indicators are marked with a
footnote in Table 1.) All performance measures that used passenger-mile data
were deleted because fewer than 80 of the 311 systems reported passenger miles.
Performance measures that used revenue capacity miles were also deleted, both
because of a high percentage of missing cases and because revenue capacity was
inconsistently measured across systems; i.e., many systems reported the same
value for revenue vehicle miles and for revenue capacity miles. Another
deletion was the frequency-of-service variable, which was computed by using the
number of line miles; this variable appeared to be double-counted for some
systems. State and local assistance measures also were deleted because there
was no way of ascertaining whether a reported value of 0.0 meant no assistance
or a missing value.

The data used in the analysis were also carefully edited for unreasonable data
values. Both the editing process and missing values eliminated 50 percent of
the systems. Nevertheless, the remaining 155 systems constitute a substantial
data set.

Figure 1. Transit performance concepts. [Diagram relating service inputs
(labor, capital, fuel), service outputs (vehicle hours, vehicle miles, capacity
miles, service reliability), and service consumption (passengers, passenger
miles, operating revenue, operating safety) through the cost-efficiency,
service-effectiveness, and cost-effectiveness measure families.]

Preliminary Analysis

Several analyses were conducted before choosing the balanced set of 32
variables used to represent the performance concepts defined in Table 1 and
shown in Figure 1. (Deleted variables are indicated by a footnote in Table 2.)
First, a preliminary analysis was performed on the 48 variables listed in Table
2, which are grouped by the same 12 concepts used in Table 1.
The rotated factor matrix indicated that 10 factors were sufficient to describe
all 12 concepts. However, two of the factors represented the public-assistance
concept, and one of these factors represented only the two public-assistance
measures based on urban population. These two variables were then dropped
because their factor was too narrowly defined to be useful, and also because
the urban-population measure is not consistently related to the service-area
population. For example, small bus systems in large cities could have the same
urban-population measure as the regional transportation authority for that
city, but actually serve much smaller populations.

Second, to assure that the definition of the simple structure of performance
was not being distorted by data measurement error or by the paucity of
measurements for the safety and fuel-efficiency concepts, the set of 46
indicators was further culled for insufficiently measured variables and was
balanced by performance concept. Otherwise, given a weakness in data
definition, the structure defined by the factor analysis with varimax rotation
(which tends to spread the variability equally among the factors) might
submerge the safety and fuel-efficiency concepts and might split other concepts
into several subconcepts, such as public assistance and public assistance per
population. Although only two indicators were available for fuel and safety,
three indicators were used for each of the other concepts. The following
criteria were used for choosing the best indicators for each concept:

1. The consistency and comparability of the data values; and
2. The ability of the variable to define a single factor; i.e., the retained
variables were those with the highest loadings on a factor in the preliminary
analysis (a sketch of this selection rule follows the list).
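The second criterion, combined with the three-per-concept balancing described above, can be expressed as a short routine. This is an added illustration under stated assumptions, not code from the paper: loadings is a hypothetical pandas DataFrame of rotated factor loadings (variables by factors) from the preliminary run, and concept_of maps each variable code (e.g., "TPAS/RVH") to its concept heading from Table 1.

```python
# Sketch of the balancing step: within each performance concept, retain up to
# `per_concept` variables, preferring those with the strongest absolute loading
# on any factor from the preliminary analysis. Inputs are hypothetical.
import pandas as pd

def best_per_concept(loadings: pd.DataFrame,
                     concept_of: dict[str, str],
                     per_concept: int = 3) -> list[str]:
    best_loading = loadings.abs().max(axis=1)            # strongest loading per variable
    ranked = best_loading.sort_values(ascending=False)   # best candidates first
    kept: list[str] = []
    counts: dict[str, int] = {}
    for var in ranked.index:
        concept = concept_of[var]
        if counts.get(concept, 0) < per_concept:
            kept.append(var)
            counts[concept] = counts.get(concept, 0) + 1
    return kept
```

The first criterion (consistency and comparability of the data values) is a data-quality judgment and is not something this fragment attempts to encode.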
Based on the means, standard deviations, and correlations among the performance
measures calculated for the 311 bus systems, one performance measure (RVM/TVM)
was dropped from further analysis because it had so little variance among
systems that it could not act as a discriminator of performance. PVEH/ADM was
dropped because it was not correlated with any other variables and therefore
did not contribute to any factor dimension with an eigenvalue greater than 1.
This variable was subject to measurement error in the 1979 data set because
purchased transportation (contract service) was recorded as an administration
expense by many systems.
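These two screening rules (drop indicators that barely vary across systems, and drop indicators that are essentially uncorrelated with everything else) are easy to state in code. The sketch below is an added illustration; the thresholds are assumptions chosen for readability, not values reported by the authors.

```python
# Sketch of the two screening rules, assuming a hypothetical pandas DataFrame
# `measures` (rows = transit systems, columns = performance indicators).
import pandas as pd

def screen_indicators(measures: pd.DataFrame,
                      min_cv: float = 0.05,
                      min_abs_corr: float = 0.30) -> list[str]:
    corr = measures.corr().abs()
    keep = []
    for col in measures.columns:
        # Rule 1: an indicator with almost no variation across systems
        # (the paper's example is RVM/TVM) cannot discriminate performance.
        cv = measures[col].std() / abs(measures[col].mean())
        nearly_constant = cv < min_cv

        # Rule 2: an indicator uncorrelated with every other indicator
        # (the paper's example is PVEH/ADM) will not load on any common factor.
        isolated = corr[col].drop(col).max() < min_abs_corr

        if not (nearly_constant or isolated):
            keep.append(col)
    return keep
```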
For labor efficiency, TVH/EMP and RVH/OEMP were
dropped because they are more related to output per
dollar and revenue generation than to any other efficiency measures.
One maintenance efficiency mea-
Table 2. Variables used in analysis, grouped by performance concept.

Cost-efficiency measures
I. Labor efficiency
  1. TVH/EMP [a]
  2. RVH/OEMP [a]
  3. TVM/EMP
  4. PVEH/ADM [a]
  5. PVEH/OP
  6. PVEH/MNT
II. Vehicle efficiency
  7. TVH/AVEH
  8. TVH/PVEH
  9. TVM/AVEH [a]
  10. TVM/PVEH
  11. RVM/TVM [a]
III. Fuel efficiency
  12. RVM/FUEL
  13. TVM/FUEL
IV. Maintenance efficiency
  14. TVEH/MEXP
  15. TVM/MNT
  16. TVM/RCAL [a]
V. Output per dollar cost
  17. RVH/OEXP
  18. TVM/OEXP [a]
  19. RVH/TWG
  20. RVH/OWAG
  21. RVH/VMWG [a]
  22. RVH/ADWG [a]
Service-effectiveness measures
VI. Utilization of service
  23. TPAS/RVH
  24. TPAS/RVM
  25. TPAS/PVH
VII. Social effectiveness
  26.-29. [variable codes truncated in the preview]

[a] Deleted variable (see text).

[The attachment preview ends here; the remainder of Table 2 and of the article is not included.]