More commonly used in the commercial sector, this approach to strategic assessment can be adapted to higher education.
Since the 1990s, accountability has become a challenging issue for higher education. Increasingly, institutions of higher learning have been required to provide performance indicators, empirical evidence of their value, to state, alumni, prospective student, and other external stakeholders. State commissions of higher education and boards of regents have, in numerous states, developed “report cards” that grade colleges and universities according to their level of performance in a variety of categories. Surveys in the popular press and on the Internet rank institutions according to their retention and graduation rates, resources, academic reputation, and more.
Though substantial energy and effort have been expended to collect, organize, and present performance information, few would argue that the emphasis on the various report cards and surveys has dramatically changed the operational performance of most major universities. Commenting on the inadequacy of performance indicators for higher education, H.R. Kells (1990) warns of the following:
[This] notion to reduce complexity is acceptable if such reduction does not remove or reduce our ability to judge true worth... The lists of performance indicators presented in study after study make little or no reference to the intentions (goals) of the organization to be described and virtually no reference to programme quality with respect to the specific results of instruction and research. (p. 261–62)
With important stakes such as increasing financial resources, encouraging high-quality student applicants, and attracting faculty dependent upon how they “measure up,” universities are rightly concerned with how best to present themselves. Institutions attempt to improve accountability while dealing with the more difficult and complex issue of how to improve university effectiveness. The assumption of many externally derived accountability programs is that emphasis on one will result in the other. However, until performance indicators are linked to the drivers of institutional effectiveness in a meaningful way, the desired improvements in service, productivity, and impact are unlikely to occur. The real test for institutions is to create meaningful systems for strategic organizational assessment and then use that information in internal policy and resource allocation decisions.
Performance indicators can be powerful tools, at both the university and the college/department levels, for internal evaluation and strategic assessment. Though similarities exist between the indicators used for external reporting and internal assessment - indeed, many of the same data can be used for both - the development of internal indicators requires more attention to the contextual characteristics and operational goals of the university. Under these circumstances, performance indicators can provide substantive information for strategic decision making.
The differences between the use of performance indicators for external accountability and internal assessment are clear (see table 1). Performance indicators developed for external audiences are generally aimed at informing three types of stakeholders: consumers (i.e., students and parents), governing bodies (i.e., legislators and accrediting agencies), and potential revenue providers (i.e., alumni, donors, and funding agencies). The external audiences are often limited in their area of interest and have specific ideas of what might be acceptable institutional outcomes. These external audiences tend to adopt incomplete and one-dimensional views of performance. A quick review of higher education report cards used to assess public colleges and universities in various states shows a principal focus on undergraduate education. This focus is consistent with the interest of many consumer groups and governing bodies associated with higher education. To present complex information in an easy-to-read and attractive format, external indicators are often presented in the form of rankings or report cards. Furthermore, it is common for external bodies to use a single set of indicators to measure many institutions across a wide range of missions.
For colleges and universities affected by external assessment, the management task is to learn the art of image management (Wu and Petrshiuses 1987). Since many external stakeholders have resources (financial, student, and accreditation) that are of interest to the institution, understanding the formulaic relationships between the performance numbers and how they influence perceptions of success or failure is key. Thus, the emphasis of the university is primarily on external perception of success and manipulation of image and only secondarily on improved institutional effectiveness. This conclusion is based not on cynicism, but on the reality that the former is easier and more quickly influenced and changed than the latter.
To be useful internally, performance indicators must be tied to the values and goals of the particular university and should emanate from the institution’s performance objectives. These objectives translate the broad goals of the institution into specific research problems that can be studied and around which strategies for improvement can be developed. A different type of institutional stakeholder—university decision makers (i.e., faculty, academic administrators, and nonacademic administrators)—uses performance indicators developed for internal audiences. The internal audience represents a very broad spectrum of perspectives and interests with a wide range of opinions regarding what might be acceptable institutional outcomes. These internal audiences tend to adopt multidimensional views of performance. Often, issues are studied in great depth with information presented in the form of long, complex faculty reports. At times, the focus on the higher goals and values precludes specific action due to a lack of a supporting political coalition and/or criteria by which to evaluate the plan. Though institutional effectiveness and enhanced academic reputation are common goals, there is often a lack of consensus about how institutional processes may actually have an impact on those goals.
For college and university decision makers engaged in internal assessment, the management task is to learn the art and science of institutional strategic assessment. Since consensus and buy-in are critical to many university initiatives, providing an acceptable mechanism or process for thinking about difficult strategic questions is key to any real institutional improvement. And because the training of many faculty and academic administrators creates respect for theory and data analysis, presentation of institutional information in a conceptual model with supporting data can often facilitate both debate and decision making. Using data to support hypotheses about institutional strengths and weaknesses can affect decision processes and increase the speed of both decision making and implementation of program changes. Making the appropriate linkage between the values and goals of the internal audience, the strategic tasks required, and the data collection and analysis necessary is important for useful internal performance assessment.
In 1992, Robert S. Kaplan and David P. Norton introduced the balanced scorecard, a set of measures that allow for a holistic, integrated view of business performance. The scorecard was originally created to supplement “traditional financial measures with criteria that measured performance from three additional perspectives—those of customers, internal business processes, and learning and growth” (Kaplan and Norton 1996, p. 75). By 1996, user companies had further developed it as a strategic management system linking long-term strategy to short-term targets. The development of the balanced scorecard method occurred because many business organizations realized that focus on a one-dimensional measure of performance (such as return on investment or increased profit) was inadequate. Too often, bad strategic decisions were made in an effort to increase the bottom line at the expense of other organizational goals. The theory of the balanced scorecard suggested that, rather than being the focus, financial performance is the natural outcome of balancing other important goals. These other organizational goals interact to support excellent overall organizational performance. If any individual goal is out of balance with other goals, the performance of the organization as a whole will suffer. The balanced scorecard system also emphasizes articulation of strategic targets in support of goals. In addition, measurement systems are developed to provide data necessary to know when targets are being achieved or when performance is out of balance or being negatively affected.
The Kaplan and Norton balanced scorecard looks at a company from four perspectives:
Financial: How do we look to shareholders?
Internal business processes: What must we excel at?
Innovation and learning: Can we continue to improve and create value?
Customer: How do customers see us?
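The logic of the four perspectives can be illustrated with a minimal sketch. The structure below is purely hypothetical: the indicator names, targets, and values are invented for illustration, not drawn from Kaplan and Norton or from any institution. It shows the core idea that each perspective carries its own targets and measures, and that the scorecard flags any perspective whose performance is out of balance.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One measure within a scorecard perspective."""
    name: str
    target: float
    actual: float

    def met(self) -> bool:
        return self.actual >= self.target

# Hypothetical scorecard: one list of indicators per perspective.
# All figures are invented for illustration.
scorecard = {
    "Financial": [Indicator("annual giving ($M)", 12.0, 13.1)],
    "Internal business processes": [Indicator("advising contacts per student", 4.0, 3.6)],
    "Innovation and learning": [Indicator("faculty development participation", 0.50, 0.55)],
    "Customer": [Indicator("student satisfaction (1-5 scale)", 4.0, 4.2)],
}

def out_of_balance(card):
    """Return the perspectives with at least one indicator missing its target."""
    return [p for p, inds in card.items() if not all(i.met() for i in inds)]

print(out_of_balance(scorecard))  # perspectives currently lagging
```

In this toy run, only the internal-processes perspective misses its target, signaling where overall performance is being dragged down even though the financial measure looks healthy.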
By viewing the company from all four perspectives, the balanced scorecard provides a more comprehensive understanding of current performance. While these perspectives are not completely inappropriate for use by colleges and universities, it is possible to adapt the balanced scorecard theory using a paradigm more traditional to higher education.