International Journal of Japan Association for Management Systems
Online ISSN : 2188-2460
Print ISSN : 1884-2089
ISSN-L : 1884-2089
Evaluation of the Impact of a Matrix-based Approach on the Adaptive Management of Performance Indicators and Decision Making
Kasei Miura, Seiko Shirasaka

2025 Volume 17 Issue 1 Pages 28-40

Abstract

Performance measurement has long been utilized to provide feedback for decision making in the formulation and execution of organizational strategies. However, maintaining alignment between an organization’s evolving context and its performance indicators remains a challenge, often preventing timely feedback for decision making. This study examines how a performance-indicator derivation approach, grounded in a matrix of organizational goals and strategies, can be integrated into the performance measurement lifecycle to enable the adaptive management of performance indicators. Focusing on the relationship between performance measurement and decision making, we conducted a case study to explore both the adaptive management of performance indicators and the impact of this approach on practitioners and decision makers. The results show that linking organizational goals and strategies enabled alignment between performance indicators and the organizational context to be maintained throughout the performance measurement lifecycle. We also discuss the conditions under which adaptive management is effective, comparing it with the use of performance indicators as internal controls. Adaptive management of performance measurement was found to have short-term benefits in responding to change, but the results suggest that a balance with fixed performance indicators is needed for long-term stability.

1. Introduction

The capacity for human decision-making is inherently constrained by factors such as cognitive ability, the availability of information, and temporal limitations [1]. Consequently, feedback based on performance measurement is imperative to support organizational decision-making [2]. Performance measurement serves as a vital means of gauging how effectively and efficiently an organization is achieving its goals, providing insights into the execution of strategy and guiding subsequent revisions [3,4]. Such feedback is especially important in highly uncertain environments, where it underpins the adaptability necessary for developing and implementing strategies [5].

However, research has indicated that feedback derived from performance measurement often lacks alignment between evolving organizational contexts and performance indicators [6]. Performance indicators must be consistent with the organizational context - namely, the organization’s goals, strategies, and surrounding environment [7]. This context frequently shifts in response to external factors such as technological advances, rapid changes in social conditions, and regulatory requirements [8]. Even during these changes, performance indicators must adapt to deal with uncertainty. When performance indicators do not reflect a changing context, the risk of misinterpreting the measurement results increases [9,10]. Some studies report unintended consequences of performance measurement when organizational misalignment occurs, leading to inefficiency in decision making and hindering both strategic execution and goal attainment [11].

Therefore, studies on the dynamics of performance measurement systems have introduced concepts for adapting to rapidly changing environments [12]. A dynamic performance measurement system requires the capability to collect and analyze information both internally and externally [13], thus enabling performance measurement to adapt quickly to contextual shifts. Additionally, other research indicates that performance measurement and decision-making processes should be flexible enough to accommodate organizational contexts [14]. In relatively stable and simple contexts, conventional performance measurement focused on internal control may suffice. As contexts become more complex or ambiguous, exploratory approaches to performance measurement and learning through performance measurement become increasingly essential [15,16,17]. These studies show that cooperation between decision makers and practitioners, context-specific responsiveness, and flexibility in the design of performance measures play important roles in implementing performance measurement that is adaptive to changing contexts.

However, there is a dearth of empirical research on how to design and maintain performance indicators that adapt to a dynamic context, pointing to a gap between academic concepts and real-world practice [2].

In addition to the limited empirical validation of adaptive management of performance measurement, there is a risk of excessive adaptation: a tension exists in organizational management between internal control and adaptive management [18]. If adaptive management becomes excessive, short-term responses to change can increase long-term instability. It has also been suggested that as the number of performance indicators increases, the information to be considered becomes more complex and the burden of decision making increases [19]. It is therefore necessary to assess the conditions under which adaptive management of performance measurement is desirable. This study aims to empirically validate adaptive management methods for performance indicators and to evaluate how adaptive management can be implemented and what risks arise.

Miura et al. reported on deriving performance indicators based on a matrix of organizational goals and strategies. Their Strategic Performance Measurement Indicator Derivation Framework (SPIDF) is designed to facilitate dialog between practitioners and decision makers, thereby supporting the derivation of performance indicators that align with the organization’s goals and strategies. The SPIDF establishes traceability between an organization’s context and its performance indicators. This traceability enables an evaluation of how changes in either the organization’s context or its performance impact the interrelated elements, allowing for appropriate updates. Nevertheless, their research has largely focused on the initial design process, leaving the adaptive management of performance indicators via the SPIDF unexplored.

Accordingly, the purpose of this study is to evaluate how the SPIDF functions under dynamic conditions, specifically how it contributes to the adaptive management of performance indicators and how this, in turn, affects decision making. In this study we address the following three research questions.

  • Research Question 1: How can the SPIDF be applied to the performance measurement lifecycle?
  • Research Question 2: How does the application of the SPIDF change performance indicators?
  • Research Question 3: How does the SPIDF influence the flexibility, responsiveness, and cooperation in performance measurement and decision making?

A case study was conducted within the context of a quality management system. By applying the SPIDF to each phase of the performance measurement lifecycle and closely observing both the process and the outcomes, we assessed the framework’s effectiveness. Through this analysis, we elucidated how the adaptability of performance indicators influences strategic decision-making. Ultimately, this study seeks to offer a practical methodology for designing and updating performance indicators in dynamic environments.

2. Previous Research

2.1 Performance Measurement Overview

Although there is no universally accepted definition of performance measurement, Neely defines it as a quantitative assessment of both the effectiveness and efficiency of an organization’s activities. Neely also proposed a performance measurement lifecycle [20,21]. The four phases proposed by Neely - design, implement, use, and maintain - are widely accepted as constituting this lifecycle [20]. These stages guide how organizations manage performance indicators to assess goals and execute strategies effectively. Each phase addresses the management of the performance indicators, the interpretation and application of the measurement results, and the ongoing refinement of those performance indicators.

Performance measurement has evolved in design and implementation in response to changing contexts [22]. Historically, financial indicators were used for internal control in areas such as manufacturing, to manage activities aimed at achieving organizational goals. The concept of performance measurement has since evolved to encompass a range of perspectives, including long-term, stakeholder, sustainability, and innovation perspectives [23]. In the context of business management, these additional dimensions are proposed as a means of offering a more comprehensive assessment of organizational performance alongside financial indicators. However, it has been pointed out that the static nature of performance measurement does not respond to change, and it has been suggested that performance indicators need to be reviewed strategically and dynamically [23,24].

In recent years, the need for dynamic management of performance indicators has become increasingly apparent due to three factors: external environmental changes accelerated by the information revolution and advances in digital technology, the inherently static nature of traditional performance measurement systems, and the behavioral side effects associated with the introduction of indicators (measure fixation, moving goalposts, short-termism, and data manipulation) [25]. Since the late 1990s, Bititci has proposed a dynamic PMS that incorporates review mechanisms and IT [12], and in the 2010s an extended BSC was proposed that incorporates system dynamics to improve environmental adaptability [26]. On the other hand, fixation on performance indicators can lead to neglect of unmeasured aspects or excessive numerical manipulation, so the measurement system itself must be a “dynamic architecture” that is updated in response to changes in the external environment and organizational behavior. However, the implementation of dynamic performance measurement systems faces challenges such as a lack of theoretical understanding and empirical validation, low generalizability because applications are concentrated in certain industries, and behavioral dysfunction that can occur under both dynamic and static management [22].

2.2 Relationship between decision making and performance measurement

Simon states that decision making in organizations is subject to bounded rationality [1]. Decision making is considered rational if the following are done properly: listing the options, evaluating the outcomes of each option, and comparing those outcomes. In actual behavior, however, none of these is carried out comprehensively, so full rationality is never achieved.

Performance indicators are also intended to simplify a complex reality, and their rationality is likewise bounded [19]. Continuous changes in performance indicators are therefore thought to increase the complexity of the situation and do not necessarily support decision making. It has also been reported that if there are too many performance indicators, cognitive overload occurs and optimal decisions cannot be made [19]. To support decision making, we thus need to consider the conditions under which adaptive management is desirable, the means of adaptation, and how to manage the degree of adaptation. Sense-making frameworks, such as the Cynefin framework and the Performance Alignment Matrix, have been evaluated for managing decision making and performance measurement under various contextual conditions [27,28,29,30]. The Cynefin framework classifies contexts into five domains and suggests decision-making guidelines for each [14]. In the structured domains (Simple and Complicated), the cause-and-effect relationship is clear or can be clarified, so internal controls focused on strategic stability are appropriate. In the Complicated domain, cause-and-effect relationships can be analyzed, but excessive analysis may delay decision making. In the unstructured domains (Complex and Chaotic), the cause-and-effect relationship is unclear or unpredictable, so adaptive management and learning are important given the high level of uncertainty. In the Complex domain, an experimental approach is suggested: small changes are tried, the results are observed, and feedback loops are used to adapt. Sense-making frameworks thus suggest attitudes toward context-sensitive decision making and performance measurement, but they do not discuss how to change performance indicators.
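
As an illustration only, the domain-to-stance guidance summarized above can be encoded as a small lookup. The dictionary and function names below are hypothetical and are not part of the Cynefin framework or the SPIDF; the paragraph names four of the five domains, so the fallback stands in for the unclassified fifth.

```python
# Illustrative sketch: the Cynefin-style guidance summarized above as a lookup.
# Names and wording are hypothetical paraphrases of the paragraph, not an API.
CYNEFIN_GUIDANCE = {
    "simple":      "internal control; cause and effect are clear",
    "complicated": "analyze cause and effect, but avoid analysis delaying decisions",
    "complex":     "experiment with small changes; observe and adapt via feedback loops",
    "chaotic":     "high uncertainty; emphasize adaptive management and learning",
}

def recommended_stance(domain: str) -> str:
    """Return the decision-making stance suggested for a Cynefin domain."""
    return CYNEFIN_GUIDANCE.get(domain.lower(), "unclassified: sense-making needed first")

print(recommended_stance("Complex"))
```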

2.3 Potential Risk of Adaptive Management

When considering the means and appropriate degree of adaptive management, it is necessary to consider the dilemmas of organizational management. There is a fundamental dilemma regarding the overuse of adaptive management as well as internal control, as there is a tension between adaptation and control in organizational management [18]. While adaptability enables us to respond to changes in the environment, the risk of “hyper-adaptation” has been described from the perspective of evolutionary biology [31] and social systems theory [32] as a paradoxical phenomenon where short-term adaptation creates long-term system vulnerability.

“Hyper-adaptation” is a phenomenon in which excessive responses to short-term environmental changes lead to a loss of strategic consistency [33]. In contrast, “strategic drift” is a phenomenon in which responses to environmental changes are delayed or slow, leading to a gradual widening of the gap between strategy and reality [34]. To avoid both phenomena, it is essential to establish a balancing mechanism that dynamically maintains the alignment between environmental changes, strategy, and performance indicator.

Adaptive management can be accompanied by unintended behavioral side effects, in particular two major risks: (1) “moving goalpost,” where rules are changed after goals are set to unfairly lower performance standards, thereby undermining employee motivation and rational judgement [35], and (2) “short-termism,” where prioritizing the optimization of short-term results hinders long-term investment [36]. These two risks can be mutually reinforcing and have a potentially severe negative impact on an organization’s sustainable performance. For example, there are two ways of bridging the gap between targets and reality: “improving reality” and “lowering goals”. Senge identifies the “creative tension” encourages the former, but under practical constraints the latter “lowering of goals” is more likely to occur [37]. It is therefore necessary to consider the situation in which adaptation is required and the feedback on adaptation.

To avoid unintended phenomenon (“hyper-adaptation” and “strategic drift”) and behavioral side effects (“moving goalpost” and “short-termism”), it is essential to establish counterbalancing mechanisms such as cooperation in the target-setting process, impact assessments when rules are changed, and feedback, including flexibility and responsiveness in the weighting of non-financial indicators. When designing and implementing performance indicators, it is therefore necessary to continuously monitor whether these unintended consequences are occurring and to take corrective action when necessary.

This study investigates the impact of applying the SPIDF to the performance measurement lifecycle as a means of adaptively managing performance indicators. By enhancing the adaptability of performance indicators within dynamic environments, this research evaluates how performance measurement influences decision making.

2.4 Strategic Performance Indicator Derivation Framework (SPIDF)

2.4.1 Viewpoints of SPIDF

The SPIDF is a method for deriving performance indicators based on a matrix of organizational goals and strategies (Figure 1) [38]. By explicitly linking these indicators to the organization’s goals and strategies, the SPIDF supports the derivation of performance indicators that align with the organizational context.

Figure 1 SPIDF

The SPIDF consists of four primary elements. First, the “Organization State” viewpoint considers the organization’s current state and goal state, as well as the next intermediate state that must be achieved to reach the goal state. Second, the “Enabler” viewpoint identifies the capabilities necessary for fulfilling the target states. Third, the “Intervention” viewpoint specifies the functions played by strategic initiatives. Lastly, the “Measurement” viewpoint integrates these three elements to derive the performance indicators.

The SPIDF also incorporates three supporting viewpoints. The first, “Causality”, involves analyzing the causal relationships among factors that influence performance changes [39]. It provides hypotheses, via system dynamics simulation, about how interventions arising from strategic initiatives and the external environment cause organizational change. The second, “Context”, emphasizes consideration of both internal interventions and external environmental factors, thereby enabling a broader examination of an organization’s context [40]. The third, “Certainty”, helps analyze the organization’s available evidence and knowledge, guiding the prioritization of essential feedback during the decision-making process. This “Certainty” perspective is operationalized through the Strategic Performance Measurement Cycle (Figure 2) to address uncertainty.

Figure 2 Strategic Performance Measurement Cycle

Process (a) clarifies the purpose of performance measurement by examining the organization’s goals and strategies. An analysis of the organization’s goals identifies the desired end state, the current state, and the next intermediate state. Next, the enablers required to achieve each state are identified. Finally, the intervention functions of the strategic initiatives to be applied to the organization are identified.

Process (b) focuses on generating candidate performance indicators, identifying both existing evidence and knowledge and any additional information that needs to be acquired for feedback. A matrix is created that integrates the enablers and intervention functions identified in Process (a). If performance indicators are already in use, they are mapped onto the matrix. Areas of the matrix that are highly uncertain for the organization, and for which performance information is needed, are prioritized. The choice of priority performance indicators depends on how clearly the organization can define its goals and strategic initiatives based on available evidence and knowledge.
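
As a minimal sketch of Process (b), assuming hypothetical names throughout, the matrix can be represented as cells keyed by enabler and intervention function, each carrying an uncertainty score; cells with high uncertainty and no existing indicator are the candidates to prioritize. The example labels are borrowed from the case study in Section 4, not prescribed by the framework.

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    enabler: str                  # e.g., a resilience capability
    intervention: str             # e.g., a QMS process function
    uncertainty: float            # 0.0 (well understood) .. 1.0 (highly uncertain)
    indicators: list = field(default_factory=list)  # PIs already mapped here

def prioritize(matrix: list[Cell], top_n: int = 3) -> list[Cell]:
    """Return the most uncertain cells that still lack performance indicators."""
    gaps = [c for c in matrix if not c.indicators]
    return sorted(gaps, key=lambda c: c.uncertainty, reverse=True)[:top_n]

matrix = [
    Cell("monitoring", "product realization", 0.8),
    Cell("learning", "support", 0.3, ["repeat-deviation rate"]),
    Cell("response", "leadership", 0.6),
]
for cell in prioritize(matrix):
    print(f"Derive an indicator for ({cell.enabler} x {cell.intervention})")
```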

Process (c) uses the causal-relationship perspective to refine these candidate indicators, deriving a set of indicators that can be measured. In this process, plan how to measure the selected performance indicators for the areas to be measured. Plan to obtain an initial performance baseline and benchmark, and at least one performance measure to evaluate changes in the performance indicators.

Process (d) implements performance measurement, gathering evidence. Measure and analyze performance using the measurement method planned in Process (c).

In the Decision-Making Process, verify that the performance information obtained in Process (d) has reduced uncertainty to a level usable for decision making. The cycle on the right of Figure 2 is repeated until the level of evidence is adequate to support a decision. Once the necessary information has been obtained, feed the results of the decision back into the organization’s goals and strategies that were the premise for the performance indicators.
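
A runnable toy of this cycle, with hypothetical stand-ins for Processes (b) through (d), might look as follows: each pass yields evidence that reduces uncertainty, and the loop repeats until the evidence is adequate for a decision.

```python
import random

def spm_cycle(acceptable_uncertainty: float = 0.2, seed: int = 0) -> int:
    """Toy Strategic Performance Measurement Cycle: count measurement passes."""
    random.seed(seed)
    uncertainty = 1.0   # Process (a): purpose clarified, initial uncertainty assessed
    passes = 0
    while uncertainty > acceptable_uncertainty:
        passes += 1
        # Processes (b)-(d): derive candidates, plan, measure, and analyze;
        # each pass yields evidence that cuts uncertainty by a variable amount.
        uncertainty *= random.uniform(0.5, 0.9)
    # Decision-Making Process: evidence is adequate; the decision's results
    # would now feed back into the goals and strategies behind the indicators.
    return passes

print(f"Cycle repeated {spm_cycle()} time(s) before a decision could be made")
```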

By combining these primary and supportive viewpoints, the SPIDF enables the consideration of alignment and prioritization among performance indicators, organizational goals, strategies, and the external environment. This structure allows the assessment of how changes in context affect related factors, thereby facilitating the adaptation of performance indicators within dynamic environments.

2.4.2 Guideline of SPIDF

As a prerequisite for application, there must be at least one decision maker who defines the organization’s goals and strategies, and one practitioner who implements the strategies and measures performance. The flowchart for the SPIDF using the SPID Matrix (Figure 3) is shown in Figure 4. This flowchart mainly applies to processes (a) and (b). At each stage, the listed questions guide how to use the SPID Matrix. In process (c), specific measurement methods are established based on the SPID Matrix. Then, in process (d), the measurement results are reflected in the SPID Matrix and interpreted.

Figure 3 SPID Matrix
Figure 4 Flow chart of SPIDF

2.5 Comparison with previous research

Ravelomanantsoa classifies performance indicator design approaches into structural architectures and procedural architectures [47]. A structural architecture predetermines the dimensions for which performance indicators should be established. A procedural architecture, on the other hand, provides procedures for designing and maintaining performance indicators. In addition to this classification, we compared performance measurement approaches from the perspectives of context sensitivity, causal analysis, and adaptability (Table 1).

Table 1 Comparison with Previous Research

                        | SPIDF        | Balanced Scorecard, Performance Prism | Dynamic PMS  | System dynamics-based BSC | Sense-making Frameworks
Structural Architecture | ✔ (flexible) | ✔ (fixed set)                         | -            | ✔ (fixed set)             | -
Procedural Architecture | ✔            | -                                     | ✔            | -                         | -
Context Sensitivity     | ✔            | -                                     | -            | -                         | ✔
Causal Analysis         | ✔            | -                                     | -            | ✔                         | -
Adaptability            | ✔            | -                                     | ✔            | -                         | -
Reference               | Miura [37]   | Kaplan [49], Neely [50]               | Bititci [12] | Barnabe [25]              | Alexander [29], Melnyk [14]

Multi-dimensional performance measurement frameworks, such as the Balanced Scorecard [51] and the Performance Prism [52], have a structural architecture but have been criticized for being static in their management and for lacking causal analysis. The Dynamic PMS, on the other hand, is a procedural approach for dynamically managing performance indicators, but it does not include methods for addressing the structure or causal relationships of performance indicators [12]. Furthermore, Barnabe proposed a system dynamics-based BSC to supplement the analysis of causality but did not address organizational context [48]. Alexander evaluates sense-making frameworks for performance measurement but does not propose design procedures [30]. The SPIDF incorporates both structural and procedural architecture, as well as perspectives on context sensitivity, causal analysis, and adaptability to address change. In particular, by using a matrix that integrates the intervention perspective and the organizational-goal perspective, it can derive appropriate performance indicators according to the level of abstraction of each intervention and organizational goal.

3. Evaluation Method

3.1 Preparation of the Case

In this study, we employed a case study to investigate how adaptive performance measurement is managed and how it affects organizational decision making. The case study method was chosen to observe actual conditions within the organization and to gain a detailed understanding of the impact of the SPIDF. By examining whether the organization could properly update its performance indicators in response to a changing environment, we aimed to assess the positive and negative impacts of the SPIDF. The case study was conducted in the quality assurance function responsible for managing the organization’s Quality Management System (QMS). We secured the cooperation of practitioners and decision makers, enabling data collection and analysis from both operational and managerial perspectives. The study focused on two organizational goals within the QMS. The first, “Enhance QMS resilience”, aims to improve sustainability in changing environments. The second, “Pursuit QMS digital transformation”, seeks to leverage digital technologies to increase process efficiency and improve processes.

3.2 Data Collection and Analysis Method

To evaluate how the SPIDF facilitates the adaptive management of performance indicators and influences organizational decision making, we devised a detailed plan for data collection and analysis. We used open coding to analyze the results of the interviews with practitioners and decision makers, as well as the discussions in the management review. The categories created from the open coding were evaluated for their consistency with the performance measurement lifecycle and the viewpoints of the SPIDF. To ensure objectivity, the results of the analysis were reviewed with both practitioners and decision makers to confirm that the interpretation was unbiased. In addition to this triangulation through participant verification, the interpretation drew on multiple sources of information, including the results of each performance measurement lifecycle phase and the derived performance indicators, to reduce bias.

In analyzing the adaptive management of performance indicators under the SPIDF and its effects on the organization, we established observation points and evaluation criteria according to the characteristics of each context, employing multiple data sources (Table 2). For the adaptive management of performance indicators using the SPIDF, the assessment items included comprehensibility, usability, and adaptability. Regarding the framework’s influence on organizational decision making, the assessment items focused on flexibility, responsiveness, and cooperation.

Table 2 Evaluation Approach


4. Evaluation Results

This study was conducted as a case study. Consequently, the design and maintenance of performance indicators relating to “Enhance QMS resilience” followed the organization’s predetermined QMS schedule. Because the management review was held annually, the study period allowed for participation in two such reviews. Furthermore, performance measurement was conducted quarterly, providing four observational opportunities over the course of the study.

In contrast, the activity relating to “Pursuit QMS digital transformation” was in an exploratory phase. As a result, organizational goals and strategies were reassessed on a more ad hoc basis in addition to the annual review. Although external factors and internal process changes occasionally necessitated reviewing performance indicators outside of the schedule, a mechanism was in place to incorporate these external influences into performance indicators. Thus, these changes did not introduce unforeseen complications but were instead reflected in the relevant performance indicators.

Design phase

Figure 5 presents the outcomes of applying the SPIDF to the performance measurement for “Enhance QMS resilience.” For this organizational objective, the concept of resilience engineering [48] was used to define the “desired resilient state” as the organization’s target state and “response, monitoring, learning, and anticipation” as its enablers. During the functional analysis of the strategic initiatives, the QMS processes (leadership, product realization, monitoring, and support) were set as strategic functions. Based on this analysis of organizational goals and functions, each QMS process that contributed to the enablers was identified, and corresponding performance indicators were established. Before applying the SPIDF, the organization had set performance indicators such as on-time completion rates and the rate of repeated deviations for each QMS process. However, their causal relationship with organizational goals had not been clearly articulated. After the SPIDF was introduced, some of the previously used performance indicators were retained, but their relationship to organizational goals was explicitly identified.

Figure 5 Example of Application to “Enhance QMS Resilience”

Figure 6 presents the outcomes of applying SPIDF to “Pursuit QMS digital transformation.” For this initiative, the enterprise architecture (EA) approach was used to clarify the target state and the functions of key strategic initiatives [49]. To streamline processes through digital technology, EA perspectives were used to define the enablers (Business Architecture, Data Architecture, Application Architecture, Technology Architecture). Moreover, to account for interventions, the organization considered not only processes related to IT system implementation and improving team members’ digital literacy, but also external factors: namely, the corporate headquarters’ standpoint in leading IT system adoption, and regulatory requirements tied to IT system usage. By integrating existing digital concepts and external environmental viewpoints, the digital transformation process enabled an exploratory approach to elucidate the structure and relationships needed to derive performance indicators.

Figure 6 Example of Application to “Pursuit QMS digital transformation”

Implement phase

Practitioners used the SPID Matrix to explain the performance indicators to decision makers and stakeholders. Facilitating performance-related discussions through the SPID Matrix revealed gaps in how various stakeholders perceived current organizational performance. Performance indicators could therefore not be determined with a single application of the SPIDF. However, recognizing such gaps provided crucial input for revisiting the performance indicators. More specifically, explaining the rationale behind the performance indicators through the SPID Matrix exposed gaps in performance recognition. As a result, the stakeholder discussions intensified, and the performance indicators were subsequently revised. In addition, it was determined that performance indicators were unnecessary for aspects of the organization’s status that were already commonly recognized. Notably, as practitioners and decision makers achieved a shared recognition of the goals and scope of performance measurement, the indicators came to be implemented in a manner more closely aligned with the organization’s overarching goals and strategies.

Use phase

Performance was assessed according to the agreed indicators. This assessment yielded feedback on both the organization’s progress toward its goals and the status of strategy implementation. Given that ambiguities or uncertainties concerning the organization’s goals and strategies were used to prioritize the evidence that needed to be collected, only a limited set of performance indicators required measurement.

Before the SPIDF was introduced, there was a gap between practitioners and decision makers in their interpretation of the measurement results. Practitioners tended to focus on how much performance had changed from its baseline prior to the application of strategic initiatives, whereas decision makers were more concerned with how closely the measured performance aligned with the desired target state following the implementation of the initiatives. By applying the SPIDF, both groups were able to recognize each other’s concerns and arrive at a shared interpretation of the results. This improvement can be attributed to the preliminary discussions within the SPIDF, which explicitly addressed how the organization’s state would change over time.

Maintain phase

The input obtained from performance measurement results and from changes in the context prompted two types of change: Type I changes concerned performance indicators, and Type II changes concerned organizational goals and strategies. When the measurement results indicated that a strategic initiative had produced noticeable effects and that performance was evolving, the same performance indicators were retained for ongoing monitoring. When no performance change was observed, the performance indicators were replaced with others that, based on causal analysis, would more promptly capture changes in performance (Type I change). Moreover, if the analysis revealed issues within a strategic initiative, that initiative was re-examined (Type II change). When performance was deemed to have reached its target state, the organization’s goals were updated to the next state (Type II change), and corresponding performance indicators were introduced (Type I change).
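
These maintain-phase rules can be summarized, purely as a hedged sketch with hypothetical names, as a small decision function:

```python
from enum import Enum, auto

class Change(Enum):
    RETAIN_INDICATOR = auto()    # effect observed: keep the same PIs for monitoring
    REPLACE_INDICATOR = auto()   # Type I: swap in a more responsive indicator
    REVISE_INITIATIVE = auto()   # Type II: re-examine the strategic initiative
    ADVANCE_GOAL = auto()        # Type II then Type I: next goal state, new PIs

def maintain(effect_observed: bool, initiative_issue: bool, target_reached: bool) -> Change:
    """Map the measurement outcome to the Type I/II change described above."""
    if target_reached:
        return Change.ADVANCE_GOAL
    if initiative_issue:
        return Change.REVISE_INITIATIVE
    if effect_observed:
        return Change.RETAIN_INDICATOR
    # No change observed: causal analysis suggests a prompter measure (Type I).
    return Change.REPLACE_INDICATOR

print(maintain(effect_observed=False, initiative_issue=False, target_reached=False))
```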

Additionally, the organization sometimes had to respond to unanticipated changes in goals and strategies cascaded from headquarters, or to changes in external regulations. Such external factors were integrated into the analysis of intervention functions, triggering reviews of whether existing performance indicators required revision (a Type I change led by a Type II change). Drawing on the information gleaned from performance measurement, decision makers could reduce uncertainty through the evidence.

Through ongoing application, the SPIDF supported the maintenance of performance indicators aligned with the organization’s changing context, thus preserving the consistency of performance measurement in dynamic environments. Furthermore, the use of the SPIDF did not merely revise performance indicators but also spurred the organization to update its goals and strategies, reflecting the role of feedback from performance measurement.

Table 3 shows the results of interviews and observations of practitioners, decision makers, and stakeholders in each phase.

Table 3 Interview and observation results. The SPIDF perspectives used are indicated as follows: (S) State, (E) Enabler, (I) Intervention, (M) Measurement, (Co) Context, (Ca) Causality, (Ce) Certainty.

Practitioner Decision-Maker Stakeholder/External
Design Phase

  • There were many indicators related to formative evaluation (pre-SPIDF)
  • Needed specific input on goals and strategies (S)(E)(I)
  • The relationship between PIs, goals, and strategies was clarified (M)(Ca)

  • Clarified ambiguities about goals and strategies (S)(E)(I)(Ce)
  • We were able to select the missing rationale for evaluating performance (S)(E)(I)

  • There was awareness of alignment with high-level goals and strategies from higher-level organizations (Co)
  • There was input from laws and regulations affecting the goals (Co)

Implement Phase

  • Gaps were identified in decision makers’ perceptions of current performance (Ce)
  • Obtained feedback from stakeholders (M)(Co)
  • Priorities for indicators were reviewed (Ce)

  • Performance indicators could be explained based on their relationship to goals and strategies (S)(E)(I)(M)

  • Stakeholders understood their contributions from the relevant enablers (E)(I)(Co)
  • Strategy achievement criteria were clearly understood through PIs (I)(M)

Use Phase

  • Only what was easy to measure was measured (pre-SPIDF)
  • Interpretation of performance measurement results was consistent (M)
  • Could focus on critical performance (Ce)
  • Formative and summative evaluation of strategies was possible (M)

  • Much of the evaluation was formative, making it difficult to make decisions based on performance status (pre-SPIDF)
  • Were able to evaluate the effectiveness of the strategy (I)(M)
  • Were able to evaluate the achievement of the goal (S)(E)(M)

  • There was a report on the status of accomplishments related to PIs (M)(Co)

Maintain Phase

  • Indicators identified for continuation or update (Ce)
  • PIs updated based on prioritization of evidence needed (Ce)
  • Strategic initiative improvements were implemented (I)
  • Performance indicators were revised based on external changes (Co)

  • Revised goals or strategies based on performance results (S)(E)(I)(Ce)

  • Timing of changes in regulatory requirements reflected in strategies and performance indicators (Co)
  • Scope of impact identified through alignment with external changes and PIs (M)(Co)

5. Discussion

5.1 Research Question

Research Question 1: How can the SPIDF be applied to the performance measurement lifecycle?

The SPIDF was employed across the four phases of the performance measurement lifecycle - design, implement, use, and maintain. Leveraging inputs from both organizational stakeholders and external sources at each phase, the SPIDF enabled the design and revision of performance indicators. In the design phase, performance indicators were derived based on a matrix of organizational goals and strategies, producing indicators aligned with those goals and strategies. During the implementation phase, stakeholders provided feedback on the performance indicators, prompting sense-making and adjustments. The SPID Matrix, which clarified the links between the performance indicators and the organization’s goals and strategies, facilitated this step. In the use phase, the measurement results were analyzed using linkages among the organization’s goals, strategies, and external environment, allowing feedback to decision making. Finally, in the maintain phase, measurement results indicated changes in performance that prompted a review of indicators (Type I change) and consideration of whether organizational goals and strategies needed to be revised (Type II change). In addition, by incorporating changes in the external environment as an input, the SPIDF established a framework for assessing the impact of these changes on performance and guiding the updating of indicators.

We conducted a scenario analysis to evaluate the impact of the SPIDF on the degree of adaptive management of performance indicators. The scenario analysis showed that the SPIDF dynamically manages performance indicators within the Strategic Performance Measurement (SPM) cycle, obtains the information necessary for decision making, and reduces uncertainty. When uncertainty increases, adding performance indicators reduces it in the short term. However, additional indicators increase the complexity of the decision and the burden on the decision maker, which can lower decision quality. In the long run, this can increase uncertainty and create a reinforcing loop associated with “hyper-adaptation”. When the SPIDF is used, performance indicators are set with a focus on high-priority areas even under uncertainty, which serves as feedback that limits the growing burden on decision makers. Conversely, performance indicators are reduced in areas of low uncertainty. In that case, sensitivity to uncertainty and change decreases, which can lower decision quality and increase the risk of “strategic drift”. Although uncertainty temporarily rises as decision quality falls, performance indicators can be added and improved in response, so a balancing loop is expected to form in the long run. In summary, while adaptive management of performance indicators alone increases the risk of “hyper-adaptation,” the SPM cycle is proposed as feedback that mitigates this risk.
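
The loop structure described here can be caricatured in a few lines of simulation. The following toy model is illustrative only, with made-up coefficients, and is not the scenario analysis actually performed in the study: uncertainty invites adding indicators, the indicator count raises cognitive burden and erodes decision quality, and poor decisions regenerate uncertainty; capping indicators by priority, as the “Certainty” perspective does, turns the reinforcing loop into a balancing one.

```python
def simulate(steps: int = 20, cap: int | None = None) -> float:
    """Toy model of the uncertainty/indicator feedback loops; returns final uncertainty."""
    uncertainty, indicators = 1.0, 2
    for _ in range(steps):
        if uncertainty > 0.5 and (cap is None or indicators < cap):
            indicators += 1                    # adapt by adding an indicator
        burden = 0.05 * indicators             # cognitive load grows with the count
        quality = max(0.0, 1.0 - burden)       # decision quality erodes under load
        # Measurement reduces uncertainty; poor decisions regenerate some of it.
        uncertainty = max(0.0, uncertainty * 0.8 + (1.0 - quality) * 0.3)
    return uncertainty

print(f"uncapped: {simulate():.2f}")           # drifts toward hyper-adaptation
print(f"capped:   {simulate(cap=5):.2f}")      # balancing loop under prioritization
```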

Research Question 2: How does the application of the SPIDF change performance indicators?

By applying the SPIDF, the alignment between the performance indicators and the organization’s goals and strategies was improved. Performance indicators were explicitly tied to goals, and the performance measurement results provided feedback for decision making. In addition, there was no increase in the number of performance indicators compared with before implementation. This is due to the “Certainty” perspective, which prioritizes the evidence needed for decision making. Under the previous performance measurement system, many performance indicators lacked clear connections to organizational goals. The SPIDF, by contrast, establishes a causal relationship between performance indicators and the organization’s goals and strategies.

Compared with pre-implementation practices, the performance indicators became more context sensitive, allowing them to be flexibly adapted in response to changes in the external environment or strategic priorities. In turn, these revised performance indicators remained well suited to changing conditions, facilitating timely responsiveness to external factors and the evolution of strategies.

On the other hand, there were performance indicators that were agreed to continue to be used unchanged even after SPIDF was applied. These were not used to measure the effectiveness of strategic actions, but rather to monitor whether the organization was performing stably. It was suggested that when using SPIDF, it is necessary to clarify the purpose of the performance measurement indicators.

Research Question 3: How does the SPIDF influence the flexibility, responsiveness, and cooperation in performance measurement and decision making?

Impact on Practitioners

Prior to the introduction of SPIDF and during its initial use, there were two types of gaps in the understanding of performance among practitioners, decision makers, and other stakeholders. These gaps were thought to result from a lack of methods for assessing the alignment of performance indicators with organizational goals and strategies. Once the SPIDF was introduced and discussions using the SPID Matrix commenced, practitioners began incorporating feedback from decision makers and stakeholders based on the causal analysis when selecting performance indicators.

Furthermore, this feedback was no longer confined to internal organizational concerns; it also took external factors into account, supported by the “Context” viewpoint. Consequently, the organization became more capable of adapting its performance indicators to its evolving context. These changes in the inputs used to derive performance indicators can be attributed to the facilitation provided by visualizing the indicator-derivation process through the SPID Matrix and the “Context” perspective.

Impact on Decision-Makers

Decision makers are inclined to make decisions under bounded rationality; consequently, activities based on these decisions should be evaluated through performance. However, there are often cases where it is necessary to decide whether to continue or improve a strategic initiative without sufficient information from performance measurement. One prevailing cause of this phenomenon is the prevalence of performance indicators skewed toward a formative perspective. By applying the SPIDF, practitioners shifted from an overemphasis on formative assessments of strategic initiatives toward selecting performance indicators aligned with their goals. Moreover, the SPIDF can derive summative as well as formative performance indicators. Consequently, decision makers were empowered to make informed decisions, leveraging information that was particularly valuable in highly uncertain environments.

By sharing the relationship between performance indicators and the organizational context with stakeholders via the SPIDF, decision makers were able to foster sense-making of strategic issues and expedite the implementation of improvements. This outcome stems from the SPID Matrix’s role in reducing gaps in situational awareness. Making the indicator-derivation process transparent reinforced cooperation between practitioners and decision makers, enabling them to swiftly identify problems with proposed or existing initiatives. In turn, decision makers could assess the validity of the proposed action plans more precisely and streamline the approval process. This transparency is also considered to have helped avoid the behavioral side effects of performance indicators, such as “moving goalposts” and “short-termism”.

Because the SPIDF selects performance indicators based on the areas of highest uncertainty in decision making, there was limited impact on cognitive load from excessive information. For areas of high uncertainty, performance indicators were selected to reduce uncertainty. For areas of low uncertainty, on the other hand, no further reduction was pursued because the remaining uncertainty was already accepted. The cognitive range of decision makers was expanded by using the matrix of goals and initiatives to generate candidate performance indicators. Beyond the recognized goals and strategic initiatives, however, the cognitive range was not expanded, so some uncertainty remains unrecognized.

Impact on Cooperation between Practitioners and Decision-Makers

Previously, while organizational goals and strategies were thoroughly explained to stakeholders, no comparable level of detail was shared regarding the derivation of performance indicators. These performance indicators were simply presumed to be consistent with the organization’s goals and strategies, without much attention paid to how that alignment was achieved. It has been pointed out that unexpected results may occur in such settings through manipulation such as “moving goalposts” and “short-termism.” Indeed, this study identified two types of gaps between practitioners and decision makers: one concerning the purpose of the performance indicators and one concerning how to interpret the measurement results. Such gaps undermined the efficiency of strategy formulation and execution. Moreover, when the organization’s goals or strategic initiatives changed, the performance indicators were not updated to reflect these shifts, resulting in a misalignment between the performance indicators and the organizational context - ultimately contributing to dysfunctional performance measurement. In contrast, the use of the SPIDF requires clarifying the connections between the organizational context and its performance indicators. Visualizing and discussing the derivation process of these performance indicators helps expose and mitigate inconsistencies, fostering a shared understanding between practitioners and decision makers.

According to Hofstede’s six-dimensional model [50], the applicability of the SPIDF is influenced by three primary dimensions: Power Distance, Uncertainty Avoidance, and Short-term versus Long-term Orientation. In organizations with small power distance, decision makers and practitioners can collaborate on an equal footing, so the SPIDF process - identifying uncertainty, aligning goals and strategies, and jointly conceiving indicators - functions efficiently. Where the hierarchical gap is large, it is essential to establish mechanisms that bridge it, such as workshops and cross-functional teams, with top management providing leadership.

In organizations with low uncertainty avoidance, the SPIDF philosophy of narrowing down to the most salient uncertainty at the time and deciding on the minimum necessary indicators has been shown to be effective. Conversely, organizations with strong uncertainty avoidance tend to implement an excessive number of indicators in pursuit of a sense of security, quickly escalating measurement costs and the cognitive load from information overload. Governance is therefore needed to explicitly delineate the indicator selection criteria and the prioritization process, curtailing the proliferation of superfluous indicators.

Finally, regarding the divergence between short-term and long-term orientations, organizations with a pronounced long-term orientation are more adept at delineating the causal relationship between their desired future vision (long-term state) and immediate results (short-term state); the portfolio design of short-term and long-term indicators advocated by the SPIDF proved effective there. Conversely, organizations with a pronounced short-term orientation tend to modify their indicators frequently to match immediate concerns, an adaptability that can lead to “hyper-adaptation”. To mitigate this risk, organizations should maintain a minimum set of long-term monitoring indicators and establish procedural barriers against excessive change.

Summary

By strengthening the alignment between performance indicators and the organizational context, the SPIDF increased the effectiveness of performance measurement in the decision-making process. In particular, performance indicators came to reflect both strategic progress and goal attainment in a more timely manner. Moreover, cooperation between practitioners and decision makers improved substantially, and smoother information sharing deepened their mutual understanding of the organization’s strategic direction and results. As a result, the organization achieved sense-making more quickly and effectively across organizational layers. Following the SPIDF’s introduction, practitioners were better able to update and create performance indicators, allowing for greater flexibility.

Given the bounded rationality of decision makers, who cannot exhaustively acquire all information, the primary role of performance indicators is to provide information that reduces uncertainty in the decision-making process to an acceptable level. Moreover, timely feedback on the outcomes of activities undertaken based on these decisions supplements information that was previously lacking, establishing a learning loop that determines whether measures need revision. However, an excessive proliferation of indicators can impose an undue cognitive burden through information overload, impeding decision making by increasing its complexity. It is therefore necessary to identify and prioritize the uncertainties that significantly affect decision making, and to design and operate indicators accordingly.

The SPIDF fosters flexibility in the performance measurement process. Even in dynamic environments, it provides practitioners with the means to maintain robust performance indicators, allowing them to be rapidly adjusted to reflect both external changes and internal factors. By making performance measurement more adaptable, indicators can be updated promptly in response to evolving strategies or environments, thus keeping assessments current and accurate.

Moreover, the SPIDF supports decision makers’ responsiveness by providing key evidence. By updating performance indicators in alignment with strategic priorities and doing so at opportune times, the SPIDF ensures that decision makers receive the evidence they need in a timely manner. In particular, the SPIDF’s integrative approach to analyzing the measurement data and environmental factors has led to improved decision quality.

Finally, the SPIDF promotes cooperation between practitioners and decision makers, helping bridge gaps in understanding across the organization. Through open discussion centered on performance indicators, both parties form a shared perspective on their meaning and usage. As recognition of the performance indicators’ alignment with strategic goals grew, trust in the decision-making process was also enhanced. With discrepancies in perception reduced, decisions became more consistent organization-wide, making performance indicators an even more effective tool. As a result, the overall organizational performance improved and the strategies were executed with fewer obstacles.

5.2 Identified Issues during Implementation of SPIDF

Three issues were identified in the application of the SPIDF. The first was that some performance indicators remained unchanged and continued to be used even after the SPIDF was applied. The second was a sense-making issue that arose during the design of the performance indicators, and the third was an increase in measurement burden that arose during their implementation.

Some performance indicators did not change during this study. One possible reason is that the study period was limited. At the same time, quality assurance requires that there be no deviation from the planned quality, so performance is expected to be demonstrated continuously at the same level. This suggests that fixed performance indicators were also needed to monitor the stability of the organization. Such indicators should detect unexpected effects by regularly monitoring the intended performance. In other words, alongside performance indicators that must adapt to changing situations, it is necessary to monitor the effects of uncertainties that fixed performance indicators do not capture.

When practitioners derived performance indicators, they had difficulty analyzing the state of the organization because they did not sufficiently understand its desired state. Decision makers set the goals abstractly with the intention of cascading them through the organization, so information about the organization’s roles was needed to make the goals concrete. This arose because the decision makers who define the organization’s goals and strategies were separate from the practitioners who formulate performance indicators. Resolving this situation requires leadership and communication from the decision makers. In addition, the decision makers themselves were unable to clearly specify target states owing to the uncertainty of the situation, making it necessary to explore the effectiveness of intervention measures. It was therefore difficult to derive performance indicators with a single application of the SPIDF, and iterative discussion was needed to specify organizational goals, organizational strategies, and performance indicators.

The problem with updating indicators to adapt to changes is that it increases the burden of performance monitoring. With fixed performance indicators, it is possible to optimize measurement methods and implement efficient measurements. On the other hand, when performance indicators change, practitioners must establish new measurement methods and obtain benchmarks or measure the pre-intervention state as a baseline for evaluating changes in performance. In this study, performance indicators were revised only in limited cases, but each time performance indicators were changed, it was necessary to obtain information on the comparison targets, which increased the burden on practitioners. As a solution, it is believed that digital technology and IT systems can be used to efficiently collect and analyze data.

5.3 Limitation

SPIDF has proven effective in the adaptive management of performance measurement, complementing the bounded rationality of decision-makers in contexts characterized by ambiguous or complex causal relationships. However, its efficacy is constrained in organizations where the causal relationship is clear and quantitative indicators can be readily established. There is a risk that long-term monitoring indicators will be diminished due to the prioritization of responsiveness to change. Consequently, it is imperative to establish a balance between adaptive and fixed indicators. The implementation of the system is contingent on the leadership of decision-makers and collaboration with practitioners, and its scope of application is constrained in organizations with low uncertainty and those that necessitate comprehensive performance evaluation. This study compared R&D departments and QMS departments, which have different levels of uncertainty, to suggest the scope of applicability of the SPIDF. However, quantitative verification in a variety of industries and environments is a future issue.

5.4 Future Research Direction

In order to enhance the applicability of the framework, it is imperative to continuously improve the tools and procedures. While the SPIDF effectively supports organizations, further refinements are necessary to improve ease of use during implementation. Specifically, there is a need to simplify the procedures and develop more intuitive tools to facilitate user interaction with the framework.

Furthermore, a comprehensive investigation into the long-term utility and potential challenges of the SPIDF is imperative, necessitating the implementation of system dynamics modeling to quantitatively assess impacts over an extended timeframe.

Given the potential influence of organizational culture on the effectiveness of SPIDF, it is essential to expand the study to include organizations with diverse cultural backgrounds to assess the differential impact of SPIDF.

6. Conclusion

Feedback based on performance measurement has long been used to inform decision making in the formulation and implementation of organizational strategy. However, maintaining consistency between evolving organizational conditions and performance indicators remains a challenge and often prevents timely decision support. This study examined how an approach for deriving performance indicators based on a matrix of organizational goals and strategies can be applied to the adaptive management of those indicators and how it influences organizational decision making. Through a case study, we confirmed that the SPIDF can be used in all four phases of the performance measurement lifecycle to derive and revise indicators aligned with the organization’s goals and strategies. This finding illustrates how the SPIDF enhances practitioners’ flexibility in selecting and updating performance indicators, thereby enabling rapid feedback to strategic initiatives. Moreover, applying the SPIDF strengthened decision makers’ responsiveness by adapting performance indicators to strategic goals and environmental changes, which in turn expedited decision making. It also improved coordination between practitioners and decision makers, reducing gaps in performance perception and fostering more consistent decisions throughout the organization. We also discussed the conditions under which adaptive management is effective, compared with the use of performance indicators as internal controls. The findings indicated that adaptive management of performance measurement yields short-term benefits in responsiveness to change, but a balance with fixed performance indicators is necessary to ensure long-term stability.

7. References
 
© 2025 Japan Association for Management Systems