
Courting Quality: A Survey of Quality Management Practices in American Courts

by Jarrett Perlow

Vol. 108 No. 2 (2024) | Judges Under Siege?

Quality management1 — or the practices an organization creates to ensure customer requirements are met — is usually associated with the corporate world.2 But its aims are just as relevant to state-run entities like courts. An overview of those practices tells us how far courts have come in adopting quality management, and how far they have to go, in making court administration efficient and effective.

A History of Court Quality Management Practices

Since the 1990s, academics and state court leaders have developed several quality-based concepts and tools to measure and evaluate the effectiveness of court operations. Until recently, though, these state practices have not fully incorporated the broader quality management landscape applied in other areas of government. And formalized quality management practices in the federal courts are still in their infancy.

Early Development of Quality Management Practices in the Courts

Current performance measurement and management principles in the courts grew out of the “total quality management”3 movement, which developed largely as part of the post-World War II economic redevelopment of Japan.4 While initially focused on private-sector manufacturing, quality management concepts eventually garnered the attention of public and court administrators, most prominently during the Clinton administration’s Reinventing Government initiative of the 1990s.

Within the judiciary, former Vice President of the National Center for State Courts (NCSC) Alexander Aikman’s handbook for judicial administrators identified four likely benefits to courts that implement quality management: (1) improved productivity, (2) improved service to the public, (3) improved processing to facilitate improved judge and litigant work, and (4) improved relationships with other court partners.5 To support interested courts, Aikman outlined a three-year, incremental plan for implementing a quality management program.6 It included collecting data, establishing quality standards and measures, and reviewing and modifying those standards.7 The concept of such standards within the courts was new at the time, with Aikman’s handbook highlighting the work of only 11 courts — from federal, state, and municipal levels — engaged in quality management activities.

Trial Court Performance Standards and CourTools

Over a three-year period from 1987 to 1990, the NCSC established the Trial Court Performance Standards Project to “develop measurable performance standards for the nation’s general jurisdiction state trial courts.”8 The standards were designed to measure not the performance of individual judges but that of the whole trial court organization, using a self-assessment-based system.9 The project created 22 performance standards — with accompanying measurements — organized around five areas: “(1) access to justice; (2) expedition and timeliness; (3) equality, fairness, and integrity; (4) independence and accountability; and (5) public trust and confidence.”10 Over time, the accompanying measurements grew to as many as 75 before settling at 68, based on continued field application and court input.11 Yet less than a decade later, in 2005, the NCSC replaced the Trial Court Performance Standards with CourTools, which built upon the original five areas of the standards but focused on only ten measurements.12

In reviewing the history of performance measurement standards in the courts, Richard Schauffler, NCSC director of research services, observed “how intimidated the court community was by the notion of performance measures.” That intimidation led to a disclaimer in the Trial Court Performance Standards that “the measures were only to be used for a court’s internal management,” a message “not lost on the states”: failure to implement carried little consequence and gave courts little incentive to take action.13 Schauffler offered other reasons for limited adoption of the standards, including the excessive number of measurements, the lack of consistent leadership, and “insulat[ion of] courts from pressures [or compulsion] to adopt performance measure[s]” like their executive agency counterparts.14

The move to the slimmed-down CourTools resulted from new judicial leadership seeking to promote “effective judicial governance and accountability.”15 Like the Trial Court Performance Standards, the ten measurements that made up CourTools looked at broad organizational trends, such as court users’ perceptions of access and fairness, disposition and case clearance rates, and costs by case.16 However, CourTools also provided simple, clear guidance, standards, and methodology that courts of any type could readily access, implement, analyze, and report. By taking a more focused approach to consistent data collection and analytical methods than the Trial Court Performance Standards, CourTools “provided the basis for creating a new perception that measurement could be done fairly, accurately, and consistently within and across courts within a given state, and among states.”17 Accordingly, CourTools was more widely adopted than its predecessor.18 Those involved with creating CourTools noted early on that the initial response from the court community was favorable and that its “small but well-considered set of outcomes” was “widely accepted as valuable” in demonstrating both outcomes to the public users of courts and the cost-effectiveness of court operations.19
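To make the flavor of these measures concrete, consider the case clearance rate, which CourTools materials commonly define as outgoing (disposed) cases divided by incoming (filed) cases over the same period. The minimal Python sketch below, with invented figures, shows how a court analyst might compute it; the function and data are illustrative only and not part of CourTools itself.

```python
def clearance_rate(outgoing_cases: int, incoming_cases: int) -> float:
    """Clearance rate: outgoing (disposed) cases as a share of incoming
    (filed) cases over the same period. A rate at or above 100 percent
    means the court is keeping pace with its incoming caseload."""
    if incoming_cases == 0:
        raise ValueError("incoming caseload must be nonzero")
    return outgoing_cases / incoming_cases

# Invented monthly (disposed, filed) counts for illustration only.
monthly = {"Jan": (410, 432), "Feb": (455, 440), "Mar": (388, 401)}
for month, (disposed, filed) in monthly.items():
    print(f"{month}: {clearance_rate(disposed, filed):.1%}")
```

A sustained rate below 100 percent signals a growing backlog, which is precisely the kind of organizational trend CourTools was designed to surface.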

Appellate Court Performance Standards, Appellate CourTools, and Model Time Standards

Appellate courts must be “consistent, fair, and timely” in resolving cases at their second, or even third, level of review, according to the Appellate Court Performance Standards of the mid-1990s.20 By using performance standards, they can “foster the trust and confidence of their constituents” regardless of the appellate court’s jurisdiction or place in the hierarchy of the court system.21

The Appellate Court Performance Standards — an initiative from state appellate courts in Oregon, Montana, and Arizona — eventually led to the release of an appellate-focused version of CourTools in 2009: Appellate CourTools. Designers of Appellate CourTools believed the use of such measurements would solidify appellate courts’ “own independence and their leadership role within the judicial branch.”22 Appellate CourTools reduced the number of measurements from ten in CourTools to six, but continued to focus on the same performance areas and to provide accessible measurement tools to aid in implementation.23 The Oregon Court of Appeals adapted four of these performance measurements to focus on three values to drive new accountability: (1) quality, (2) timely resolution of cases, and (3) “cultivation of public trust and confidence.”24

Five years after the release of Appellate CourTools, a collaboration of the Joint Court Management Committee of the Conference of Chief Justices and the Conference of State Court Administrators issued new model time standards for both state intermediate appellate courts and courts of last resort, which could either adopt the new standards as is or “modify them to establish time standards based on their own particular circumstances.”25 In establishing these time standards, the authors emphasized that appellate courts must be accountable “for achieving the goals of productivity and efficiency while maintaining the highest quality in resolving cases before them.”26 The decision to focus on prompt resolution reflected the view that timeliness is “probably the most widely accepted objective measure of court operations and is also, fairly or otherwise, a primary concern of the other branches of government and the public regarding the courts.”27 Although no studies yet showed that having time standards led to faster resolution of cases, the mere establishment of such standards demonstrated, according to the developers of the Model Time Standards, a “[c]ommitment toward ensuring efficiency and timeliness in the resolution of appellate cases.”28

High Performance Court Framework

State court administrators, however, still lacked a unifying management framework for using either CourTools or performance standards to enhance court operations. In response to an emerging crisis of shrinking state court funding, coupled with a “decline in the trust and confidence” of citizens in state courts, the NCSC in 2010 launched a new effort to provide a “common way” to evaluate court performance management activities: the High Performance Court Framework.29 By moving away from specific performance measurements and instead focusing on broader performance indicators that would proactively identify issues within courts, the new framework organized indicators into four areas: (1) effectiveness, (2) procedural satisfaction, (3) efficiency, and (4) productivity.30 In contrast to the “conceptualized” approach of the Trial Court Performance Standards, the High Performance Court Framework was designed to “focus[] on case processing quality . . . to assure each person’s constitutional right of due process” based on four underlying administrative principles: “giving every case individual attention,” “treating cases proportionately,” “demonstrating procedural justice,” and “exercising judicial control over the legal process.”31

In the context of performance measurement and management, the High Performance Court Framework recommended courts use a balanced scorecard tool to direct “overall business strategy into specific quantifiable goals and to monitor the organization’s performance in terms of achieving these goals.”32 Because the traditional balanced scorecard tool used in the private sector — which focuses on financial, customer, internal, and growth concerns33 — did not readily transfer into the court context, the framework defined four of its own points of focus: customer, internal operating, innovation, and social value.34 The framework concluded by identifying strategies that courts could use to begin implementation, including the use of a “quality cycle” — a five-step cyclical process for continuous improvement based on problem identification, data collection, data analysis, response to the analysis (“corrective action”), and then evaluation.35
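Because the quality cycle is inherently iterative, it can be pictured as a simple program loop. The Python sketch below is purely illustrative: the placeholder functions, example problem, and figures are invented here and are not drawn from the framework itself.

```python
# The framework's five-step quality cycle rendered as a loop.
# Each step is a placeholder a court would replace with its own
# data sources and corrective actions.

def identify_problem():          # Step 1: problem identification
    return "civil filings taking too long to docket"

def collect_data(problem):       # Step 2: data collection
    return {"days_to_docket": [1, 2, 9, 1, 12, 3]}  # invented sample

def analyze(data):               # Step 3: data analysis
    days = data["days_to_docket"]
    return sum(d > 5 for d in days) / len(days)  # share over 5 days

def take_corrective_action(finding):  # Step 4: response to analysis
    print(f"{finding:.0%} of filings exceed 5 days; adjusting staffing")

def evaluate():                  # Step 5: evaluation; if the fix failed,
    return False                 # the cycle begins again

resolved = False
while not resolved:
    problem = identify_problem()
    finding = analyze(collect_data(problem))
    take_corrective_action(finding)
    resolved = evaluate()
    break  # illustration only: run a single pass rather than loop forever
```

The loop structure mirrors the Plan, Do, Check, Act cycle discussed in note 35: evaluation feeds back into a new round of problem identification.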

Shortly after publication of the framework, its authors summarized in Future Trends in State Courts 2011 the basic managerial ingredients of a high performance court as (1) administrative principles “that define and support the vision of high administrative performance,” (2) a managerial culture “committed to achieving high performance,” (3) performance measurement through a systematic assessment of a court’s ability to “complet[e] and follow[] through on” its goals, (4) performance management that “responds to performance results and develops its creative capacity,” and (5) use of the quality cycle to “support[] ever-improving performance.”36

International Framework for Court Excellence

Concurrently on the international front, in 2007 court administrators from several countries formed the International Consortium for Court Excellence “to develop a framework of values, concepts and tools for courts and tribunals, with the ultimate aim of improving the quality of justice and judicial administration.”37 This framework, called the International Framework for Court Excellence and now in its third edition, asks courts to assess and score themselves on seven areas of court excellence: court leadership; strategic court management; court workforce; court infrastructure, proceedings and processes; court user engagement; affordable and accessible court services; and public trust and confidence.38

The self-assessment scoring guidelines follow a maturity model approach (one geared toward achieving a certain performance level),39 establishing a series of values statements and requiring court administrators to assign a score (from 0 to 5) reflecting how the court approaches various general statements within each of the seven areas.40 Each area also has an effectiveness statement and a separate scoring table for courts to evaluate how well they perform in that area.41 After completing the self-assessment, court participants tabulate the points for each section and the “overall indication of the court’s performance.”42 From this score, courts are encouraged to create an improvement plan to address the various issues, including measurements for performance and progress.43
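As a rough illustration of that tabulation, the Python sketch below assigns invented scores (0 to 5) to a handful of hypothetical statements in each of the seven areas named above, then totals each section and the overall result. The statement counts and scores are invented; the framework publishes its own statements and scoring tables.

```python
# Hypothetical self-assessment tabulation in the style of the
# International Framework for Court Excellence. Scores run from
# 0 (not addressed) to 5; all values below are invented.
area_scores = {
    "Court leadership": [4, 3, 5],
    "Strategic court management": [3, 3, 4],
    "Court workforce": [2, 4, 3],
    "Court infrastructure, proceedings and processes": [3, 2, 2],
    "Court user engagement": [4, 4, 3],
    "Affordable and accessible court services": [3, 3, 3],
    "Public trust and confidence": [2, 3, 4],
}

section_totals = {area: sum(scores) for area, scores in area_scores.items()}
overall = sum(section_totals.values())
maximum = sum(5 * len(scores) for scores in area_scores.values())

for area, total in section_totals.items():
    print(f"{area}: {total}/{5 * len(area_scores[area])}")
print(f"Overall indication: {overall}/{maximum}")
```

Low section totals, rather than the overall figure alone, are what would steer the improvement plan toward particular areas.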

The initial 2008 version of the international framework, though, was lacking in the area of performance measurement and did not include the current self-assessment statements. In their publication introducing the High Performance Court Framework, researchers Brian Ostrom and Roger Hanson characterized the International Framework (as well as the Trial Court Performance Standards) as “lofty [in] nature” and presented with a “high level of abstraction,” which made them “not easily defined for use in a systematic way to assess court performance in the real world.”44 Specifically, the international framework focused on an image of the “ideal court,” with a “more limited emphasis on measurement and the identification of particular indicators of performance.”45

Following similar feedback from the international court community, the International Consortium issued its first Global Measures of Court Performance in 2012, which identify performance measurement and management as essential tools for courts. The global measures also provide “focused, clear, and actionable performance” standards that align with the areas of court excellence found in the international framework.46 Of the current 11 measures, nine were adaptations of the CourTools for trial and appellate courts.47 The global measures, now in their third edition, provide detailed methodology for implementation, as well as examples of real-life application.

Since its adoption, the international framework has been well-received by many courts around the world. In a 2017 research paper for the International Consortium for Court Excellence, Elizabeth Richardson summarized its use in 13 different courts.48 She concluded that courts using the international framework found the self-assessment process to be a “useful tool for identifying areas of operation and engagement that need improvement.”49 Even so, she noted, self-assessments may be scored inconsistently due to local variation.50

Application of Judiciary Quality Management Practices

Both before and since the release of the High Performance Court Framework in 2010, several courts have implemented quality management practices. Two illustrations follow.

New Mexico. As a pre-High Performance Court Framework model, New Mexico launched a four-year “total quality service” program across its courts,51 managing significant organizational cultural change that included fostering a positive work environment and delivering consistent organizational performance and quality customer service.52 Taking an incremental approach, it then defined various performance measurements and indicators, which led to identifying 12 areas of focus for process-improvement teams.53 Of note, the New Mexico courts developed and used a self-assessment tool based on the 1999 Malcolm Baldrige National Quality Award.54 Ultimately, the state created new performance indicators and at least one corresponding process.

Arizona. As a post-High Performance Court Framework model, the Maricopa (Ariz.) Probate Department requested that the NCSC evaluate its probate program. The program evaluators, who included two of the developers of the High Performance Court Framework, used the framework “to examine . . . efforts to increase accountability and to allocate judicial officer and court staff resources more proportionately in monitoring . . . cases.”55, 56 The report cited the probate department’s collection of data and its adherence to a clear work plan as successful uses of a continuous improvement process. It concluded that these actions resulted in “a system for organizing . . . work that enables ongoing review and future systematic evaluation” — a primary goal of the High Performance Court Framework.57

Commentary on Court Quality Management Practices

Despite progress, current court quality management practices have their limitations.

First, current performance measures, such as CourTools, are useful for providing a high-level overview of court performance, but as court executive Jake Chatters has argued, they “often provide little value to most staff, supervisors, and line managers.”58 Instead, he has advocated for “operational-level performance measure[s] . . . that focus on the timeliness and quality of the activities performed by line staff,” such as the “percentage of documents processed within a certain number of days.”59 In contrast, Chatters views “backlog reports” — noting how behind staff may be — as inherently negative and backward-facing, with limited ability to inform court administrators on how to address new and future challenges.60 Although Chatters did not propose a new performance measurement system, he identified several “implementation principles” to be used in crafting these frontline measurements, including considering timeliness and quality together, defining success through measurements, and avoiding cumbersome measurement systems such as the Trial Court Performance Standards.61
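Chatters’s example measure translates directly into a computation. The Python sketch below, using invented dates, shows the “percentage of documents processed within a certain number of days” for a small batch of filings; the function name and sample data are assumptions for illustration.

```python
from datetime import date

def pct_processed_within(doc_dates, days):
    """Share of documents processed within `days` of receipt: the kind
    of front-line, operational measure Chatters advocates."""
    within = sum((done - received).days <= days
                 for received, done in doc_dates)
    return within / len(doc_dates)

# Invented (received, processed) date pairs for illustration only.
docs = [
    (date(2024, 3, 1), date(2024, 3, 2)),
    (date(2024, 3, 1), date(2024, 3, 8)),   # missed the window
    (date(2024, 3, 4), date(2024, 3, 5)),
    (date(2024, 3, 5), date(2024, 3, 6)),
]
print(f"Processed within 3 days: {pct_processed_within(docs, 3):.0%}")
```

Unlike a backlog report, such a measure is forward-usable: a supervisor can set a target (say, 90 percent within three days) and watch whether process changes move the number.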

Second, Professor Ingo Keilitz has argued that empirically based performance measurements can drive court development, particularly in the international context.62 Building on the two versions of CourTools and the International Framework, Keilitz provided a new working definition of performance measurement and management for the courts:

The discipline of monitoring, analyzing, and using organizational . . . performance data on a regular and continuous basis in real or near-real time for the purposes of improvements in organizational efficiency and effectiveness, in transparency and accountability, and in increased public trust and confidence in the organization.63

Keilitz then posed a series of items for courts to consider in evaluating their own performance: comparative measurements from baseline to current levels; performance trends over time; variability and predictability in performance over time; and identification of actions and strategies to start, stop, or continue based on measurement results.64 He contended that the focus on performance measurement reflected a move away from prior top-down approaches within international justice systems toward local ownership of efforts like capacity development and legitimization.65 And he concluded that current performance management efforts remained “relatively limited” and needed to be documented and promoted to maintain consistency and harmonization at all levels of judicial governance.66
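Keilitz’s checklist maps naturally onto simple descriptive statistics. The minimal sketch below, with invented yearly figures for a single measure, computes baseline-to-current change, average trend, and variability; the data and variable names are illustrative assumptions only.

```python
import statistics

# Invented yearly values of one performance measure (e.g., a clearance
# rate in percent) for illustration only.
history = {2019: 94.0, 2020: 88.5, 2021: 91.0, 2022: 95.5, 2023: 97.0}
values = list(history.values())

# Comparative measurement: baseline vs. current level.
print(f"Change from baseline: {values[-1] - values[0]:+.1f} points")

# Trend over time: average year-over-year change.
yoy = [b - a for a, b in zip(values, values[1:])]
print(f"Average yearly change: {statistics.mean(yoy):+.1f} points")

# Variability, as a rough proxy for predictability.
print(f"Std. deviation: {statistics.stdev(values):.1f} points")
```

The resulting numbers correspond to the first three items on Keilitz’s list; the fourth, deciding what to start, stop, or continue, remains a management judgment informed by them.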

This author has previously criticized existing judiciary quality management tools — CourTools and the International Framework — as insufficiently independent to effectively evaluate the quality of a court’s performance.67 Instead, this author has argued, and demonstrated in practice, that combining neutral maturity evaluation with established quality management systems (such as Lean Six Sigma) gives courts a more robust method for building their own, customizable quality management system. Such a system can be implemented gradually under an existing, incremental framework, such as the ASQ/ANSI G1:2021 Guidelines for Evaluating the Quality of Government Operations and Services (ASQ/ANSI G1), discussed below in more detail.68

Finally, William Raftery of the NCSC has observed that most states have now adopted time performance standards but that a one-size-fits-all solution does not work.69 Specifically, Raftery noted that “standards often appear to be aspirational rather than based on actual performance,” which can “lead to individuals or organizations simply giving up on trying to meet the standards at all.”70 Performance standards should instead be attainable, accounting for continued backlogs and disruptions resulting from the recent pandemic.71 In other words, if performance goals bear no connection to actual operational performance, applying standards for standards’ sake will not lead to quality or improved performance.

Given these deficiencies, court administrators might look to other government entities, with their longer history and broader use of quality management practices, to see how quality practices can be better integrated into court operations.

Quality Management Practices in the Public Sector More Broadly

As noted, quality methods began to take hold in the public sector in the 1980s, culminating in the creation of the National Performance Review Office as part of the Clinton administration’s Reinventing Government initiative. The goal was “clarification of the purposes of each [public] institution and definition of the appropriate measures to gauge progress toward those specific organizational objectives.”72

Following the Clinton initiative, Gregory H. Watson and Jeffrey E. Martin endeavored to craft an operational definition of quality in government and to identify accompanying quality practices.73 They encouraged government to follow recognized quality management principles and practices from the private sector,74 to focus on good customer service, to integrate performance excellence measures, and to use private-sector benchmarks.75

In the ensuing 20 years, public-sector organizations have pursued quality management integration primarily by adopting Lean Six Sigma methodologies. Developed by Toyota in the 1950s and 1960s, “Lean” is an approach that focuses on eliminating waste from a system or process using nontechnical tools. “Six Sigma,” developed by Motorola in the 1980s, is a method to reduce process variation using statistical process control and related statistical applications. The two continuous improvement approaches were combined in the early 2000s and dubbed Lean Six Sigma.
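To give a concrete sense of the statistical process control underlying Six Sigma, the sketch below applies the textbook three-sigma rule: control limits are estimated from an in-control baseline period, and later observations falling outside them are flagged as signals of special-cause variation. The data are invented, and the sketch is generic statistics, not any Six Sigma program’s prescribed tooling.

```python
import statistics

def control_limits(baseline):
    """Shewhart-style three-sigma control limits around the baseline
    mean, the classic statistical-process-control rule."""
    mean = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return mean - 3 * sigma, mean + 3 * sigma

# Invented daily document-processing times (minutes), known to be
# in control, used to set the limits.
baseline = [32, 35, 31, 33, 34, 36, 30, 33, 32]
low, high = control_limits(baseline)

# New observations are judged against the established limits.
new_days = [33, 58, 34]
flagged = [t for t in new_days if not (low <= t <= high)]
print(f"Limits: {low:.1f} to {high:.1f} min; flagged: {flagged}")
```

Separating the baseline from the new observations matters: computing limits over data that already contains the anomaly would inflate the estimated variation and hide the signal.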

Lean practices have received greater attention and adoption within government: a 2015 study by the American Society for Quality (ASQ) Government Division reported that approximately 20 percent of state government offices had established Lean improvement programs.76 In a follow-up study two years later, respondents identified favorable improvements in operational efficiency and effectiveness but also reported that barriers still existed, namely a lack of leadership support.77 Similar obstacles and challenges remain in the overall adoption of Lean in government, including an “overreliance on individual tools rather than incorporating the philosophy [of Lean] . . . to the organization,” even though “Lean is the dominant methodology used in many areas of the public sector.”78

In Lean Six Sigma for the Public Sector, Brandon Cole outlined nine challenges to adopting Lean Six Sigma in the public sector that do not exist in the private sector: “1. Hierarchical or stove piped environment, 2. Limited sense of urgency, 3. Lack of leadership support, 4. Lack of profit or revenue focus, 5. Lack of common goals, 6. Lack of customer focus, 7. High employee turnover, 8. Complexity of the public sector, [and] 9. Mix of various employee types.”79 However, in the face of these challenges, Cole identified different Lean Six Sigma approaches, tools, and methods that could be used to overcome each of these issues. Cole then provided readers with a road map for government agencies to set up their own Lean Six Sigma programs, introduced basic Lean and quality tools, and offered recommendations to create a sustained culture of continuous improvement.

Quality Standards in Government

ISO 9001

Aside from work management methodologies, such as Lean Six Sigma, and specific application tools, such as CourTools, government entities have also considered adopting independently established quality standards, including third-party standards such as ISOs. ISOs are a system of numbered, voluntary rules for managing quality that an organization may adopt; they are developed by the International Organization for Standardization, founded in the 1940s in Geneva. Currently, the international standard for organizations designing a quality management system — and one adopted by some government organizations — is ISO 9001.80 ISO 9001 lists standards that organizations should follow in managing their quality systems. The benefits of an ISO 9001-aligned quality management system include the ability to provide consistent services that meet customer requirements.81 Through the adoption of a quality management system, organizations align their management in pursuit of quality by “understand[ing] the needs of their customers and . . . anticipat[ing] future needs” where “[q]uality isn’t everything; it’s the only thing.”82 However, these benefits may be outweighed by the perception among some that ISO 9001 is “overly complex and not fully applicable in many branches of government.”83

Maturity Modeling and ASQ/ANSI G1

As an alternate approach, the ASQ Government Division has developed a structured system management standard based on defining and documenting “best known operational practice” for each manager, as well as the application of Lean improvement efforts to system design.84 In this approach, an organization is scored in four areas: systems purpose and structure; goal directedness through measures and feedback; management of intervening variables and risk; and alignment, evaluation, and improvement.85 The goal was that a government standard would “provide an objective professional opinion [through self-directed or independent audits] of the quality of management of any public entity in a report-card format.”86 And the resulting benefit of such an approach would be to fully ingrain quality management into government practices and use the public pressure of conformity to auditable requirements — as with comparable financial audits — to make it very difficult for agencies to abandon quality practices.

In February 2021, the American National Standards Institute (ANSI) adopted the standard to evaluate the quality of government operations and services.87 As with ISO 9001, ASQ/ANSI G1 called for users to design processes around inputs and outputs and to define requirements and measurements for success, optimizing these processes into best practices. Unlike other performance standards, ASQ/ANSI G1 focuses on the evaluation and activities of individual managers in specific business activity groups — not an all-inclusive, top-down approach — and “provides objective scoring of the maturity of the use of well-known and beneficial quality practices at the organizational front-line.”88

This evaluative method follows a six-level maturity model for evaluating a government organization’s process or system. As with ISO 9001, ASQ/ANSI G1 integrates risk management, analysis, and mitigation requirements into the evaluation and assessment of the maturity level, so that “the organization’s managers [know] how much risk they are accepting based on the maturity of their processes and system.”
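In data terms, such an evaluation might look like the sketch below. Note the heavy caveat: the level labels are generic maturity-model vocabulary invented here for illustration, not the ASQ/ANSI G1 standard’s own level definitions, which readers should take from the standard itself.

```python
from dataclasses import dataclass

# A generic six-level maturity scale (0 through 5). These labels are
# common maturity-model vocabulary, NOT the ASQ/ANSI G1 definitions.
LEVELS = ["undefined", "ad hoc", "documented",
          "measured", "managed", "optimizing"]

@dataclass
class ProcessEvaluation:
    name: str
    level: int  # 0 (least mature) through 5 (most mature)

    def report(self):
        # Lower maturity implies more unmanaged risk being accepted,
        # echoing the standard's pairing of maturity with risk.
        risk = "higher" if self.level <= 2 else "lower"
        return (f"{self.name}: level {self.level} "
                f"({LEVELS[self.level]}); {risk} accepted risk")

for ev in [ProcessEvaluation("Opinion issuance", 4),
           ProcessEvaluation("Fee processing", 2)]:
    print(ev.report())
```

The point of the scale is the pairing in the output: a manager reading the report sees not just a score but how much unmanaged risk the current level implies.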

Finally, evaluations under ASQ/ANSI G1 are performed either by internal examiners of the organization or by trained, volunteer external examiners provided by the ASQ Government Division. Examinations are expected to conform with standard quality auditing practices in considering the appropriate maturity level of the organization’s process or system on a six-level scale. Organizations submitting for external evaluation can be validated by the ASQ Government Division and, depending on the maturity level, receive award recognition.89

In 2022, the Clerk’s Office of the U.S. Court of Appeals for the Federal Circuit became the first government organization and court to adopt and to receive award-level validation under ASQ/ANSI G1.90 The office subsequently produced a case study detailing its use and application of ASQ/ANSI G1.91 As a simpler approach than ISO 9001, ASQ/ANSI G1 provides a new standard for government organizations, including courts, to build and expand existing quality practices. However, additional application within government and the judiciary is necessary to fully evaluate the actual impact and effectiveness of this new quality resource.

***

With the many resources now available to court leaders, where does one start? Based on firsthand experience, this author recommends a combined approach of ASQ/ANSI G1 and the available court-specific tools. Incorporating quality management into a court unit is neither easy nor quick, yet quality management needs to start somewhere and truly never ends. As W. Edwards Deming — one of the founders of the total quality management movement — advised in his 14 Points on Quality Management, “[i]mprove constantly and forever the system of production and service, to improve quality and productivity, and thus constantly decrease costs.” By demonstrating validated quality practices that deliver cost-effective services to the public, such as those identified above, all American courts can serve as champions not only of the rule of law and individual rights but also of good government worthy of the public’s continued trust and confidence. Through this article, court leaders at all levels now have a resource to help them begin their quality journeys.


Jarrett Perlow is the circuit executive and clerk of court for the U.S. Court of Appeals for the Federal Circuit and an Institute for Court Management fellow of the National Center for State Courts. He earned his juris doctor, cum laude, and his bachelor of arts from American University. He completed his Lean Six Sigma Master Black Belt under the direction of Gregory H. Watson and is a senior member and current chair of the all-volunteer Center for Quality Standards in Government with the American Society for Quality.


  1. The views expressed in this paper are solely those of the author and do not reflect or represent the views of the U.S. Court of Appeals for the Federal Circuit, any other entity of the U.S. government, or ASQ. This article is based on the author’s independent ICM fellowship research through the NCSC completed in July 2023. The author thanks and acknowledges the following people for their advisory, research, and editorial assistance with this article: Joshua Adrian, James Alvino, John Baranzelli, Patrick Chesnut, J. D. Gingrich, Richard Mallory, Mandy Sarkissian, Kathleen Shambaugh, Gregory H. Watson, and Jason Woolley, as well as Amelia Thorn and the other editors at Judicature.
  2. Int’l Org. for Standardization, ISO Standard 9000 (2015); The History of Quality, American Society for Quality, https://asq.org/quality-resources/history-of-quality (last visited July 29, 2024).
  3. What was originally called “total quality management” is now referred to as either “quality management” or a “quality management system,” which incorporates the management philosophy of total quality management but focuses on formalized processes and procedures to achieve quality in an organization. See History of Total Quality Management, American Society for Quality, https://asq.org/quality-resources/total-quality-management/tqm-history (last visited July 1, 2023).
  4. See Alexander B. Aikman, Total Quality Management in the Courts: A Handbook for Judicial Policy Makers and Administrators 1 (1994), https://ncsc.contentdm.oclc.org/digital/collection/ctadmin/id/18/. For additional recounting of the history of quality management with a focus on early application within the courts, see generally Rudolph H. Ehrenberg, Jr., and Ronald J. Stupak, Administrative Theories Applicable to the Implementation of Total Quality Management in Public Sector and Service Organizations (1992); Anne Thompson, Total Quality Management: A Court Application (1993).
  5. Aikman, supra note 4, at 18.
  6. Id. at 45.
  7. Id. at 18.
  8.  Commission on Trial Court Performance Standards, Trial Court Performance Standards with Commentary 1 (1990).
  9. Id. at 2.
  10. Id. at 5.
  11. Commission on Trial Court Performance Standards, Trial Court Performance Standards and Measurement System Implementation Manual 1–2 (NCSC 1997).
  12. Trial Court Performance Measures, National Center for State Courts, CourTools, https://www.courtools.org/trial-court-performance-measures (last visited Aug. 23, 2024) [hereinafter “CourTools”].
  13. Richard Y. Schauffler, Judicial Accountability in the U.S. State Courts: Measuring Court Performance, 3 Utrecht L. Rev. 112, 119 (2007).
  14. Id. at 119–20.
  15. Id. at 120 (internal quotations and citation omitted).
  16. See CourTools, supra note 12.
  17. Schauffler, supra note 13, at 123.
  18. As one illustration of the use of these performance measures in courts and analysis identifying the benefits of such a system, see Alexis Allen, Organizing Performance: A Review of Performance Measures and Tools (2014), https://www.azcourts.gov/Portals/128/Docs/ICMF/AllenAlexis.pdf.
  19. William E. Hewitt, Brian Ostrom, and Richard Schauffler, Performance Measurement Gains Momentum Through CourTools, in Future Trends in State Courts 2006 95, 97, https://msa.maryland.gov/megafile/msa/speccol/sc5300/sc5339/000113/004000/004857/unrestricted/20071754e.pdf.
  20. See Roger A. Hanson, Appellate Court Performance Standards and Measures x (NCSC 1999).
  21. Id.
  22. John Doerner and Ingo Keilitz, Performance Measurement and Management in State Supreme Courts and Intermediate Courts of Appeal, Future Trends in State Courts 2009 114, 115, https://ncsc.contentdm.oclc.org/digital/collection/ctadmin/id/1486/.
  23. Appellate Court Performance Measures, National Center for State Courts, https://www.courtools.org/appellate-court-performance-measures (last visited Aug. 23, 2024).
  24. David Brewer, Appellate Court Performance Measurement: Transforming Process and Building Trust in the Oregon Court of Appeals, Future Trends in State Courts 2010 132, 132, https://ncsc.contentdm.oclc.org/digital/collection/ctadmin/id/1605/rec/17.
  25. National Center for State Courts, Model Time Standards for State Appellate Courts i (2014).
  26. Id. at 4.
  27. Id.
  28. Id. at 5–6.
  29. Brian Ostrom and Roger Hanson, Achieving High Performance: A Framework for Courts i–iii (2010).
  30. Id. at 2–3.
  31. Id. at 10–12.
  32. Id. at 31. Robert S. Kaplan and David P. Norton first proposed the balanced scorecard based on their observations of similar practices at Hewlett-Packard and Analog Devices. From these observations, Kaplan and Norton developed the scorecard as an easy-to-comprehend visual summary for executives to view financial and operational measures to align the organization and to drive future improved performance. See Robert S. Kaplan and David P. Norton, The Balanced Scorecard — Measures that Drive Performance, Harv. Bus. Rev., Jan.-Feb. 1992, at 71, 72, https://hbr.org/1992/01/the-balanced-scorecard-measures-that-drive-performance-2.
  33. Catherine Cote, What Is a Balanced Scorecard? Harv. Bus. Sch. Online: Bus. Insights (Oct. 26, 2023), https://online.hbs.edu/blog/post/balanced-scorecard.
  34. Ostrom and Hanson, supra note 29, at 32.
  35. Id. at 82–85. Although not discussed in detail in the High Performance Court Framework, this cycle was based on existing and longstanding quality management practices — the Plan, Do, Check, Act (PDCA) cycle — from the total quality movement and can serve as a methodological framework for courts implementing quality and continuous improvement practices. The PDCA cycle is the generally accepted model for continuous improvement programs and projects. See Quality Resources, American Society for Quality, https://asq.org/quality-resources/pdca-cycle (last visited June 19, 2023).
  36. Brian J. Ostrom, Matthew Kleiman, and Roger A. Hanson, The High Performance Court Framework, Future Trends in State Courts 2011 140, 141, https://ncsc.contentdm.oclc.org/digital/collection/ctadmin/id/1820/rec/16.
  37. International Consortium for Court Excellence, International Framework for Court Excellence 4 (2020) [hereinafter International Framework].
  38. Id. at 8.
  39. A maturity model is a method of measuring an organization’s ability to satisfy a certain level of performance, often in the context of continuous improvement.
  40. International Framework, supra note 37, at 13.
  41. Id. at 16.
  42. Id. at 17.
  43. Id. at 18–20, 35–37.
  44. Ostrom and Hanson, supra note 29, at 10.
  45. Id. at 6–7.
  46. International Consortium for Court Excellence, Global Measures of Court Performance 1 (2020).
  47. Id. at 21.
  48. See Elizabeth Richardson, The Use, Modification and Impact of the International Framework for Court Excellence: A Research Paper 9–34 (2017).
  49. Id. at 37.
  50. Id. at 39.
  51. New Mexico Administrative Office of the Courts, Implementing Total Quality Management in New Mexico’s Courts 1 (2001).
  52. Id. at 5.
  53. Id. at 7–8.
  54. Id. at 16–26. Established by Congress in 1987, the Malcolm Baldrige National Quality Award is the highest level of national recognition in the United States for performance excellence. The award is presented to organizations with a system that “ensures continuous improvement in overall performance in delivering products and/or services [and] provides an approach for satisfying and responding to customers and stakeholders.” Baldrige Award, Nat’l Inst. of Standards and Tech., https://www.nist.gov/baldrige/baldrige-award (last visited Apr. 14, 2023).
  55. Brian J. Ostrom, Matthew Kleiman, Alicia Davis, Scott Graves, and Shannon Roth, The Application of the High Performance Court Quality Cycle in the Superior Court of Arizona in Maricopa County ii (2013).
  56. Id. at 3–4.
  57. Id. at 31.
  58. Jake Chatters, Defining Operational Successes: Measuring the Performance of a Court’s Front-Line Staff, Future Trends in State Courts 2009 118, 118, https://ncsc.contentdm.oclc.org/digital/collection/ctadmin/id/1486/rec/18.
  59. Id. at 119.
  60. Id.
  61. Id. at 119–20.
  62. Ingo Keilitz, How Are We Doing? A Greater Role for Organizational Performance Measurement and Management in International Development, Nat’l Ctr. for State Courts Library eCollection, at 1–2, http://ncsc.contentdm.oclc.org/cdm/ref/collection/ctadmin/id/2204.
  63. Id. at 11 (emphasis in original).
  64. Id. at 12–13.
  65. Id. at 17, 39–41.
  66. Id. at 42–43.
  67. Jarrett B. Perlow, Organizational Maturity in Court Administration: A New Evaluative Standard for Court Administrators, The Ct. Admin., Summer 2022, at 24, 24–28, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4644869.
  68. Id.
  69. William Raftery, Case Processing Time Standards Take Hold in State Courts, 106 Judicature 2, 2–3 (2023).
  70. Id.
  71. Id.
  72. Schauffler, supra note 13, at 118.
  73. Gregory H. Watson and Jeffrey E. Martin, Toward an Operational Definition of Quality Government 8 (2003).
  74. Id.
  75. Id. at 13.
  76. Richard E. Mallory, Achievements and Barriers: The Results of Lean and Quality Initiatives in Government 1 (2017).
  77. Id. at 16–17.
  78. Bryan Rodgers and Jiju Antony, Lean and Six Sigma Practices in the Public Sector: A Review, Int’l J. of Quality & Reliability Mgmt. 437, 447 (2019).
  79. Brandon Cole, Lean-Six Sigma for the Public Sector 14 (2011).
  80. John Baranzelli, Making Government Great Again: Mapping the Road to Success with ISO 9001:2008 13–17 (2010).
  81. Int’l Org. for Standardization, Quality Management Systems – Requirements (ISO 9001:2015) (5th ed. 2015).
  82. Baranzelli, supra note 80, at 9.
  83. Richard E. Mallory, Quality Standards for Highly Effective Government 39 (1st ed. 2018).
  84. Id. at 40.
  85. Id. at 46–47.
  86. Id. at 62.
  87. American National Standards Institute, Guidelines for Evaluating the Quality of Government Operations and Services (2021).
  88. Richard E. Mallory, Evaluating the Quality of Government Operations and Services: A Guide to Implementation of the ASQ/ANSI G1 Standard 1 (2023).
  89. See id.
  90. Clerk’s Office Earns Award for Cutting Case Processing Time in Half, U.S. Courts (Mar. 8, 2022), https://www.uscourts.gov/news/2022/03/08/clerks-office-earns-award-cutting-case-processing-time-half.
  91. Jarrett B. Perlow, Patrick B. Chesnut, and Jason Woolley, Journey of Excellence: A Case Study on the Use of the ASQ/ANSI G1:2021 Standard in the Federal Judiciary (2022), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4641462.