Quality management1 — or the practices an organization creates to ensure customer requirements are met — is usually associated with the corporate world.2 But its aims are just as relevant to state-run entities like courts. An overview of those practices shows how far courts have come in adopting quality management, and how far they still have to go, in making court administration efficient and effective.
Since the 1990s, academics and state court leaders have developed several quality-based concepts and tools to measure and evaluate the effectiveness of court operations. Until recently, though, these state practices have not fully incorporated the broader quality management landscape applied in other areas of government. And formalized quality management practices in the federal courts are still in their infancy.
Current performance measurement and management principles in the courts originated in the “total quality management”3 movement, created largely as part of the post-World War II economic redevelopment of Japan.4 While initially focused on private-sector manufacturing, quality management concepts eventually garnered the attention of public and court administrators, notably during the Clinton administration’s Reinventing Government initiative of the 1990s.
Within the judiciary, former Vice President of the National Center for State Courts (NCSC) Alexander Aikman’s handbook for judicial administrators identified four likely benefits to courts that implement quality management: (1) improved productivity, (2) improved service to the public, (3) improved processing that facilitates the work of judges and litigants, and (4) improved relationships with other court partners.5 To support interested courts, Aikman outlined a three-year, incremental plan for implementing a quality management program.6 It included collecting data, establishing quality standards and measures, and reviewing and modifying those standards.7 The concept of such standards within the courts was new at the time, with Aikman’s handbook highlighting the work of only 11 courts — from the federal, state, and municipal levels — engaged in quality management activities.
Over a three-year period from 1987 to 1990, the NCSC established the Trial Court Performance Standards Project to “develop measurable performance standards for the nation’s general jurisdiction state trial courts.”8 The standards were designed to measure not the performance of individual judges but that of the whole trial court organization, using a self-assessment-based system.9 The project created 22 performance standards — with accompanying measurements — organized around five areas: “(1) access to justice; (2) expedition and timeliness; (3) equality, fairness, and integrity; (4) independence and accountability; and (5) public trust and confidence.”10 Over time, these 22 performance standards grew to as many as 75 before settling at 68 based on continued field application and court input.11 Yet less than a decade later, in 2005, the NCSC replaced the Trial Court Performance Standards with CourTools, which built upon the original five areas of the standards but focused on only ten measurements.12
In reviewing the history of performance measurement standards in the courts, Richard Schauffler, NCSC director of research services, observed “how intimidated the court community was by the notion of performance measures.” That apprehension led to a disclaimer in the Trial Court Performance Standards that “the measures were only to be used for a court’s internal management” — a message “not lost on the states”: failure to implement carried little consequence and gave courts little incentive to act.13 Schauffler offered other reasons for the standards’ limited adoption, including the excessive number of measurements, the lack of consistent leadership, and “insulat[ion of] courts from pressures [or compulsion] to adopt performance measure[s]” like their executive agency counterparts.14
The move to the slimmed-down CourTools resulted from new judicial leadership seeking to promote “effective judicial governance and accountability.”15 Like the Trial Court Performance Standards, the ten measurements that made up CourTools tracked broad organizational trends, such as court users’ perceptions of access and fairness, disposition and case clearance rates, and costs per case.16 However, CourTools also provided simple, clear guidance, standards, and methodology that courts of any type could readily access, implement, analyze, and report. By taking a more focused approach to consistent data collection and analytical methods than the Trial Court Performance Standards, CourTools “provided the basis for creating a new perception that measurement could be done fairly, accurately, and consistently within and across courts within a given state, and among states.”17 Accordingly, CourTools was more widely adopted than its predecessor.18 Those involved with creating CourTools noted early on that the initial response from the court community was favorable and that the “small but well-considered set of outcomes” was “widely accepted as valuable” in demonstrating both outcomes to the public users of courts and the cost-effectiveness of court operations.19
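To make the arithmetic behind one such measure concrete, the minimal sketch below (in Python, with hypothetical figures) computes a case clearance rate, which compares the cases a court resolves to the cases it receives over the same period.

```python
def clearance_rate(outgoing: int, incoming: int) -> float:
    """Outgoing (resolved) cases as a percentage of incoming (filed)
    cases over the same period; a rate below 100 signals a growing
    backlog."""
    return 100.0 * outgoing / incoming

# Hypothetical year: 11,400 dispositions against 12,000 new filings.
print(f"{clearance_rate(11_400, 12_000):.1f}%")  # -> 95.0%
```

A court tracking this figure monthly rather than annually can spot a developing backlog while there is still time to redirect staff.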
Appellate courts must be “consistent, fair, and timely” in resolving cases at their second, or even third, level of review, according to the Appellate Court Performance Standards of the mid-1990s.20 By using performance standards, they can “foster the trust and confidence of their constituents” regardless of the appellate court’s jurisdiction or place in the hierarchy of the court system.21
The Appellate Court Performance Standards — an initiative from state appellate courts in Oregon, Montana, and Arizona — eventually led to the release of an appellate-focused version of CourTools in 2009: Appellate CourTools. Designers of Appellate CourTools believed the use of such measurements would solidify appellate courts’ “own independence and their leadership role within the judicial branch.”22 Appellate CourTools reduced the number of measurements from ten in CourTools to six, but continued to focus on the same performance areas and to provide accessible measurement tools to aid in implementation.23 The Oregon Court of Appeals adapted four of these performance measurements to focus on three values to drive new accountability: (1) quality, (2) timely resolution of cases, and (3) “cultivation of public trust and confidence.”24
Five years after the release of Appellate CourTools, a collaboration of the Joint Court Management Committee of the Conference of Chief Justices and the Conference of State Court Administrators issued new model time standards for state intermediate appellate courts and courts of last resort, which could adopt the standards as is or choose “to modify them to establish time standards based on their own particular circumstances.”25 In establishing these time standards, the authors emphasized that appellate courts must be accountable “for achieving the goals of productivity and efficiency while maintaining the highest quality in resolving cases before them.”26 The authors focused on prompt resolution because timeliness is “probably the most widely accepted objective measure of court operations and is also, fairly or otherwise, a primary concern of the other branches of government and the public regarding the courts.”27 Although no studies yet showed that having time standards led to faster resolution of cases, the mere establishment of such standards demonstrated, according to the developers of the Model Time Standards, a “[c]ommitment toward ensuring efficiency and timeliness in the resolution of appellate cases.”28
State court administrators, however, still lacked a unifying management framework for using either CourTools or performance standards to enhance court operations. In response to an emerging crisis of lost state court funding coupled with a “decline in the trust and confidence” of citizens in state courts, the NCSC in 2010 launched a new effort to provide a “common way” to evaluate court performance management activities: the High Performance Court Framework.29 By moving away from specific performance measurements and instead focusing on broader performance indicators that would proactively identify issues within courts, the new framework organized indicators into four areas: (1) effectiveness, (2) procedural satisfaction, (3) efficiency, and (4) productivity.30 In contrast to the “conceptualized” approach of the Trial Court Performance Standards, the High Performance Court Framework was designed to “focus[] on case processing quality . . . to assure each person’s constitutional right of due process” based on four underlying administrative principles: “giving every case individual attention,” “treating cases proportionately,” “demonstrating procedural justice,” and “exercising judicial control over the legal process.”31
In the context of performance measurement and management, the High Performance Court Framework recommended courts use a balanced scorecard tool to direct “overall business strategy into specific quantifiable goals and to monitor the organization’s performance in terms of achieving these goals.”32 Because the traditional balanced scorecard tool used in the private sector — which focuses on financial, customer, internal, and growth concerns33 — did not readily transfer into the court context, the framework defined four of its own points of focus: customer, internal operating, innovation, and social value.34 The framework concluded by identifying strategies that courts could use to begin implementation, including the use of a “quality cycle” — a five-step cyclical process for continuous improvement based on problem identification, data collection, data analysis, response to the analysis (“corrective action”), and then evaluation.35
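As a rough illustration of how a scorecard translates strategy into quantifiable goals, the sketch below (in Python) organizes a few goals under the framework’s four court-specific perspectives; the measures, targets, and figures are hypothetical, not drawn from the framework itself.

```python
from dataclasses import dataclass

@dataclass
class ScorecardGoal:
    """One quantifiable goal on a court balanced scorecard."""
    perspective: str  # customer, internal operating, innovation, or social value
    measure: str
    target: float
    actual: float

# Hypothetical, illustrative goals and figures.
scorecard = [
    ScorecardGoal("customer", "user survey: perceived fairness (% favorable)", 85.0, 88.2),
    ScorecardGoal("internal operating", "case clearance rate (%)", 100.0, 97.5),
    ScorecardGoal("innovation", "staff trained on new case system (%)", 90.0, 93.0),
    ScorecardGoal("social value", "public trust survey (% favorable)", 75.0, 71.4),
]

for g in scorecard:
    status = "met" if g.actual >= g.target else "not met"
    print(f"[{g.perspective}] {g.measure}: {g.actual} vs. target {g.target} ({status})")
```

Reviewing such a scorecard at each pass through the quality cycle keeps the corrective-action step tied to the goals the court said mattered.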
Shortly after publication of the framework, its authors summarized in Future Trends in State Courts 2011 the basic managerial ingredients of a high performance court as (1) administrative principles “that define and support the vision of high administrative performance,” (2) a managerial culture “committed to achieving high performance,” (3) performance measurement through a systematic assessment of a court’s ability to “complet[e] and follow[] through on” its goals, (4) performance management that “responds to performance results and develops its creative capacity,” and (5) use of the quality cycle to “support[] ever-improving performance.”36
Concurrently on the international front, in 2007 court administrators from several countries formed the International Consortium for Court Excellence “to develop a framework of values, concepts and tools for courts and tribunals, with the ultimate aim of improving the quality of justice and judicial administration.”37 This framework, called the International Framework for Court Excellence and now in its third edition, asks courts to assess and score themselves on seven areas of court excellence: court leadership; strategic court management; court workforce; court infrastructure, proceedings and processes; court user engagement; affordable and accessible court services; and public trust and confidence.38
The self-assessment scoring guidelines follow a maturity model approach (one geared toward achieving a certain performance level),39 establishing a series of values statements and requiring court administrators to assign a score (from 0 to 5) reflecting how the court approaches various general statements within each of the seven areas.40 Each area also has an effectiveness statement and a separate scoring table for courts to evaluate how well they perform in that area.41 After completing the self-assessment, court participants tabulate the points for each section and the “overall indication of the court’s performance.”42 From this score, courts are encouraged to create an improvement plan that addresses the issues identified, including measurements for performance and progress.43
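Because each area yields a point total and the totals roll up into an overall score, the tabulation is easy to illustrate. In the sketch below (in Python), the seven areas come from the framework, but the number of statements per area and the 0-to-5 scores are hypothetical.

```python
# Hypothetical self-assessment scores: each list holds the 0-5 score
# a court assigned to one statement within that area.
scores = {
    "court leadership": [4, 3, 5, 4],
    "strategic court management": [3, 3, 4],
    "court workforce": [2, 4, 3, 3],
    "court infrastructure, proceedings and processes": [4, 4, 4],
    "court user engagement": [3, 2, 3],
    "affordable and accessible court services": [4, 3],
    "public trust and confidence": [3, 3, 4],
}

for area, marks in scores.items():
    print(f"{area}: {sum(marks)} of {5 * len(marks)} points")

earned = sum(sum(marks) for marks in scores.values())
possible = 5 * sum(len(marks) for marks in scores.values())
print(f"overall indication: {earned}/{possible} ({100 * earned / possible:.0f}%)")
```

Low-scoring areas, such as court workforce and court user engagement in this example, are the natural candidates for the improvement plan that follows the assessment.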
The initial 2008 version of the international framework, though, fell short in the area of performance measurement and did not include the current self-assessment statements. In their publication introducing the High Performance Court Framework, researchers Brian Ostrom and Roger Hanson characterized the International Framework (as well as the Trial Court Performance Standards) as “lofty [in] nature” and presented at a “high level of abstraction,” making them “not easily defined for use in a systematic way to assess court performance in the real world.”44 Specifically, the international framework focused on an image of the “ideal court,” with a “more limited emphasis on measurement and the identification of particular indicators of performance.”45
Following similar feedback from the international court community, the International Consortium issued its first Global Measures for Court Performance in 2012, which identify performance measurement and management as essential tools for courts. The global measures also provide “focused, clear, and actionable performance” standards that align with the areas of court excellence found in the international framework.46 Of the current 11 measures, nine were adaptations of the CourTools for trial and appellate courts.47 The global measures, now in their third edition, provide detailed methodology for implementation, as well as examples of real-life application.
Since its adoption, the international framework has been well-received by many courts around the world. In a 2017 research paper for the International Consortium for Court Excellence, Elizabeth Richardson summarized its use in 13 different courts.48 She concluded that courts using the international framework found the self-assessment process to be a “useful tool for identifying areas of operation and engagement that need improvement.”49 Even so, she noted, self-assessments may be scored inconsistently due to local variation.50
Both before and since the release of the High Performance Court Framework in 2010, several courts have implemented quality management practices. Two illustrations follow.
New Mexico. As a pre-High Performance Court Framework model, New Mexico launched a four-year “total quality service” program across its courts,51 managing significant organizational culture change that included fostering a positive work environment and delivering consistent organizational performance and quality customer service.52 Taking an incremental approach, the program then defined various performance measurements and indicators, which led to identifying 12 areas of focus for process-improvement teams.53 Of note, the New Mexico courts developed and used a self-assessment tool based on the 1999 Malcolm Baldrige National Quality Award.54 Ultimately, the state created new performance indicators and at least one corresponding process.
Arizona. As a post-High Performance Court Framework model, the Maricopa (Ariz.) Probate Department requested that the NCSC evaluate its probate program. The program evaluators, who included two of the developers of the High Performance Court Framework, used the framework “to examine . . . efforts to increase accountability and to allocate judicial officer and court staff resources more proportionately in monitoring . . . cases.”55 56 The report credited the probate department’s data collection and its adherence to a clear work plan as successful uses of a continuous improvement process. It concluded that these actions produced “a system for organizing . . . work that enables ongoing review and future systematic evaluation” — a primary goal of the High Performance Court Framework.57
Despite this progress, current court quality management practices have their limitations.
First, current performance measures, such as CourTools, are useful for providing a high-level overview of court performance, but as court executive Jake Chatters has argued, they “often provide little value to most staff, supervisors, and line managers.”58 Instead, he has advocated for “operational-level performance measure[s] . . . that focus on the timeliness and quality of the activities performed by line staff,” such as the “percentage of documents processed within a certain number of days.”59 In contrast, Chatters views “backlog reports” — which note how far behind staff may be — as inherently negative and backward-facing, with limited ability to inform court administrators on how to address new and future challenges.60 Although Chatters did not propose a new performance measurement system, he identified several “implementation principles” for crafting these frontline measurements, including considering timeliness and quality together, defining success through measurements, and avoiding cumbersome measurement systems such as the Trial Court Performance Standards.61
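A measure of the kind Chatters describes is simple to compute from a document log. The sketch below (in Python) calculates the percentage of documents processed within a target number of days; the records and the three-day target are hypothetical.

```python
from datetime import date

# Hypothetical log entries: (date received, date processed).
documents = [
    (date(2024, 6, 3), date(2024, 6, 4)),
    (date(2024, 6, 3), date(2024, 6, 10)),
    (date(2024, 6, 5), date(2024, 6, 6)),
    (date(2024, 6, 6), date(2024, 6, 7)),
]

TARGET_DAYS = 3  # hypothetical service standard

on_time = sum(
    1 for received, processed in documents
    if (processed - received).days <= TARGET_DAYS
)
print(f"{100 * on_time / len(documents):.0f}% processed "
      f"within {TARGET_DAYS} days")  # -> 75% processed within 3 days
```

Unlike a backlog report, the result looks at whether day-to-day work met its standard, which a supervisor can act on immediately.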
Second, Professor Ingo Keilitz has argued that empirically based performance measurements can drive court development, particularly in the international context.62 Building on the two versions of CourTools and the International Framework, Keilitz provided a new working definition of performance measurement and management for the courts:
The discipline of monitoring, analyzing, and using organizational . . . performance data on a regular and continuous basis in real or near-real time for the purposes of improvements in organizational efficiency and effectiveness, in transparency and accountability, and in increased public trust and confidence in the organization.63
Keilitz then posed a series of items for courts to consider in self-evaluating their performance, including comparative performance measurements from baseline to current levels; performance trends over time; variability and predictability in performance over time; and identification of actions and strategies to start, stop, or continue based on measurement results.64 Keilitz contended that this focus on performance measurement reflected a move away from prior top-down approaches within international justice systems toward an emphasis on local ownership of efforts like capacity development and legitimization.65 Keilitz concluded that current performance management efforts remained “relatively limited” and needed to be documented and promoted to maintain consistency and harmonization at all levels of judicial governance.66
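A few of Keilitz’s self-evaluation items reduce to straightforward calculations. The sketch below (in Python) computes change from baseline, a crude trend, and variability for a hypothetical series of monthly clearance rates.

```python
from statistics import mean, pstdev

# Hypothetical monthly clearance rates (percent).
monthly_rates = [91.0, 93.5, 92.0, 95.0, 96.5, 97.0]

# Comparative measurement: baseline versus current level.
print(f"change from baseline: {monthly_rates[-1] - monthly_rates[0]:+.1f} points")

# Trend over time: average month-over-month change.
deltas = [b - a for a, b in zip(monthly_rates, monthly_rates[1:])]
print(f"average monthly change: {mean(deltas):+.2f} points")

# Variability, one rough proxy for predictability.
print(f"standard deviation: {pstdev(monthly_rates):.2f} points")
```

Read together, the numbers suggest what to continue (here, the rate is rising), while a large standard deviation would flag a process whose results are not yet predictable.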
This author has previously criticized existing judiciary quality management tools — CourTools and the International Framework — as insufficiently independent to effectively evaluate the quality of a court’s performance.67 Instead, this author has argued, and demonstrated in practice, that courts can build a more robust, customizable quality management system by combining neutral maturity evaluation with established quality management methods (such as Lean Six Sigma), implemented gradually under an existing, incremental framework such as the ASQ/ANSI G1:2021 Guidelines for Evaluating the Quality of Government Operations and Services (ASQ/ANSI G1), discussed below in more detail.68
Finally, William Raftery of the NCSC has observed that most states have now adopted time performance standards but that a one-size-fits-all solution does not work.69 Specifically, Raftery noted that “standards often appear to be aspirational rather than based on actual performance,” which can “lead to individuals or organizations simply giving up on trying to meet the standards at all.”70 Performance standards should instead be attainable, accounting for continued backlogs and disruptions resulting from the recent pandemic.71 In other words, if performance goals bear no connection to actual operational performance, applying standards for standards’ sake will not improve quality or performance.
Given these deficiencies, court administrators might look to other government entities with a longer history and broader use of quality management practices to see how those practices can be better integrated into court operations.
As noted, quality methods began to take hold in the public sector in the 1980s, culminating in the creation of the National Performance Review Office as part of the Clinton administration’s Reinventing Government initiative. The goal was “clarification of the purposes of each [public] institution and definition of the appropriate measures to gauge progress toward those specific organizational objectives.”72
Following the Clinton initiative, Gregory H. Watson and Jeffrey E. Martin endeavored to craft an operational definition of quality in government and to identify accompanying quality practices.73 They encouraged government to follow recognized quality management principles and practices from the private sector,74 to focus on good customer service, to integrate performance excellence measures, and to use private-sector benchmarks.75
In the ensuing 20 years, public-sector organizations have pursued quality management integration primarily by adopting Lean Six Sigma methodologies. Developed by Toyota in the 1950s and 1960s, “Lean” is an approach that focuses on eliminating waste from a system or process using nontechnical tools. “Six Sigma,” developed by Motorola in the 1980s, is a method to eradicate process variation using statistical process controls and statistical applications. The two continuous improvement approaches were combined in the early 2000s and dubbed Lean Six Sigma.
Lean practices have received greater attention and adoption within government: a 2015 study by the American Society for Quality (ASQ) Government Division reported that approximately 20 percent of state government offices had established Lean improvement programs.76 In a follow-up study two years later, respondents identified favorable improvements in operational efficiency and effectiveness but reported that barriers remained, namely a lack of leadership support.77 Similar obstacles persist in the overall adoption of Lean in government, including an “overreliance on individual tools rather than incorporating the philosophy [of Lean] . . . to the organization,” even though “Lean is the dominant methodology used in many areas of the public sector.”78
In Lean Six Sigma for the Public Sector, Brandon Cole outlined nine challenges to adopting Lean Six Sigma in the public sector that do not exist in the private sector: “1. Hierarchical or stove piped environment, 2. Limited sense of urgency, 3. Lack of leadership support, 4. Lack of profit or revenue focus, 5. Lack of common goals, 6. Lack of customer focus, 7. High employee turnover, 8. Complexity of the public sector, [and] 9. Mix of various employee types.”79 However, in the face of these challenges, Cole identified different Lean Six Sigma approaches, tools, and methods that could be used to overcome each of these issues. Cole then provided readers with a road map for government agencies to set up their own Lean Six Sigma programs, introduced basic Lean and quality tools, and offered recommendations to create a sustained culture of continuous improvement.
Aside from work management methodologies, such as Lean Six Sigma, and specific application tools, such as CourTools, government entities have also considered using independently established, third-party quality standards such as ISO standards. These are numbered, voluntary rules for managing quality, developed by the International Organization for Standardization, founded in Geneva in the 1940s. Currently, the international standard for organizations designing a quality management system — and one adopted by some government organizations — is ISO 9001.80 ISO 9001 lists standards that organizations should follow in managing their quality systems. The benefits of an ISO 9001-aligned quality management system include the ability to provide consistent services that meet customer requirements.81 Through the adoption of a quality management system, organizations align their management in pursuit of quality by “understand[ing] the needs of their customers and . . . anticipat[ing] future needs,” where “[q]uality isn’t everything; it’s the only thing.”82 However, the benefits of ISO 9001 may be outweighed by the perception among some that it is “overly complex and not fully applicable in many branches of government.”83
As an alternate approach, the ASQ Government Division has developed a structured system management standard based on defining and documenting “best known operational practice” for each manager, as well as the application of Lean improvement efforts to system design.84 In this approach, an organization is scored in four areas: systems purpose and structure; goal directedness through measures and feedback; management of intervening variables and risk; and alignment, evaluation, and improvement.85 The goal was that a government standard would “provide an objective professional opinion [through self-directed or independent audits] of the quality of management of any public entity in a report-card format.”86 And the resulting benefit of such an approach would be to fully ingrain quality management into government practices and use the public pressure of conformity to auditable requirements — as with comparable financial audits — to make it very difficult for agencies to abandon quality practices.
In February 2021, the American National Standards Institute (ANSI) adopted the standard to evaluate the quality of government operations and services.87 As with ISO 9001, ASQ/ANSI G1 called for users to design processes around inputs and outputs and to define requirements and measurements for success, optimizing these processes into best practices. Unlike other performance standards, ASQ/ANSI G1 focuses on the evaluation and activities of individual managers in specific business activity groups — not an all-inclusive, top-down approach — and “provides objective scoring of the maturity of the use of well-known and beneficial quality practices at the organizational front-line.”88
This evaluative method follows a six-level maturity model for evaluation of a process or system of a government organization. As with ISO 9001, ASQ/ANSI G1 integrates risk management, analysis, and mitigation requirements into the evaluation requirements and assessment of the maturity level so that “the organization’s managers [know] how much risk they are accepting based on the maturity of their processes and system.”
Finally, evaluations under ASQ/ANSI G1 are performed either by internal examiners of the organization or by trained, volunteer external examiners provided by the ASQ Government Division. Examinations are expected to conform with standard quality auditing practices in considering the appropriate maturity level of the organization’s process or system on a six-level scale. Organizations submitting for external evaluation can be validated by the ASQ Government Division and, depending on the maturity level, receive award recognition.89
In 2022, the Clerk’s Office of the U.S. Court of Appeals for the Federal Circuit became the first government organization and court to adopt and to receive award-level validation under ASQ/ANSI G1.90 The office subsequently produced a case study detailing its use and application of ASQ/ANSI G1.91 As a simpler approach than ISO 9001, ASQ/ANSI G1 provides a new standard for government organizations, including courts, to build and expand existing quality practices. However, additional application within government and the judiciary is necessary to fully evaluate the actual impact and effectiveness of this new quality resource.
***
With the many resources now available to court leaders, where does one start? Based on firsthand experience, this author recommends a combined approach of ASQ/ANSI G1 and the court-specific tools described above. Incorporating quality management into a court unit is neither easy nor quick; quality management needs to start somewhere and truly never ends. As W. Edwards Deming — one of the founders of the total quality management movement — advised in his 14 Points on Quality Management, “[i]mprove constantly and forever the system of production and service, to improve quality and productivity, and thus constantly decrease costs.” By demonstrating validated quality practices that deliver cost-effective services to the public, such as those identified above, all American courts can serve as champions not only of the rule of law and individual rights but also of good government worthy of the public’s continued trust and confidence. Through this article, court leaders at all levels now have a resource to help them begin their quality journeys.
Jarrett Perlow is the circuit executive and clerk of court for the U.S. Court of Appeals for the Federal Circuit and an Institute for Court Management fellow of the National Center for State Courts. He previously completed his juris doctor, cum laude, and his bachelor of arts from American University. He completed his Lean Six Sigma Master Black Belt under the direction of Gregory H. Watson and is a senior member and current chair of the all-volunteer Center for Quality Standards in Government with the American Society for Quality.