• The Government Performance and Results Act Modernization Act (GPRAMA) requires government agencies to ensure data quality by implementing a verification and validation (V&V) process for performance measures.
  • After conducting numerous performance measurement V&V assessments for federal clients, Energetics identified several common pitfalls that can hamstring clients as they work to define and implement performance measures that are valid, complete, consistent, accurate, and timely.
  • Energetics created a series of fictionalized V&V case studies that bring these pitfalls to life. Although the case studies are fictional, they describe real measurement issues that programs encounter and present solutions for overcoming them.

How confident are you in the quality of your organization’s performance data? Can you use it to inform decisions? Does it help to tell your organization’s story and communicate its value to stakeholders?

The Government Performance and Results Act Modernization Act (GPRAMA) requires government agencies to ensure data quality by implementing a verification and validation (V&V) process for performance measures. Poor-quality performance data can hamstring an organization by limiting data-driven decision making and confounding stakeholder perceptions of organizational value.

V&V assessments can help organizations improve data quality and build an internal measurement capability. Over the last five years, Energetics has conducted almost 50 performance measurement V&V assessments for federal clients, evaluating performance measures against five standard criteria: valid, complete, consistent, accurate, and timely.

In conducting these V&V assessments, the Energetics team has learned that there is a big difference between knowing the criteria and implementing measures that actually meet them. Clients can stumble into a number of common pitfalls, and each of these mistakes can unknowingly compromise one or more of the criteria.

To ease the implementation process, Energetics created a series of fictionalized V&V case studies that bring these pitfalls to life. Although the case studies are fictional, they describe real measurement issues that programs encounter. Each case study demonstrates how one of these pitfalls can occur, explains why it is a problem, and presents solutions for overcoming it.

•    The Case of the Befuddled Unit of Analysis
•    The Case of the Co-opted Composite
•    The Case of the Muddled Milestones
•    The Case of the Estranged Indicators
•    The Case of the Uninformed Rater
•    The Case of the Misleading Measure
•    The Case of the Detached Deterrent