Author Topic: PEFA Monitoring Report 2010 on Repeat Assessments

Napodano

  • Administrator
  • PFM Member
  • Posts: 682
PEFA Monitoring Report 2010 on Repeat Assessments
« on: June 30, 2011, 07:37:27 GMT »
This is info from the PEFA Newsletter. The report is attached below.

Monitoring Report 2010 on Repeat Assessments

1. The fourth monitoring report on the roll-out of the PEFA Framework has been prepared by the Secretariat. The Monitoring Report 2010 (MR10) analyzes repeat assessments, including changes in PFM system performance measured by means of the PEFA indicators.

2. The main purpose of the MR10 was to assess whether the PEFA Framework is able to provide a reliable measurement of performance changes over time. One of the objectives of a repeat assessment (RA) is to measure performance changes since the previous assessment (PA); an RA looks at the specific changes in system performance by verifying what has changed and by how much. RAs are emerging in significant numbers, as many baseline assessments took place 3-6 years ago. Between the launch of the PEFA Framework in June 2005 and a stocktake in October 2010, forty-five RAs were carried out in 38 countries.

3. The MR10 seeks answers to the following questions: (i) what are the frequency of, and the drivers behind, repeat assessments; (ii) does the Framework effectively enable the measurement of change, and could changes be measured with better validity and reliability; and (iii) what trends in PFM performance do repeat assessments reveal?

4. The main findings of the MR10 may be summarized as follows:
4.1 The vast majority of PEFA assessments are followed by RAs, implemented quite consistently across countries and largely within the recommended 3-5 year interval. The most important reasons for carrying out an RA were (i) to measure progress; (ii) to contribute to the design or monitoring of a PFM action plan or reform program; (iii) to facilitate dialogue with donors; and (iv) to link to ongoing or future budget support. In some cases the purpose was to establish a new and more widely agreed baseline, in which case the RA and the PA are not comparable.
4.2 Scoring issues (e.g. new evidence for the PA rating, different definitions, different sampling, or different interpretations of similar data) and ‘no scores’ in the PA or RA hinder the measurement of change over time. However, since a ‘no score’ for an indicator dimension is easily detectable when comparing pairs of scores, users of the scoring data will know where and when this is a factor. A comparability level between PA and RA of 80% or more across all indicator dimensions was considered satisfactory once ‘no scores’ are excluded from the data set. This robust level of comparability was reached for 76% of the reports, and comparability is above 80% for 51 of the 74 dimensions of the Framework when ‘no scores’ are excluded (a sketch of how such a level can be computed follows this list).
4.3 The analysis indicates that, overall, PFM systems are improving, but with significant variance among system features. Formal PFM features, where progress can be achieved by adopting a new law, regulation, or technical tool, by focusing on no more than a few agencies, or by working at an early stage in the budget cycle, are more likely to improve or to maintain a high score than functional PFM features, where progress requires actually implementing a new law or regulation, coordinating the work of many agencies, or working downstream in the budget cycle.
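To illustrate the comparability level described in finding 4.2, here is a minimal Python sketch. The scores, the ‘*’ flag for a dimension rescored on a different basis, and the function name are illustrative assumptions of mine, not the Secretariat's actual data or methodology.

Code:
# Minimal sketch: a PA/RA comparability level as described in finding 4.2.
# Scores, the '*' flag, and the rule below are illustrative assumptions.

NO_SCORE = "NS"  # stand-in for a 'no score' dimension rating

def comparability_level(pa_scores, ra_scores):
    """Share of indicator dimensions whose PA/RA score pair is comparable,
    after excluding pairs where either assessment recorded 'no score'."""
    pairs = [
        (pa, ra)
        for pa, ra in zip(pa_scores, ra_scores)
        if NO_SCORE not in (pa, ra)  # exclude 'no scores' from the data set
    ]
    if not pairs:
        return 0.0
    # Hypothetical rule: '*' marks a dimension rescored on a different basis
    # (new evidence, different definitions or sampling), so its pair does
    # not count as comparable.
    comparable = sum(1 for pa, ra in pairs if "*" not in pa + ra)
    return comparable / len(pairs)

# Toy run over six dimensions: one 'no score', one flagged rescoring.
pa = ["A", "B", "C", "NS", "D+", "B*"]
ra = ["A", "B+", "C", "C",  "D+", "B"]
print(f"Comparability: {comparability_level(pa, ra):.0%}")  # -> 80%

Over a real assessment pair, such a calculation would run across all 74 dimensions of the Framework, with a result of 80% or more treated as satisfactory per the MR10.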

 
