

Author Topic: Conversation with Marc Robinson on performance budgeting  (Read 17423 times)


Napodano

  • *****
  • Posts: 611
Re: Conversation with Marc Robinson on performance budgeting
« Reply #15 on: September 17, 2012, 17:40:45 GMT »
 Hi, Marc;

In response to Glen (question #4) you mentioned UK (under Blair) as reference country which applied performance budgeting, wisely.

Question #9
Can you mention a developing country and an emerging country which can be taken as reference? Any MoF website which you consider worth consulting for indicators of performance?
« Last Edit: September 17, 2012, 18:39:25 GMT by Napodano »

Marc Robinson

  • *
  • Posts: 18
Re: Conversation with Marc Robinson on performance budgeting
« Reply #16 on: September 18, 2012, 12:33:43 GMT »
EMERGING OR DEVELOPING COUNTRIES

Hi Mauro. Amongst emerging countries, South Africa has I think done pretty well in developing program performance indicators within the framework of a reasonably sound program budgeting system. Indeed, the South African Treasury's Framework for Managing Program Performance Information (http://www.treasury.gov.za/publications/guidelines/FMPI.pdf , also attached below) (2007) is one of the better program performance indicator guides which have been produced internationally.

The Framework document is particularly impressive for a conceptual clarity and readability which is so often lacking in this type of guide. I would give 6.5/10 to the program performance indicators actually developed by spending ministries, which can be seen in the Budget Estimates documents (http://www.treasury.gov.za/documents/national%20budget/2012/ene/FullENE.pdf, also attached below). The great positive about these South African program performance indicators is that, for the most part, they avoid the mass of activity and input indicators which clog up program budgets in so many countries. At the same time, however, the big remaining problem with them is that there are far too few effectiveness (i.e. outcome) indicators – most of the program performance indicators are output indicators, and more specifically output quantity indicators (i.e. they tell you about the volume of services being delivered to the public). I think one point which this underlines is that the government should give spending ministries clear directions on the key types of performance indicators which they are expected to develop for program budget purposes – making it explicit that ministries are expected to develop effectiveness, output quantity, output quality and efficiency indicators.
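Marc's point about directing ministries toward specific indicator types can be summarised in a small sketch. The program and indicator wordings below are invented for illustration, not drawn from the South African Framework:

```python
# Hypothetical immunization program, illustrating the four indicator
# types Marc suggests ministries be directed to develop.
# All names and wordings are invented for illustration.
indicators = {
    "effectiveness (outcome)": "incidence of vaccine-preventable disease per 100,000",
    "output quantity": "number of children fully immunized",
    "output quality": "share of doses administered within the recommended schedule",
    "efficiency": "cost per child fully immunized",
}

for itype, example in indicators.items():
    print(f"{itype}: {example}")
```

The point of listing all four side by side is the one Marc makes: a program reporting only the "output quantity" row, as many do, says nothing about whether disease incidence (the outcome) is actually falling.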

South Africa's program classification is also reasonably good, if in need of some further work – in particular, to connect more systematically to outcomes as well as outputs.
« Last Edit: September 18, 2012, 15:02:26 GMT by Napodano »

harnett

  • *****
  • Posts: 201
    • REPIM
Re: Conversation with Marc Robinson on performance budgeting
« Reply #17 on: September 18, 2012, 14:41:09 GMT »
Marc

Thanks for a truly enlightening discussion on our favourite topic!

To follow on from your response on a useful reference country, I'm looking for the best reference work. The South African example you cite is useful for us practitioners, but as you say – only 6.5/10. In my work in Albania I was often asked to assist with the organisation of programmes within a ministry, and also with appropriate outputs (not outcomes, which I suggested were often beyond the influence of a single ministry!) and performance indicators to satisfy the requirements of the MTBP submission. I spent many hours trawling through Australian, NZ, UK and other examples with varying degrees of success. As you indicate earlier in the fireside chat, this is easier in some ministries than others (vaccinations easy, deterrence in the MOD difficult). So...

Question #10

Allowing for the fact that all countries have different circumstances, can you point to any reference work which would provide a starting point for us practitioners when advising on programme structure, outputs and performance indicators?
« Last Edit: September 18, 2012, 15:04:15 GMT by Napodano »

harnett

  • *****
  • Posts: 201
    • REPIM
Re: Conversation with Marc Robinson on performance budgeting
« Reply #18 on: September 18, 2012, 14:53:41 GMT »
On a different note...

I'm not sure how familiar you are with the UK's PFM, but the introduction of performance targets is often blamed for a reduction in goodwill among many public servants – most prominently education and health professionals (I admit a certain interest in this as an ex secondary school teacher). During the last 20 years, the degree to which teachers perform after-school activities and nurses/doctors perform caring services above and beyond their technical duties has purportedly decreased significantly. We are even now faced with a situation whereby nurses are to be given "caring" targets (don't ask me how this will be measured!) to redress the imbalances of past targets.

Question #11

Any thoughts on how these issues of "goodwill" should be handled?
« Last Edit: September 18, 2012, 15:03:52 GMT by Napodano »

Marc Robinson

  • *
  • Posts: 18
Re: Conversation with Marc Robinson on performance budgeting
« Reply #19 on: September 19, 2012, 07:34:13 GMT »
Programs, Outcomes and Outputs

A couple of points about harnett's interesting observations:
1. Yes indeed, the budget should be appropriating outputs rather than outcomes – in the sense that programs are groups of services delivered by ministries for which the budget is allocating funding.
2. However, at the same time programs should be based on outcomes as well as outputs, because what the groups of outputs (services delivered) which constitute programs primarily have in common is that they have a common intended outcome. For example, a "preventative health" program brings together a range of different types of preventative health services (outputs), but they all have in common that they are aiming to reduce the incidence of preventable disease and injury (an outcome).
3. It is for this reason that the debate about whether programs should be output-focused or outcome-focused is beside the point. Programs should be both output-focused and outcome-focused.
4. Program terminology used around the world just adds to the confusion on this point. In Australia, for example, programs are called "Outcomes", but they are still groups of outputs and the use of this terminology doesn't change what you are appropriating for.
5. It is, however, really important that we don't just focus on the outputs – and forget the outcomes – when we define programs and when we develop associated statements of program objectives. This is one of the problems with the South African system which I was getting at in my comments yesterday.

In terms of guidance material, I hope it's not immodest to suggest that my recent manual is quite a good guide on these points – which I think, unfortunately, are not all that well dealt with in manuals developed by most ministries of finance.
« Last Edit: September 19, 2012, 15:26:03 GMT by Marc Robinson »

Marc Robinson

  • *
  • Posts: 18
Re: Conversation with Marc Robinson on performance budgeting
« Reply #20 on: September 19, 2012, 07:49:56 GMT »
Targets and "Public Service Motivation"

Our colleague harnett also raises the interesting issue of the allegedly negative impact of performance targets on the morale and performance of civil servants, including front-line workers such as teachers, nurses and doctors. Yes, I'm very much aware that this is an issue which was much debated in the UK during the Labour years when performance targets were so extensively used. It is the same issue which gets raised in relation to performance pay, and is an issue which I looked at in detail in a chapter in my 2007 book. I would not in any way dismiss the reality of the issue. So-called "public service motivation" – the altruistic commitment of many public sector workers to delivering a good service, or to achieving social objectives which are important to them (e.g. protecting the environment) – is very important to the effectiveness of government, and it is important to manage personnel in such a way as to build rather than erode public service motivation. It is important for this reason not to introduce such a large component of performance pay in remuneration that it swamps other motivations and distorts performance. This danger arises, of course, from the inevitable imperfection of the performance indicators upon which performance pay is based – some dimensions of performance are not captured in those indicators, and these dimensions of performance could suffer.

However, I completely disagree that public service motivation is a reason for not using performance targets in government. For example, in education I very much applauded the setting by the UK Labour government of targets for raising literacy and numeracy levels – and I note that under the PSA regime measured performance in these areas improved very substantially. The argument that gets raised against this type of target setting is that it supposedly damages performance in areas which are not measured (e.g. in the case of education, in the teaching of values and of subjects other than reading and arithmetic), and this damage is supposedly so great that it outweighs any improvements in the dimensions of performance which are measured and targeted. In my view, this is an empirical proposition that has to be tested – rather than just asserted on theoretical grounds – and I do not read the available evidence as supporting it. Expressed differently, while I don't doubt that targets do have some perverse effects, it seems to me that the benefits of rational target-setting can – and in the UK case, probably did – outweigh the perverse effects. In fact, I think that part of the reason this is the case is that the existence of so-called "public service motivation" greatly reduces the extent of the perverse effects which may in principle arise from target setting. For example, even if you set cost reduction targets in hospitals (or achieve the same effect by linking hospital funding to treatment volumes), doctors and nurses are not going to immediately go out and sacrifice the quality of patient care in order to cut costs. They will not, in other words, behave like the ruthless "homo economicus", precisely because they do have a genuine commitment to the quality of patient care and patient outcomes.
« Last Edit: September 19, 2012, 15:30:22 GMT by Marc Robinson »

Glen Wright

  • *
  • Posts: 59
Re: Conversation with Marc Robinson on performance budgeting
« Reply #21 on: September 19, 2012, 08:29:12 GMT »
The example from South Africa is quite interesting. I particularly like its structure and organization, as one of the main problems I have dealt with in several assignments is how to develop a very concise, but comprehensive, budget presentation.

With regard to this issue of outputs dominating outcomes, the main problems I see are that outcomes are not easily counted, often can only be realized several years later than the budgeted funds for them, and are highly variable on a unit cost basis. Outcomes across programs are not easily comparable for budget decision making, but outputs, based on personnel cost, etc., are more easily counted and compared.
« Last Edit: September 19, 2012, 08:54:16 GMT by Napodano »

Marc Robinson

  • *
  • Posts: 18
Re: Conversation with Marc Robinson on performance budgeting
« Reply #22 on: September 19, 2012, 09:30:45 GMT »
OUTCOMES AND BUDGETS

Glen makes excellent points about the measurement difficulties which often arise with outcomes, and the fact that outcomes are often achieved in the medium (or even long) term rather than immediately. However, there are two separate issues we need to discuss here. The first is the question of outcome measurement, and the other is the question of the link between programs and outcomes. The fact that outcomes may sometimes be difficult to measure does not change the need to clearly define the link between each program and outcomes.

For example, school education programs should be defined not just as programs for teaching kids, but as programs for raising the level of knowledge of children. It is not good enough, as I find education ministries around the world tend to do, to have a statement of program objective which reads something like "the delivery of quality education to the nation's young people". The failure to explicitly link the program to outcomes, and the tendency to mention only the outputs which the program delivers, lead to a focus in the program performance measures exclusively upon output measures, to the neglect of outcome (effectiveness) measures. I am astonished at how few education ministries under program budgeting systems have included basic effectiveness indicators such as literacy and numeracy levels amongst their program performance indicators. They seem consistently to prefer to rely exclusively on output measures such as enrolment rates. Yet outcome measures are enormously important, not just in the most obvious ways but sometimes in ways that are not generally understood. For example, in respect of gender equity in schools, education ministries tend overwhelmingly to focus on measures such as enrolment or attendance rate differentials (i.e. the ratio of female school enrolment rates to male school enrolment rates). This is a useful measure. But an outcome equity measure – namely, the literacy level of female school students relative to male students, and similar sorts of indicators – is more useful and important. Leading countries such as France and the UK use precisely these types of outcome-based equity measures.
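The contrast between an access-based and an outcome-based equity measure can be shown with a quick calculation. The rates below are invented figures, not real country data:

```python
# Hypothetical enrolment and literacy figures (invented for illustration).
female_enrolment_rate, male_enrolment_rate = 0.88, 0.92
female_literacy_rate, male_literacy_rate = 0.67, 0.81

# Access-based equity measure (the one ministries overwhelmingly use):
# ratio of female to male enrolment rates.
enrolment_parity = female_enrolment_rate / male_enrolment_rate

# Outcome-based equity measure (the kind Marc argues is more useful):
# ratio of female to male literacy levels among students.
literacy_parity = female_literacy_rate / male_literacy_rate

print(f"enrolment parity: {enrolment_parity:.2f}")
print(f"literacy parity:  {literacy_parity:.2f}")
```

In this invented case the enrolment ratio looks close to parity, while the literacy ratio reveals an outcome gap the access measure misses entirely.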

So instead of the objective of the school education program being defined as "the delivery of quality education to the nation's young people", it should be defined in outcome-focused terms as "well educated and socialised young people". (Incidentally, the widespread inclusion of references to "quality" education in school education program objective statements does not make these statements outcome-oriented. Quality is an output concept, not an outcome concept.)

The other issue to which Glen draws our attention is the impossibility of budgeting on a unit cost basis for outcomes. He is absolutely right – there is no stable unit cost for, for example, the number of lives saved in malaria treatment. However, I would draw attention once again to the limitations upon unit cost budgeting even in the context of outputs, which I discussed earlier in this conversation. In short, one can only calculate budget requirements based on output unit costs for types of outputs which are either quite standardised (every client gets pretty much the same service, at the same cost) or where variations in client costs average out over a large number of clients (which is broadly the case for many types of hospital treatments). At the national government level, only a minority of services delivered by government meet these criteria and are therefore candidates for budgeting based on unit costs. It is in my view an unfortunate element of the technical assistance which has been given to many developing countries that these countries have been told that they should use unit cost methodology across the board to estimate the budgetary requirements of their programs.
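The criteria Marc describes can be sketched as a rough numeric screen. All per-client costs below are invented, and the dispersion threshold is an illustrative heuristic, not a rule from this discussion:

```python
import statistics

# Invented per-client costs for two hypothetical services.
standardised = [101, 99, 100, 102, 98] * 40  # e.g. routine vaccinations: 200 clients, stable cost
variable = [300, 5000, 120, 25000, 800]      # e.g. complex casework: few clients, huge spread

def unit_cost_budget(costs, projected_clients):
    """Budget estimate = mean unit cost x projected client volume."""
    return statistics.mean(costs) * projected_clients

def coefficient_of_variation(costs):
    """Dispersion of per-client costs relative to the mean: a rough
    screen for whether unit-cost budgeting is defensible at all."""
    return statistics.stdev(costs) / statistics.mean(costs)

# Low dispersion over many clients: unit-cost budgeting is reasonable.
print(coefficient_of_variation(standardised))
# High dispersion over few clients: the "average" cost predicts little.
print(coefficient_of_variation(variable))
```

For the standardised service, mean cost times projected volume gives a defensible estimate; for the variable one, the same arithmetic produces a number with no predictive value, which is Marc's warning about applying the methodology across the board.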
« Last Edit: September 19, 2012, 09:42:01 GMT by Marc Robinson »

FitzFord

  • *
  • Posts: 154
Re: Conversation with Marc Robinson on performance budgeting
« Reply #23 on: September 19, 2012, 15:50:37 GMT »
Marc,

I like where this discussion is going. I don't know if you have discussed this elsewhere: it seems to me that it would be useful to separate the timeframes for assessing outputs and outcomes, but nevertheless link them. If we continue with the education example, as you and my colleagues in this discussion have noted, education has multiple outputs and outcomes, and they manifest themselves in different ways and timeframes.

Question #12

A more complete system would be to specify the outputs and timeframes and the (hypothesized) outcomes and timeframes. In this framework, the evaluation systems could be more appropriately linked, planned and applied, and we would learn more efficiently and accurately what is working and why. Has this approach been applied anywhere? To any sectors?

Fitz.
« Last Edit: September 19, 2012, 17:21:25 GMT by Napodano »

Napodano

  • *****
  • Posts: 611
Re: Conversation with Marc Robinson on performance budgeting
« Reply #24 on: September 19, 2012, 17:56:44 GMT »
With the last post from Fitz, we conclude the conversation with Marc.

Marc,
please answer the last question and I will lock and archive our conversation.
I take this opportunity to thank you on behalf of all PFM Board members. Please come back to the Board to share your knowledge whenever you feel like it.

Let me remind our members to follow Marc on his blog http://blog.pfmresults.com/wordpress/

All the best to you all.


Marc Robinson

  • *
  • Posts: 18
Re: Conversation with Marc Robinson on performance budgeting
« Reply #25 on: September 20, 2012, 17:54:40 GMT »
Fitz, a good question. I think it's something we all need to work on. And in differentiating the timelines of outputs and outcomes, we need also to bear in mind that -- in general -- higher-level outcomes are realized over a longer time-frame than intermediate outcomes. In education, student literacy and numeracy levels are intermediate outcomes which are realized relatively quickly (over several years), whereas stronger economic growth and lower unemployment are higher-level -- and longer-term -- outcomes of education.

In signing out, let me thank Mauro for organizing this excellent discussion, and for doing such a great job with the PFM Board. The PFM Board is a wonderful contribution to the PFM discipline.

John Mercer

  • *
  • Posts: 2
Using the performance information
« Reply #26 on: January 17, 2013, 19:35:27 GMT »
One of the key points discussed here is that the inclusion of performance goals and results in a performance budget is just the starting point, and that to be meaningful that data has to be a basis for more in-depth performance analysis. I agree, because I think the real question then becomes, “Why?” This question should apply whether or not a program’s goals were met. It is important to understand the underlying reasons as to why a program is or is not operating efficiently and effectively in achieving the desired results.

This may be easier at the output level, where measures of program activity and their direct results are often used – e.g., number of safety inspections conducted. If a certain number of work hours of inspection are engaged in, then the goal for number of inspections should be the direct result. It is more challenging (and often much more so) at the outcome level, especially when you are addressing end outcomes – e.g., reduction in the number of accidents and their seriousness. Whether or not the target for the number of accidents was met by a safety inspection program, you still need to know to what degree there is a meaningful relationship between the number of inspections conducted and the number of accidents that occur. This then can raise questions about the nature of the inspections themselves (e.g., how thorough and complete), as well as requiring an examination of other factors that impact the number of accidents – which are often outside the control of the particular program or even of the government itself.

When I originally drafted the Government Performance and Results Act of 1993 (GPRA), I focused on requirements that US federal government agencies develop and publish annual performance goals and the associated results. I also included a requirement that when a goal is not met, the agency must explain why not. However, there was no specific requirement that this information actually be used to shape policy, formulate budgets, and manage programs. Silly me, I actually thought this would be unnecessary to require because of course it would happen routinely.

But it did not happen. In fact, several years ago we started hearing complaints by the Office of Management and Budget (OMB) that while federal agencies had been getting better at complying with GPRA requirements for outcome and output measures, they were not actually using the resulting data to affect policy and managerial decisions. So OMB recently worked with Congress to develop and enact an update to GPRA, called the GPRA Modernization Act, which includes a requirement for each agency to conduct Quarterly Performance Updates that specifically address the performance data relating to its priority goals. Sometimes what may seem like just common sense to some people apparently requires a statutory mandate to move others.

John Mercer

  • *
  • Posts: 2
Re: Conversation with Marc Robinson on performance budgeting
« Reply #27 on: January 17, 2013, 19:39:06 GMT »
Another of the issues addressed here is whether performance measures should have targets attached to them. In other words, should it be sufficient to define the performance indicator (e.g., “The number of accidents”) and then simply report the result each year, or should there be a measurable target amount attached as part of the goal (e.g., “50 or fewer accidents”)?

From my own experience, I would say that defining the right indicator (or set of indicators) and generating accurate performance data is far more important than identifying a performance target level. If you get those two items right, then at least over time the organization will probably show improved performance – particularly if it is subject to regular oversight/scrutiny. And this is especially the case if incentives for improved performance are provided – e.g., better personnel performance reviews for program managers and staff, greater ease in getting budget requests approved, standing out in comparison to similar organizations, etc.

However, it is probably unrealistic to argue that when money is appropriated to a program, there should be no indication of what will be achieved with it. “What will we get for our money?” is not an unreasonable question. Legislators will ask it and the public may come to expect it. Nonetheless, it is the setting of the performance target itself that often generates real fear of and resistance to PBB within many organizations. And I have seen instances where the actual target was seemingly “picked out of the air” (such as in Congress a few years ago, for the purpose of embarrassing the Administration when it would not be able to meet the statutory target of a 50% reduction in illegal drug use).

One way to “compromise” on this issue would be to not set targets for some (or most) measures during the initial years of PBB, and instead wait until after the first several years, so that the organization gains some experience and comfort with program performance measurement. At that point, with the trends in program performance having been identified, the setting of longer-term and annual targets linked to spending levels can be done a bit more realistically and effectively. I recognize that what I have just described may be the “ideal” in some respects, but it may not reflect political reality in many situations.

I want to add another thought relating to measuring improvement in program performance. There are at least two ways to think about how to measure improved performance. One is that for the same amount of money expended, the level of performance is increased the next year. Another is that for the same level of performance, the amount of money expended is decreased the next year. This choice (or a blend of the two) can also relate to the setting of a performance target – i.e., “This level of performance is already high enough to be satisfactory, but can you now get it for us for less money?”
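John's two routes to "improved performance" both show up as a falling cost per output, which a quick calculation makes concrete (all figures invented):

```python
# Year-1 baseline: spend and outputs delivered (e.g. safety inspections).
year1_spend, year1_outputs = 1_000_000, 5_000

# Option A: same money, more output the next year.
year2a_spend, year2a_outputs = 1_000_000, 5_500
# Option B: same output, less money the next year.
year2b_spend, year2b_outputs = 900_000, 5_000

def cost_per_output(spend, outputs):
    """Unit cost: spending divided by outputs delivered."""
    return spend / outputs

print(cost_per_output(year1_spend, year1_outputs))   # 200.0 (baseline)
print(cost_per_output(year2a_spend, year2a_outputs)) # ~181.8 (option A)
print(cost_per_output(year2b_spend, year2b_outputs)) # 180.0 (option B)
```

Either option (or a blend) lowers the unit cost; which one to target is the policy choice John describes.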

Napodano

  • *****
  • Posts: 611
Re: Conversation with Marc Robinson on performance budgeting
« Reply #28 on: January 17, 2013, 20:36:56 GMT »
Thanks, John;

just a bit of background for the other members.
John made these comments on a LinkedIn group (http://www.linkedin.com/groups?gid=4518607&trk=myg_ugrp_ovr) where I posted the link to this conversation. I asked him to share them with us at the Board, and here they are.

Thank you again, John and welcome to our community.
« Last Edit: January 17, 2013, 20:38:51 GMT by Napodano »

Marc Robinson

  • *
  • Posts: 18
Re: Conversation with Marc Robinson on performance budgeting
« Reply #29 on: February 03, 2013, 13:42:10 GMT »
Dear John,

Firstly, my apologies for the delay in responding to your thoughtful comment.

I agree with you very much in respect of your core point about not rushing the setting of performance targets. It is crucial that, before the setting of any targets is contemplated, a performance indicator be in place for some time, so as to permit the establishment of a robust performance baseline and, as one element of this, to ensure that the indicator is being measured reliably. Indicators are, as statistics, subject to some degree of randomness, and only with several years of experience is one able to distinguish a baseline from any random "noise" which might be present.
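The point about distinguishing a baseline from random noise can be illustrated with a minimal calculation. The readings and the two-standard-deviation rule of thumb below are invented for illustration, not a method Marc prescribes:

```python
import statistics

# Invented readings of some outcome indicator over five baseline years.
baseline_years = [42.1, 43.5, 41.8, 42.9, 42.4]
new_reading = 43.6

mean = statistics.mean(baseline_years)
noise = statistics.stdev(baseline_years)

# A crude screen: treat a movement within ~2 standard deviations of the
# historical mean as indistinguishable from noise, not a real change.
is_signal = abs(new_reading - mean) > 2 * noise
print(f"baseline {mean:.1f} +/- {noise:.1f}; new reading {new_reading}: "
      f"{'real movement' if is_signal else 'within noise'}")
```

Here the new reading sits inside the historical scatter, which is exactly why several years of measurement are needed before a target can be set against a credible baseline.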

However, I remain firmly convinced that – even in the long term – it is inappropriate to set performance targets for all performance indicators. In particular, it seems to me that there are some outcomes which it is extremely important to measure but over which our control is so imperfect that it is simply self-deluding to think that we can set a target such as a 30% improvement over five years. I think of examples such as cancer rates or childhood obesity rates. Of course the objective is to reduce these disease rates, and to reduce them substantially. But are we really in any position to set precise quantitative targets for this? I know that some countries do this (an example being the UK under the former Labour government), but I think that such targets are inescapably arbitrary and setting them serves little purpose.

A further point I would make concerns centrally-imposed targets, as distinct from targets which spending ministries or other government agencies may set internally for themselves. I believe that there is a strong case for strictly limiting the number of centrally-imposed targets because, if too many such targets are set, it becomes impractical for central agencies (e.g. the Finance Ministry or President/Prime Minister's office) to monitor performance against these targets and to intervene appropriately when there is an unjustifiable failure to meet the target. Far better to follow the UK Public Service Agreement example of limiting centrally-imposed targets to several hundred than to have thousands of performance targets in the budget papers – as some African countries do – but to have no follow-up for them. This suggests limiting centrally-imposed targets to a selection of the most important performance indicators.

One final, somewhat different point: another common practice in developing countries to which I take strong exception is the notion that every performance target should be accompanied by interim targets set on an annual basis. In other words, if one sets a target for something, then there should be a target set for the variable in twelve months' time, twenty-four months' time, et cetera. Again, it seems to me that this is part of a mentality which greatly exaggerates our degree of control over many of the variables concerned.

Overall, while I personally accept that target setting is a valuable performance management tool, I think we have to avoid delusions of a Soviet-planning approach on the matter.

 


The PFM Board © 2010