Program evaluation is a systematic method for collecting, analyzing, and using information to answer questions about projects, policies and programs, particularly about their effectiveness and efficiency.
In both the public sector and private sector, as well as the voluntary sector, stakeholders might be required (under law or charter) to assess, or might want to know, whether the programs they are funding, implementing, voting for, receiving or opposing are producing the promised effect. To some degree, program evaluation falls under traditional cost–benefit analysis, concerning fair returns on
A cycle framework to represent the continuing process of evaluation. Though program evaluation processes mentioned here are appropriate for most programs, highly complex non-linear initiatives, such as those using the collective impact (CI) model, require a dynamic approach to evaluation. Collective impact is "the commitment of a group of important actors from different sectors to a common agenda for solving
A given geographic area and what their demographics are. Rossi, Lipsey and Freeman (2004) caution against undertaking an intervention without properly assessing the need for one, because this might result in a great deal of wasted funds if the need did not exist or was misconceived. Needs assessment involves the processes or methods used by evaluators to describe and diagnose social needs. This is essential for evaluators because they need to identify whether programs are effective, and they cannot do this unless they have identified what
A graphical depiction of the "if-then" (causal) relationships between the various elements leading to the outcome. However, the logic model is more than the graphical depiction: it is also the theories, scientific evidence, assumptions and beliefs that support it and the various processes behind it. Logic models are used by planners, funders, managers and evaluators of programs and interventions to plan, communicate, implement and evaluate them. They are also employed by the health sciences community to organize and conduct literature reviews such as systematic reviews. Domains of application are varied, e.g. waste management, poultry inspection, business education, heart disease and stroke prevention. Since they are used in various contexts and for different purposes, their typical components and levels of complexity vary in the literature (compare for example
A group setting (Rossi et al., 2004). These factors may result in 'noise' which may obscure any effect the program may have had. Only when measures adequately achieve the benchmarks of reliability, validity and sensitivity can an evaluation be considered credible. It is the duty of evaluators to produce credible evaluations, as their findings may have far-reaching effects. A non-credible evaluation which
A logic model helps articulate the problem, the resources and capacity that are currently being used to address the problem, and the measurable outcomes from the program. Looking at the different components of a program in relation to the overall short-term and long-term goals helps reveal potential misalignments. Creating an actual logic model is particularly important because it helps clarify for all stakeholders:
A measurement instrument is the 'extent to which the measure produces the same results when used repeatedly to measure the same thing' (Rossi et al., 2004, p. 218). The more reliable a measure is, the greater its statistical power and the more credible its findings. If a measuring instrument is unreliable, it may dilute and obscure the real effects of a program, and the program will 'appear to be less effective than it actually is' (Rossi et al., 2004, p. 219). Hence, it
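One common way to make this notion concrete is a test-retest check: administer the same instrument to the same respondents twice and correlate the two sets of scores. The sketch below illustrates the arithmetic with invented scores and is not drawn from Rossi et al.:

```python
from statistics import mean

def pearson_r(x, y):
    """Correlation between two administrations of the same instrument."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical scores from the same respondents at time 1 and time 2
time1 = [12, 18, 7, 22, 15, 9, 20, 14]
time2 = [13, 17, 8, 21, 16, 10, 19, 15]

r = pearson_r(time1, time2)
print(f"test-retest reliability estimate: r = {r:.2f}")
```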
A particular program. The guiding questions change from "what is being done?" to "what needs to be done?" McCawley suggests that by using this new reasoning, a logic model for a program can be built by asking the following questions in sequence: By placing the focus on ultimate outcomes or results, planners can think backward through the logic model to identify how best to achieve the desired results. Here it helps managers to 'plan with
A problem; and if so, how it might best be dealt with. This includes identifying and diagnosing the actual problem the program is trying to address, who or what is affected by the problem, how widespread the problem is, and what measurable effects are caused by the problem. For example, for a housing program aimed at mitigating homelessness, a program evaluator may want to find out how many people are homeless in
A program evaluation can be broken up into four parts: focusing the evaluation, collecting the information, using the information, and managing the evaluation. Program evaluation involves reflecting on questions about evaluation purpose, what questions are necessary to ask, and what will be done with information gathered. Critical questions for consideration include: The "shoestring evaluation approach"
A program, but this is a poor indicator of outcomes. Likewise it is relatively easy to measure the amount of work done (e.g. number of workers or number of years spent), but the workers may have just been 'spinning their wheels' without getting very far in terms of ultimate results or outcomes. However, the nature of outcomes varies. To measure the progress toward outcomes, some initiatives may require an ad hoc measurement instrument. In addition, in programs such as education or social programs, outcomes are usually in
A similar methodology known as policy analysis. Some universities also have specific training programs, especially at the postgraduate level in program evaluation, for those who studied an undergraduate subject area lacking in program evaluation skills. Program evaluation may be conducted at several stages during a program's lifetime. Each of these stages raises different questions to be answered by
A specific social problem" and typically involves three stages, each with a different recommended evaluation approach: Recommended evaluation approach: Developmental evaluation to help CI partners understand the context of the initiative and its development: "Developmental evaluation involves real time feedback about what is emerging in complex dynamic systems as innovators seek to bring about systems change." Recommended evaluation approach: Formative evaluation to refine and improve upon
A theory with a larger number of concepts shows greater breadth of understanding of the program. The depth is the percentage of concepts that are the result of more than one other concept. This is based on the idea that, in real-world programs, things have more than one cause. Hence, a concept that is the result of more than one other concept in the theory shows better understanding of that concept;
a theory with a higher percentage of better-understood concepts shows a greater depth of understanding of the program. Process analysis looks beyond the theory of what the program is supposed to do and instead evaluates how the program is being implemented. This evaluation determines whether the components identified as critical to the success of the program are being implemented. The evaluation determines whether target populations are being reached, people are receiving
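The breadth and depth measures described above can be computed directly once a program theory is written down as cause-and-effect propositions. The following sketch uses hypothetical concepts and propositions, not an example from Wright and Wallis:

```python
# Each proposition is a (cause, effect) pair taken from the program theory.
propositions = [
    ("training", "knowledge"),
    ("knowledge", "safer_practices"),
    ("peer_support", "safer_practices"),
    ("safer_practices", "lower_incidence"),
    ("outreach", "lower_incidence"),
]

concepts = {c for pair in propositions for c in pair}
breadth = len(concepts)  # number of distinct concepts in the theory

# Depth: share of concepts that are the effect of more than one other concept.
causes_per_effect = {}
for cause, effect in propositions:
    causes_per_effect.setdefault(effect, set()).add(cause)
better_understood = [c for c, causes in causes_per_effect.items() if len(causes) > 1]
depth = len(better_understood) / breadth

print(f"breadth = {breadth} concepts")
print(f"depth   = {depth:.0%} of concepts have more than one cause")
```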
A visual diagram of those propositions. Then, the researcher examines the number of concepts and causal relationships between them (circles and arrows on the diagram) to measure the breadth and depth of understanding reflected in the theory's structure. The measure for breadth is the number of concepts. This is based on the idea that real-world programs involve many interconnected parts, therefore
Is a matter of representing the circumstances defined as the outcome by means of observable indicators that vary systematically with changes or differences in those circumstances. Outcome measurement is a systematic way to assess the extent to which a program has achieved its intended outcomes. According to Mouton (2009), measuring the impact of a program means demonstrating or estimating the accumulated, differentiated, proximate and emergent effects, some of which might be unintended and therefore unforeseen. Outcome measurement serves to help understand whether
Is accurate, i.e. whether successful change in an intermediate outcome provokes the hypothesized subsequent effects in the causal pathway. Finally, outcomes may easily be achieved through processes independent of the program, and an evaluation of those outcomes would suggest program success when in fact processes external to the program were responsible for the outcomes. Many authors and guides use the following template when describing logic models: Many refinements and variations have been added to
Is considered to be a relatively recent phenomenon. However, planned social evaluation has been documented as dating as far back as 2200 BC. Evaluation became particularly relevant in the U.S. in the 1960s during the period of the Great Society social programs associated with the Kennedy and Johnson administrations. Extraordinary sums were invested in social programs, but the impacts of these investments were largely unknown. Program evaluations can involve both quantitative and qualitative methods of social research. People who do program evaluation come from many different backgrounds, such as sociology, psychology, economics, social work, as well as political science subfields such as public policy and public administration, who have studied
Is designed to assist evaluators operating under limited budget, limited access or availability of data and limited turnaround time, to conduct effective evaluations that are methodologically rigorous (Bamberger, Rugh, Church & Fort, 2004). This approach responds to the continuing need for evaluation processes that are more rapid and economical under difficult circumstances of budget, time constraints and limited availability of data. However, it
Is difficult to determine. One main reason for this is self-selection bias. People select themselves to participate in a program. For example, in a job training program, some people decide to participate and others do not. Those who do participate may differ from those who do not in important ways. They may be more determined to find a job or have better support resources. These characteristics may actually be causing
Is for the evaluation to be a joint project between evaluators and stakeholders. A wide range of different titles are applied to program evaluators, perhaps haphazardly at times, but there are some established usages: those who regularly use program evaluation skills and techniques on the job are known as Program Analysts; those whose positions combine administrative assistant or secretary duties with program evaluation are known as Program Assistants, Program Clerks (United Kingdom), Program Support Specialists, or Program Associates; those whose positions add lower-level project management duties are known as Program Coordinators. The process of evaluation
Is important to ensure that the instruments (for example, tests, questionnaires, etc.) used in program evaluation are as reliable, valid and sensitive as possible. According to Rossi et al. (2004, p. 222), 'a measure that is poorly chosen or poorly conceived can completely undermine the worth of an impact assessment by producing misleading estimates. Only if outcome measures are valid, reliable and appropriately sensitive can impact assessments be regarded as credible'. The reliability of
Is important to ensure the evaluation is as reliable as possible. The validity of a measurement instrument is 'the extent to which it measures what it is intended to measure' (Rossi et al., 2004, p. 219). This concept can be difficult to measure accurately: in general use in evaluations, an instrument may be deemed valid if accepted as valid by the stakeholders (stakeholders may include, for example, funders, program administrators, et cetera). The principal purpose of
Is located geographically and socially would require knowledge about abused children, the characteristics of perpetrators and the impact of the problem throughout the political authority in question. This can be difficult considering that child abuse is not a public behavior, also keeping in mind that estimates of the rates of private behavior are usually not possible because of factors like unreported cases. In this case evaluators would have to use data from several sources and apply different approaches in order to estimate incidence rates. There are two more questions that need to be answered: Evaluators need to also answer
Is needed in order to verify the validity of this model. The POSLM approach makes use of the logic model with a strong focus on tracking progressive improvement towards racial disparity outcomes. To measure the progress towards outcomes, this type of logic model states short, intermediate and long-term outcomes as "stage 1", "stage 2" and "stage 3". Each stage is uniquely defined and used to depict
Is not always possible to design an evaluation to achieve the highest standards available. Many programs do not build an evaluation procedure into their design or budget. Hence, many evaluation processes do not begin until the program is already underway, which can result in time, budget or data constraints for the evaluators, which in turn can affect the reliability, validity or sensitivity of the evaluation. The shoestring approach helps to ensure that
Is simply assumed, and so an evaluator will need to draw out from the program staff how exactly the program is supposed to achieve its aims and assess whether this logic is plausible. For example, in an HIV prevention program, it may be assumed that educating people about HIV/AIDS transmission, risk and safe sex practices will result in safer sex being practiced. However, research in South Africa increasingly shows that in spite of increased education and knowledge, people still often do not practice safe sex. Therefore,
Is sound. Wright and Wallis (2019) described an additional technique for assessing a program theory based on the theory's structure. This approach, known as integrative propositional analysis (IPA), is based on research streams finding that theories were more likely to work as expected when they had better structure (in addition to meaning and data). IPA involves, first, identifying the propositions (statements of cause-and-effect) and creating
Is unable to show that a program is achieving its purpose when it is in fact creating positive change may cause the program to lose its funding undeservedly. The Centers for Disease Control and Prevention (CDC) delineates six steps to a complete program evaluation. The steps described are: engage stakeholders, describe the program, focus the evaluation design, gather credible evidence, justify conclusions, and ensure use and share lessons learned. These steps can happen in
The "black box" and examine whether the intermediate outcomes progress as planned. In addition, the pathways of numerous outcomes are still poorly understood due to their complexity, their unpredictability and the lack of scientific or practical evidence. Therefore, with proper research design, one may not only assess the progress of intermediate outcomes, but also evaluate whether the program theory of change
The 1950s. Patricia J. Rogers's (2005) encyclopedia article instead traces it back to Edward A. Suchman's (1967) book about evaluative research. Both encyclopedia articles and LeCroy (2018) mention increasing interest, usage and publications about the subject. One of the most important uses of the logic model is for program planning. It is suggested to use the logic model to focus on the intended outcomes of
The KPIs are arranged by category and only the category is displayed on the logic model. The extensive list of KPIs is an appendix to the logic model. Organizations identify the KPIs and corresponding outcomes by first conducting a needs assessment and/or community focus groups. This helps to ensure that the logic model remains focused on the real-time needs of the people served and on removing racial barriers. The POSLM can help to clarify
The W.K. Kellogg Foundation presentation of the logic model, mainly aimed at evaluation, and the numerous types of logic models in the intervention mapping framework). In addition, depending on the purpose of the logic model, the elements depicted and the relationships between them are more or less detailed. Citing Funnell and Rogers's account (2011), Joy A. Frechtling's (2015) encyclopedia article traces the underpinnings of the logic model to
The assessment process as it provides a reality check on the concordance between the program theory and the program itself. The observations can focus on the attainability of the outcomes, circumstances of the target population, and the plausibility of the program activities and the supporting resources. These different forms of assessment of program theory can be conducted to ensure that the program theory
The basic template. For example, many versions of logic models set out a series of outcomes/impacts, explaining in more detail the logic of how an intervention contributes to intended or observed results. Others often distinguish short-term, medium-term and long-term results, and between direct and indirect results. The intervention mapping approach of Bartholomew et al. makes extensive use of
The best way to perform would be. For example, whether the job performance standards are set by an organization or whether some governmental rules need to be considered when undertaking the task. Third, define and identify the target of interventions and accurately describe the nature of the service needs of that population. It is important to know what/who the target population is/are – it might be individuals, groups, communities, etc. There are three units of
The community impacted by the potential problem, the agents/actors working to address and resolve the problem, funders, etc. Securing buy-in early on in the process reduces the potential for push-back, miscommunication, and incomplete information later on. Second, assess the extent of the problem. Having clearly identified what the problem is, evaluators need to then assess the extent of the problem. They need to answer
The definition of the problem, the overarching goals, and the capacity and outputs of the program. Rossi, Lipsey & Freeman (2004) suggest four approaches and procedures that can be used to assess the program theory. These approaches are discussed below. This entails assessing the program theory by relating it to the needs of the target population the program is intended to serve. If the program theory fails to address
The effect of the program and to find causal relationships between the program and the various outcomes. Finally, cost-benefit or cost-efficiency analysis assesses the efficiency of a program. Evaluators outline the benefits and costs of the program for comparison. An efficient program has a lower cost-to-benefit ratio. There are two types of efficiency, namely, static and dynamic. While static efficiency concerns achieving
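The arithmetic behind such an efficiency comparison is straightforward; the figures below are invented for illustration, and the unit of benefit stands for whatever outcome the programs share:

```python
# Hypothetical figures for two programs delivering the same kind of benefit.
programs = {
    "Program A": {"total_cost": 500_000, "participants": 1_000, "benefit_units": 400},
    "Program B": {"total_cost": 300_000, "participants": 450,   "benefit_units": 300},
}

for name, p in programs.items():
    cost_per_participant = p["total_cost"] / p["participants"]
    cost_per_benefit = p["total_cost"] / p["benefit_units"]  # cost-to-benefit ratio
    print(f"{name}: ${cost_per_participant:,.0f} per participant, "
          f"${cost_per_benefit:,.0f} per unit of benefit")
# The program with the lower cost per unit of benefit is the more efficient one.
```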
The effectiveness of a program cannot be assessed unless we know what the problem was in the first place. The program theory, also called a logic model, knowledge map, or impact pathway, is an assumption, implicit in the way the program is designed, about how the program's actions are supposed to achieve the outcomes it intends. This 'logic model' is often not stated explicitly by people who run programs; it
The effectiveness of the evaluation, although it does not necessarily reduce or eliminate the program. Creating a logic model is a useful way to help visualize important aspects of programs, especially when preparing for an evaluation. An evaluator should create a logic model with input from many different stakeholders. Logic models have five major components: resources or inputs, activities, outputs, short-term outcomes, and long-term outcomes. Creating
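A logic model's five components can be sketched as a simple outline before being turned into a diagram; the entries below are hypothetical placeholders rather than a prescribed template:

```python
# Minimal representation of a logic model with its five major components.
logic_model = {
    "resources_inputs":    ["funding", "trained staff", "community partners"],
    "activities":          ["weekly tutoring sessions", "parent workshops"],
    "outputs":             ["number of sessions delivered", "number of families reached"],
    "short_term_outcomes": ["improved reading scores", "increased parent engagement"],
    "long_term_outcomes":  ["higher graduation rates"],
}

for component, items in logic_model.items():
    print(f"{component.replace('_', ' ').title()}:")
    for item in items:
        print(f"  - {item}")
```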
The end in mind', rather than just consider inputs (e.g. budgets, employees) or the tasks that must be done. The logic model is often used in government or not-for-profit organizations, where the mission and vision are not aimed at achieving a financial benefit. Traditionally, government programs were described only in terms of their budgets. It is easy to measure the amount of money spent on
The evaluation process is to measure whether the program has an effect on the social problem it seeks to redress; hence, the measurement instrument must be sensitive enough to discern these potential changes (Rossi et al., 2004). A measurement instrument may be insensitive if it contains items measuring outcomes which the program could not possibly affect, or if the instrument was originally developed for applications to individuals (for example standardized psychological measures) rather than to
The evaluator, and correspondingly different evaluation approaches are needed. Rossi, Lipsey and Freeman (2004) suggest the following kinds of assessment, which may be appropriate at these different stages: A needs assessment examines the population that the program intends to target, to see whether the need as conceptualized in the program actually exists in the population; whether it is, in fact,
The implementation process itself. Otherwise, a good innovative idea may be mistakenly characterized as ineffective, when in fact it simply had never been implemented as designed. The impact evaluation determines the causal effects of the program. This involves trying to measure whether the program has achieved its intended outcomes, i.e. program outcomes. An outcome is the state of the target population or
The intended outcomes and the causal pathways leading to them; both help to connect and compose a logical companion "if, then" theory of change statement. Again, more research is needed and is currently being conducted as more nonprofits, philanthropic organizations and governments use this model. By describing work in this way, managers have an easier way to define the work and measure it. Performance measures can be drawn from any of
The intended services, and staff are adequately qualified. Process evaluation is an ongoing process in which repeated measures may be used to evaluate whether the program is being implemented effectively. This problem is particularly critical because many innovations, particularly in areas like education and public policy, consist of fairly complex chains of action. For example, process evaluation can be used in public health research. Many of these elements rely on
The logic model through the whole life-cycle of a health promotion program. Since this method can start from something as vague as a desired outcome (the authors' example is a city whose actors decide to address the city's "health issues"), planners go through various steps in order to develop effective interventions and properly evaluate them. There are distinguishable but closely interwoven logic models with different purposes that can be developed through
The logic of a program which relies on education as a means to get people to use condoms may be faulty. This is why it is important to read research that has been done in the area. Explicating this logic can also reveal unintended or unforeseen consequences of a program, both positive and negative. The program theory drives the hypotheses to test for impact evaluation. Developing a logic model can also build common understanding amongst program staff and stakeholders about what
The long-term and may require numerous intermediate changes (attitudes, social norms, industry practices, etc.) to advance progressively toward the outcomes. By making clear the intended outcomes and the causal pathways leading to them, a program logic model provides the basis upon which planners and evaluators can develop a measurement plan and adequate instruments. Instead of only looking at the outcome progress, planners can open
The maximum possible methodological rigor is achieved under these constraints. Frequently, programs are faced with budget constraints because most original projects do not include a budget to conduct an evaluation (Bamberger et al., 2004). Therefore, this automatically results in evaluations being allocated smaller budgets that are inadequate for a rigorous evaluation. Due to the budget constraints it might be difficult to effectively apply
The most appropriate methodological instruments. These constraints may consequently affect the time available in which to do the evaluation (Bamberger et al., 2004). Budget constraints may be addressed by simplifying the evaluation design, revising the sample size, exploring economical data collection methods (such as using volunteers to collect data, shortening surveys, or using focus groups and key informants) or looking for reliable secondary data (Bamberger et al., 2004).
The needs of the target population, it will be rendered ineffective even if it is well implemented. This form of assessment involves asking a panel of expert reviewers to critically review the logic and plausibility of the assumptions and expectations inherent in the program's design. The review process is unstructured and open-ended so as to address certain issues in the program design. Rutman (1980), Smith (1989), and Wholey (1994) suggested
The objectives at the least cost, dynamic efficiency concerns continuous improvement. Perhaps the most difficult part of evaluation is determining whether the program itself is causing the changes that are observed in the population it was aimed at. Events or processes outside of the program may be the real cause of the observed outcome (or the real prevention of the anticipated outcome). Causation
The observed outcome of increased employment, not the job training program. Evaluations conducted with random assignment are able to make stronger inferences about causation. Randomly assigning people to participate or to not participate in the program reduces or eliminates self-selection bias. Thus, the group of people who participate would likely be more comparable to the group who did not participate. However, since most programs cannot use random assignment, causation cannot be determined. Impact analysis can still provide useful information. For example,
The outcomes of the program can be described. Thus the evaluation can show that people who participated in the program were more likely to experience a given outcome than people who did not participate. If the program is fairly large, and there are enough data, statistical analysis can be used to make a reasonable case for the program by showing, for example, that other causes are unlikely. It
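A basic descriptive comparison of outcome rates between participants and non-participants illustrates the kind of evidence such an analysis starts from; the counts below are invented, and the comparison by itself does not establish causation:

```python
# Hypothetical outcome counts for a job-training program.
participants     = {"employed": 120, "total": 200}   # program group
non_participants = {"employed": 90,  "total": 200}   # comparison group

rate_p = participants["employed"] / participants["total"]
rate_n = non_participants["employed"] / non_participants["total"]

print(f"employment rate, participants:     {rate_p:.0%}")
print(f"employment rate, non-participants: {rate_n:.0%}")
print(f"difference in rates:               {rate_p - rate_n:+.0%}")
# Because of possible self-selection bias, this difference describes an
# association; further statistical analysis (e.g. adjusting for observed
# characteristics) is needed to make a case that other causes are unlikely.
```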
The outlay of economic and other assets; however, social outcomes can be more complex to assess than market outcomes, and a different skillset is required. Considerations include how much the program costs per participant, the program's impact, how the program could be improved, whether there are better alternatives, whether there are unforeseen consequences, and whether the program goals are appropriate and useful. Evaluators help to answer these questions. Best practice
The percentage of KPIs achieved at each stage or the percentage of people who reach each stage as they progress on pre-identified Key Performance Indicators (KPIs). These KPIs are specific to the racial disparity issues which the population served identifies with (e.g. low reading levels, low financial literacy, unemployment). In an effort to prevent the logic model itself from being cluttered with an overwhelming number of KPIs,
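Stage-based progress of this kind can be tabulated from participant records; the stages and records below are hypothetical illustrations rather than published POSLM materials:

```python
# Hypothetical participant records: which POSLM stage each person has reached.
records = [
    {"id": 1, "stage_reached": 3},
    {"id": 2, "stage_reached": 1},
    {"id": 3, "stage_reached": 2},
    {"id": 4, "stage_reached": 2},
    {"id": 5, "stage_reached": 3},
]

total = len(records)
for stage in (1, 2, 3):
    reached = sum(1 for r in records if r["stage_reached"] >= stage)
    print(f"stage {stage}: {reached / total:.0%} of participants have reached this stage")
```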
The population: population at risk, population in need and population in demand. Being able to specify what/who the target is will assist in establishing appropriate boundaries, so that interventions can correctly address the target population and be feasible to apply. There are four steps in conducting a needs assessment: Needs analysis is hence a very crucial step in evaluating programs because
The prior correct implementation of other elements, and will fail if the prior implementation was not done correctly. This was conclusively demonstrated by Gene V. Glass and many others during the 1980s. Since incorrect or ineffective implementation will produce the same kind of neutral or negative results that would be produced by correct implementation of a poor innovation, it is essential that evaluation research assess
The problem/need is. Programs that do not do a needs assessment can have the illusion that they have eradicated the problem/need when in fact there was no need in the first place. Needs assessment involves research and regular consultation with community stakeholders and with the people that will benefit from the project before the program can be developed and implemented. Hence it should be a bottom-up approach. In this way potential problems can be realized early because
The process would have involved the community in identifying the need and thereby allowed the opportunity to identify potential barriers. The important task of a program evaluator is thus to: First, construct a precise definition of what the problem is. Evaluators need to first identify the problem/need. This is most effectively done by collaboratively including all possible stakeholders, i.e.,
The process: Evaluators thereafter use the logic model of the intervention to design a proper evaluation plan to assess implementation, impact and efficiency. The Progressive Outcomes Scale Logic Model (POSLM) approach was developed by Quisha Brown in response to the racial wealth gap (exacerbated by the COVID-19 pandemic) to help organizations quickly add a racial equity focus when developing program logic models. More testing and research
The program is actually supposed to do and how it is supposed to do it, which is often lacking (see Participatory impact pathways analysis). Of course, it is also possible that during the process of trying to elicit the logic model behind a program the evaluators may discover that such a model is either incompletely developed, internally contradictory, or (in worst cases) essentially nonexistent. This decidedly limits
The program is effective or not. It further helps to clarify understanding of a program. But the most important reason for undertaking the effort is to understand the impacts of the work on the people being served. With the information collected, it can be determined which activities to continue and build upon, and which need to be changed in order to improve the effectiveness of the program. This can involve using sophisticated statistical techniques in order to measure
The progress, as well as continued developmental evaluation to explore new elements as they emerge. Formative evaluation involves "careful monitoring of processes in order to respond to emergent properties and any unexpected outcomes." Recommended evaluation approach: Summative evaluation "uses both quantitative and qualitative methods in order to get a better understanding of what [the] project has achieved, and how or why this has occurred." Planning
The questions listed below to assist with the review process. This form of assessment requires gaining information from research literature and existing practices to assess various components of the program theory. The evaluator can assess whether the program theory is congruent with research evidence and practical experiences of programs with similar concepts. This approach involves incorporating firsthand observations into
The social conditions that a program is expected to have changed. Program outcomes are the observed characteristics of the target population or social conditions, not of the program. Thus the concept of an outcome does not necessarily mean that the program targets have actually changed or that the program has caused them to change in any way. There are two kinds of outcomes, namely outcome level and outcome change, also associated with program effect. Outcome measurement
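The distinction between outcome level and outcome change can be shown with a small calculation on invented scores:

```python
# Hypothetical reading scores for a target population before and after a program.
before = [52, 47, 60, 55, 49]
after  = [58, 50, 63, 61, 53]

outcome_level  = sum(after) / len(after)                     # status observed after the program
outcome_change = outcome_level - sum(before) / len(before)   # shift relative to the baseline

print(f"outcome level (post-program mean score): {outcome_level:.1f}")
print(f"outcome change (pre-to-post shift):      {outcome_change:+.1f}")
# Neither figure by itself shows how much of the change is a program effect;
# that question belongs to impact assessment.
```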
Logic models are hypothesized descriptions of the chain of causes and effects leading to an outcome of interest (e.g. prevalence of cardiovascular diseases, annual traffic collisions, etc.). While they can be in a narrative form, logic models usually take the form of
The 'where' and 'how big' questions. Evaluators need to work out where the problem is located and how big it is. Pointing out that a problem exists is much easier than having to specify where it is located and how rife it is. Rossi, Lipsey & Freeman (2004) give an example: a person identifying some battered children may be enough evidence to persuade one that child abuse exists. But indicating how many children it affects and where it
The 'how' and 'what' questions. The 'how' question requires that evaluators determine how the need will be addressed. Having identified the need and having familiarized oneself with the community, evaluators should conduct a performance analysis to identify whether the proposed plan in the program will actually be able to eliminate the need. The 'what' question requires that evaluators conduct a task analysis to find out what