
Policy analysis

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.

Policy analysis or public policy analysis is a technique used in the public administration sub-field of political science to enable civil servants, nonprofit organizations, and others to examine and evaluate the available options to implement the goals of laws and elected officials. People who regularly use policy analysis skills and techniques on the job, particularly those for whom it is a major part of their job duties, are generally known by the title policy analyst. The process is also used in the administration of large organizations with complex policies. It has been defined as the process of "determining which of various policies will achieve a given set of goals in light of the relations between the policies and the goals."


Policy analysis can be divided into two major fields. One definition states that policy analysis is "the process of identifying potential policy options that could address your problem and then comparing those options to choose the most effective, efficient, and feasible one." The areas of interest and the purpose of analysis determine what types of analysis are conducted. A combination of two kinds of policy analyses together with program evaluation

A logic model, knowledge map, or impact pathway is an assumption, implicit in the way the program is designed, about how the program's actions are supposed to achieve the outcomes it intends. This 'logic model' is often not stated explicitly by people who run programs; it is simply assumed, and so an evaluator will need to draw out from the program staff how exactly the program is supposed to achieve its aims and assess whether this logic

A nomothetic style of analysis. He adds that "the defining feature of qualitative work is its use of noncomparable observations—observations that pertain to different aspects of a causal or descriptive question", whereas quantitative observations are comparable. According to John Gerring, the key characteristic that distinguishes case studies from all other methods is the "reliance on evidence drawn from

A case study that will then be used in classrooms in the form of a "teaching" case study (also see case method and casebook method). For instance, as early as 1870 at Harvard Law School, Christopher Langdell departed from the traditional lecture-and-notes approach to teaching contract law and began using cases pled before courts as the basis for class discussions. By 1920, this practice had become

A conservative tendency: new policies are only slightly different from old policies. Policy-makers are too short on time and other resources to make totally new policies; thus, past policies are accepted as having some legitimacy. When existing policies have sunk costs which discourage innovation, incrementalism is an easier approach than rationalism, and the policies are more politically expedient because they do not necessitate any radical redistribution of values. Such models necessarily struggle to improve

A cycle framework to represent the continuing process of evaluation. Though program evaluation processes mentioned here are appropriate for most programs, highly complex non-linear initiatives, such as those using the collective impact (CI) model, require a dynamic approach to evaluation. Collective impact is "the commitment of a group of important actors from different sectors to a common agenda for solving

A different skillset is required. Considerations include how much the program costs per participant, program impact, how the program could be improved, whether there are better alternatives, if there are unforeseen consequences, and whether the program goals are appropriate and useful. Evaluators help to answer these questions. Best practice is for the evaluation to be a joint project between evaluators and stakeholders. A wide range of different titles are applied to program evaluators, perhaps haphazardly at times, but there are some established usages: those who regularly use program evaluation skills and techniques on

A fruitful way to come up with hypotheses and generate theories. Case studies are useful for understanding outliers or deviant cases. Classic examples of case studies that generated theories include Darwin's theory of evolution (derived from his travels to the Galápagos Islands), and Douglass North's theories of economic development (derived from case studies of early developing states, such as England). Case studies are also useful for formulating concepts, which are an important aspect of theory construction. The concepts used in qualitative research will tend to have higher conceptual validity than concepts used in quantitative research (due to conceptual stretching:

A given geographic area and what their demographics are. Rossi, Lipsey and Freeman (2004) caution against undertaking an intervention without properly assessing the need for one, because this might result in a great deal of wasted funds if the need did not exist or was misconceived. Needs assessment involves the processes or methods used by evaluators to describe and diagnose social needs. This

A group setting (Rossi et al., 2004). These factors may result in 'noise' which may obscure any effect the program may have had. Only measures which adequately achieve the benchmarks of reliability, validity and sensitivity can be said to be credible evaluations. It is the duty of evaluators to produce credible evaluations, as their findings may have far-reaching effects. A discreditable evaluation which

A logic model helps articulate the problem, the resources and capacity that are currently being used to address the problem, and the measurable outcomes from the program. Looking at the different components of a program in relation to the overall short-term and long-term goals allows for illumination of potential misalignments. Creating an actual logic model is particularly important because it helps clarify for all stakeholders:


A means to get people to use condoms may be faulty. This is why it is important to read research that has been done in the area. Explicating this logic can also reveal unintended or unforeseen consequences of a program, both positive and negative. The program theory drives the hypotheses to test for impact evaluation. Developing a logic model can also build common understanding amongst program staff and stakeholders about what

A measurement instrument is the 'extent to which the measure produces the same results when used repeatedly to measure the same thing' (Rossi et al., 2004, p. 218). The more reliable a measure is, the greater its statistical power and the more credible its findings. If a measuring instrument is unreliable, it may dilute and obscure the real effects of a program, and the program will 'appear to be less effective than it actually is' (Rossi et al., 2004, p. 219). Hence, it

A new regulation or subsidy is set in place), and then finally, once the policy has been implemented and run for a certain period, the policy is evaluated. A number of different viewpoints can be used during evaluation, including looking at a policy's effectiveness, cost-effectiveness, value for money, outcomes or outputs. The meta-policy approach is a systems and context approach; i.e., its scope

A particular policy was developed at a particular time and assess the effects, intended or otherwise, of that policy when it was implemented. There are three approaches that can be distinguished: the analysis-centric, the policy process, and the meta-policy approach. The analysis-centric (or "analycentric") approach focuses on individual problems and their solutions. Its scope is the micro-scale and its problem interpretation or problem resolution usually involves

A person identifying some battered children may be enough evidence to persuade one that child abuse exists. But indicating how many children it affects and where it is located geographically and socially would require knowledge about abused children, the characteristics of perpetrators and the impact of the problem throughout the political authority in question. This can be difficult considering that child abuse

A policy can be measured by changes in the behavior of the target population and active support from various actors and institutions involved. A public policy is an authoritative communication prescribing an unambiguous course of action for specified individuals or groups in certain situations. There must be an authority or leader charged with the implementation and monitoring of the policy, with a sound social theory underlying

A precise definition of what the problem is. Evaluators need to first identify the problem/need. This is most effectively done by collaboratively including all possible stakeholders, i.e., the community impacted by the potential problem, the agents/actors working to address and resolve the problem, funders, etc. Including buy-in early on in the process reduces potential for push-back, miscommunication, and incomplete information later on. Second, assess

A problem; and if so, how it might best be dealt with. This includes identifying and diagnosing the actual problem the program is trying to address, who or what is affected by the problem, how widespread the problem is, and what are the measurable effects that are caused by the problem. For example, for a housing program aimed at mitigating homelessness, a program evaluator may want to find out how many people are homeless in

A program evaluation can be broken up into four parts: focusing the evaluation, collecting the information, using the information, and managing the evaluation. Program evaluation involves reflecting on questions about evaluation purpose, what questions are necessary to ask, and what will be done with information gathered. Critical questions for consideration include: The "shoestring evaluation approach"

A random selection of cases is a valid case selection strategy in large-N research, there is a consensus among scholars that it risks generating serious biases in small-N research. Random selection of cases may produce unrepresentative cases, as well as uninformative cases. Cases should generally be chosen that have a high expected information gain. For example, outlier cases (those which are extreme, deviant or atypical) can reveal more information than


A relatively recent phenomenon. However, planned social evaluation has been documented as dating as far back as 2200 BC. Evaluation became particularly relevant in the U.S. in the 1960s during the period of the Great Society social programs associated with the Kennedy and Johnson administrations. Extraordinary sums were invested in social programs, but the impacts of these investments were largely unknown. Program evaluations can involve both quantitative and qualitative methods of social research. People who do program evaluation come from many different backgrounds, such as sociology, psychology, economics, social work, as well as political science subfields such as public policy and public administration, who have studied

A similar methodology known as policy analysis. Some universities also have specific training programs, especially at the postgraduate level in program evaluation, for those who studied an undergraduate subject area lacking in program evaluation skills. Program evaluation may be conducted at several stages during a program's lifetime. Each of these stages raises different questions to be answered by

A single case and its attempts, at the same time, to illuminate features of a broader set of cases". Scholars use case studies to shed light on a "class" of phenomena. As with other social science methods, no single research design dominates case study research. Case studies can use at least four types of designs. First, there may be a "no theory first" type of case study design, which is closely connected to Kathleen M. Eisenhardt's methodological work. A second type of research design highlights

A small-N problem and certain standard causal identification problems." By using Bayesian probability, it may be possible to make strong causal inferences from a small sliver of data. KKV also identify inductive reasoning in qualitative research as a problem, arguing that scholars should not revise hypotheses during or after data has been collected because it allows for ad hoc theoretical adjustments to fit

A specific social problem" and typically involves three stages, each with a different recommended evaluation approach: Recommended evaluation approach: Developmental evaluation to help CI partners understand the context of the initiative and its development: "Developmental evaluation involves real time feedback about what is emerging in complex dynamic systems as innovators seek to bring about systems change." Recommended evaluation approach: Formative evaluation to refine and improve upon

A study of a single case is called within-case research. Case study research has been extensively practiced in both the social and natural sciences. There are multiple definitions of case studies, which may emphasize the number of observations (a small N), the method (qualitative), the thickness of the research (a comprehensive examination of a phenomenon and its context), and the naturalism (a "real-life context"

A technical solution. The primary aim is to identify the most effective and efficient solution in technical and economic terms (e.g. the most efficient allocation of resources). The policy process approach puts its focal point onto political processes and involved stakeholders; its scope is the broader meso-scale and it interprets problems using a political lens (i.e., the interests and goals of elected officials). It aims at determining what processes, means and policy instruments (e.g., regulation, legislation, subsidy) are used. As well, it tries to explain

A theory that shows a larger number of concepts shows greater breadth of understanding of the program. The depth is the percentage of concepts that are the result of more than one other concept. This is based on the idea that, in real-world programs, things have more than one cause. Hence, a concept that is the result of more than one other concept in the theory shows better understanding of that concept;
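The breadth and depth measures lend themselves to a simple calculation. Below is a minimal sketch (the concept names and causal links are hypothetical, not drawn from any published IPA study) that computes both from a list of cause-and-effect propositions:

```python
def breadth_and_depth(propositions):
    """propositions: (cause, effect) pairs from a program theory diagram.

    Breadth = number of distinct concepts (circles on the diagram).
    Depth   = share of concepts that are the effect of more than one cause.
    """
    concepts = {c for pair in propositions for c in pair}
    causes = {}
    for cause, effect in propositions:
        causes.setdefault(effect, set()).add(cause)
    multi_caused = sum(1 for cs in causes.values() if len(cs) > 1)
    return len(concepts), multi_caused / len(concepts)

# Hypothetical program theory: training and mentoring both build skills,
# and skills lead to employment.
theory = [("training", "skills"), ("mentoring", "skills"), ("skills", "employment")]
breadth, depth = breadth_and_depth(theory)
print(breadth, depth)  # → 4 0.25
```

Here only "skills" has more than one cause, so one of the four concepts counts toward depth; a theory in which more concepts had multiple causes would score higher on both measures.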

A theory with a higher percentage of better-understood concepts shows a greater depth of understanding of the program. Process analysis looks beyond the theory of what the program is supposed to do and instead evaluates how the program is being implemented. This evaluation determines whether the components identified as critical to the success of the program are being implemented. The evaluation determines whether target populations are being reached, people are receiving

A visual diagram of those propositions. Then, the researcher examines the number of concepts and causal relationships between them (circles and arrows on the diagram) to measure the breadth and depth of understanding reflected in the theory's structure. The measure for breadth is the number of concepts. This is based on the idea that real-world programs involve a lot of interconnected parts, therefore


A wider group. One way of doing this follows a heuristic model called the policy cycle. In its simplest form, the policy cycle, which is often depicted visually as a loop or circle, starts with the identification of the problem, proceeds to an examination of the different policy tools that could be used to respond to that problem, then goes on to the implementation stage, in which one or more policies are put into practice (e.g.,

is a matter of representing the circumstances defined as the outcome by means of observable indicators that vary systematically with changes or differences in those circumstances. Outcome measurement is a systematic way to assess the extent to which a program has achieved its intended outcomes. According to Mouton (2009), measuring the impact of a program means demonstrating or estimating the accumulated differentiated proximate and emergent effect, some of which might be unintended and therefore unforeseen. Outcome measurement serves to help understand whether

is a process that typically follows a sequence of steps or stages: This model, however, has been criticized for being overly linear and simplistic. In reality, stages of the policy process may overlap or never happen. Also, this model fails to take into account the multiple factors attempting to influence the process itself as well as each other, and the complexity this entails. One of the most widely used models for public institutions

is an in-depth, detailed examination of a particular case (or cases) within a real-world context. For example, case studies in medicine may focus on an individual patient or ailment; case studies in business might cover a particular firm's strategy or a broader market; similarly, case studies in politics can range from a narrow happening over time like the operations of a specific political campaign, to an enormous undertaking like world war, or more often

is being examined) involved in the research. There is general agreement among scholars that a case study does not necessarily have to entail one observation (N=1), but can include many observations within a single case or across numerous cases. For example, a case study of the French Revolution would at the bare minimum comprise two observations: France before and after the revolution. John Gerring writes that

is by the juxtaposition of a variety of research methodologies focused on a common theme that the richness of understanding is gained. This integrates what are usually separate bodies of evaluation on the role of gender in welfare state developments, employment transformations, workplace policies, and work experience. There are several other major types of policy analysis, broadly groupable into competing approaches: The success of

is defined as policy studies. Policy analysis is frequently deployed in the public sector, but is equally applicable elsewhere, such as nonprofit organizations and non-governmental organizations. Policy analysis has its roots in systems analysis, an approach used by United States Secretary of Defense Robert McNamara in the 1960s. Various approaches to policy analysis exist. The analysis for policy (and analysis of policy)

is designed to assist evaluators operating under limited budget, limited access or availability of data and limited turnaround time, to conduct effective evaluations that are methodologically rigorous (Bamberger, Rugh, Church & Fort, 2004). This approach has responded to the continued greater need for evaluation processes that are more rapid and economical under difficult circumstances of budget, time constraints and limited availability of data. However, it

is difficult to determine. One main reason for this is self-selection bias. People select themselves to participate in a program. For example, in a job training program, some people decide to participate and others do not. Those who do participate may differ from those who do not in important ways. They may be more determined to find a job or have better support resources. These characteristics may actually be causing
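The self-selection problem can be made concrete with a toy simulation. In the sketch below (all probabilities invented for illustration), "motivation" raises both the chance of joining a job-training program and the chance of finding work, while the program itself has no effect at all; a naive comparison of participants and non-participants still shows a large gap.

```python
import random

def simulate(n=10_000, seed=0):
    """Return employment rates for participants vs. non-participants.

    The true program effect is zero: employment depends only on motivation.
    """
    rng = random.Random(seed)
    employed = {True: 0, False: 0}
    count = {True: 0, False: 0}
    for _ in range(n):
        motivated = rng.random() < 0.5
        joins = rng.random() < (0.8 if motivated else 0.2)
        works = rng.random() < (0.7 if motivated else 0.3)
        count[joins] += 1
        employed[joins] += works
    return employed[True] / count[True], employed[False] / count[False]

p_joined, p_not = simulate()
print(round(p_joined, 2), round(p_not, 2))
```

Participants come out well ahead of non-participants even though the program does nothing, which is why randomized assignment or a quasi-experimental design is needed to separate program effects from selection effects.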

is essential for evaluators because they need to identify whether programs are effective and they cannot do this unless they have identified what the problem/need is. Programs that do not do a needs assessment can have the illusion that they have eradicated the problem/need when in fact there was no need in the first place. Needs assessment involves research and regular consultation with community stakeholders and with


is important to ensure that the instruments (for example, tests, questionnaires, etc.) used in program evaluation are as reliable, valid and sensitive as possible. According to Rossi et al. (2004, p. 222), 'a measure that is poorly chosen or poorly conceived can completely undermine the worth of an impact assessment by producing misleading estimates. Only if outcome measures are valid, reliable and appropriately sensitive can impact assessments be regarded as credible'. The reliability of

is important to ensure the evaluation is as reliable as possible. The validity of a measurement instrument is 'the extent to which it measures what it is intended to measure' (Rossi et al., 2004, p. 219). This concept can be difficult to accurately measure: in general use in evaluations, an instrument may be deemed valid if accepted as valid by the stakeholders (stakeholders may include, for example, funders, program administrators, et cetera). The principal purpose of

is not a public behavior, also keeping in mind that estimates of the rates of private behavior are usually not possible because of factors like unreported cases. In this case evaluators would have to use data from several sources and apply different approaches in order to estimate incidence rates. There are two more questions that need to be answered: evaluators also need to answer the 'how' and 'what' questions. The 'how' question requires that evaluators determine how

is not always possible to design an evaluation to achieve the highest standards available. Many programs do not build an evaluation procedure into their design or budget. Hence, many evaluation processes do not begin until the program is already underway, which can result in time, budget or data constraints for the evaluators, which in turn can affect the reliability, validity or sensitivity of the evaluation. The shoestring approach helps to ensure that

is one developed by Herbert A. Simon, the "father of rational models"; it is also used by private corporations. However, many criticise the model due to some of its characteristics being impractical and relying on unrealistic assumptions. For instance, it is a difficult model to apply in the public sector because social problems can be very complex, ill-defined, and inter-dependent. The problem lies in

is plausible. For example, in an HIV prevention program, it may be assumed that educating people about HIV/AIDS transmission, risk and safe sex practices will result in safer sex being practiced. However, research in South Africa increasingly shows that in spite of increased education and knowledge, people still often do not practice safe sex. Therefore, the logic of a program which relies on education as

is sound. Wright and Wallis (2019) described an additional technique for assessing a program theory based on the theory's structure. This approach, known as integrative propositional analysis (IPA), is based on research streams finding that theories were more likely to work as expected when they had better structure (in addition to meaning and data). IPA involves, first, identifying the propositions (statements of cause-and-effect) and creating

is that cases are chosen because they are consistent with the scholar's preconceived notions, resulting in biased research. Alexander George and Andrew Bennett also note that a common problem in case study research is that of reconciling conflicting interpretations of the same data. Another limit of case study research is that it can be hard to estimate the magnitude of causal effects. Teachers may prepare

is the central approach in social science and educational policy studies. It is linked to two different traditions of policy analysis and research frameworks. The approach of analysis for policy refers to research conducted for actual policy development, often commissioned by policymakers inside the bureaucracy (e.g. civil servants) within which the policy is developed. Analysis of policy is more of an academic exercise, conducted by academic researchers, professors and think tank researchers, who are often seeking to understand why

is the macro-scale and its problem interpretation is usually of a structural nature. It aims at explaining the contextual factors of the policy process; i.e., what the political, economic and socio-cultural factors are that influence it. As problems may result from structural factors (e.g., a certain economic system or political institution), solutions may entail changing the structure itself. Policy analysis uses both qualitative methods and quantitative methods. Qualitative research includes case studies and interviews with community members. Quantitative research includes survey research, statistical analysis (also called data analysis) and model building. A common practice


is to define the problem and evaluation criteria; identify and evaluate alternatives; and recommend a certain policy accordingly. Promotion of the best agendas is the product of careful "back-room" analysis of policies by a priori assessment and a posteriori evaluation. Several methods used in policy analysis are: There are six dimensions to policy analysis, categorized as the effects and implementation of
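The "define criteria, evaluate alternatives, recommend" sequence is often operationalized as a weighted scoring matrix. The sketch below uses invented weights and 0-10 scores purely to illustrate the mechanics:

```python
# Hypothetical criteria weights (summing to 1) and 0-10 scores per option.
weights = {"effectiveness": 0.5, "cost": 0.3, "feasibility": 0.2}
options = {
    "regulation": {"effectiveness": 8, "cost": 4, "feasibility": 6},
    "subsidy": {"effectiveness": 6, "cost": 7, "feasibility": 8},
    "status quo": {"effectiveness": 2, "cost": 9, "feasibility": 10},
}

def weighted_score(name):
    """Weighted sum of an option's scores across all criteria."""
    return sum(weights[c] * options[name][c] for c in weights)

recommended = max(options, key=weighted_score)
print(recommended, round(weighted_score(recommended), 1))  # → subsidy 6.7
```

Real analyses add sensitivity checks on the weights, since the recommendation can flip when the weights change even slightly.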

is unable to show that a program is achieving its purpose when it is in fact creating positive change may cause the program to lose its funding undeservedly. The Centers for Disease Control and Prevention (CDC) delineates six steps to a complete program evaluation. The steps described are: engage stakeholders, describe the program, focus the evaluation design, gather credible evidence, justify conclusions, and ensure use and share lessons learned. These steps can happen in

is used to examine content, implementation or impact of the policy, which helps to understand the merit, worth and the utility of the policy. Following are the National Collaborating Centre for Healthy Public Policy's (NCCHPP) 10 steps: Details of such a plan may vary by institution and context. For example, a Public Health Ontario revision of the above replaces the first three steps with "describe

is widely considered flawed; while quantitative scholars try to aggregate variables to reduce the number of variables and thus increase the degrees of freedom, qualitative scholars intentionally want their variables to have many different attributes and complexity. For example, James Mahoney writes, "the Bayesian nature of process tracing explains why it is inappropriate to view qualitative research as suffering from

is will assist in establishing appropriate boundaries, so that interventions can correctly address the target population and be feasible to apply. There are four steps in conducting a needs assessment: Needs analysis is hence a very crucial step in evaluating programs because the effectiveness of a program cannot be assessed unless we know what the problem was in the first place. The program theory, also called

the dependent variable". They argue, for example, that researchers cannot make valid causal inferences about war outbreaks by only looking at instances where war did happen (the researcher should also look at cases where war did not happen). Scholars of qualitative methods have disputed this claim, however. They argue that selecting on the dependent variable can be useful depending on the purposes of

the policy analysis of real-world problems affecting multiple stakeholders. Generally, a case study can highlight nearly any individual, group, organization, event, belief system, or action. A case study does not necessarily have to be one observation (N=1), but may include many observations (one or multiple individuals and entities across multiple time periods, all within the same case study). Research projects involving numerous cases are frequently called cross-case research, whereas

the voluntary sector, stakeholders might be required to assess—under law or charter—or want to know whether the programs they are funding, implementing, voting for, receiving or opposing are producing the promised effect. To some degree, program evaluation falls under traditional cost–benefit analysis, concerning fair returns on the outlay of economic and other assets; however, social outcomes can be more complex to assess than market outcomes, and

the N=1 research design is so rare in practice that it amounts to a "myth". The term cross-case research is frequently used for studies of multiple cases, whereas within-case research is frequently used for a single case study. John Gerring defines the case study approach as an "intensive study of a single unit or a small number of units (the cases), for the purpose of understanding a larger class of similar units (a population of cases)". According to Gerring, case studies lend themselves to an idiographic style of analysis, whereas quantitative work lends itself to

the acceptability of public policy. Criticisms of such a policy approach include: challenges to bargaining (i.e. not successful with limited resources), downplaying useful quantitative information, obscuring real relationships between political entities, an anti-intellectual approach to problems (i.e. the preclusion of imagination), and a bias towards conservatism (i.e. bias against far-reaching solutions). There are many contemporary policies relevant to gender and workplace issues. Actors analyze contemporary gender-related employment issues ranging from parental leave and maternity programs, sexual harassment, and work/life balance to gender mainstreaming. It


the assessment process as it provides a reality check on the concordance between the program theory and the program itself. The observations can focus on the attainability of the outcomes, circumstances of the target population, and the plausibility of the program activities and the supporting resources. These different forms of assessment of program theory can be conducted to ensure that the program theory

the assumptions stated by Simon are never fully valid in a real-world context. Further criticisms of the rational model include: leaving a gap between planning and implementation; ignoring the role of people, entrepreneurs, and leadership; the insufficiency of technical competence (i.e. ignoring the human factor); reflecting too mechanical an approach (i.e. overlooking the organic nature of organizations); requiring multidimensional and complex models; generating predictions which are often wrong (i.e. simple solutions may be overlooked); and incurring cost (i.e. costs of rational-comprehensive planning may outweigh

the cases violate theoretical predictions and specifying the scope conditions of the theory. Case studies are useful in situations of causal complexity where there may be equifinality, complex interaction effects and path dependency. They may also be more appropriate for empirical verifications of strategic interactions in rationalist scholarship than quantitative methods. Case studies can identify necessary and insufficient conditions, as well as complex combinations of necessary and sufficient conditions. They argue that case studies may also be useful in identifying

the causal mechanisms in a way that may be harder in a large-N study. In terms of identifying "causal mechanisms", some scholars distinguish between "weak" and "strong" chains. Strong chains actively connect elements of the causal chain to produce an outcome, whereas weak chains are just intervening variables. Case studies of cases that defy existing theoretical expectations may contribute knowledge by delineating why

The cause of a policy problem. Evidence of a purposeful cause for a problem can compel policy decisions more than unguided causes.

Program evaluation

Program evaluation is a systematic method for collecting, analyzing, and using information to answer questions about projects, policies and programs, particularly about their effectiveness and efficiency. In both the public sector and private sector, as well as

7973-595: The collected data. However, scholars have pushed back on this claim, noting that inductive reasoning is a legitimate practice (both in qualitative and quantitative research). A commonly described limit of case studies is that they do not lend themselves to generalizability. Due to the small number of cases, it may be harder to ensure that the chosen cases are representative of the larger population. As small-N research should not rely on random sampling, scholars must be careful in avoiding selection bias when picking suitable cases. A common criticism of qualitative scholarship

8092-679: The cost savings of the policy). However, Thomas R. Dye, the president of the Lincoln Center for Public Service, states the rational model provides a good perspective since in modern society rationality plays a central role and everything that is rational tends to be prized. Thus, it does not seem strange that "we ought to be trying for rational decision-making". An incremental policy model relies on features of incremental decision-making such as: satisficing , organizational drift, bounded rationality, and limited cognition, among others. Such policies are often called "muddling through" and represent

8211-431: The definition of the problem, the overarching goals, and the capacity and outputs of the program. Rossi, Lipsey & Freeman (2004) suggest four approaches and procedures that can be used to assess the program theory. These approaches are discussed below. This entails assessing the program theory by relating it to the needs of the target population the program is intended to serve. If the program theory fails to address

8330-481: The designs can have substantial methodological differences, the designs also can be used in explicitly acknowledged combinations with each other. While case studies can be intended to provide bounded explanations of single cases or phenomena, they are often intended to raise theoretical insights about the features of a broader population. Case selection in case study research is generally intended to find cases that are representative samples and which have variations on

8449-436: The dimensions of theoretical interest. Using that is solely representative, such as an average or typical case is often not the richest in information. In clarifying lines of history and causation it is more useful to select subjects that offer an interesting, unusual, or particularly revealing set of circumstances. A case selection that is based on representativeness will seldom be able to produce these kinds of insights. While

8568-565: The distinction between single- and multiple-case studies, following Robert K. Yin 's guidelines and extensive examples. A third design deals with a "social construction of reality", represented by the work of Robert E. Stake . Finally, the design rationale for a case study may be to identify "anomalies". A representative scholar of this design is Michael Burawoy . Each of these four designs may lead to different applications, and understanding their sometimes unique ontological and epistemological assumptions becomes important. However, although

8687-666: The dominant pedagogical approach used by law schools in the United States . Outside of law, teaching case studies have become popular in many different fields and professions, ranging from business education to science education. The Harvard Business School has been among the most prominent developers and users of teaching case studies. Teachers develop case studies with particular learning objectives in mind. Additional relevant documentation, such as financial statements, time-lines, short biographies, and multimedia supplements (such as video-recordings of interviews) often accompany

8806-424: The effect of the program and to find causal relationship between the program and the various outcomes. Finally, cost-benefit or cost-efficiency analysis assesses the efficiency of a program. Evaluators outline the benefits and cost of the program for comparison. An efficient program has a lower cost-benefit ratio. There are two types of efficiency, namely, static and dynamic. While static efficiency concerns achieving

8925-464: The effectiveness of the evaluation, although it does not necessarily reduce or eliminate the program. Creating a logic model is a wonderful way to help visualize important aspects of programs, especially when preparing for an evaluation. An evaluator should create a logic model with input from many different stake holders. Logic Models have 5 major components: Resources or Inputs, Activities, Outputs, Short-term outcomes, and Long-term outcomes Creating

9044-458: The effects of an independent variable. They write that the number of observations could be increased through various means, but that would simultaneously lead to another problem: that the number of variables would increase and thus reduce degrees of freedom . Christopher H. Achen and Duncan Snidal similarly argue that case studies are not useful for theory construction and theory testing. The purported "degrees of freedom" problem that KKV identify

9163-501: The evaluation process is to measure whether the program has an effect on the social problem it seeks to redress; hence, the measurement instrument must be sensitive enough to discern these potential changes (Rossi et al., 2004). A measurement instrument may be insensitive if it contains items measuring outcomes which the program couldn't possibly effect, or if the instrument was originally developed for applications to individuals (for example standardized psychological measures) rather than to

9282-403: The evaluator, and correspondingly different evaluation approaches are needed. Rossi, Lipsey and Freeman (2004) suggest the following kinds of assessment, which may be appropriate at these different stages: A needs assessment examines the population that the program intends to target, to see whether the need as conceptualized in the program actually exists in the population; whether it is, in fact,

9401-435: The extent of the problem. Having clearly identified what the problem is, evaluators need to then assess the extent of the problem. They need to answer the ‘where’ and ‘how big’ questions. Evaluators need to work out where the problem is located and how big it is. Pointing out that a problem exists is much easier than having to specify where it is located and how rife it is. Rossi, Lipsey & Freeman (2004) gave an example that:

9520-446: The following steps to achieve rational decisions: The Rational planning model has also proven to be very useful to several decision making processes in industries outside the public sphere. Nonetheless, there are some who criticize the rational model due to the major problems which can be faced & which tend to arise in practice because social and environmental values can be difficult to quantify and forge consensus around. Furthermore,

9639-401: The government can resort to positive sanctions, such as favorable publicity, price supports, tax credits, grants-in-aid, direct services or benefits; declarations; rewards; voluntary standards; mediation; education; demonstration programs; training, contracts; subsidies; loans; general expenditures; informal procedures, bargaining; franchises; sole-source provider awards...etc. Policy evaluation

9758-472: The impact of aid policies in response to natural disasters. It has been suggested that rapid assessment methods may be necessary to evaluate energy and climate policies in the context of the climate emergency. Policy analysis affects policymakers' decisions by introducing them to new ideas to consider, through the work of policy analysts summarizing ideas and frameworks found in the relevant literature. Policymakers tend to value policy analysis more depending on

9877-411: The implementation process itself. Otherwise, a good innovative idea may be mistakenly characterized as ineffective, where in fact it simply had never been implemented as designed. The impact evaluation determines the causal effects of the program. This involves trying to measure if the program has achieved its intended outcomes, i.e. program outcomes. An outcome is the state of the target population or

9996-473: The intended services, staff are adequately qualified. Process evaluation is an ongoing process in which repeated measures may be used to evaluate whether the program is being implemented effectively. This problem is particularly critical because many innovations, particularly in areas like education and public policy, consist of fairly complex chains of action. For example, process evaluation can be used in public health research. Many of which these elements rely on

10115-411: The job are known as Program Analysts ; those whose positions combine administrative assistant or secretary duties with program evaluation are known as Program Assistants, Program Clerks (United Kingdom), Program Support Specialists, or Program Associates; those whose positions add lower-level project management duties are known as Program Coordinators. The process of evaluation is considered to be

10234-463: The limits imposed by given conditions and constraints”. The model makes a series of assumptions, such as: "The model must be applied in a system that is stable"; "The government is a rational and unitary actor and that its actions are perceived as rational choices"; "The policy problem is unambiguous"; "There are no limitations of time or cost". In the context of the public sector, policy models are intended to achieve maximum social gain, and may involved

10353-446: The maximum possible methodological rigor is achieved under these constraints. Frequently, programs are faced with budget constraints because most original projects do not include a budget to conduct an evaluation (Bamberger et al., 2004). Therefore, this automatically results in evaluations being allocated smaller budgets that are inadequate for a rigorous evaluation. Due to the budget constraints it might be difficult to effectively apply

10472-522: The most appropriate methodological instruments. These constraints may consequently affect the time available in which to do the evaluation (Bamberger et al., 2004). Budget constraints may be addressed by simplifying the evaluation design, revising the sample size, exploring economical data collection methods (such as using volunteers to collect data, shortening surveys, or using focus groups and key informants) or looking for reliable secondary data (Bamberger et al., 2004). Case studies A case study

10591-524: The need will be addressed. Having identified the need and having familiarized oneself with the community evaluators should conduct a performance analysis to identify whether the proposed plan in the program will actually be able to eliminate the need. The ‘what’ question requires that evaluators conduct a task analysis to find out what the best way to perform would be. For example, whether the job performance standards are set by an organization or whether some governmental rules need to be considered when undertaking

10710-456: The needs of the target population it will be rendered ineffective even when if it is well implemented. This form of assessment involves asking a panel of expert reviewers to critically review the logic and plausibility of the assumptions and expectations inherent in the program's design. The review process is unstructured and open ended so as to address certain issues on the program design. Rutman (1980), Smith (1989), and Wholly (1994) suggested

10829-404: The objectives with least costs, dynamic efficiency concerns continuous improvement. Perhaps the most difficult part of evaluation is determining whether the program itself is causing the changes that are observed in the population it was aimed at. Events or processes outside of the program may be the real cause of the observed outcome (or the real prevention of the anticipated outcome). Causation

10948-572: The observed outcome of increased employment, not the job training program. Evaluations conducted with random assignment are able to make stronger inferences about causation. Randomly assigning people to participate or to not participate in the program, reduces or eliminates self-selection bias . Thus, the group of people who participate would likely be more comparable to the group who did not participate. However, since most programs cannot use random assignment, causation cannot be determined. Impact analysis can still provide useful information. For example,

11067-404: The outcomes of the program can be described. Thus the evaluation can describe that people who participated in the program were more likely to experience a given outcome than people who did not participate. If the program is fairly large, and there are enough data, statistical analysis can be used to make a reasonable case for the program by showing, for example, that other causes are unlikely. It

11186-411: The people that will benefit from the project before the program can be developed and implemented. Hence it should be a bottom-up approach. In this way potential problems can be realized early because the process would have involved the community in identifying the need and thereby allowed the opportunity to identify potential barriers. The important task of a program evaluator is thus to: First, construct

11305-437: The plausible definitions of actors involved in feasibility. If the feasibility dimension is compromised, it will put the implementation at risk, which will entail additional costs. Finally, implementation dimensions collectively influence a policy's ability to produce results or impacts. One model of policy analysis is the "five-E approach", which consists of examining a policy in terms of: Policies are viewed as frameworks with

11424-401: The policy across a period of time. Also collectively known as "Durability" of the policy, which means the capacity in content of the policy to produce visible Effects Implementation The strategic effects dimensions can pose certain limitations due to data collection. However the analytical dimensions of effects directly influences acceptability. The degree of acceptability is based upon

11543-559: The potential to optimize the general well-being. These are commonly analyzed by legislative bodies and lobbyists. Every policy analysis is intended to bring an evaluative outcome. A systemic policy analysis is meant for in depth study for addressing a social problem. Following are steps in a policy analysis: Many models exist to analyze the development and implementation of public policy . Analysts use these models to identify important aspects of policy, as well as explain and predict policy and its consequences. Each of these models are based upon

11662-497: The potentially representative case. A case may also be chosen because of the inherent interest of the case or the circumstances surrounding it. Alternatively, it may be chosen because of researchers' in-depth local knowledge; where researchers have this local knowledge they are in a position to "soak and poke" as Richard Fenno put it, and thereby to offer reasoned lines of explanation based on this rich knowledge of setting and circumstances. Beyond decisions about case selection and

11781-428: The prior correct implementation of other elements, and will fail if the prior implementation was not done correctly. This was conclusively demonstrated by Gene V. Glass and many others during the 1980s. Since incorrect or ineffective implementation will produce the same kind of neutral or negative results that would be produced by correct implementation of a poor innovation, it is essential that evaluation research assess

11900-399: The program and the target group. Evaluations can help estimate what effects will be produced by program objectives/alternatives. However, claims of causality can only be made with randomized control trials in which the policy change is applied to one group and not applied to a control group and individuals are randomly assigned to these groups. To obtain compliance of the actors involved,

12019-425: The program is actually supposed to do and how it is supposed to do it, which is often lacking (see Participatory impact pathways analysis ). Of course, it is also possible that during the process of trying to elicit the logic model behind a program the evaluators may discover that such a model is either incompletely developed, internally contradictory, or (in worst cases) essentially nonexisistent. This decidedly limits

12138-478: The program is effective or not. It further helps to clarify understanding of a program. But the most important reason for undertaking the effort is to understand the impacts of the work on the people being served. With the information collected, it can be determined which activities to continue and build upon, and which need to be changed in order to improve the effectiveness of the program. This can involve using sophisticated statistical techniques in order to measure

12257-463: The program", "identify and engage partners", and "determine timelines and available resources", while otherwise retaining the model. There is sometimes a need for policy assessment to be conducted at speed, using rapid evaluation and assessment methods (REAM). Characteristics of REAM include setting clear and targeted objectives at the start of a policy cycle, participation and interdisciplinary teamwork, simultaneous collection and analysis of data, and

12376-465: The progress, as well as continued developmental evaluation to explore new elements as they emerge. Formative evaluation involves "careful monitoring of processes in order to respond to emergent properties and any unexpected outcomes." Recommended evaluation approach: Summative evaluation "uses both quantitative and qualitative methods in order to get a better understanding of what [the] project has achieved, and how or why this has occurred." Planning

12495-434: The questions listed below to assist with the review process. This form of assessment requires gaining information from research literature and existing practices to assess various components of the program theory. The evaluator can assess whether the program theory is congruent with research evidence and practical experiences of programs with similar concepts. This approach involves incorporating firsthand observations into

12614-420: The research objectives), Alexander George and Andrew Bennett added a sixth category: Aaron Rapport reformulated "least-likely" and "most-likely" case selection strategies into the "countervailing conditions" case selection strategy. The countervailing conditions case selection strategy has three components: In terms of case selection, Gary King , Robert Keohane , and Sidney Verba warn against "selecting on

12733-564: The research. Barbara Geddes shares their concerns with selecting the dependent variable (she argues that it cannot be used for theory testing purposes), but she argues that selecting on the dependent variable can be useful for theory creation and theory modification. King, Keohane, and Verba argue that there is no methodological problem in selecting the explanatory variable , however. They do warn about multicollinearity (choosing two or more explanatory variables that perfectly correlate with each other). Case studies have commonly been seen as

12852-407: The role and influence of stakeholders within the policy process. Stakeholders is defined broadly to include citizens, community groups, non-governmental organizations, businesses and even opposing political parties. By changing the relative power and influence of certain groups (e.g., enhancing public participation and consultation), solutions to problems may be identified that have more "buy in" from

12971-465: The same logics of causal inference can be used in both types of research. The authors' recommendation is to increase the number of observations (a recommendation that Barbara Geddes also makes in Paradigms and Sand Castles ), because few observations make it harder to estimate multiple causal effects, as well as increase the risk that there is measurement error , and that an event in a single case

13090-543: The scope conditions of a theory: whether variables are sufficient or necessary to bring about an outcome. Qualitative research may be necessary to determine whether a treatment is as-if random or not. As a consequence, good quantitative observational research often entails a qualitative component. Designing Social Inquiry (also called "KKV"), an influential 1994 book written by Gary King , Robert Keohane , and Sidney Verba , primarily applies lessons from regression-oriented analysis to qualitative research, arguing that

13209-483: The social conditions that a program is expected to have changed. Program outcomes are the observed characteristics of the target population or social conditions, not of the program. Thus the concept of an outcome does not necessarily mean that the program targets have actually changed or that the program has caused them to change in any way. There are two kinds of outcomes, namely outcome level and outcome change, also associated with program effect. Outcome measurement

13328-461: The staged reporting of findings. These require front-loaded effort: consulting with funders and achieving buy-in from informants who will face competing demands during implementation phases. They also blur the distinction between evaluation and implementation, as interim findings are used to adapt and improve processes. Rapid methods can be used when there is a short policy cycle . For instance, they are often used in international development to assess

13447-519: The study is to be single or multiple, and choices also about whether the study is to be retrospective, snapshot or diachronic, and whether it is nested, parallel or sequential. In a 2015 article, John Gerring and Jason Seawright list seven case selection strategies: For theoretical discovery, Jason Seawright recommends using deviant cases or extreme cases that have an extreme value on the X variable. Arend Lijphart , and Harry Eckstein identified five types of case study research designs (depending on

13566-409: The subject and object of the study, decisions need to be made about the purpose, approach, and process of the case study. Gary Thomas thus proposes a typology for the case study wherein purposes are first identified (evaluative or exploratory), then approaches are delineated (theory-testing, theory-building, or illustrative), then processes are decided upon, with a principal choice being between whether

13685-408: The task. Third, define and identify the target of interventions and accurately describe the nature of the service needs of that population It is important to know what/who the target population is/are – it might be individuals, groups, communities, etc. There are three units of the population: population at risk, population in need and population in demand Being able to specify what/who the target

13804-406: The thinking procedure implied by the model which is linear and can face difficulties in extraordinary problems or social problems which have no sequences of happenings. The rational planning model of decision-making is a process for making sound decisions in policy-making in the public sector. Rationality is defined as “a style of behavior that is appropriate to the achievement of given goals, within

13923-813: The types of policies. Public policy is determined by a range of political institutions, which give policy legitimacy to policy measures. In general, the government applies policy to all citizens and monopolizes the use of force in applying or implementing policy (through government control of law enforcement , court systems, imprisonment and armed forces ). The legislature , executive and judicial branches of government are examples of institutions that give policy legitimacy. Many countries also have independent, quasi-independent or arm's length bodies which, while funded by government, are independent from elected officials and political leaders. These organizations may include government commissions , tribunals , regulatory agencies and electoral commissions. Policy creation

14042-577: The unintentional comparison of dissimilar cases). Case studies add descriptive richness, and can have greater internal validity than quantitative studies. Case studies are suited to explain outcomes in individual cases, which is something that quantitative methods are less equipped to do. Case studies have been characterized as useful to assess the plausibility of arguments that explain empirical regularities. Case studies are also useful for understanding outliers or deviant cases. Through fine-gained knowledge and description, case studies can fully specify

was caused by random error or unobservable factors. KKV sees process-tracing and qualitative research as being "unable to yield strong causal inference" because qualitative scholars would struggle with determining which of many intervening variables truly links the independent variable with a dependent variable. The primary problem is that qualitative research lacks a sufficient number of observations to properly estimate
