To evaluate a program, in a broad sense, means to compare indicators of interest before and after the program and to draw conclusions about its effectiveness from that comparison. Such a comparison, however, requires a large array of reliable, comparable data for at least two periods; without these data the results of the program will remain undetected. The most obvious way to gather the necessary data is through surveys.
Content
- 1 Basic principles of the survey
- 2 Study Design
- 3 Experimental Design
- 3.1 Randomized controlled trial design
- 3.2 Limitations of using experimental designs
- 4 Quasi-experimental design
- 4.1 Design of nonequivalent groups
- 4.2 Design of nonequivalent groups with an assessment before and after the intervention
- 4.3 One-to-One Design
- 4.4 Time Series Design
A poll is a method of collecting information directly from respondents' own statements.
One of the main problems of program evaluation is obtaining high-quality, reliable data, the collection of which can be very costly in both time and money. To ensure that the effort is not wasted, it is especially important to follow several basic principles when conducting a survey.
Basic principles of the survey
- The survey should be carried out on a representative sample: the respondents should include representatives of all major groups affected by the program, so that the data obtained are reliable and correspond to reality.
- The size of the sample must be carefully considered: given limited resources, the survey should be scaled so that it stays within budget yet still yields a sufficient amount of information (a sample-size sketch follows this list).
- The questionnaire should be practically useful: the survey should produce information that matters for evaluating the program, so questions must be designed so that respondents can give meaningful answers and the researcher obtains valuable information.
- The answers should be reliable and unbiased: respondents should have no incentive to distort information during the survey. Carefully worded questions help avoid response bias and thereby yield reliable data.
- Before the survey, the data already available should be studied: the information of interest may already have been collected by non-governmental organizations or government bodies. This significantly expands the available knowledge about the issue under study and helps substantially reduce the researcher's own data-collection costs.
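As an illustration of the sample-size principle, the minimal Python sketch below applies the standard formula n = z²·p·(1−p)/e² for estimating a population proportion. The 95% confidence level (z = 1.96) and the conservative assumption p = 0.5 are illustrative choices, not requirements of the method.

```python
import math

def sample_size_for_proportion(margin_of_error: float,
                               z: float = 1.96,
                               p: float = 0.5) -> int:
    """Minimum sample size to estimate a proportion within a given margin.

    Uses the standard formula n = z^2 * p * (1 - p) / e^2; p = 0.5 is the
    most conservative (largest-sample) assumption when the true share is unknown.
    """
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n)

# Example: a +/-5 percentage-point margin at 95% confidence needs ~385 respondents.
print(sample_size_for_proportion(0.05))  # 385
```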
However, obtaining truly high-quality and relevant information requires more than following these basic principles. Who needs to be interviewed to obtain comparable data on the effectiveness of the program? When should this be done? The answers to these questions are determined by the study design.
Study Design
Research design is the form of a study that specifies, among other things, the method for collecting comparable data to assess the effectiveness of the program. Because it governs how comparative data are used to interpret the program's effect, it is the research design that determines whether the identified changes are a consequence of the program being evaluated or the effect of external variables.
Among the various types of research design, two main categories can be distinguished: true-experimental design and quasi-experimental design.
Experimental Design
Experimental design is a research method in which the objects of study (students, teachers, retirees - that is, the target audience of the program) are randomly divided into two groups: a group exposed to the program and a control group that serves as the basis for comparison. One of the most commonly used experimental designs is the randomized controlled trial.
Randomized controlled trial design
A randomized controlled trial (RCT) design is a form of experimental study in which the effects of one or more interventions are evaluated by randomly assigning objects to experimental and control groups. Random assignment means that each object has the same chance of ending up in the program. The experimental group is exposed to the program, while the control group is not and serves as the basis for comparison. After the program has been implemented, researchers survey both groups and assess how significant the changes in the experimental group are compared with the control group.
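The following minimal Python sketch illustrates the logic of an RCT: subjects are assigned to groups purely at random, and the estimated effect is the difference in mean survey outcomes between the groups. The subject list, the outcome model, and the assumed 5-point program effect are all hypothetical.

```python
import random
import statistics

def randomized_assignment(subjects, seed=0):
    """Randomly split subjects into equal-sized treatment and control groups."""
    rng = random.Random(seed)
    shuffled = list(subjects)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical subject ids and simulated post-program survey scores.
subjects = range(100)
treatment, control = randomized_assignment(subjects)

rng = random.Random(1)
treated = set(treatment)
# Assumed data-generating process: baseline 50, random noise, +5 points if treated.
outcome = {s: 50 + rng.gauss(0, 10) + (5 if s in treated else 0) for s in subjects}

effect = (statistics.mean(outcome[s] for s in treatment)
          - statistics.mean(outcome[s] for s in control))
print(f"Estimated program effect: {effect:.1f} points")
```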
Only random assignment gives confidence that the groups are truly comparable and that observed differences in results are not due to extraneous factors or pre-existing differences. For example, what conclusion can be drawn from the fact that the exposed group of students performed better than the control group if, before the program, the first group was taught by more qualified and creative teachers than the second? Is the observed difference between the groups the effect of the program, or of the differences that existed between them from the start? Only a random assignment of students to groups would isolate the effect of the program.
Limitations of using experimental designs
However, the use of experimental designs has its limitations. Experimental design is generally unsuitable for analyzing complex programs where, as in most policy programs, the results arise from the simultaneous interaction of several factors at once, which an experimental design in most cases cannot capture.
Problems arise because the researcher cannot eliminate the effects of all possible external factors, and sometimes doing so is not even desirable. In reality, randomly assigning students is rarely plausible, and the goal of any program is to influence the distribution of objects as it actually exists across different territories. This means that the effectiveness of the program often needs to be assessed together with external factors. As a result, within a randomized design it is difficult to evaluate the cause-and-effect relationships between results and factors, and hence hard to determine the program's effectiveness and the means of improving it.
Quasi-experimental design
Quasi-experimental design is a research method in which the emphasis shifts from the probabilistic assignment and causal relationships of experimental designs to the analysis of interactions between variables. Quasi-experimental designs are commonly used in program evaluation when random assignment is not possible or practical. Despite their frequent use, however, quasi-experimental designs pose some problems of interpretation. Commonly used types include the various nonequivalent-group designs and time series designs.
Design of nonequivalent groups
The design of nonequivalent groups with assessment only after the intervention (posttest only) involves measuring results by surveying two nonequivalent groups, but only after the program has been implemented. For example, one group of students might receive foreign-language reading instruction covering both exercises and the rules of the course as a whole, while the other receives instruction in phonetics only; a test two weeks later would show which of the two programs was more effective. The main drawback, however, is the problem of interpreting the results: it is unclear whether one group's better reading results are a consequence of the program or whether the groups simply differed from the start in their aptitude for foreign languages.
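This interpretation problem can be illustrated with a small simulation: if one group happens to start with higher aptitude (here an assumed 4-point baseline gap), a posttest-only comparison overstates the program's true effect. All numbers below are hypothetical.

```python
import random
import statistics

rng = random.Random(42)

# Assumed pre-existing aptitude gap: group A starts ~4 points stronger.
baseline_a = [60 + rng.gauss(0, 8) for _ in range(50)]
baseline_b = [56 + rng.gauss(0, 8) for _ in range(50)]

TRUE_EFFECT = 3  # assumed effect of the fuller reading course
posttest_a = [b + TRUE_EFFECT + rng.gauss(0, 3) for b in baseline_a]
posttest_b = [b + rng.gauss(0, 3) for b in baseline_b]

# The posttest-only comparison mixes the program effect with the baseline gap.
diff = statistics.mean(posttest_a) - statistics.mean(posttest_b)
print(f"Posttest-only difference: {diff:.1f} (true effect: {TRUE_EFFECT})")
```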
Design of nonequivalent groups with an assessment before and after the intervention
The design of nonequivalent groups with an assessment before and after the intervention (pretest-posttest) partially eliminates the main drawback of the previous, posttest-only design. Within this design, the researcher empirically assesses the differences between the two groups at the very beginning of the experiment, that is, before the program. Thus, if the researcher finds after the program that one of the groups showed better results, he can rule out the influence of initial differences in favor of that group (if none were found) or, conversely, draw conclusions about the influence of this factor together with the impact of the program.
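One common way to use both measurements is a difference-in-differences style estimate: the change in the program group minus the change in the comparison group, which nets out any stable initial gap between the groups. The sketch below uses hypothetical scores; this analysis is a standard illustration, not something prescribed by the design itself.

```python
import statistics

def did_estimate(pre_t, post_t, pre_c, post_c):
    """Difference-in-differences: change in the program group minus
    change in the comparison group, netting out stable baseline gaps."""
    change_t = statistics.mean(post_t) - statistics.mean(pre_t)
    change_c = statistics.mean(post_c) - statistics.mean(pre_c)
    return change_t - change_c

# Hypothetical pre/post survey scores for the two groups.
pre_treat, post_treat = [62, 58, 65, 60], [70, 66, 72, 69]
pre_ctrl, post_ctrl = [55, 59, 57, 61], [57, 61, 58, 63]
print(f"Estimated effect: {did_estimate(pre_treat, post_treat, pre_ctrl, post_ctrl):.2f}")
```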
One-to-One Design
A distinctive feature of the "one-to-one matched comparison group design" is that both the experimental and the control group are selected after the program under study has been implemented. The experimental group is recruited from those who were exposed to the program; the basis for comparison includes those who chose not to participate in the program but who matched all the relevant characteristics and received an "invitation". This design therefore has the advantage over the others that the comparison is made between two groups of people that are, ideally, identical: as if comparing before and after the program. In practice, however, such a control group turns out to be very difficult to find, since some external factor always intervenes.
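A greedy nearest-neighbor match is one simple way to construct such a comparison group. The sketch below pairs each participant with the closest non-participant on age and a baseline score; the records, the matching variables, and the squared-distance measure are all illustrative assumptions.

```python
def match_one_to_one(participants, candidates, keys):
    """Pair each participant with the closest non-participant on the
    given numeric characteristics (greedy nearest-neighbor, no reuse)."""
    pool = list(candidates)
    pairs = []
    for p in participants:
        best = min(pool, key=lambda c: sum((p[k] - c[k]) ** 2 for k in keys))
        pairs.append((p, best))
        pool.remove(best)  # each non-participant is matched at most once
    return pairs

# Hypothetical records: matching on age and a baseline test score.
took_part = [{"id": 1, "age": 34, "score": 60}, {"id": 2, "age": 45, "score": 52}]
declined = [{"id": 7, "age": 33, "score": 61}, {"id": 8, "age": 50, "score": 50},
            {"id": 9, "age": 44, "score": 53}]
for p, c in match_one_to_one(took_part, declined, keys=("age", "score")):
    print(f"participant {p['id']} matched with non-participant {c['id']}")
```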
Time Series Design
Time series design involves repeatedly assessing ongoing changes in two groups - control and experimental - both before the program and during its implementation. A series of observations of the two groups provides comprehensive information on gradual changes under the influence of the program, which makes this design the most sensitive to detecting the overall trend of change. Despite these advantages, however, time series design is subject, albeit to a lesser extent, to all the disadvantages and limitations of quasi-experimental designs.
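The sketch below illustrates the idea on hypothetical survey waves: each group's pre-program trend is compared with its trend during the program, so a change in trend in the experimental group stands out against a stable control group. The wave scores and the program start point are assumptions.

```python
import statistics

# Hypothetical survey waves: three before the program, three during it.
waves_experimental = [50, 51, 50, 55, 58, 61]  # program starts at wave 4
waves_control      = [49, 50, 50, 50, 51, 51]

def mean_step(series):
    """Average change between consecutive survey waves."""
    return statistics.mean(b - a for a, b in zip(series, series[1:]))

pre, post = slice(0, 3), slice(3, 6)
for name, series in (("experimental", waves_experimental),
                     ("control", waves_control)):
    print(name,
          f"pre-program trend: {mean_step(series[pre]):+.1f}/wave,",
          f"during program: {mean_step(series[post]):+.1f}/wave")
```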