- The researcher maintains control over multiple interfering environmental factors and studies the impact of only one of them (the independent variable) on the behavior (the dependent variable) of the system under study. Experimental design is the type of research activity typically implemented in the lab, where controlling independent variables is feasible.
- Experimental group (also called 'treatment' group) denotes the group where the independent factor (variable) is allowed to have an impact on participants (for example, patients given an experimental drug or students learning with an innovative method/technology).
- Control group refers to the group where no special treatment is applied (for example, patients are given a placebo drug, or students learn as usual without any innovation). As noted, a control group is necessary for comparison purposes, to better understand the impact of the studied factor on the experimental group.
Technically, the experimental design with randomized groups is denoted as above. 'R' signifies that the groups are randomized, 'X' symbolizes the independent factor having an impact only on the experimental group, and 'O' refers to the observations the researcher makes during experimentation.
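To make the 'R' step concrete, the random assignment of participants to the experimental and control groups can be sketched as follows (a minimal illustration, not part of the original text; participant IDs and the even split are assumptions):

```python
import random

def randomize_groups(participants, seed=None):
    """Randomly split participants into experimental and control groups ('R').

    Shuffling before splitting gives every participant an equal chance
    of landing in either group."""
    rng = random.Random(seed)          # seeded for reproducibility
    shuffled = participants[:]         # copy, so the input is not mutated
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]   # (experimental, control)

# Hypothetical example: 20 participants identified by numbers 1..20
experimental, control = randomize_groups(list(range(1, 21)), seed=42)
```

After this step, the treatment 'X' would be applied only to `experimental`, and observations 'O' would be collected from both groups.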
Especially helpful when sampling a population are the following classifications:
- Random sampling: a sample selected completely at random from the population. It is considered the simplest unbiased approach to obtaining a sample.
- Stratified sampling (or 'proportional' sampling): the population is divided into subgroups (for example, boys and girls in a student population), and the sample is arranged so that the proportions of the subgroups match those in the original population.
- Convenience sampling: when it is convenient to use one or more intact groups of participants as sample groups (for example, two student classes used as treatment and control groups). In this case we talk about a 'quasi-experimental' design ('quasi-experimental' is a term primarily denoting the lack of random sampling).
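The first two sampling schemes above can be sketched in a few lines of Python (a hedged illustration; the student population, the 60/40 split, and the helper names are assumptions, not from the original text):

```python
import random

def simple_random_sample(population, n, seed=None):
    """Simple random sample: every member has an equal chance of selection."""
    return random.Random(seed).sample(population, n)

def stratified_sample(population, strata_key, n, seed=None):
    """Stratified (proportional) sample: each subgroup appears in the
    sample in the same proportion as in the population."""
    rng = random.Random(seed)
    strata = {}
    for member in population:                       # group members by stratum
        strata.setdefault(strata_key(member), []).append(member)
    sample = []
    for members in strata.values():                 # sample each stratum
        k = round(n * len(members) / len(population))
        sample.extend(rng.sample(members, k))
    return sample

# Hypothetical population: 60 girls ('G') and 40 boys ('B')
students = [('G', i) for i in range(60)] + [('B', i) for i in range(40)]
sample = stratified_sample(students, strata_key=lambda s: s[0], n=10, seed=1)
# the sample keeps the 60/40 proportion: 6 girls and 4 boys
```

Convenience sampling needs no code: the "sampling" is simply taking the intact groups that happen to be available.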
Further reading: Probability sampling
Measuring is the process of using a measuring instrument to get numerical data (numbers) that represent the value of the measured construct. Thus, you need two things to do a measurement:
- A 'measured construct' is a conceptual construct (a concept) referring to the property that you aim to measure. For example, for a physicist to measure temperatures, she first needs to construct the relevant property (that is, temperature). Similarly, for an educator to measure students' learning performance, she first needs to clearly construct what is to be measured (for example, individual or group learning performance).
- A 'measuring instrument' is a tool (pre-existing or developed ad hoc) used for measuring the construct (property). A thermometer is an available instrument for measuring temperatures; a knowledge questionnaire might be an appropriate instrument for measuring learners' performance.
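As a minimal sketch of the questionnaire example, a measuring instrument could be implemented as a scoring function that turns a student's answers into a number representing the construct 'learning performance' (the answer key and scoring rule here are hypothetical assumptions, not from the original text):

```python
def score_questionnaire(answers, answer_key):
    """Hypothetical measuring instrument: a knowledge questionnaire scored
    as the percentage of correct answers (the numerical measurement)."""
    correct = sum(1 for a, k in zip(answers, answer_key) if a == k)
    return 100.0 * correct / len(answer_key)

answer_key = ['b', 'a', 'd', 'c', 'b']        # assumed correct answers
student_answers = ['b', 'a', 'c', 'c', 'b']   # one student's responses
performance = score_questionnaire(student_answers, answer_key)  # 80.0
```

The function plays the role of the thermometer: it maps the measured construct onto a numerical scale.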
Further reading on measurement
For example, measuring temperature reliably means that the measurement is consistent, without being affected by, say, the geographical latitude. Similarly, measuring learners' performance reliably would mean that the knowledge questionnaire provides consistent measurements as students' knowledge changes, without being affected by, say, the students' personal interpretations of the questions.
We will come back to reliability later to explain how to obtain an estimate of it from our data.
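As a preview of such an estimate, one widely used internal-consistency measure is Cronbach's alpha; a minimal pure-Python sketch (the item scores below are invented for illustration):

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha: an internal-consistency reliability estimate.

    item_scores: one row per respondent, one column per questionnaire item."""
    k = len(item_scores[0])                      # number of items
    n = len(item_scores)                         # number of respondents

    def variance(values):                        # sample variance
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / (len(values) - 1)

    item_vars = [variance([row[j] for row in item_scores]) for j in range(k)]
    total_var = variance([sum(row) for row in item_scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical scores of 4 students on a 3-item questionnaire
scores = [[2, 3, 3], [4, 4, 5], [1, 2, 2], [3, 3, 4]]
alpha = cronbach_alpha(scores)   # values close to 1 indicate high consistency
```

Higher alpha suggests the items measure the same underlying construct consistently.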
Further reading on reliability (wikipedia)
'Validity' generally refers to the quality of being correct and accepted by experts and authorities. In research design, 'validity' is an 'umbrella' term (a general one subsuming many specific validity 'flavors') used to describe whether various aspects of the design have been correctly constructed and accepted by experts.
Validity is not a measurable property but a quality that helps researchers critically analyze the very essence of a research design: the extent to which the design is truly appropriate for what the researchers claim it is.
Most often researchers talk about 'construct validity', a concept referring to whether the design implements appropriate operationalizations (for example, developing and using measuring instruments) that fit well to the theoretical underpinnings of the study.
Discussing study limitations relevant to construct validity is an indication of quality in a scientific paper. There are several issues pertaining to such a discussion, but the key question is always whether the practical implementation of the research design (the operationalizations) fits well (or not) with the theoretical framework.
As an example from education, we could mention the effort of measuring individual vs. group learning. Individual learning (what an individual student learned) is definitely clearer than group learning (what the group learned). The researcher should be cautious about the way she operationalizes the 'group learning' concept in order to measure it. Threats to construct validity may arise if the implementation does not correspond well to what the theory predicts or proposes.