
Experimental design

What is experimental design

  • Although the layman routinely uses the term "experiment" to collectively name various forms of research activity, in science 'experimental design' refers to a strict form of research, where:
    • The researcher maintains control over multiple interfering environmental factors and studies the impact of only one of them (the independent variable) on the behavior (the dependent variables) of the system under study. Experimental design is the type of research activity typically implemented in the lab, where controlling independent variables is feasible.
  • Experimental design is considered a 'must know to apply' method for a researcher, as it is widely applied in many domains (routinely used in the social and life sciences) and provides quantitative results that allow rigorous hypothesis testing under well-controlled conditions.

[Figure: Experimental design as a 'black box' approach]

Experimental vs. Control group

  • When applied in the social/life sciences, experimental design entails two or more groups of participants (for example, patients or students) who are treated differently for comparison purposes.
  • The simplest type of experimental design employs two groups: an experimental (treatment) group vs. a control group.
    • The experimental group (also called the 'treatment' group) denotes the group where the independent factor (variable) is allowed to have an impact on participants (for example, patients given an experimental drug or students learning with an innovative method/technology).
    • The control group refers to the group where no special treatment is applied (for example, patients are given a placebo drug or students learn as usual without any innovation). As said, the control group is necessary for comparison purposes, to better understand the impact of the studied factor on the experimental group.

R    X    O
R         O

The independent factor (variable) 'X' is allowed to have an impact only on the experimental group.

Technically, the experimental design with randomized groups is denoted as above. 'R' signifies that the groups are randomized, 'X' symbolizes the independent factor having an impact only on the experimental group, and 'O' refers to the observations the researcher makes during experimentation (a simulation sketch follows the citation note below).

Citation note: the above and some other relevant links below lead to: Trochim, William M., The Research Methods Knowledge Base, 2nd Edition. Internet WWW page.
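To make the notation concrete, here is a minimal simulation sketch (an illustration, not part of the cited source): participants are randomly assigned ('R'), the independent factor ('X') affects only the treatment group, and the observations ('O') of the two groups are compared. The group sizes, the +5 effect, and the use of SciPy's t-test are illustrative assumptions.

    import random
    import statistics
    from scipy import stats

    random.seed(42)

    participants = list(range(40))            # 40 hypothetical participants
    random.shuffle(participants)              # 'R': random assignment
    treatment_ids = set(participants[:20])    # half go to the treatment group

    def observe(pid):
        """'O': an observed score; 'X' adds an assumed +5 effect to treatment only."""
        baseline = random.gauss(50, 10)
        return baseline + (5 if pid in treatment_ids else 0)

    treatment = [observe(p) for p in participants if p in treatment_ids]
    control = [observe(p) for p in participants if p not in treatment_ids]

    print(f"treatment mean: {statistics.mean(treatment):.1f}")
    print(f"control mean:   {statistics.mean(control):.1f}")
    t, p = stats.ttest_ind(treatment, control)
    print(f"t = {t:.2f}, p = {p:.3f}")        # a small p suggests a real treatment effect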

Sampling

  • Sampling is the process of group formation by selecting member items from the population that we would like to investigate. In other words, sampling determines who will participate in which of the groups foreseen by the applied research design.
  • Sampling is of major importance in research design, since forming a truly representative sample determines whether research conclusions can be generalized to the entire population the sample comes from.
  • The following classifications are especially helpful when sampling a population (the first two are illustrated in the sketch at the end of this section):

    • Random sampling: a sample selected completely at random from the population. It is considered the simplest unbiased approach to obtaining a sample.
    • Stratified sampling (or 'proportional' sampling): when a population is divided into subgroups (for example, boys and girls in a student population), we arrange for the proportions of the subgroups in our stratified sample to match those in the original population.
    • Convenience sampling: when it is easy to use one or more intact groups of participants as sample groups (for example, two student classes used as treatment and control groups). In this case we talk about a 'quasi-experimental' design ('quasi-experimental' is a term primarily denoting the lack of random sampling).
  • Further reading: Probability sampling
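The following minimal sketch (an illustration, not from the source) contrasts random and stratified (proportional) sampling; the population and subgroup labels are invented for the example.

    import random

    random.seed(0)

    # Hypothetical population of 300 students: 100 girls, 200 boys.
    population = [{"id": i, "sex": "girl" if i % 3 == 0 else "boy"}
                  for i in range(300)]

    # Random sampling: every member has an equal chance of selection.
    random_sample = random.sample(population, 30)

    # Stratified (proportional) sampling: sample each subgroup separately,
    # keeping the population's subgroup proportions in the sample.
    def stratified_sample(pop, key, n):
        strata = {}
        for member in pop:
            strata.setdefault(member[key], []).append(member)
        sample = []
        for members in strata.values():
            share = round(n * len(members) / len(pop))   # proportional share
            sample.extend(random.sample(members, share))
        return sample

    stratified = stratified_sample(population, "sex", 30)
    print(sum(m["sex"] == "girl" for m in stratified), "girls of", len(stratified))  # 10 of 30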

Measuring

  • Measuring is the process of using a measuring instrument to obtain numerical data (numbers) that represent the value of the measured construct. Thus, you need two things to make a measurement (see the sketch at the end of this section):

    • A 'measured construct' is a conceptual construct (a concept) referring to the property that you aim to measure. For example, for a physicist to measure temperatures, she first needs to construct the relevant property (that is, temperature). Similarly, for an educator to measure students' learning performance, she first needs to clearly construct what is to be measured (for example, individual or group learning performance).
    • A 'measuring instrument' is a tool (pre-existing or developed ad hoc) used for measuring the construct (property). A thermometer is an available instrument for measuring temperatures; a knowledge questionnaire might be an appropriate instrument for measuring learners' performance.
  • Further reading on measurement
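As a minimal sketch of the two ingredients (an illustration, not from the source), the construct 'individual learning performance' can be operationalized as the percentage of correct answers on a knowledge questionnaire; the answer key and responses below are invented.

    # The instrument: a hypothetical 4-question knowledge questionnaire.
    ANSWER_KEY = {"q1": "b", "q2": "a", "q3": "d", "q4": "c"}

    def performance_score(responses):
        """The construct, operationalized: percentage of correct answers."""
        correct = sum(responses.get(q) == a for q, a in ANSWER_KEY.items())
        return 100 * correct / len(ANSWER_KEY)

    print(performance_score({"q1": "b", "q2": "a", "q3": "c", "q4": "c"}))  # 75.0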

Reliability

  • Reliability refers to how consistent or repeatable a measurement is: it should respond to changes (or stability) in the measured conditions without being affected by irrelevant factors.

    For example, measuring temperature reliably means that the measurement is consistent and not affected by, let's say, the geographical latitude. Similarly, measuring learners' performance reliably would mean that the knowledge questionnaire provides consistent measurements as students' knowledge changes, without being affected by, let's say, the students' personal interpretation of the questions.

  • We will come back to reliability later to explain how to get an estimate of it from our data; the sketch below previews one common estimate.

  • Further reading on reliability (wikipedia)
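As a preview, here is a minimal sketch (an illustration, not from the source) of one common reliability estimate, Cronbach's alpha, which measures the internal consistency of a multi-item instrument; the score matrix is invented.

    import statistics

    def cronbach_alpha(scores):
        """scores: one list per participant, one value per questionnaire item."""
        k = len(scores[0])                       # number of items
        items = list(zip(*scores))               # one column per item
        item_vars = sum(statistics.variance(col) for col in items)
        total_var = statistics.variance([sum(row) for row in scores])
        return (k / (k - 1)) * (1 - item_vars / total_var)

    # 5 hypothetical participants answering a 4-item questionnaire (0-5 scale)
    scores = [
        [4, 5, 4, 5],
        [2, 3, 2, 2],
        [5, 5, 4, 5],
        [1, 2, 1, 2],
        [3, 3, 3, 4],
    ]
    print(f"alpha = {cronbach_alpha(scores):.2f}")   # values near 1 = high consistency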

Validity

  • 'Validity' generally refers to the quality of being correct and accepted by experts and authorities. In research design, 'validity' is an 'umbrella' term (a general one subsuming many specific 'flavors' of validity) used to describe whether the various aspects of a design have been correctly constructed and are accepted by experts.

    Validity is not a measurable property but a quality that helps researchers critically analyze the very essence of a research design: the extent to which the design is truly appropriate for what the researchers claim it is.

  • Most often, researchers talk about 'construct validity', a concept referring to whether the design implements appropriate operationalizations (for example, developing and using measuring instruments) that fit well with the theoretical underpinnings of the study.

    Discussing study limitations relevant to construct validity is an indication of quality for a scientific paper. There are several issues pertaining to such a discussion, but the key idea is always whether the practical implementation of the research design (the operationalizations) fits well (or not) with the theoretical framework.
    As an example from education, we could mention the effort of measuring individual vs. group learning. Individual learning (what an individual student learned) is definitely clearer than group learning (what the group learned). The researcher should be cautious about the way she operationalizes the 'group learning' concept in order to measure it. Threats to construct validity may arise if the implementation does not correspond well to what the theory predicts or proposes.
