Meta-analysis is a combined, collective effort to analyze the results of several independent studies in order to answer a larger and wider range of hypotheses. The term has been in use for several decades in clinical research, where treatment efficacy and adverse effects often cannot be established from a single trial. Even when an experiment is statistically significant, proving its biological significance takes considerably more.

Clinical significance means demonstrating that a drug has higher efficacy, better safety parameters and lower risk. A trend might be visible in a handful of patients, but a larger sample size is needed to achieve statistical significance. As new adverse effects are discovered, trials must be conducted more rigorously to establish biological efficacy beyond doubt. Hence meta-analysis studies, which incorporate the results of several independent research projects, are gaining popularity.

The term was coined by Gene Glass in 1976. Meta-analysis enables the researcher to review multiple studies together: it is, in effect, a cross-sectional study in which the subjects are not individuals but an array of studies. A review becomes a meta-analysis when it quantifies the magnitude of the effect and its confidence limits.

Purpose
1. Obtain a more reliable estimate of the magnitude of effects.
2. Increase precision.
3. Improve statistical significance.

Main factors analyzed

1. Weighting factor:
The weighting factor is 1/(SE)² in most meta-analyses, where the standard error (SE) is the expected variation in the estimated effect if the study were repeated. When the other parameters of the studies are equal, this reduces to differences in sample size between studies. The weighting also accounts for measurement error in the different studies. This step is substantial, involving the identification and quantification of confounding variables in each individual study.
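As an illustration of this weighting, here is a minimal Python sketch that pools hypothetical effect sizes using inverse-variance (1/(SE)²) weights and derives confidence limits for the pooled effect; all study values are invented for demonstration.

```python
# Inverse-variance (fixed-effect) pooling; all values are hypothetical.
import math

effects = [0.42, 0.35, 0.50, 0.28]     # effect size reported by each study
std_errors = [0.10, 0.15, 0.08, 0.20]  # standard error of each effect

weights = [1 / se**2 for se in std_errors]  # weight = 1 / (SE)^2
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))     # SE of the pooled effect

# 95% confidence limits under a normal approximation
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.3f}, 95% CI = ({low:.3f}, {high:.3f})")
```

Studies with smaller standard errors (typically larger samples) dominate the pooled estimate, which is exactly the behavior the weighting factor is meant to produce.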

2. Q statistic:
In traditional approaches, heterogeneity between studies is assessed with the Q statistic, with the significance level set to the relatively lenient value of 0.10 because the test has low power. When p > 0.10, the effects are considered homogeneous. The value is obtained after excluding any outliers among the studies. The resulting data set is then treated as uniform, and the confidence limits are calculated using the weighted-mean approach and other statistical tests.

The disadvantage is that this approach assumes uniformity and does not account for outliers or confounders, which can seriously bias the results. The conclusions it supports are therefore statistically weaker.
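For concreteness, here is a minimal sketch of Cochran's Q test, reusing the hypothetical values from the weighting example above and assuming SciPy is available for the chi-square p value.

```python
# Cochran's Q test for heterogeneity; all values are hypothetical.
from scipy.stats import chi2

effects = [0.42, 0.35, 0.50, 0.28]
std_errors = [0.10, 0.15, 0.08, 0.20]

weights = [1 / se**2 for se in std_errors]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

k = len(effects)
Q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
p_value = chi2.sf(Q, df=k - 1)  # Q follows a chi-square distribution, k-1 df

# Conventional reading: p > 0.10 is taken as evidence of homogeneity
print(f"Q = {Q:.3f}, p = {p_value:.3f}, homogeneous: {p_value > 0.10}")
```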

3. Random effects:
The random-effects term is the standard deviation of the variation in effect magnitudes between studies. Models that include it assume that there are real differences between the studies. A realistic estimate of this standard deviation reflects how much the expected mean effect would vary if the experiment were repeated.
Such analyses are termed random-effects meta-analysis models, as opposed to fixed-effect models. The characteristics of the studies form the fixed effects, and these explain variation to a certain extent; however, the random-effects approach requires a larger number of studies than a conventional fixed-effect analysis.
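One standard way to estimate this between-study variance is the DerSimonian-Laird method; the sketch below applies it to the same hypothetical values and recomputes the pooled estimate with random-effects weights. It is offered as an illustration of the idea, not as the only available estimator.

```python
# DerSimonian-Laird estimate of between-study variance (tau^2);
# all values are hypothetical.
effects = [0.42, 0.35, 0.50, 0.28]
std_errors = [0.10, 0.15, 0.08, 0.20]

w = [1 / se**2 for se in std_errors]  # fixed-effect weights
pooled_fe = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
Q = sum(wi * (e - pooled_fe) ** 2 for wi, e in zip(w, effects))

k = len(effects)
C = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (k - 1)) / C)    # truncated at zero by convention

# Random-effects weights add tau^2 to each study's sampling variance,
# so small studies count for relatively more than under the fixed model.
w_re = [1 / (se**2 + tau2) for se in std_errors]
pooled_re = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
print(f"tau^2 = {tau2:.4f}, random-effects pooled effect = {pooled_re:.3f}")
```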

Limitations of meta-analysis and how to resolve them

1. Most meta-analyses focus on the mean values reported by the individual studies, which seldom explain the effect on individuals. This can be addressed by including the mean values of each individual response as covariates in the analysis.

2. Confounding variables pose a serious risk of introducing bias and error into the analysis, especially when a confounding variable was not identified or measured in a given study. Little can be done to eliminate such errors outright. If the individual-level data are available, modified randomization strategies can be used to minimize these effects. If a confounding variable that significantly affects the result is present but was not measured, and the individual values are not available, it is better to exclude that study from the meta-analysis.

3. There is a high risk of publication bias, since the available data tend to over-represent significant p values. One common check is sketched below.
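A widely used diagnostic is Egger's regression test for funnel-plot asymmetry: standardized effects are regressed on precision, and an intercept far from zero (relative to its standard error) suggests asymmetry consistent with publication bias. The sketch below uses hypothetical values and SciPy's linregress.

```python
# Egger's regression test for funnel-plot asymmetry; values are hypothetical.
from scipy.stats import linregress

effects = [0.42, 0.35, 0.50, 0.28, 0.61]
std_errors = [0.10, 0.15, 0.08, 0.20, 0.25]

precision = [1 / se for se in std_errors]
standardized = [e / se for e, se in zip(effects, std_errors)]

res = linregress(precision, standardized)
# An intercept several standard errors away from zero hints at bias
print(f"intercept = {res.intercept:.3f} (SE = {res.intercept_stderr:.3f})")
```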

4. The studies may have been conducted with different units of measurement, so their results must be converted to a common scale before they can be combined. Most meta-analyses therefore use generic dimensionless measures, including the correlation coefficient, the odds ratio, binary comparative outcomes (relative risk and relative difference), Cohen's d and percent change in mean expression values. An example conversion is sketched below.
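As an example of such a conversion, the sketch below computes Cohen's d, the standardized mean difference, from hypothetical treatment and control summaries reported on a study's own raw scale.

```python
# Cohen's d from raw group summaries; all inputs are hypothetical.
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Treatment vs. control summaries on the original measurement scale
print(f"d = {cohens_d(10.2, 2.1, 30, 8.9, 2.4, 32):.3f}")
```

Because d is dimensionless, values computed from studies that measured the outcome in different units can be pooled directly.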

The process requires more rigorous methods and procedures than a single study. The approach is especially useful in biological research, given the huge number of databases and the need to handle large volumes of data to identify significant results. Major areas where it has been implemented include microarray data analysis, clinical research, DNA sequence analysis and epidemiological studies.
The meta-analysis process, and how to carry out such an analysis in biological research, are explained in the next part of this article.
