1. Find an interesting effect and formulate an appropriate hypothesis.

2. Conduct literature review.

3. Select the relevant papers for analysis. As you do this, narrow the focus, possibly down to a single criterion, if the analysis is meant to validate a single cause-effect relation or the like; broaden the scope if you need to establish statistical significance. For example, the experimental design criterion can be restricted to randomized experiments only. The same can be done to broaden or restrict other factors such as population, outcomes, etc.

4. Note down the effects. In some experiments this may be a single factor, while in others it may be broadened to include several factors as covariates in the analysis.

5. These effects should be converted to a common scale. Generic, dimensionless measures are preferred in the analysis. These include standardized changes in mean value, percent or factor changes in mean values, correlation coefficients, and relative frequency values.
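As a sketch of one such dimensionless measure, the standardized change in mean value (Cohen's d) divides the difference in group means by the pooled standard deviation, so studies reporting on different raw scales become comparable. The numbers below are illustrative, not taken from any real study:

```python
import math

def cohens_d(mean_treat, mean_ctrl, sd_treat, sd_ctrl, n_treat, n_ctrl):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n_treat - 1) * sd_treat**2 + (n_ctrl - 1) * sd_ctrl**2)
        / (n_treat + n_ctrl - 2)
    )
    return (mean_treat - mean_ctrl) / pooled_sd

# Two hypothetical studies on different raw scales yield the same
# standardized effect, so they can be pooled directly.
d1 = cohens_d(10.0, 8.0, 4.0, 4.0, 30, 30)      # -> 0.5
d2 = cohens_d(105.0, 95.0, 20.0, 20.0, 25, 25)  # -> 0.5
```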

6. Determine the fixed effects. Fixed effects are characteristics of the studies that might account for some portion of the variation in effects between the data from the various studies. These can be treated as covariates in the analysis, e.g., duration of treatment, dosage, gender, quality score, age, etc.

For example, if the studies include both genders, treat the effects for each group independently. If the gender effects are not accurately reported in one or more studies, then consider one of the groups as a proportional representation of the total. This method can be applied to any variable of a dichotomous nature, i.e., one that takes one of two possible values, e.g., gender, diseased vs. normal, test vs. control, etc.

7. Some research designs employ a quality score. This is a checklist of characteristics of the study, such as:

• Published in a peer-reviewed journal

• Studies done on random samples

• Blinding employed to minimize bias

• Low dropout rates of subjects during the research

• Uniform data analysis methods

• No selection bias, etc.

Each item gets a mark such as present = 1, absent = 0 (the simplest scheme). The final aggregate score determines whether the experiment is selected for or excluded from the meta-analysis.
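A minimal sketch of such a checklist score, using the present = 1 / absent = 0 scheme described above. The item names and the inclusion threshold are illustrative assumptions, not a standard:

```python
# Hypothetical checklist items mirroring the bullets above.
CHECKLIST = [
    "peer_reviewed", "random_sample", "blinded",
    "low_dropout", "uniform_analysis", "no_selection_bias",
]

def quality_score(study):
    """Sum of present(1)/absent(0) marks over the checklist items."""
    return sum(1 for item in CHECKLIST if study.get(item, False))

def include(study, threshold=4):
    """Select the study if its aggregate score meets an assumed cut-off."""
    return quality_score(study) >= threshold

study = {"peer_reviewed": True, "random_sample": True,
         "blinded": False, "low_dropout": True,
         "uniform_analysis": True, "no_selection_bias": False}
# quality_score(study) -> 4, so include(study) is True
```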

8. Calculate the weighting factor for the effects using:

• Confidence interval

• Confidence limits

• Test statistics such as t, F, chi-square, etc. F ratios whose numerator degrees of freedom exceed 1 are not used for analysis.

• p value. The p value threshold is set to 0.05 in most cases. Studies with a threshold set higher than that should be analyzed by individual parameters.

• Standard deviation (SD) of scores. This is particularly useful for controlled trials where ideal randomization procedures are employed.

• Sometimes the details listed above are not accurately reported in the study. In such cases, sample size can be used as the weighting factor.
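The most common weighting scheme derived from these quantities is inverse-variance weighting: each effect is weighted by the reciprocal of its squared standard error, so precise studies count more. A minimal sketch with made-up effect sizes and standard errors:

```python
def inverse_variance_weight(se):
    """Weight an effect by 1 / SE^2, the inverse of its sampling variance."""
    return 1.0 / se**2

def weighted_mean_effect(effects, weights):
    """Weighted average of the study effects (fixed-effect pooled estimate)."""
    return sum(e * w for e, w in zip(effects, weights)) / sum(weights)

# Illustrative data: three studies, the last with the largest standard error.
effects = [0.50, 0.30, 0.80]
ses     = [0.10, 0.20, 0.40]
weights = [inverse_variance_weight(se) for se in ses]   # [100.0, 25.0, 6.25]
pooled  = weighted_mean_effect(effects, weights)
```

Note how the imprecise third study (SE = 0.40) receives only a small weight, pulling the pooled estimate toward the more precise studies.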

9. Perform the meta-analysis, preferably with a mixed model.

• Calculate the confidence limits for the variables involved in the study.

• Analyze and interpret the results.

• Calculate the probability that the mean effect is significant.

• The magnitude of variation of the mean effect between studies is the random variation of effects between the individual pieces of research.

• Controlled trials usually have numerous variables. An SD of the mean effects of such individual responses should be considered for the meta-analysis. This is ideal, though it requires extensive statistical work.

• A funnel plot of standard error vs. magnitude of outcome can be used to detect publication bias. The funnel plot is also useful for finding outliers; for this, residuals are plotted.
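The random variation of effects between studies mentioned above is what a random-effects model estimates as a between-study variance (often written tau-squared). As one illustrative sketch, the DerSimonian-Laird method adds this variance to each study's own variance before pooling; the data here are invented:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird method."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q measures observed heterogeneity around the fixed estimate.
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)          # between-study variance estimate
    # Re-weight with the between-study variance added to each study variance.
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)   # approximate 95% limits
    return pooled, se, ci

pooled, se, ci = dersimonian_laird([0.4, 0.6], [0.04, 0.04])
```

When the studies agree closely, tau-squared collapses to zero and the result matches the fixed-effect estimate; heterogeneous studies widen the confidence interval instead.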

**Issues in Meta-analysis**

There are many issues in conducting a meta-analysis study. Some of the common ones are:

• Difficulty in establishing the selection criteria for studies

• Identification of relevant parameters and outcomes

• Publication bias

• Problems with different methods of data presentation

• Variations in measurement scales, methods, statistics, etc.

• Deciding the procedures for combining and analyzing data

• Estimating the heterogeneity of studies

**Data Types and Outcomes Used**


For continuous data, the difference in mean expression values is taken for combined analysis. When the data are dichotomous, the odds ratio and relative frequency measures such as the risk ratio and risk difference are used. The hazard ratio is used in survival studies. A pooled estimate of variance is used to standardize the effect magnitudes within the same group. When the outcomes are continuous and skewed, it is advisable to transform the data, for example to logarithmic values.
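For the dichotomous case, the odds ratio and risk ratio come from a 2x2 table of events and non-events in the treatment and control groups. A minimal sketch with invented counts, including the log transform commonly applied before pooling ratio measures:

```python
import math

def odds_ratio(a, b, c, d):
    """2x2 table: a, b = events/non-events (treatment); c, d = (control)."""
    return (a * d) / (b * c)

def risk_ratio(a, b, c, d):
    """Ratio of the event risks in the treatment and control groups."""
    return (a / (a + b)) / (c / (c + d))

# Illustrative counts: 10/100 events on treatment, 20/100 on control.
a, b, c, d = 10, 90, 20, 80
or_val = odds_ratio(a, b, c, d)   # -> 0.444...
rr_val = risk_ratio(a, b, c, d)   # -> 0.5
# Ratio measures are skewed, so they are usually pooled on the log scale.
log_or = math.log(or_val)
```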

A meta-analysis, if done systematically and efficiently, gives the best results, adding statistical significance to the findings and effectively identifying the causes of outliers.
