# Using Piface part 2/4

One-sample (or paired) t test
This test is usually conducted when you have only one group, or two groups in which the same subjects are used in both (paired measurements).

• Sigma – the standard deviation (SD); 1 sigma represents 1 SD, the value usually adopted in statistical analyses.
• mu – the true mean.
• |mu-mu_0| – the difference between the true mean and the hypothesized mean mu_0 (for a paired test, the mean change from start to end of the study). Any sign obtained from the calculation is ignored; only the magnitude matters. Scaled by sigma, this quantity is known as the effect size. In statistics, effect size is a measure of the strength of the relationship between two variables. In scientific experiments, it is often useful to know not only whether an experiment has a statistically significant effect, but also the size of any observed effect. In practical situations, effect sizes are helpful for making decisions, and they are the common currency of meta-analysis studies that summarize the findings from a specific area of research.
• n – suggested sample size.
• Power – the probability that the test will reject the null hypothesis (H0) when it is false. The higher the power, the higher the probability of correctly rejecting a false H0.
• Alpha – the significance level: the maximum acceptable probability of rejecting H0 when it is actually true (a Type I error). The p value from the test is compared against alpha.
• Two-tailed (un-select) – un-select this option (i.e. use a one-tailed test) only if you have a preconceived idea of the direction of the result, usually because a comprehensive study was done prior to the current research.
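The same inputs can be sketched as a quick calculation in Python using statsmodels; this is only an illustration, and the mean difference, alpha, and power below are hypothetical values, not ones from the article:

```python
# One-sample t test sample-size sketch (hypothetical numbers).
from math import ceil

from statsmodels.stats.power import TTestPower

sigma = 1.0          # assumed standard deviation (SD)
diff = 0.5           # |mu - mu_0|, the difference we want to detect
effect_size = diff / sigma  # standardized effect size (Cohen's d)

# Solve for n given effect size, alpha, and desired power.
n = TTestPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,              # significance level
    power=0.8,               # desired power
    alternative='two-sided',
)
print(ceil(n))               # suggested sample size, rounded up
```

Leaving a different argument as `None` (for example `power=None` with `nobs` given) makes `solve_power` report the power instead of the sample size, mirroring how Piface lets you vary any one slider.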

Two-sample t test (general case)
This test is usually conducted when you have two groups with completely different subjects in each group.

• Sigma – the standard deviation (SD); 1 sigma represents 1 SD, the value usually adopted in statistical analyses. sigma1 represents the SD for group 1, while sigma2 is the SD for group 2.
• Equal sigmas (checked) – check this option when both groups are assumed to have the same SD. This is the usual assumption for the standard two-sample t test, so the option is normally left checked.
• n – suggested sample size.
• Degrees of freedom – derived as (n1 – 1) + (n2 – 1). In statistics, degrees of freedom describes the number of values in the final calculation of a statistic that are free to vary.
• True difference of means – the difference between the two group means that you want to detect. Any sign obtained from the calculation is ignored; only the magnitude matters. Scaled by sigma, this is the effect size, as described for the one-sample test above.
• Power – the probability that the test will reject the null hypothesis (H0) when it is false. The higher the power, the higher the probability of correctly rejecting a false H0.
• Alpha – the significance level: the maximum acceptable probability of rejecting H0 when it is actually true (a Type I error). The p value from the test is compared against alpha.
• Two-tailed (un-select) – un-select this option (i.e. use a one-tailed test) only if you have a preconceived idea of the direction of the result, usually because a comprehensive study was done prior to the current research.
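As with the one-sample case, this calculation can be sketched in Python with statsmodels; the effect size and settings below are hypothetical values chosen for illustration:

```python
# Two-sample t test sample-size sketch (hypothetical numbers).
from math import ceil

from statsmodels.stats.power import TTestIndPower

sigma = 1.0                  # common SD (the "equal sigmas" assumption)
true_diff = 0.5              # true difference of means to detect
effect_size = true_diff / sigma

n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,              # significance level
    power=0.8,               # desired power
    ratio=1.0,               # equal group sizes (n2 = ratio * n1)
    alternative='two-sided',
)
print(ceil(n_per_group))     # suggested sample size per group
```

Note that the result is per group, so the total number of subjects is twice this value; the degrees of freedom are then (n1 – 1) + (n2 – 1) as described above.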

Linear regression
Use this test if you would like to determine whether your parameters fit a straight-line relationship. More than one factor (predictor) can influence the outcome, but there is only one outcome, sometimes called the output (or response) variable.

• No. of predictors – the number of predictor variables used to model the outcome.
• Alpha – the significance level: the maximum acceptable probability of a Type I error (rejecting H0 when it is actually true).
• Two-tailed (un-select) – un-select this option (i.e. use a one-tailed test) only if you have a preconceived idea of the direction of the result, usually because a comprehensive study was done prior to the current research.
• SD of x[j] – the standard deviation of predictor j; 1 SD is the value usually adopted in statistical analyses.
• Power – the probability that the test will reject the null hypothesis (H0) when it is false. The higher the power, the higher the probability of correctly rejecting a false H0.
• Error S.D. – the standard deviation of the errors (residuals) for the output variable.
• Detectable beta – the smallest regression coefficient beta[j] for a predictor that the study can detect with the chosen alpha and power. Note that this beta (a slope) is distinct from the beta used elsewhere in statistics for the probability of a Type II error: failing to reject the hypothesis tested when it is false and a specific alternative hypothesis is true. In that usage, the value of beta is determined by the previously chosen alpha, certain features of the statistic being calculated (particularly the sample size), and the specific alternative hypothesis being entertained; while it is possible to carry out a statistical test without entertaining a specific alternative hypothesis, neither beta nor power can be calculated without one. Power (the probability that the test will reject the hypothesis tested when a specific alternative hypothesis is true) is always equal to one minus beta, i.e. Power = 1 – beta.
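As a rough sketch of how power relates to these regression inputs, the overall F test can be computed from the noncentral F distribution in scipy. The effect size f2 and the noncentrality convention (lambda = f2 × n) below are assumptions for illustration; Piface's internal parameterization may differ:

```python
# Power of the overall F test in linear regression (a sketch).
from scipy.stats import f, ncf

def regression_power(f2, n_predictors, n_obs, alpha=0.05):
    """Power of the overall regression F test for Cohen's f-squared
    effect size f2, under the lambda = f2 * n convention."""
    df_num = n_predictors               # numerator degrees of freedom
    df_den = n_obs - n_predictors - 1   # denominator degrees of freedom
    ncp = f2 * n_obs                    # noncentrality parameter
    f_crit = f.ppf(1 - alpha, df_num, df_den)   # rejection cutoff
    return ncf.sf(f_crit, df_num, df_den, ncp)  # P(reject | effect)

# A "medium" effect (f2 = 0.15) with 3 predictors and 77 observations
# gives power of roughly 0.8, so beta (Type II error) is roughly 0.2.
power = regression_power(f2=0.15, n_predictors=3, n_obs=77)
print(round(power, 2))
```

Increasing the number of observations, or the effect size, raises the power; since Power = 1 – beta, this is the same as shrinking the probability of a Type II error.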

References

1. Effect size: http://en.wikipedia.org/wiki/Effect_size
2. Degrees of freedom (statistics): http://en.wikipedia.org/wiki/Degrees_of_freedom_(statistics)
3. Statistics explained: http://www.sysurvey.com/tips/statistics/beta.htm