# Difference Between Z-Test and P-Value

Edited by Diffzy | Updated on: May 28, 2023

## Introduction

Although the Z-test and the P-value both arise in statistical hypothesis testing, they are not the same thing. The former is a statistical test that indicates whether or not the null hypothesis should be rejected, whereas the latter is a probability that measures how consistent the observed data are with the null hypothesis.

## Z-test vs. P-value

The fundamental distinction between the Z-test and the P-value is that the Z-test indicates whether the null hypothesis should be rejected, whereas the P-value measures how likely observations as extreme as those made during the experiment would be if the null hypothesis were true. In statistics, a Z-test is a method used to evaluate whether two population means differ when the variances are known. It is a type of hypothesis test whose test statistic can be approximated by a normal distribution under the null hypothesis. Hypothesis testing is used to determine whether the results of a survey or experiment are significant. The P-value, or probability value, is the likelihood of obtaining test results at least as extreme as those observed during the experiment, under the premise that the null hypothesis is valid. A null hypothesis is a general assertion that there is no relationship between the two groups being studied.

## What Is the Z-test?

In statistics, a Z-test is a method used to evaluate whether two population means differ when the variances are known and the sample size is large. It is a type of hypothesis test whose test statistic can be approximated by a normal distribution under the null hypothesis.

It is used to determine whether the null hypothesis should be rejected. Z-scores are measured in standard deviation units; for example, a score of +1.96 or -1.96 shows how far the test statistic deviates from the mean.

The One-Sample Z-test makes the following assumptions:

• The data is continuous rather than discrete.
• The data have a normal probability distribution.
• The sample is a simple random sample; otherwise, the test statistic may be misleading.
• The standard deviation of the population is known.
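The assumptions above can be turned into a short sketch of a one-sample Z-test. This is an illustrative implementation using only the Python standard library (the function name `one_sample_z_test` and the helper `phi` are ours, not from any particular package); `math.erf` gives the standard normal CDF without external dependencies.

```python
import math

def phi(z):
    """CDF of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def one_sample_z_test(sample_mean, pop_mean, pop_sd, n):
    """Return the z statistic and two-sided p-value.

    Assumes the population standard deviation `pop_sd` is known
    and the data are a simple random sample of size `n`.
    """
    se = pop_sd / math.sqrt(n)           # standard error of the mean
    z = (sample_mean - pop_mean) / se    # deviation in standard-error units
    p = 2.0 * phi(-abs(z))               # two-sided p-value
    return z, p
```

For example, `one_sample_z_test(96, 100, 12, 55)` returns a z statistic of about -2.47 with a two-sided p-value of about 0.013.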

### Requirements

Certain requirements must be satisfied for the Z-test to be valid.

• Nuisance parameters should be known or accurately estimated (an example of a nuisance parameter would be the standard deviation in a one-sample location test). Z-tests concentrate on a single unknown parameter and assume that all other unknown parameters are fixed at their real values. In reality, Slutsky's theorem allows for the justification of "plugging in" consistent estimates of nuisance parameters. The Z-test may not work well if the sample size is not large enough for these estimations to be reasonably reliable.
• The test statistic should follow a normal distribution. In general, the central limit theorem is invoked to justify assuming that a test statistic varies normally. There is a great deal of statistical research on when a test statistic varies normally. A Z-test should not be used if the distribution of the test statistic is strongly non-normal.

If estimates of nuisance parameters are fed in, as previously described, it is critical to utilize estimates that are appropriate for the manner the data were gathered. The ordinary sample standard deviation is only applicable in the particular situation of Z-tests for the one or two sample location problem if the data were gathered as an independent sample.

In some cases, a test that adequately accounts for the variation in plug-in estimates of nuisance parameters can be devised. A t-test does this in the case of one- and two-sample location problems.

### Sample

Assume that the mean and standard deviation of reading test results in a specific geographic region are 100 and 12 points, respectively. We are interested in the results of 55 students from a certain school who achieved a mean score of 96. We can examine whether this mean score is significantly lower than the regional mean; that is, are the children at this school equivalent to a simple random sample of 55 students from the entire region, or are their scores surprisingly low?

To begin, compute the standard error of the mean:

SE = σ / √n = 12 / √55 ≈ 1.62

where σ is the standard deviation of the population and n is the sample size.

Next, compute the z-score, which is the difference between the sample mean and the population mean in standard error units:

z = (M - μ) / SE = (96 - 100) / 1.62 ≈ -2.47

In this example, we assume that the population mean and variance are known, as they would be if all pupils in the region were examined. A Student's t-test should be used when the population characteristics are unknown.

The mean score in the classroom is 96, which is 2.47 standard error units below the population mean of 100. Looking up the z-score in a table of the cumulative probability of the standard normal distribution, we see that the probability of observing a standard normal value less than -2.47 is roughly 0.5 - 0.4932 = 0.0068. This is the one-sided p-value for the null hypothesis that the 55 students are equivalent to a simple random sample of all test-takers from the population. The two-sided p-value is around 0.014 (twice the one-sided p-value).

Another way to put it is that a simple random sample of 55 students would have a mean test score within 4 units of the population mean with a probability of 1 - 0.014 = 0.986. We may also say that we reject the null hypothesis that the 55 test-takers are equivalent to a simple random sample from the population of test-takers with 98.6 percent confidence.

According to the Z-test, the 55 students of interest have an extremely low mean test score compared to most simple random samples of equal size drawn from the population of test-takers. A flaw of this approach is that it does not assess whether the effect size of four points is meaningful.

If, instead of a classroom, we considered a subregion of 900 kids with a mean score of 99, we would get approximately the same z-score and p-value. This demonstrates that even minor deviations from the null value can be statistically significant if the sample size is big enough. For a more in-depth treatment of this topic, see statistical hypothesis testing.
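The worked example above can be reproduced in a few lines. This sketch uses only the Python standard library; the variable names are ours, and `math.erf` stands in for the normal-distribution table used in the text.

```python
import math

# Reading-score example from the text: population mean 100, sd 12,
# a school of n = 55 students with sample mean 96.
pop_mean, pop_sd = 100.0, 12.0
n, sample_mean = 55, 96.0

se = pop_sd / math.sqrt(n)               # standard error, about 1.62
z = (sample_mean - pop_mean) / se        # z-score, about -2.47

# Standard normal CDF via the error function.
phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
p_one = phi(z)                           # one-sided p-value, about 0.0068
p_two = 2.0 * p_one                      # two-sided p-value, about 0.014

print(round(se, 2), round(z, 2), round(p_two, 3))
```

Changing the inputs to the subregion case (n = 900, sample mean 99) yields nearly the same z-score, illustrating the point about large samples making small deviations significant.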

## What Is the P-value?

The P-value is the probability of obtaining a test statistic at least as extreme as the one observed, on the premise that the null hypothesis is valid. The level of significance is chosen before the experiment, and if the p-value is less than the significance level, the null hypothesis is rejected.

To determine the p-value of a test statistic:

• Identify the distribution the test statistic follows under the null hypothesis.
• For a lower-tailed test (the alternative is that the mean is smaller), the p-value is the probability of observing a value less than or equal to your test statistic.
• For an upper-tailed test (the alternative is that the mean is larger), the p-value is the probability of observing a value greater than or equal to your test statistic.
• For a two-tailed test (the alternative is that the mean differs in either direction), double the probability of observing a value at least as extreme as your test statistic.
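The three tail cases above can be sketched as a single helper. This is an illustrative function (the name `p_value` and its `tail` parameter are ours), assuming the test statistic is standard normal under the null hypothesis.

```python
import math

def normal_cdf(x):
    """CDF of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_value(z, tail="two-sided"):
    """p-value of a z statistic under the null hypothesis.

    tail: "lower" (alternative: mean is smaller),
          "upper" (alternative: mean is larger),
          or "two-sided" (alternative: mean differs).
    """
    if tail == "lower":
        return normal_cdf(z)
    if tail == "upper":
        return 1.0 - normal_cdf(z)
    return 2.0 * normal_cdf(-abs(z))     # double the one-tailed probability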

## Main Differences Between Z-Test and P-Value in Points

• Meaning: The P-value is the likelihood of obtaining a test statistic result at least as extreme as the result seen in the experiment, on the premise that the null hypothesis is true. The Z-test, on the other hand, is used to evaluate whether the mean of a population is greater than, less than, or equal to a certain value. Because it employs the standard normal distribution, this test is also known as the One-Sample Z-test. It assumes that the population's standard deviation is known.
• The Null Hypothesis: For the P-value, the null hypothesis is assumed to be true, and the p-value measures whether the observed test statistic is as extreme as or more extreme than what would be expected under that assumption. The Z-test, on the other hand, is used to determine whether or not the null hypothesis should be rejected.
• Alternative Hypothesis: For the P-value, the alternative hypothesis is the critical assertion that the investigator would like to conclude from the experiment if the evidence permits it. In the Z-test, the alternative hypothesis, together with the null hypothesis, alpha, and the Z-score, is crucial. The alternative hypothesis asserts that there is a difference in the population, and it is what the experimenter seeks to establish.
• Limitations: For the P-value, if the sample size is small, the p-value may be inaccurate. Furthermore, the p-value tends to be classified as significant or non-significant based on whether it is less than or equal to 0.05, which is not the case with the Z-test. However, there are a few restrictions on employing the Z-test. The first is that the sample size should be large, conventionally more than 30. If the data are discrete with at least five distinct values, the continuous-variable assumption can often be relaxed. The most significant limitation is that the data must be randomly sampled, or else the significance levels may be inaccurate.
• Results: If the p-value is very small in comparison to the previously established significance level (often 5% or 1%), it indicates that the observed data are inconsistent with the assumption that the null hypothesis is true, and so the null hypothesis is rejected and the alternative hypothesis accepted.
• As an example:
1. The hypothesis is rejected if p ≤ 0.01.
2. The hypothesis may or may not be rejected if 0.01 < p ≤ 0.05.
3. The hypothesis is accepted if p > 0.05.

In contrast, in the Z-test, when using a 95 percent confidence level, the critical Z-score values are -1.96 and +1.96 standard deviations, and the corresponding significance level is 0.05. If your Z-score is between -1.96 and +1.96, your p-value will be greater than 0.05, and the null hypothesis cannot be rejected. If the Z-score is outside that range (for example, -2.5 or +5.4), the pattern is likely too unusual to be due to random chance, and the p-value will be small to reflect this. In this instance, the null hypothesis may be rejected. The values in the middle of the normal distribution represent the expected outcome, which is a critical concept here.
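The 95 percent confidence decision rule just described can be expressed directly in code. This is a minimal sketch (the function name `reject_null` is ours), assuming a two-sided test with critical values of ±1.96 at alpha = 0.05.

```python
import math

def reject_null(z, alpha=0.05):
    """Two-sided decision rule: reject H0 when the two-sided
    p-value falls below alpha (equivalently, when |z| exceeds
    the critical value, about 1.96 for alpha = 0.05)."""
    p = 2.0 * 0.5 * (1.0 + math.erf(-abs(z) / math.sqrt(2.0)))
    return p < alpha

# z = -2.5 lies outside the -1.96..+1.96 band, so H0 is rejected;
# z = 1.0 lies inside the band, so H0 cannot be rejected.
```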

## Conclusion

The P-value and the Z-test are two statistical tools with distinct goals. The P-value is the likelihood of the experiment's outcomes being as extreme as or more extreme than those observed if the null hypothesis is true. The Z-test, on the other hand, indicates whether the observations made during the experiment are consistent with the null hypothesis. It is only used when the sample size is greater than 30, because the central limit theorem employed in this test implies that as the number of samples rises, the sample mean becomes approximately normally distributed, provided the data are randomly selected. P-values decrease as the sample size increases, while the Z-test is determined by the null hypothesis, the alternative hypothesis, alpha, and the Z-score.
