The Mann-Whitney U test, also known as the Mann-Whitney-Wilcoxon test, is a non-parametric test of the null hypothesis that a randomly selected value from one sample is equally likely to be less than or greater than a randomly selected value from a second sample.
Intuitively I think I can see how this translates to "the two samples come from populations that are actually different". I have a harder time describing that intuition.
The test is nearly as efficient as a t-test. The advantage is that it does not require us to assume that the two samples are normally distributed.
It can also be used to test whether two independent samples were selected from populations that have the same distribution.
The idea is this: we have two samples. If we pick a random value from sample A and a random value from sample B, there is some probability that the value from sample A is larger than the value from sample B. Is that probability the same as the probability that the value from sample A is smaller than the value from sample B?
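That pairwise probability is easy to illustrate directly in R. A toy example with made-up numbers (not part of the mtcars analysis below):

```r
# Two small made-up samples
a <- c(1, 3, 5, 7)
b <- c(2, 4, 6, 8)

# Compare every value in a with every value in b,
# and take the share of pairs where a wins / loses
mean(outer(a, b, ">"))  # 0.375
mean(outer(a, b, "<"))  # 0.625
```

If the two proportions are roughly equal, the samples look alike; the Mann-Whitney test formalises how far from equal they may drift by chance alone.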
How to do that in R?
Let us look at some data. mtcars is a built-in dataset with facts about a number of cars, taken from the 1974 issue of the US magazine Motor Trend.
One of the numbers we have is mpg, miles per gallon, or fuel efficiency. Another describes the transmission: is there a manual or an automatic gearbox? In the column am a "1" indicates a manual gearbox, a "0" an automatic gearbox.
We can now extract two sets of data on the fuel efficiency, based on the type of transmission:
man <- mtcars[which(mtcars$am==1),]$mpg
aut <- mtcars[which(mtcars$am==0),]$mpg
In "man" we have the fuel efficiency of cars with the, in Europe, more familiar manual gearbox, and in "aut" the same, but for cars with an automatic gearbox.
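As a quick sanity check of the split, we can count the cars in each group (the counts below are the standard mtcars figures):

```r
# 19 cars with am == 0 and 13 with am == 1, 32 in total
table(mtcars$am)
## 
##  0  1 
## 19 13
```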
How to do the Wilcoxon test? We simply pass the two vectors to wilcox.test():
wilcox.test(man, aut)
## Warning in wilcox.test.default(man, aut): cannot compute exact p-value with
## ties
## 
##  Wilcoxon rank sum test with continuity correction
## 
## data:  man and aut
## W = 205, p-value = 0.001871
## alternative hypothesis: true location shift is not equal to 0
The p-value is 0.001871. At a 0.05 significance level, we can conclude that the two sets of values come from non-identical populations. Had the p-value been larger (quite a bit larger), we would not have been able to reject the null hypothesis that the two samples come from identical populations.
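As a side note, the W statistic connects back to the intuition at the top: W divided by the product of the two sample sizes estimates the probability that a random car from the first group beats a random car from the second on mpg (sometimes called the common-language effect size). A sketch, recomputing the two groups from mtcars:

```r
x <- mtcars$mpg[mtcars$am == 1]  # the 13 manual cars
y <- mtcars$mpg[mtcars$am == 0]  # the 19 automatic cars
# suppressWarnings() hides the ties warning discussed below
W <- suppressWarnings(wilcox.test(x, y))$statistic
unname(W) / (length(x) * length(y))  # 205 / 247, about 0.83
```

So in roughly 83% of random pairs, the manual car has the higher fuel efficiency.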
There is a different way to do the test, where we don't split the dataset in two:
wilcox.test(mpg ~ am, data=mtcars)
## Warning in wilcox.test.default(x = c(21.4, 18.7, 18.1, 14.3, 24.4, 22.8, :
## cannot compute exact p-value with ties
## 
##  Wilcoxon rank sum test with continuity correction
## 
## data:  mpg by am
## W = 42, p-value = 0.001871
## alternative hypothesis: true location shift is not equal to 0
We tell the function that we want to test mpg (fuel efficiency) as a function of am (the type of transmission), and that the dataset we are working on is mtcars. The function then splits the dataset in two based on the value in am. Had there been more than two levels, the function would have stopped with an error.
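We can check that with a grouping column that has three levels. gear (number of forward gears) takes the values 3, 4 and 5 in mtcars, and the formula interface refuses it; wrapping the call in try() lets the script continue past the error:

```r
# gear has three distinct values, i.e. three groups
try(wilcox.test(mpg ~ gear, data = mtcars))
# Throws an error: the grouping factor must have exactly 2 levels
```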
The p-value is the same, though. W differs (42 instead of 205) because the formula interface takes the am == 0 group first; the two statistics always sum to the product of the group sizes, 13 × 19 = 247.
We get a warning. That is because the algorithm has a problem when identical values occur in the data. There is a way around it: we can add "jitter" to the two datasets. jitter() adds small random values to the values in the dataset. Often that solves the problem:
wilcox.test(jitter(man), jitter(aut))
## 
##  Wilcoxon rank sum test
## 
## data:  jitter(man) and jitter(aut)
## W = 205, p-value = 0.001214
## alternative hypothesis: true location shift is not equal to 0
On the other hand, we are no longer comparing the original values in the dataset. We can see the difference here:
man - jitter(man)
##  [1] -0.051431873 -0.070007257 -0.036621231 -0.047522075 -0.039480170
##  [6]  0.056659217  0.057724536 -0.003120035 -0.033923965 -0.019053550
## [11]  0.028977073  0.001790751  0.045607303
It would in general be a bad idea to add random noise to the data. On the other hand, it is not very likely that two cars have exactly the same fuel efficiency. The values have probably been "binned" by rounding, and adding a little random noise does not change the values we are working on very much. But you should always consider why the warning arises, whether it is an actual problem rather than just a warning, and whether it is appropriate to solve it in this way.
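One practical detail: jitter() draws random noise, so the exact p-value will change from run to run. If you go this route, it is worth setting a seed for reproducibility and using jitter()'s amount argument to bound the noise well below the 0.1 mpg resolution of the data. A sketch:

```r
set.seed(1)  # make the random jitter reproducible
x <- mtcars$mpg[mtcars$am == 1]  # manual cars
y <- mtcars$mpg[mtcars$am == 0]  # automatic cars
# amount = 0.01 draws the noise uniformly from +/- 0.01 mpg
wilcox.test(jitter(x, amount = 0.01), jitter(y, amount = 0.01))
```

The conclusion is unchanged; only the tie-breaking is made explicit and repeatable.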