Should we test on mobile devices?
In today's world, it is particularly common for employers to use ability tests for selecting their workforce. Technological advancements have made it possible for candidates to complete these tests on mobile devices alongside traditional methods. To accept the results of tests conducted on mobile devices, it is essential to obtain similar outcomes regardless of the platform.
Some studies have indicated that there are generally no significant differences between completing personality questionnaires on mobile and non-mobile devices. However, on cognitive ability tests, individuals using mobile devices tend to achieve lower scores than those using non-mobile devices. This suggests that cognitive tests may be more difficult to complete on mobile devices, raising questions about whether such devices are suitable for this kind of assessment.
Nonetheless, these findings are not universal, as other studies have found no such differences. This inconsistency needs to be explained, especially when people's futures may depend on the outcome. What could be causing the inconsistent results? How can the issue be addressed? And how can we trust the results of tests completed on mobile devices? Let's take a closer look at these questions.
The Impact of Platform
A popular approach in workforce selection research involves analyzing databases containing real job applicant data. This strengthens the ecological validity of such studies, since researchers work with data from real-life situations. However, a significant problem arises: because such data do not come from experimental setups, they are unsuitable for establishing causal relationships. Why does this matter?
Primarily because in real job applicant samples we cannot control who takes the test on a mobile device and who uses a personal computer or laptop. This is known as selection bias, and it poses a significant threat to our ability to draw causal conclusions from the data.
According to Brown and colleagues (2022), the explanation for inconsistencies in research results does not lie in the platform used to complete the test but rather in the preferences of individuals with different characteristics for using different devices for test completion. The researchers analyzed the data of more than 75,000 job applicants, many of whom took a general mental ability test between December 2019 and June 2020. Interestingly, they found that the use of mobile devices was more common among job applicants with lower educational qualifications who were applying for more traditional, practical, and lower complexity roles (e.g., cleaners, laborers). Applicants with higher educational qualifications and those applying for higher complexity roles (e.g., analysts) were less likely to use mobile devices when taking the test.
Therefore, the authors concluded that differences in scores between mobile and non-mobile device testing likely stem from the differences among individuals who prefer particular platforms rather than from the characteristics of the devices themselves.
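This conclusion can be illustrated with a toy simulation. In the hypothetical sketch below (all numbers are made up for demonstration, not taken from the study), the device itself has no effect at all on test scores; only the applicants' device preferences are correlated with ability. The group averages still come apart, reproducing the pattern that observational data would show.

```python
import random

random.seed(42)

# Toy model: ability drives device choice, but the device itself
# has NO causal effect on the observed test score.
applicants = []
for _ in range(10_000):
    ability = random.gauss(100, 15)  # "true" cognitive ability
    # Assumed preference: lower-ability applicants in this sketch
    # are more likely to take the test on a mobile device.
    p_mobile = 0.7 if ability < 100 else 0.3
    device = "mobile" if random.random() < p_mobile else "desktop"
    observed = ability + random.gauss(0, 5)  # same noise on both devices
    applicants.append((device, observed))

def mean_score(device):
    scores = [s for d, s in applicants if d == device]
    return sum(scores) / len(scores)

print(f"mobile mean:  {mean_score('mobile'):.1f}")
print(f"desktop mean: {mean_score('desktop'):.1f}")
# The mobile group scores lower on average, even though the
# platform itself changed nothing - pure selection bias.
```

A naive comparison of the two group means would wrongly suggest a "mobile penalty", which is exactly why non-experimental applicant databases cannot settle the causal question on their own.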
What is the practical significance of all this? Should we test on mobile devices or not? Mobile devices are nearly ubiquitous nowadays, making it tempting to develop ability assessments for them. However, the more advisable solution to the problem described above is to standardize the measurement process during the selection procedure. Providing everyone with the same conditions, the same device, the same tasks, and the same evaluation eliminates the issues stemming from different platforms and ensures fair conditions for all.