Explore how we use industry-leading science to shake up hiring
Building a diverse and inclusive workplace starts with fair hiring practices. While policies may be created with the best intentions, they can occasionally lead to unintended bias, known as adverse impact. This is what occurs when certain groups are unintentionally disadvantaged in the hiring process. By understanding how adverse impact happens, how it differs from intentional discrimination, and how to monitor and adjust for it, you can continuously improve fairness in your hiring process.
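One widely used heuristic for spotting potential adverse impact is the four-fifths (80%) rule: if one group's selection rate is less than 80% of the highest group's rate, the process warrants a closer look. The sketch below illustrates that calculation with made-up numbers; it is a minimal example, not a description of TestGorilla's internal tooling or a substitute for a proper statistical analysis.

```python
def adverse_impact_ratio(selected_a, applicants_a, selected_b, applicants_b):
    """Ratio of group A's selection rate to group B's selection rate.

    Under the four-fifths rule, a ratio below 0.80 flags potential
    adverse impact against group A.
    """
    rate_a = selected_a / applicants_a
    rate_b = selected_b / applicants_b
    return rate_a / rate_b

# Hypothetical numbers: 12 of 60 group-A applicants selected (20%),
# versus 30 of 100 group-B applicants (30%).
ratio = adverse_impact_ratio(12, 60, 30, 100)
print(f"Impact ratio: {ratio:.2f}")  # → Impact ratio: 0.67 (below 0.80)
```

Because 0.67 falls below the 0.80 threshold, this hypothetical process would merit investigation, even though no one intended to disadvantage anyone.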
HR professionals often face the challenge of hiring for diverse roles, many of which they might not be intimately familiar with. In a week, you might have ongoing processes for a software engineer, a marketing coordinator, and a graphic designer. It can be difficult to know, understand, and apply criteria and standards that are a) role-specific and suited to your organization and b) broad enough to have universal relevance. That’s where TestGorilla assessments step in, helping you evaluate both role-specific and broadly relevant skills.
Skills assessments are a valuable method for evaluating candidates' skills, traits, and abilities prior to making hiring decisions. However, it's important to ensure that the tools you use are precise, accurate, and valid. Test validity is a complex concept that encompasses different areas like criterion validity, construct validity, and content validity. Content validity is a type of validity that helps you determine whether your tests adequately cover the relevant knowledge and skills you aim to measure.
Gender bias is everywhere. At TestGorilla, we recognize that removing barriers for women within the labor market is vital if we want to create a fairer world. But what does that mean in the context of pre-employment testing, and what are we doing to ensure our tests don’t discriminate based on gender? This blog post explores the concept of Differential Item Functioning (DIF) and its role in ensuring fairness in testing. We will also discuss the results of DIF analysis of TestGorilla tests.
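The core intuition behind DIF analysis is this: first match test-takers on overall ability (for example, by total-score band), then check whether one group still answers a given item correctly more often than the other. If it does, the item may be measuring something other than the intended skill. The sketch below is a deliberately simplified illustration of that idea on synthetic data; real DIF analyses use formal statistical procedures such as Mantel-Haenszel, and the group labels and score bands here are assumptions for the example.

```python
from collections import defaultdict

def dif_gaps(responses):
    """Per-score-band gap in item pass rates between groups 'A' and 'B'.

    responses: iterable of (group, score_band, item_correct) tuples,
    where test-takers in the same band have comparable overall ability.
    A large, consistent gap within bands is a signal of possible DIF.
    """
    counts = defaultdict(lambda: {"A": [0, 0], "B": [0, 0]})
    for group, band, correct in responses:
        counts[band][group][0] += int(correct)  # number correct
        counts[band][group][1] += 1             # number of test-takers
    gaps = {}
    for band, g in counts.items():
        rate_a = g["A"][0] / g["A"][1]
        rate_b = g["B"][0] / g["B"][1]
        gaps[band] = rate_a - rate_b
    return gaps

# Synthetic data: within the same ability band, group A passes the item
# 8/10 times while group B passes only 4/10 times.
data = ([("A", "mid", True)] * 8 + [("A", "mid", False)] * 2
        + [("B", "mid", True)] * 4 + [("B", "mid", False)] * 6)
gaps = dif_gaps(data)
print(f"Pass-rate gap in 'mid' band: {gaps['mid']:.2f}")  # → 0.40
```

A gap of 0.40 between equally able groups would be a strong flag for reviewing the item, since matched-ability candidates should pass at similar rates if the item is fair.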
Cognitive ability tests are a great tool for identifying top candidates when used appropriately. However, while they certainly are popular and useful, tests of cognitive abilities should be used carefully. Using these tests haphazardly can potentially create unfair disadvantages or biases toward some groups, meaning you may overlook high-quality candidates or open your organization to legal risk. This blog will walk you through some best practices, based on the latest scientific insights, for
Skills tests are an excellent way to measure candidates’ skills and traits before hiring them. However, you need to ensure that the tools you use are accurate. Construct validity is one type of validity that helps you determine if your tests are measuring what you intend them to measure. In this article, we examine the construct validity definition, explore construct validity examples, discuss how TestGorilla evaluates the construct validity of our tests, and explain the importance of using valid assessments.
When using any hiring tool—be it internally developed structured interviews or TestGorilla’s pre-employment tests—one of the most significant challenges organizations face is figuring out how scores obtained on these hiring tools ultimately translate into job success. For example, we often get the question, “What scores are associated with high performance?” or “How do I know what’s a ‘good’ score?” In this blog, we’ll explore some strategies and techniques that will help you interpret test scores.
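One straightforward way to give a raw score meaning is to convert it into a percentile rank against a comparison (norm) group: "this candidate scored at or above X% of the norm group" is far more interpretable than a bare number. The snippet below is a minimal illustration with hypothetical norm scores; it is not a description of TestGorilla's actual scoring or benchmarking.

```python
def percentile_rank(score, norm_scores):
    """Percentage of the norm group scoring at or below `score`."""
    at_or_below = sum(1 for s in norm_scores if s <= score)
    return 100.0 * at_or_below / len(norm_scores)

# Hypothetical norm group of ten previous test-takers.
norms = [52, 61, 64, 70, 73, 75, 78, 81, 85, 90]

# A raw score of 78 matches or beats 7 of the 10 norm scores.
print(percentile_rank(78, norms))  # → 70.0
```

Framed this way, "78 out of 100" becomes "70th percentile relative to this norm group," which is a more defensible basis for comparing candidates than an absolute threshold chosen in isolation.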
One of the most frequent questions our Science team gets is about setting cut-off scores: what they are, how they are set, and their benefits and risks. In this introductory guide, we’ll explain why TestGorilla does not recommend cut-off scores to our customers and take a closer look at the intricacies of setting cut-off scores for pre-employment tests. We’ll discuss the importance of considering their legal defensibility and fairness in hiring. We’ll also offer insights into common misconceptions about cut-off scores.
Artificial intelligence (AI) is no longer just an innovative tool but a fundamental skill set across various roles. As a result, hiring people skilled at understanding and leveraging AI tools has become essential. In response to this, we’ve been focused on developing cutting-edge, industry-leading AI-related skills tests. This science series blog delves into the importance of AI skills and the role of multi-measure assessments in hiring for them. It also provides insights into how to use TestGorilla’s AI-related skills tests.
At TestGorilla, we are committed to providing assessments that are not only reliable but also valid, ensuring that they are practical, relevant, and impactful tools for talent assessment. In the first installment of our series on how to interpret test fact sheets, we delved into the complexities of reliability, shedding light on how it underscores the consistency and dependability of our assessments. Building on that foundation, Part 2 of this series ventures into another cornerstone of psychometrics: validity.
At TestGorilla, we often wax lyrical about why you should use talent assessments for skills-based hiring. We’ve covered tips for test ordering and data-driven ways to increase candidate satisfaction, but one thing we want to address in more depth is how and when to use them. In light of that, this science series article will answer the following question: where in the hiring process does a talent assessment fit best?
Have you ever wondered why our Culture Add test is 10 minutes long? Or why our Coding: Data Structures – Arrays test is 35? There’s more behind test timing than you might think: This seemingly simple aspect of talent assessment plays a crucial role in maintaining the balance between assessing candidate abilities and ensuring an efficient recruitment process. But why is this the case, and how do we maintain that balance? In this science series blog, we’ll explore cognitive load theory (CLT) and its role in how we set test durations.