

The Science of Applicant Success Assessment

All academic institutions are concerned about their students’ success. This includes success while they are in school (e.g., retention, GPA, graduation) and what they do after graduation (e.g., career success). The success of an academic institution’s students is a direct reflection of that academic institution and has a direct impact on that institution’s financial growth and longevity. As such, it behooves institutions to ensure that their students are as successful as possible.

Because student success is so important, colleges and universities have launched numerous student success initiatives. These go by many names, including student improvement programs, centers for academic excellence, teaching and learning centers, learning communities, and so on. Such programs are targeted at improving the success (e.g., retention, GPA, and graduation) of students who are already enrolled at the institution. The idea is that by spending institutional resources on its currently enrolled students, the indicators of the institution’s success (e.g., graduation rate) will improve. Unfortunately, such programs—like most interventions—often have disappointing results. Even when such programs are effective, they are often quite expensive to run and maintain.

At the same time, colleges and universities have long recognized that individual differences among their students have a major impact on their success. For example, most institutions require that applicants submit application materials that include cognitive ability scores (e.g., SAT, ACT). Further, such application materials – including cognitive ability scores – are used to make admission decisions. Although it may not be well recognized, such admission decisions have a crucial impact on the ultimate success of the students and the university. What if, rather than spending a small fortune trying to help students who are unlikely to succeed, institutions spent that money identifying students who are much more likely to succeed in college?

Almost any professor will tell you that the success of any student is dependent on that student’s ability (i.e., what cognitive ability tests measure) and his or her motivation to succeed. And decades of research support the professors’ intuition: a variety of non-cognitive factors, such as personality and motivation, play a crucial role in student achievement. It is quite strange then that, while institutions have heavily relied on cognitive ability assessments when making their admission decisions, almost none attempt to assess an applicant’s personality or motivation.

The Applicant Success Assessment (ASA) offered by Stevens Strategy is a state-of-the-art, first-of-its-kind assessment tool specifically designed to measure the key features of personality and motivation that predict student success, as well as how well an applicant’s personality will fit a particular institution’s values. The tool is based on the science of personality and decades of empirical research linking personality and motivation to academic success. Moreover, when combined with cognitive ability tests, the results of the Applicant Success Assessment provide a much clearer picture of who is likely to succeed in higher education, resulting in improved student success rates and dollars saved. The Applicant Success Assessment not only predicts who will succeed in higher education, but also identifies applicants who (a) fit with the culture of the university and are less likely to transfer, and (b) may struggle with post-secondary education but may otherwise be successful in life (i.e., high-risk / high-potential students).

Academic institutions must make admissions decisions. These admission decisions have a crucial impact on the ultimate success of the university because they directly impact student success rates. Better admissions decisions mean more student and institutional success. In what follows, we describe different models institutions can use to make those decisions and their impact on student success rates and institutional finances.

Random Admission Decisions

One measure of student success is graduation. As such, one goal is to admit students who are likely to graduate. How should such crucial decisions be made? It may aid our thinking to first consider an obviously poor decision-making strategy: random admissions. We might also call this the “coin-flip” model (i.e., heads you’re admitted, tails you’re not). If we use this admission decision model, we might get results that look like Figure 1.


Figure 1. Graduation results if admission decisions are made via a random coin flip

Because the coin flip is completely random, we see that we make lots of errors (red points). That is, we will admit many students who won’t graduate and won’t admit many students who would have graduated (or will graduate elsewhere). We need a better model for making admission decisions. We need a model that makes fewer errors.

Cognitive Ability Models

The coin-flip model isn’t very realistic for institutions of higher learning because none of them make admission decisions this way. A more realistic model might use something like a cognitive ability test, such as the SAT or ACT, to decide whom to admit. Figure 2 shows results that we might get if we use a cognitive ability test to make admission decisions (note that these data are based on empirically estimated associations between the SAT and graduation).


Figure 2. Graduation results if admission decisions are made via cognitive ability tests

Because cognitive ability tests are associated with graduation, this model is better. However, as can be seen in Figure 2, the model still makes plenty of errors. It accepts quite a few students with high test scores who do not graduate and passes up on students with low test scores who would have graduated (or will graduate elsewhere). If we consider personality and motivation, can we improve our selection model to make fewer errors?

Cognitive and Non-cognitive Ability Models

While the cognitive ability model is superior to a random model, we can do better if we know what other factors are associated with graduation. Decades of accumulated research show that a number of non-cognitive factors (i.e., personality and motivation) are associated with academic success (e.g., GPA, graduation). Importantly, these non-cognitive factors show almost no overlap with cognitive ability tests. That is, their ability to predict academic success is independent of cognitive ability, and when both cognitive ability and non-cognitive factors are assessed, we can create an even better model of student success. Figure 3 shows results we might get if we use both cognitive ability and non-cognitive tests to make admission decisions (again based on empirically estimated associations).


Figure 3. Graduation results if admission decisions use both cognitive and non-cognitive tests

Because both cognitive and non-cognitive abilities are independently associated with school performance, a model that uses both is substantially better and makes far fewer errors. Of course, all models will still make errors. Some things that affect school performance are just impossible to predict (e.g., death of a family member; sudden loss of financial aid). Ultimately though, models that make fewer errors are better and can generate substantial savings.

Return on Investment – Hard Numbers

Better admissions models result in fewer admissions errors. But what are the costs of making admissions errors? Here we use a statistical simulation to estimate (a) the costs of admission errors, (b) the improvement to graduation rate when non-cognitive factors are included in the admissions model and (c) the net tuition gains for the institution when non-cognitive factors are included.

The six-year graduation rate for a typical four-year private, non-profit college or university is around 58%. Taking this number as our starting point, and making a few reasonable assumptions, we can estimate the impact of including non-cognitive factors as predictors of student success. Let’s consider a hypothetical institution that admits and enrolls 700 first-time, full-time freshmen each year. We assume that the college has historically received 1,400 applications and has admitted the top 50% of SAT scorers to get the 700 enrolled students (for now, we shall bypass the difference between admitted and enrolled students and assume all admits enroll). SAT scores correlate approximately r = .20 with graduation, while non-cognitive factors tend to correlate approximately r = .30.

For this hypothetical institution, the expected number of graduates from the initial 700 students selected by the SAT model is 406 students (58%), which is on par with the typical six-year graduation rate across the US for universities of this type. If such an institution were to add non-cognitive factors to its admission criteria, and still admit and enroll the top 700 qualifying students, the expected number of graduates would be approximately 455 students (65%). Thus, under a set of reasonable assumptions, non-cognitive factors are estimated to yield a 7-percentage-point increase in graduation rate.
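
For readers who want to see how such an estimate can be produced, the following is a minimal Monte Carlo sketch of our own, offered for illustration only and not the actual simulation behind the figures above. It assumes standardized, uncorrelated SAT and non-cognitive scores, a latent graduation propensity that correlates r = .20 with the SAT and r = .30 with the non-cognitive measure, and a graduation cutoff calibrated so that SAT-only admission of the top half of 1,400 applicants yields roughly the 58% rate cited above.

```python
# Illustrative sketch only; parameters taken from the assumptions stated in the text.
import numpy as np

rng = np.random.default_rng(0)

R_SAT, R_NONCOG = 0.20, 0.30            # assumed validities from the text
N_APPLICANTS, N_ADMITS = 1_400, 700     # hypothetical applicant pool and class size
TARGET_GRAD_RATE = 0.58                 # typical six-year rate under SAT-only admission

def draw(n):
    """Standardized SAT score, non-cognitive score, and latent graduation propensity."""
    sat = rng.standard_normal(n)
    noncog = rng.standard_normal(n)
    noise_sd = np.sqrt(1 - R_SAT**2 - R_NONCOG**2)
    propensity = R_SAT * sat + R_NONCOG * noncog + noise_sd * rng.standard_normal(n)
    return sat, noncog, propensity

# Calibrate the graduation cutoff so SAT-only admission of the top half graduates ~58%.
sat, noncog, prop = draw(1_000_000)
cutoff = np.quantile(prop[np.argsort(sat)[-500_000:]], 1 - TARGET_GRAD_RATE)

def grad_rate(use_noncog, runs=2_000):
    """Average graduation rate among the top N_ADMITS applicants under a given ranking."""
    rates = []
    for _ in range(runs):
        sat, noncog, prop = draw(N_APPLICANTS)
        ranking = sat + noncog if use_noncog else sat
        admitted = prop[np.argsort(ranking)[-N_ADMITS:]]
        rates.append((admitted > cutoff).mean())
    return np.mean(rates)

print("SAT only      :", round(grad_rate(False), 3))   # ~0.58
print("SAT + non-cog :", round(grad_rate(True), 3))    # ~0.63-0.65
```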

We could take that 7-percentage-point increase in graduation rate as our starting point, but to be a bit conservative let’s call it 5 points for now. How much is a 5-point increase in graduation rate worth? The average tuition for universities of this type in the US is $35,000, but with discounts the net tuition is closer to $23,500 per student. The attrition rate for first-time, full-time freshmen nationally is 30%. Thus, we can expect this university to lose .30 × 700 = 210 students after the first year. This amounts to $23,500 × 210 = $4,935,000, or nearly $5 million, in lost net tuition per year. Because these students would otherwise have paid tuition for each of the three remaining years of the program, the total is 3 × $4,935,000 = $14,805,000 in net tuition lost to year-1 attrition. In the second year, attrition is another 20% of the remaining freshman class: .20 × 490 = 98 more students lost. Using the same figures, and counting the two remaining years of tuition these students would have paid, the net tuition lost to second-year attrition is 2 × 98 × $23,500 = $4,606,000. Thus, in total, this hypothetical university can expect to lose nearly $20 million ($19,411,000) in net tuition due to current attrition.

So what if attrition were 5 percentage points lower in each of the first two years? Substituting 25% for 30% year-1 attrition and 15% for 20% year-2 attrition, the total net tuition lost is just over $16 million ($16,038,750). The difference is an astounding $3,372,250. That is, an institution using non-cognitive measures to reduce attrition (or, equivalently, increase persistence and graduation) by a mere 5 points gains over $3 million in net tuition. If we use the less conservative figure of a 7-point reduction in attrition, the university would stand to gain $4.7 million in additional revenue.
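
For readers who want to verify the arithmetic, the short calculation below (our sketch, using only the net tuition, class size, and attrition rates stated above) reproduces the tuition-loss totals for the current and improved scenarios.

```python
# Back-of-the-envelope check of the attrition figures above; inputs are the
# assumptions stated in the text, not institution-specific data.
NET_TUITION = 23_500   # net tuition per student per year
FRESHMEN = 700         # entering first-time, full-time class

def tuition_lost(year1_attrition, year2_attrition):
    """Net tuition lost over a 4-year program from students leaving after years 1 and 2."""
    year1_leavers = year1_attrition * FRESHMEN                      # miss years 2-4 (3 years)
    year2_leavers = year2_attrition * (FRESHMEN - year1_leavers)    # miss years 3-4 (2 years)
    return NET_TUITION * (3 * year1_leavers + 2 * year2_leavers)

current = tuition_lost(0.30, 0.20)    # $19,411,000
improved = tuition_lost(0.25, 0.15)   # $16,038,750
print(f"current attrition : ${current:,.0f}")
print(f"improved attrition: ${improved:,.0f}")
print(f"net tuition gained: ${current - improved:,.0f}")   # $3,372,250
```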

This is a substantial figure for any institution. Another way to look at the benefits of employing non-cognitive factors is as an institutional strategy to maintain current enrollments with less recruitment effort. Research indicates the cost of admission is about $5,500 per student. To maintain existing enrollment, this service would reduce the number of students admitted each year by about 60, delivering savings of $330,000 in annual admission expenses for a typical institution.

These (conservative) estimates of the total value of reducing admissions errors by employing non-cognitive measures are based only on the “Hard” numbers that are fairly straightforward to quantify. Indeed, Stevens Strategy has developed a calculator that will estimate the additional income or savings your institution will gain from employing non-cognitive factors: Click here to easily calculate your institution’s likely savings. 

In the next section we identify several other areas where institutions that employ non-cognitive assessments will receive returns in “Soft” categories that are more difficult to quantify directly.

Return on Investment – Soft Numbers

Non-cognitive assessments can often replace costly admission decision-making strategies. For example, interviews – whether over the phone or in person – are notoriously problematic. They are time-consuming and expensive to conduct. Additionally, they are subject to interviewer bias, especially when unstructured. Even when well-structured interviews are used, their predictive validity is roughly nine times weaker than that of simple-to-use, unbiased non-cognitive tests.

Second, gains in retention and graduation rates should have a substantial impact on the marketability of the institution. This translates into lower recruiting costs and more interest from high quality students. Institutions that can boast the highest retention and graduation rates will have a significant advantage over the competition.

Third, non-cognitive factors (e.g., personality) are relevant to every behavioral action in which there are individual differences. As a byproduct of using non-cognitive assessments to identify students who are likely to succeed in college, institutions will also identify and avoid students who are likely to cause serious problems on campus. Using non-cognitive factors to recruit students will reduce the risk of alcohol and drug related incidents, violence, and campus crime in general. The amount of money saved by avoiding these incidents is difficult to quantify. However, avoiding even a single serious incident on campus (e.g., sexual assault) is undoubtedly valuable to the institution.

Incorporating Non-cognitive Factors into Admission Decisions

Admission decision models can vary from simple cutoffs (e.g., anyone with an SAT above 1200) to more complicated weighting schemes (e.g., test scores weighted 50%, interviews 25%, etc.). Stevens Strategy does offer a complete admissions modeling service that applies modern predictive analytics to all available data to construct the best possible admissions model. However, a simple heuristic that serves most institutions well is to weight the non-cognitive test results as much as one weights cognitive ability test results. For example, if cognitive test scores currently make up 50% of the admission decision value and high school grades the other 50%, we recommend weighting all three (ASA scores, cognitive ability scores, and high school grades) at one-third each. Although this rule of thumb is not perfect, it generally performs quite well compared to more complicated weighting systems based on predictive analytics and is inexpensive to employ.
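
As an illustration only, here is a minimal sketch of that equal-weighting heuristic. Standardizing each input before averaging is our assumption for putting SAT scores, high school grades, and ASA scores on a comparable scale, and the applicant numbers are hypothetical; this is not a prescribed Stevens Strategy formula.

```python
# Equal-weight composite sketch: one-third each for SAT, high school GPA, and ASA score.
from statistics import mean, stdev

def standardize(scores):
    """Convert raw scores to z-scores across the applicant pool."""
    m, s = mean(scores), stdev(scores)
    return [(x - m) / s for x in scores]

def composite(sat, hs_gpa, asa):
    """Equal-weight (1/3 each) admission composite for each applicant."""
    z_sat, z_gpa, z_asa = standardize(sat), standardize(hs_gpa), standardize(asa)
    return [(a + b + c) / 3 for a, b, c in zip(z_sat, z_gpa, z_asa)]

# Example: rank three hypothetical applicants by the composite.
scores = composite(sat=[1150, 1320, 1010], hs_gpa=[3.2, 3.0, 3.8], asa=[3.1, 2.4, 3.6])
ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
print(ranked)  # applicant indices ordered from strongest to weakest composite
```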

Incorporation of ASA in Financial Aid Decisions

The ASA results can inform the strategic allocation of institutional financial aid for recruitment and retention purposes. Stevens Strategy will work with your admissions, financial aid, and finance staff on incorporating ASA results in the development of an institutional financial aid awarding matrix.

Can Non-cognitive Factors be Measured?

The field of personality psychology has a long history of assessing non-cognitive factors (e.g., personality, motivation). There are two key properties of any assessment instrument: reliability and predictive validity. Reliability concerns the degree to which the same people get the same scores each time they take the test. Not surprisingly, then, reliability is best assessed by testing a group of individuals and then testing them again later (test-retest). Reliability is important for making decisions because tests that are unreliable will produce inconsistent, and therefore largely inaccurate, scores. Although reliability is important, predictive validity is even more important. Predictive validity concerns the degree to which the scores on the test predict some other outcome. Tests that have higher predictive validity are better because they make fewer prediction errors. When it comes to making admissions decisions, making fewer errors means increased rates of student success and lots of money saved.

Can Non-cognitive Tests be Faked?

When using any test to predict future performance, test users are correct to be concerned about faking. Faking undermines the predictive validity of the tests. The academic literature on faking in psychology has shown two things. First, in hypothetical laboratory settings, participants who are instructed to “fake good” on a personality test do appear to get different scores. Second, and much more importantly, in real-world high-stakes testing situations (e.g., employment settings) there is no evidence that people (a) try to fake or (b) are able to fake on non-cognitive tests.

There are five reasons for this. First, non-cognitive tests, such as those measuring personality or motivation, have no right or wrong answers. Thus, it is difficult to imagine how someone can “cheat” on the test. Second, because each test is designed with a different academic institution in mind, the “correct” responses for one university may be different from another. Third, many tests – including the Applicant Success Assessment – include validity scales that are used to identify test-takers who are not responding truthfully. Fourth, personality is ego-syntonic, meaning that people tend to like the scores they get and think their answers are the correct ones. Consider the example of neat freaks vs. slobs. Both types of people consider their way of living to be the correct way and the other to be incorrect. Thus, when confronted with the statement “I like to keep things neat and tidy,” both groups think they know the right answer, and their answers are the opposite of each other! Fifth, and perhaps most importantly, even if people are capable of “faking” a non-cognitive test, this indicates that they are aware of the kinds of behaviors that are important for school success. As such, these people will be successful in school anyway because they know how to behave in a way that leads to school success.

Ultimately, while there is a great deal of concern among academics about faking, the real-world evidence indicates that faking is a non-issue.

Legal Issues and Adverse Impact

Institutions must discriminate among their applicants. By discriminate, we mean that they must choose who gets admitted. As such, there are appropriate legal and ethical concerns about unfair discrimination on the basis of race, ethnicity, sex, gender, age, disability, religion, and sexual orientation. There are two general ways to stay compliant with federal and state regulations on these matters. The first is to use assessments with predictive validity. That is, if the test instrument actually predicts student performance, there can be no case for illegal discrimination. For example, cognitive ability tests such as the SAT or ACT have shown predictive validity for college student performance. As such, they can be legally used for admission decisions, despite their well-known ethnic and sex disparities. Because non-cognitive factors also have predictive validity for school performance, they too may be used for admission purposes.

Second, one can stay compliant with federal and state regulations by using tests with no adverse impact in terms of race, ethnicity, sex, gender, age, disability, religion, and sexual orientation. As just mentioned, cognitive ability tests (SAT, ACT) do not fit into this category. However, non-cognitive ability tests have consistently shown virtually zero group differences in terms of race, ethnicity, sex, gender, age, disability, religion, or sexual orientation. As such, non-cognitive ability tests tend to show no adverse impact and have predictive validity.

Development and Validation of the ASA

The ASA was developed using standard principles of psychometrics in a sample of over 1,000 individuals. Because we understand the demands placed on college applicants, the ASA was designed to be as efficient as possible, consisting of just 55 items rated on a 1 (very untrue of me) to 4 (very true of me) scale. It takes applicants approximately 5 minutes to complete the entire measure. Despite its brevity, the ASA is a powerful predictor of college success. In the same sample of over 1,000 individuals, the ASA predicted estimated grade point average significantly more effectively than cognitive ability measures did. Further, scores on the ASA showed no adverse impact in terms of sex, gender, race, or ethnicity. Thus, the ASA predicts college success and does not discriminate against traditionally marginalized groups.

Summary

Students differ in their intellectual abilities. Some have the ability to quickly learn what is taught in classrooms and to perform well when tested. In other words, these students can do the work. Standardized cognitive assessments are designed to measure individual differences in such “can do” abilities, and those who score high on cognitive assessments are more likely to be successful in college. But success in college is not guaranteed just because one can do the work. One must also be willing to do the work. Not surprisingly, students differ in their willingness to do the work as well. The Applicant Success Assessment is designed to measure individual differences in such “will do” factors, and those who score high on this assessment are more likely to be successful in college, and later in life. The Applicant Success Assessment is a state-of-the-art instrument, based on modern research in Personality Science, offered exclusively by Stevens Strategy. When used in conjunction with cognitive ability measures, the Applicant Success Assessment provides a clear prognosis for an applicant’s likelihood of success in college. The result is better admission decisions, more efficient allocation of recruiting and retention resources, greater student success, and institutional growth.

The fee for the ASA would be under $60,000 annually for a typical institution. This fee covers the cost of administering the assessment, storing and scoring the results, generating the reports, and client support. The ASA would generate a low-end estimate of $3,000,000 in additional income per entering class over 4 years at a typical private college or university that wishes to increase its total enrollment. Conservatively, that is a return on investment of about 50 to 1! If the institution chooses to maintain total enrollment and lower the number of annual admits, cost savings from reduced admission expenses would be about $330,000, or a 5.5 to 1 return on investment annually. There is no better way than Stevens Strategy’s Applicant Success Assessment service to generate income for your institution and improve its academic environment.

About the Author: John Stevens, Ed.D.

John A. Stevens is Founder and President of Stevens Strategy, LLC, a full-service consulting firm specializing in managing the process of strategic change in colleges, universities and schools. He also serves as a Founder and Principal of Chronos Company, LLC, organized to design, oversee …
