“Another College ‘Comes Clean’—What Does It Mean?”

It has happened again. Another high-profile university has admitted to irregularities in its reporting of the data used to describe its entering class. In this case, SAT information for entering students was apparently “misreported” from 2006 to 2012, resulting in inflated scores—scores that appeared in promotional materials, trustee reports and, yes, college rankings.

The statement released by the institution spoke of the need to preserve its integrity. It assured readers that those responsible were no longer at the institution and that measures had been taken to ensure such misreporting would not happen again. In addition, the editors of US News & World Report would be notified so that any necessary adjustments could be made to the calculation of the school’s ranking.

Sounds rather straightforward, but how can this happen? How might this type of behavior be explained?

My intent is not to vilify—or glorify—the institution, or any of the others that have similarly “come clean” in recent months. In fact, names and places are not important. Rather, I want to draw your attention to some of the factors that contribute to a broader cultural phenomenon and provide context for your interpretation of the inevitable headlines.

Historically, colleges and universities used internally defined metrics to measure progress against institutional goals and to track performance relative to peer or “frame of reference” institutions. Endowment performance, scholarly work among faculty, ratios of students to resources, student entrance data and student outcomes are commonly referenced in this manner.

The introduction of college rankings 30 years ago by US News & World Report substantially changed the dynamic with regard to data collection and reporting. In order to participate, institutions were required to submit data reflective of the student experience on their campuses. Despite editorial attempts to assert universal definitions and methodologies, wary institutions clung to internal practices. Interpretive “shades of gray” were inevitable. Folding together the metrics—and the cultures—of heretofore-autonomous institutions into a universal ranking proved to be a task the ranking guides have yet to master.

In the early years of college rankings, most institutions exhibited mixed reactions to their presence, importance and future. Many tried to ignore rankings in the hope they would simply go away; hence the reluctance to alter historical data collection practices. Other places quickly bought into the opportunity to market themselves as one of US News’ “America’s Best Colleges” in whichever category they might have been situated. Regardless, rankings had—and continue to have—an intoxicating effect as colleges and universities everywhere realize they can ill afford not to be included and, at that, in the most favorable light possible.

As the popular consumer response to rankings grew, the perceived grip of US News on participating institutions grew with it. The pressure many felt to maintain or improve upon their positions in the mythical pecking order resulted in strategic and, at times, creative institutional approaches to “managing” their metrics.

Consider, for example, the response to the student-to-professor ratio. Schools with comparatively weak ratios could improve them by 1) taking an expansive view of who might hold teaching credentials and 2) counting only those students who are enrolled full-time on campus. (Debates regarding the appropriate definitions of professors and students continue to this day.) Apply the same logic to student-to-endowment ratios, alumni giving rates and graduation rates and, well, you get the picture of the emerging mindsets.

Over time, though, institutions eager to improve their relative positions in the rankings came to realize that, the examples listed above notwithstanding, there was little they could do to substantively change the metrics of their academic programs or their financial statements. (Change on college campuses is glacial in nature!)

The admission process, however, presented an annual opportunity to acquire fresh metrics. Imagine the possibilities. If, for example, data is reported only for those entering as first-time, full-time students in the first semester, what becomes of the students (as well as their scores and academic records) whose enrollment is routed through a provisional summer program or deferred to January (second semester)?

Similarly, it is not hard to imagine the impact that Early Decision might have on a school’s selectivity and yield. Nor is it far-fetched to think that admission officers would strategize to enroll as many high-yielding students as possible from the Wait List. Speaking of selectivity, is it any wonder that even highly selective institutions would strive to attract yet more applications without any intent to expand the number of students they admit?

Despite continued attempts by editors to arrive at universal definitions and employ common data sets in calculating their rankings, the potential for institutions to manipulate the data remains. Moreover, the lack of transparency—and accountability—in both the rankings and institutions’ nuanced reporting practices underscores the need for skepticism in assessing rankings as well as the recent public self-disclosures made by institutions.

It should be clear that rankings such as those calculated by US News are not scientifically derived. In fact, a full critique of their methodology will be the subject of another article. You can be sure, though, that relying on rankings indiscriminately detracts from sound, student-centered decision making.

That is why caution is urged with regard to these self-disclosures. Are they simply examples of good institutions trying to get “it” right? Or are they calculated public relations ploys—a matter of penitent public pandering to US News? Draw your own conclusions. Regardless, the reports serve only to legitimize a deeply flawed ranking process.

The solution: Be diligent in conducting your own research about the colleges that interest you. Ignore the noise. Look beyond the rankings. What are your objectives? In what type of environment will you function best? How will you define a good college fit? Be cautious about allowing the presumed ethics of the reporting process to jaundice your view of any institution. The key is to find the opportunity that each one holds for you. Many of these places, whether they have misreported data to a ranking guide or not, are fundamentally good academic institutions that deserve your consideration.
