“When I came to the chapter on quality, I just ended up chucking it,” says Shelton, now dean of online education at Dallas Baptist University. While attention to online programs as a recruitment battleground was growing, she says, the literature on how to compare quality was just too thin.
Now, with help from the nonprofit Sloan Consortium (Sloan-C) and dozens of veteran online education administrators, Shelton has developed a “quality scorecard” that she hopes will serve as a standardized measure for comparing any type of fully online college program, regardless of discipline. “I’m hoping that it will become an industry standard,” Shelton says.
The scorecard has 70 metrics, developed over six months by a panel of 43 long-serving online administrators representing colleges of various classifications, including several for-profit institutions. It builds on the Institute for Higher Education Policy’s “Benchmarks for Success in Internet-Based Distance Education,” which was published in 2000 and outlines 24 metrics. Shelton and her panel used that set of benchmarks, which they considered still valid a decade later, as a “starting point,” adding 45 new metrics and dividing and combining some of the original 24 to round out the 70 quality indicators.
Francis X. Mulgrew, president of the for-profit Post University, which offers both face-to-face and online programs, said the Sloan-C metrics could prove influential among accrediting bodies, whose expertise in assessing Web-based education is limited compared to that of Sloan-C. “These are metrics that can be adopted by accrediting bodies that are maybe struggling with how they might evaluate online programs at both traditional and nontraditional universities,” Mulgrew said.
All of the scorecard's 70 metrics are weighted equally, each accounting for three possible points (for a total of 210 points). But certain categories contain more metrics, and therefore account for more points, than others. The categories, in descending order of aggregate weight, are support for students (24.3 percent), course development and instructional design (17.1 percent), evaluation and assessment (15.7 percent), course structure (11.4 percent), support for faculty (8.6 percent), technology support (8.6 percent), teaching and learning (7.1 percent), general institutional support (5.7 percent), and social and student engagement (1.4 percent).
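Because every metric carries the same three points, a category's weight is simply its metric count divided by 70. The short sketch below illustrates that arithmetic; the per-category metric counts are inferred from the percentages reported above (e.g., 24.3 percent of 70 ≈ 17 metrics) and are illustrative, not an official Sloan-C breakdown.

```python
# Illustrative recreation of the scorecard's weighting scheme:
# 70 equally weighted metrics, 3 points each, 210 points total.
# Metric counts per category are inferred from the reported percentages.
POINTS_PER_METRIC = 3

categories = {
    "support for students": 17,
    "course development and instructional design": 12,
    "evaluation and assessment": 11,
    "course structure": 8,
    "support for faculty": 6,
    "technology support": 6,
    "teaching and learning": 5,
    "general institutional support": 4,
    "social and student engagement": 1,
}

total_metrics = sum(categories.values())          # 70
total_points = total_metrics * POINTS_PER_METRIC  # 210

for name, count in categories.items():
    share = 100 * count / total_metrics
    print(f"{name}: {count * POINTS_PER_METRIC} points ({share:.1f} percent)")
```

Running the loop reproduces the article's percentages, e.g. "support for students: 51 points (24.3 percent)".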
Sloan-C, an influential group that convenes annual conferences and publishes research on online education, has thrown its full weight behind Shelton’s new scorecard, which it describes on its website as “versatile enough to be used to demonstrate the overall quality of online education programs, no matter what size or type of institution.”
Perhaps as a result, the specific metrics within the larger categories are mostly broad and nonprescriptive. For example, under the "support for students" heading, one metric asks if "efforts are made to engage students with the program and institution." In the "course structure" category, it inquires if "instructional materials are easily accessible and usable for the student."
The tool was conceived as a private self-study instrument for institutions rather than any sort of U.S. News & World Report-like measuring stick for consumers, although it is too early to tell how the scorecard might evolve, says John Bourne, Sloan-C’s executive director. Janet Moore, chief knowledge officer for Sloan-C, said the scorecard might also prove “invaluable for institutional reporting.” Mulgrew, the Post University president, said institutions being assessed by accreditors might bring their scorecards to the table as evidence that they are going above and beyond the basic accreditation requirements, to increase their odds of a favorable review.