(Cross-posted to MoneyLaw)

After pointing out the more severe flaws in the Legal Education Value Added Rankings, I spent some time reading Linda Wightman's LSAC National Longitudinal Bar Passage Study (based on a cohort of students who started law school in 1991). With respect to "value added" concepts in legal education, one conclusion of this study stands out: LSAT scores and law school GPA have the strongest predictive value for bar passage rates, but results for students with the same LSAT and LGPA differ significantly among certain of the six "clusters" of schools that the study identified.

Wightman identified one group of schools ("Cluster 3") in which students having the same LSAT score and LGPA seemed to have higher success in passing the bar relative to several other clusters. And she found the greatest gap between the success of students with the same LSAT and LGPA who attend schools in Cluster 6 vs. those in Cluster 3. There were lesser gaps in outcomes for schools in Clusters 2 and 5 compared with those in Cluster 3. And two of the clusters (1 and 4) had no significant difference in success rates from Cluster 3 (also an interesting conclusion). Finally, the differences were greatest between students attending schools in different clusters when those students had a lower LSAT score and law school GPA.

Wightman points out that the study does not establish any *causal* connection between attending a school in a given cluster and bar passage outcomes. And it is a little hard to identify example schools from the various clusters, not least because for most schools the values of the variables involved in the study have shifted greatly in the past seventeen years. But regardless of what schools made up which clusters in 1990, Wightman's basic method remains compelling -- finding significant variables on which law schools tend to naturally cluster, and then examining how outcomes differ between those clusters.
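The study doesn't tie that method to any one algorithm (and I'm still working through the math behind her procedure), but the general shape of "standardize, then cluster" can be sketched in plain Python with k-means. Everything below -- school names, the two variables, the numbers -- is made up purely for illustration; the real exercise would use all seven Wightman variables.

```python
import math

# Hypothetical school profiles (median LSAT, acceptance rate) -- made up
# for illustration; the real exercise would use all seven variables.
schools = {
    "School A": (168.0, 0.15),
    "School B": (166.0, 0.20),
    "School C": (152.0, 0.55),
    "School D": (150.0, 0.60),
}

def standardize(values):
    """Convert raw values to Z-scores so no one variable dominates distance."""
    mean = sum(values) / len(values)
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (len(values) - 1))
    return [(v - mean) / sd for v in values]

# Standardize each variable (column) across schools.
names = list(schools)
columns = list(zip(*(schools[n] for n in names)))
z_columns = [standardize(col) for col in columns]
points = {n: tuple(col[i] for col in z_columns) for i, n in enumerate(names)}

def kmeans(points, k, iters=20):
    """Plain k-means; the first k points seed the centroids (deterministic)."""
    centroids = [points[n] for n in list(points)[:k]]
    clusters = {}
    for _ in range(iters):
        # Assign each school to the nearest centroid.
        clusters = {j: [] for j in range(k)}
        for name, p in points.items():
            nearest = min(range(k), key=lambda j: math.dist(p, centroids[j]))
            clusters[nearest].append(name)
        # Recompute each centroid as the mean of its members.
        for j, members in clusters.items():
            if members:
                pts = [points[n] for n in members]
                centroids[j] = tuple(sum(dim) / len(dim) for dim in zip(*pts))
    return clusters

clusters = kmeans(points, k=2)
```

With the toy numbers above, the high-LSAT/low-acceptance pair and the low-LSAT/high-acceptance pair naturally separate into two clusters; the interesting follow-up, per Wightman, is then comparing bar outcomes across the resulting groups.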

I'm trying now to understand the math behind the clustering procedure and to repeat the process using current data, to see what clusters might emerge today. As a first step in that process, I compiled data from the ABA Official Guide to Law Schools (2008 Edition) to replicate the variables Wightman used in her 1993 study: Clustering Law Schools Using Variables That Describe Cost, Selectivity, and Student Body Characteristics (PDF). I've only gotten as far as calculating summary statistics and Z-scores, but those results seemed striking enough to merit posting on their own (*). The variables Wightman used for clustering, and which I have recalculated, are:

- **ENROLL FT**: Full-time enrollment
- **S/F RATIO**: Student-to-faculty ratio
- **MIN %**: Percent minority enrollment (first-year)
- **ACC %**: Acceptance rate (total)
- **LSAT**: Median LSAT for entering full-time students
- **UGPA**: Median undergraduate GPA for entering full-time students
- **TUITION**: Full-time resident tuition and fees
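The Z-score step itself is nothing exotic. With a handful of made-up tuition figures (placeholders only, not the ABA numbers), it amounts to:

```python
import statistics

# Hypothetical full-time tuition figures -- placeholders, NOT the ABA data.
tuition = [12000, 18000, 25000, 31000, 40000]

mean = statistics.mean(tuition)
sd = statistics.stdev(tuition)  # sample standard deviation (n - 1 denominator)

# Z-score: how many standard deviations each school sits from the mean.
z_scores = [(t - mean) / sd for t in tuition]
```

After this transformation every variable has mean 0 and standard deviation 1, which is what lets seven differently-scaled measures (tuition in dollars, GPA on a 4-point scale, etc.) be compared and clustered together.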

I have not done any hypothesis tests to determine which changes are statistically significant, but just eyeballing the data seems to reveal some notable shifts.

**Enrollment**: The mean and standard deviation have both dropped, suggesting a convergence of all schools toward lower full-time enrollment.

**Faculty**: The S/F ratio has dropped by a good bit, but the standard deviation has widened. Schools seem to be employing noticeably more faculty overall, but they may also have scattered quite a bit in the magnitude and direction of change on this measure.

**Minorities**: The mean percentage of minority enrollment has increased from 16% to 21%, but the standard deviation is about the same -- basically the entire curve took a step to the right. One question we may have is which minorities are counted for the purpose of "minority enrollment." I am not sure whether Wightman's figures only include certain minorities, or if she used the Total Minority numbers from the ABA as I did (and as I assumed she did).

**Selectivity**: The overall distribution of acceptance rates has hardly changed at all, but academic index parameters have changed a lot. Schools as a whole appear to have become much more selective on GPA and much less selective on LSAT scores. The Wightman study uses LSAT scores from the old scale, and I could not find a score percentile table from 1991 anywhere online (if anyone out there happens to have a copy of one, please let me know). I made a rough attempt to equate the scoring scales, from which I guessed that a 36 on the old scale was around the 85th percentile. The current mean of 158 falls at the 75th percentile. Median GPAs, meanwhile, have shifted upward more than 0.20, but the standard deviation has stayed about the same. Here again, it looks like the whole curve just shifted right by quite a bit (grade inflation, anyone ...?).

**Cost**: The *standard deviation* for total tuition and fees today is larger than the *mean* was seventeen years ago -- enough said.
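For what it's worth, my rough scale-equating attempt amounted to matching scores by percentile and interpolating between known table entries. The anchor values below for 36 (old scale, ~85th percentile) and 158 (current scale, 75th percentile) are the ones discussed above; every other table entry is invented for illustration, since I don't have the real 1991 table.

```python
# Rough scale equating by matching percentiles and interpolating.
# Only 36 -> ~85 and 158 -> 75 come from the discussion above; the other
# table entries are invented placeholders, NOT real LSAC data.
old_scale = {30: 60, 36: 85, 42: 97}     # old-scale score -> percentile
new_scale = {150: 44, 158: 75, 170: 98}  # current score -> percentile

def percentile_of(score, table):
    """Linearly interpolate a score's percentile from a sparse table."""
    pts = sorted(table.items())
    for (s0, p0), (s1, p1) in zip(pts, pts[1:]):
        if s0 <= score <= s1:
            return p0 + (p1 - p0) * (score - s0) / (s1 - s0)
    raise ValueError("score outside table range")

# Compare an old-scale 36 with a current-scale 158 on the percentile axis.
old_pct = percentile_of(36, old_scale)
new_pct = percentile_of(158, new_scale)
```

On these (partly invented) numbers, the old mean sits at a higher percentile than the current mean -- which is the sense in which schools look less selective on LSAT today.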

(*) As Wightman did, I excluded a handful of schools from these statistics: 1) The three ABA approved schools in Puerto Rico; and 2) The "one law school that enrolls part-time students almost exclusively."

### Law School Clusters

Posted by Michael Shaffer at 7/20/2008 03:43:00 AM


## 1 comment:

After looking at the study you reference above, I find it much more striking how many people below the LSAT mean pass the bar.

The people above the mean did a little better, but really not that much. This is even more proof of the inadequacy of the LSAT. All it really does is say that people, very generally, who do well on the LSAT, across a large number of people, will, generally speaking, be smarter than those who do not do well, albeit inconsistently and to varying degrees. Excellent.

Testing people on how quickly they can complete the first stage of Mario Bros, or how many clues they can fill in on the NYT crossword, will tell you about the same thing.

Take me, for example:

1st time LSAT = 153

2nd time LSAT = 159

3rd time LSAT = 167

The test means nothing besides how you felt the day you took it. The LSAT is a complicated piece of folklore. Not to mention the vast discrepancies in reading speeds found in the different regions around the country. Apparently, reading speed is correlated to talking speed, which is a cultural difference, not an intellectual one.

What a joke! I wish people would stop trying to justify our over-reliance on this completely inadequate measurement of potential.
