Posts Tagged ‘statistics’

Editor’s Note: This is a joint post for ClassActionBlawg and the newly-launched Baker Hostetler Class Action Lawsuit Defense Blog.  Be sure to bookmark the Baker Hostetler blog at www.classactionlawsuitdefense.com for the latest in class action trends and decisions.

A common temptation in class action litigation is to fashion procedures based on “rough justice” to avoid overburdening the courts when attempting to redress alleged mass harm.  Over the past decade, as storage and computing power have increased exponentially, it has become increasingly tempting to use statistical sampling as a proxy for the actual adjudication of facts in class or mass actions.  The idea is that if the facts regarding a statistically significant subset of a class can be evaluated for a particular issue or set of issues, then the results of that evaluation can be extrapolated across the rest of the class.
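
To make the mechanics concrete, here is a minimal sketch of that extrapolation logic in Python.  Every number in it (the sample results, the class size, the average award) is hypothetical and chosen only for illustration; none of it comes from an actual case.

    def extrapolate(sample_outcomes, class_size, avg_award):
        """Project a classwide recovery from a sample of adjudicated claims.

        sample_outcomes: list of booleans, True if the sampled claim was found valid
        class_size:      total number of class members
        avg_award:       average award among the valid claims in the sample
        """
        valid_rate = sum(sample_outcomes) / len(sample_outcomes)
        projected_valid_claims = valid_rate * class_size
        return projected_valid_claims * avg_award

    # Hypothetical example: 6 of 20 sampled claims found valid, a 10,000-member
    # class, and a $5,000 average award among the valid sampled claims.
    sample = [True] * 6 + [False] * 14
    print(extrapolate(sample, class_size=10_000, avg_award=5_000))  # 15000000.0

This is essentially the “Trial by Formula” arithmetic that the Supreme Court disapproved in the Dukes passage quoted below.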

One jurisdiction where this approach has gained particular traction is California.  There, the use of statistical sampling has been recognized for several years as a means of apportioning damages in some cases.  See Bell v. Farmers Ins. Exchange (2004) 115 Cal.App.4th 715 [9 Cal.Rptr.3d 544] (Bell III).  In recent years, however, plaintiffs have attempted to use statistical sampling as proof of liability, not simply as a means of apportioning damages when liability has been established or (as in Bell III) is not contested.  This approach was harshly criticized in Part III of Justice Scalia’s majority opinion in Wal-Mart v. Dukes (notably, the portion of the Dukes opinion in which all nine justices concurred):

The Court of Appeals believed that it was possible to replace such proceedings with Trial by Formula. A sample set of the class members would be selected, as to whom liability for sex discrimination and the backpay owing as a result would be determined in depositions supervised by a master. The percentage of claims determined to be valid would then be applied to the entire remaining class, and the number of (presumptively) valid claims thus derived would be multiplied by the average backpay award in the sample set to arrive at the entire class recovery— without further individualized proceedings. [internal citation omitted].  We disapprove that novel project.

Earlier this year, in Duran v. U.S. Bank National Association, No. A125557 & A126827 (Cal. App., Feb. 6, 2012), a division of the California Court of Appeal agreed with the above-quoted dicta in Dukes and rejected an attempt to use statistical sampling to prove liability in a wage and hour class action.  The plaintiff had presented testimony from statistician Richard Drogin, who had also served as an expert for the plaintiffs in Dukes.  Drogin presented a random sampling analysis that purported to estimate the percentage of the defendant’s employees who had been misclassified for purposes of entitlement to overtime pay.  The trial court adopted a sampling approach that was modeled on (but not identical to) Drogin’s proposal.

The Court of Appeal held that the trial court’s approach was improper and violated the defendant’s due process rights for a variety of reasons, including that 1) the use of statistics to estimate the total number of employees who had been misclassified deprived the defendant of an opportunity to present relevant evidence and individualized defenses as to individual plaintiffs’ alleged misclassification; 2) the court’s statistical methodology was flawed because it arbitrarily used a sample of 20 employees without any basis for concluding that the sample was statistically significant; and 3) even the use of sampling as to damages was improper because the methodology used had an unacceptably high margin of error.
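
The sample-size point lends itself to a quick back-of-the-envelope check.  The sketch below is a minimal illustration assuming a simple random sample and the usual normal approximation for an estimated proportion (a simplification of whatever the trial plan actually measured): the margin of error shrinks only with the square root of the sample size, so a 20-person sample leaves a very wide band of uncertainty.

    import math

    def margin_of_error(p_hat, n, z=1.96):
        """Approximate 95% margin of error for an estimated proportion."""
        return z * math.sqrt(p_hat * (1 - p_hat) / n)

    # Worst case (p_hat = 0.5) for three hypothetical sample sizes.
    for n in (20, 100, 500):
        print(n, round(margin_of_error(0.5, n), 3))
    # 20  -> 0.219  (roughly plus-or-minus 22 percentage points)
    # 100 -> 0.098
    # 500 -> 0.044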

The Duran opinion is worthy of careful study for anyone considering the use of statistics in class certification proceedings, both in the employment context and in other types of class actions.  The opinion examines many of the due process problems with allowing proof of liability through statistical sampling, the most significant of which is that it tends to deprive a defendant of the opportunity to present evidence in its defense that it would be able to present in an individual case.  It also provides an additional illustration of what the Supreme Court considered an improper “trial by formula” in Dukes.

Read Full Post »

David H. Kaye, Distinguished Professor of Law and Weiss Family Faculty Scholar at the Penn State School of Law, recently published a fascinating commentary in the BNA Insights section of the BNA Product Safety & Liability and Class Action Reporters, entitled Trapped in the Matrixx: The U.S. Supreme Court And the Need for Statistical Significance.  In the article, Professor Kaye applies his vast expertise in the areas of scientific evidence and statistics in the law to add some color to the U.S. Supreme Court’s March 2011 decision in the securities class action Matrixx Initiatives, Inc. v. Siracusano.

For those not familiar with Matrixx, the case involved allegations that the makers of the cold remedy Zicam withheld information from investors suggesting that the product may cause a condition called anosmia, or loss of smell.  At the risk of oversimplifying, the holding of Justice Sotomayor’s unanimous opinion can generally be summarized as follows: in a securities fraud action arising out of an alleged failure to disclose information about a possible causal link between a product and negative health effects, the plaintiff need not allege that the omitted information showed a statistically significant probability that the product causes the ill effects in order to establish that the information was material.  The decision reaffirms the applicability of the reasonable investor standard for materiality announced in Basic Inc. v. Levinson, which looks to whether the omitted information would have “significantly altered the ‘total mix’ of information made available” to investors.

Thus, Matrixx eschews a bright-line rule (statistical significance) in favor of a more flexible “reasonable investor” standard.  Professor Kaye does not take issue with the Court’s rejection of a bright-line rule requiring a plaintiff to plead (and ultimately prove) the statistical significance of omitted information in the securities context.  Instead, he is critical of the Court’s failure to articulate in more detail the technical shortcomings of using statistical significance as a bright-line rule, and he cautions against interpreting Matrixx as suggesting that something less than statistical significance would be appropriate to prove a causal link between a product and a disease in other contexts.  In other words, it is one thing to say that the causal link does not have to be statistically significant in order for information about an association between the product and the disease to be meaningful to investors or consumers.  It is another thing to say that statistical significance is unimportant when it is necessary to actually show evidence of the causal link itself, such as in the toxic tort context.
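
For readers curious what the rejected bright-line test looks like in practice, here is a minimal sketch of a conventional two-proportion significance test in Python.  All of the counts are invented for illustration; they are not the actual Zicam adverse-event data.

    import math

    def two_proportion_p_value(x1, n1, x2, n2):
        """Two-sided p-value for a pooled two-proportion z-test."""
        p1, p2 = x1 / n1, x2 / n2
        pooled = (x1 + x2) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

    # Hypothetical counts: 9 anosmia reports among 10,000 product users versus
    # 3 among 10,000 users of other cold remedies.
    print(round(two_proportion_p_value(9, 10_000, 3, 10_000), 3))  # 0.083

Under the 0.05 convention, that result is not statistically significant, yet a reasonable investor might still regard nine adverse-event reports as part of the “total mix” of information, which is precisely why the Court declined to make the threshold a pleading requirement.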

Although I followed and generally agreed with Professor Kaye’s article from a legal perspective, there were some technical concepts discussed in the article that were admittedly a bit over my head.  Fortunately, I knew just who to ask for more insight, having recently worked with Justin Hopson of Hitachi Consulting on two CLE presentations discussing the use of statistics in class actions.  Here are some of Justin’s observations after reading the article:

  • The article is well-written.  Professor Kaye would make a good expert witness.
  • Kaye identifies studies showing that zinc sulfate caused anosmia.  He does not comment on zinc acetate or zinc gluconate, the active ingredients in Zicam.  It sounds like the causal link may have been known and available to use.  So this was not a case about “arbitrary statistics.”  Instead, the issue had to do with the measurement of an understood causal relationship.
  • Kaye describes the standard applied in Matrixx as looking to whether a reasonable investor would find the omitted information “sufficiently extensive and disturbing” to induce him to make a different investment decision.  Nonlawyer experts may be tempted to ask for a formulaic definition of this phrase, and it may not be obvious without explanation that the standard leaves the question of what is “sufficiently extensive and disturbing” to the factfinder.
  • Kaye talks about the historical treatment of .05 as the threshold “significance level” that makes something statistically significant.  I’ve often thought of the “significance level” as tied to the relative degrees of the “risks.”  If the risk of being wrong is death, then is 1-in-20 OK?  You really have to think through: What do Type I and Type II errors look like in my experiment?  What are the implications?
  • If one were really attempting to compute the potential causal connection between Zicam and anosmia, it might help to understand why the FDA suggested a background rate in “all cold remedies”.  If the causality is related to zinc sulfate, then isn’t that the common population?
  • The point that the 0.05 “convention” is somewhat arbitrary is an important one.  Kaye observes that “[a] useful rule of complaint drafting must avoid inquiries into the soundness of expert judgments about the population, the test statistic, and the model.”  Hmm…so how do we get a useful rule if we cannot attack the fundamentals?  Indeed, Kaye’s next point is that Bayesian analysis should sometimes be used.  All inferential statistics rest on assumptions, and any appropriate standard of pleading or proof should be flexible enough to allow the opposing lawyer to challenge every single assumption.
  • The observation that “the p-value, by itself, cannot be converted into a probability that the alternative hypothesis is true” is also very important.  This is a common misunderstanding among beginning statistics students, because we teach, “I fail to reject the null hypothesis, or I reject the null and accept the alternative hypothesis.”  It is therefore very important to specify the null and alternative hypotheses in exhaustive and mutually exclusive terms; otherwise, some other, unspecified conclusion may be the correct one.  (A short simulation illustrating this point appears after this list.)
  • The one thing I might challenge is the assertion that adverse event reports (AERs) are “haphazardly collected data.”  I’m not sure why Kaye chose this phrase.  The AERs are simply observations; it is only their cause that is in doubt.  It is not their function to establish the causal link.  Instead, the link would have to be established with other data, such as through a clinical trial using a well-organized data collection process.
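
To illustrate Justin’s point about p-values, here is a small simulation in Python.  The 10% base rate of real effects, the effect size, and the sample size are all assumptions chosen for illustration, not figures from Kaye’s article.

    import math
    import random

    random.seed(0)

    def p_value(sample):
        """Two-sided z-test of H0: mean = 0, assuming unit variance."""
        z = (sum(sample) / len(sample)) * math.sqrt(len(sample))
        return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

    significant = 0
    real_and_significant = 0
    for _ in range(20_000):
        real = random.random() < 0.10       # only 10% of studies test a real effect
        mu = 0.3 if real else 0.0           # modest effect size when one exists
        sample = [random.gauss(mu, 1) for _ in range(20)]
        if p_value(sample) < 0.05:
            significant += 1
            real_and_significant += real

    print(real_and_significant / significant)
    # Typically lands around 0.35-0.40 in this setup: among "statistically
    # significant" results, far from the 95% certainty a naive reading of
    # p < 0.05 might suggest.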

Read Full Post »

It’s not too late to sign up for this Thursday’s Strafford Publications Webinar, entitled Statistics in Class Action Litigation: Admissibility and the Impact of Wal-Mart v. Dukes.  Click the link on the title of the program for more information and to sign up.

For anyone looking for a sneak preview, here are the program materials, which are the result of the joint efforts of my co-presenters, Brian Troyer of Thompson Hine and Justin Hopson of Hitachi Consulting, and me.  We hope you can make it!

Read Full Post »

I’m very excited to be speaking at an upcoming Strafford Publications CLE webinar entitled Statistics in Class Action Litigation: Admissibility and the Impact of Wal-Mart v. Dukes.  The program is scheduled for Thursday, October 6, from 1:00 to 2:30 p.m. EDT.  This is a beefed-up version of a presentation that Justin Hopson and I gave for the Colorado Bar Association class actions subsection earlier this year.  Brian Troyer of Thompson Hine in Cleveland will be joining us this time around.  Here’s a synopsis of the program, followed by a link to the registration page:

As class certification standards have become more rigorous, the use of statistical evidence in certification proceedings has become an integral part of class action litigation. Effectively using or challenging statistics can be the difference between winning and losing a class certification motion.

Since statistical evidence is introduced through expert witness testimony, Daubert challenges may be an effective strategy. This raises the issue of the scope of the court’s inquiry into the merits at the class certification stage.

The prominent role of statistical evidence in class certification is underscored in Wal-Mart v. Dukes.  The Court weighed in both on the level of statistical proof needed to sustain certification and on the appropriate standard for a Daubert analysis.

My fellow panelists and I will provide class action counsel with a review of the Court’s treatment of statistical evidence and expert testimony in Wal-Mart v. Dukes, discuss admissibility and use of statistics in certification proceedings, and outline strategies for using statistics and cross-examining statistics witnesses.

We will offer our perspectives and guidance on these and other critical questions:

  • How did the Supreme Court in Wal-Mart v. Dukes address the level of Daubert analysis at the class certification stage?
  • What types of statistics can be introduced and what are the proper ways to utilize statistics?
  • What strategies can counsel use to effectively cross-examine statistics witnesses?
  • What are the recent trends in the use of statistical evidence to support a class certification motion?

After our presentations, we will engage in a live question and answer session with participants — so we can answer your questions about these important issues directly.

I hope you’ll join us.

For more information or to register, click here.

Read Full Post »
