LIPEZ, Circuit Judge.
This case requires us to address claims filed pursuant to the False Claims Act, 31 U.S.C. § 3729 (the "FCA"), alleging that the defendants submitted an application to the National Institute on Aging ("NIA") for research on Alzheimer's disease ("AD") which relied on falsified data. The district court granted summary judgment for the defendants. We vacate that ruling.
In 2006, Dr. Kenneth Jones ("Jones" or "Relator") filed a qui tam action under the FCA against defendants Brigham and Women's Hospital ("BWH"), Massachusetts General Hospital ("MGH"), Dr. Marilyn Albert, and Dr. Ronald Killiany (collectively, the "Defendants"). Jones claimed that the Defendants violated the FCA by including false statements in a grant application that was submitted to the NIA, an institute within the National Institutes of Health ("NIH"). The NIH is an agency of the United States Department of Health and Human Services. Jones alleged that statements in the Program Project Grant Application (the "Application") were predicated on falsified data and that the Defendants, knowing of this falsity, failed to take corrective action.
Jones timely appeals, maintaining that material factual disputes remain concerning the Defendants' conduct. He asserts that the record supports a conclusion that the Defendants violated the FCA by (1) "knowingly submitting an application for a grant to the [NIH] that was based on falsified and fraudulently manipulated study data and false statements of blinded, reliable methodologies," and (2) receiving NIH funds while knowingly in violation of regulations that require applicant institutions to investigate and report allegations of scientific misconduct.
After careful review of the record, we conclude that the district court abused its discretion by excluding or failing to consider certain expert testimony. It then committed an error of law by failing to consider statements of the parties and experts in a manner required by the summary judgment standard. When properly considered, those statements generate genuine issues of material fact concerning some of Relator's FCA claims. We begin our explanation of these conclusions by describing the research project in question, Relator's concerns, and the NIH grant application process. We then recount in some detail the contents of the parties' expert reports. In Part II, we consider the district court's failure to examine Dr. Daniel Teitelbaum's expert report in its entirety, its improper evaluation of portions of other expert reports, and Relator's other claims.
The alleged false claims were submitted to the NIA in conjunction with a Program Project Grant ("PPG") proposal focused on AD, a neurodegenerative illness associated with aging. Dr. Marilyn Albert acted as the Principal Investigator (the "PI") on the PPG and oversaw the work of both Killiany and Jones.
The proposed PPG consisted of four "Projects," long-term research studies focused on related issues, and four "Cores," each of which provided various types of support to the Projects. Killiany, a neuroanatomist, would head "Project 3," and utilize MRI to explore regions of interest ("ROIs") in the brain, including the entorhinal cortex ("EC"). Project 3 was a continuation of a study already in progress, the preliminary results of which were published in a 2000 paper authored by Killiany, Albert, and others.
Between 1995 and 1999, Killiany modified his outlining process. In his deposition, Killiany testified that as he worked through participants' scans, he encountered a number of "anatomical anomalies." When he encountered such anomalies, he went back and reviewed earlier outlines to ensure that those tracings properly considered the anatomical issue. If, upon reviewing an outline, he felt that it should be revised, he re-traced the EC boundary, the software recalculated the volumetric data, and he eventually sent the new data to Hyde.
As noted, the relator in this case, Jones, headed Core B. In that role, he supervised data management, assessed project progress, analyzed project data, and developed new analytic frameworks. In March 2001, Jones met with Albert and Dr. Keith Johnson, the leader of Project 2. Jones and Johnson alerted Albert that they had concerns about the data that Killiany produced prior to 1998, which had been used to demonstrate a statistically significant relationship between the volume of the EC and conversion to AD. Jones noted that there was more than one set of EC measurements for a number of the study participants.
In response, Albert asked Dr. Mark Moss, a noted neuroanatomist who had been involved with the PPG since its inception, to examine for accuracy the 23 re-measurements about which Jones and Johnson expressed concern. Moss reviewed the scans in question and handwrote notes expressing his opinion of the accuracy of each scan. He concluded that, with one exception, Killiany's second set of measurements more accurately outlined the EC. At Albert's request, Moss gave his notes to Killiany. Later, Killiany created a typewritten document ostensibly containing Moss's notes,
To secure funding from the NIH for age-related research, institutions must submit applications to the Center for Scientific Review at the NIH. The applications are then forwarded to the NIA, where they first undergo a peer review process conducted by a panel of independent experts in the relevant field. The panel considers a number of factors, including the quality and originality of the science proposed, the quality of the investigators, and the quality of the facilities in which the research will be conducted.
On October 1, 2001, Albert and MGH submitted a PPG Application for the 2002-2007 NIH funding cycle. As part of the Application, Defendants described the results of relevant preliminary studies, including the MRI study in which Killiany allegedly manipulated data. The Application also outlined the methods that would be used in future research and the protocols that the researchers planned to employ to ensure data reliability. The Defendants did not include the allegedly false underlying data itself, but did include a discussion of the results generated by the data and explained why those results supported the proposed study. The Application did not mention the existence of two sets of data or Jones's allegations of wrongdoing. Before submission, Albert and a representative of MGH certified the truthfulness of the Application's contents on its cover.
Saykin, the Defendants' expert, is a professor of radiology at the Indiana University School of Medicine and Director of the Indiana University Center for Neuroimaging, with a research focus on "the integration of neuroimaging and genomic data with emphasis on early detection of Alzheimer's disease."
Not surprisingly, Relator's expert offered a contrasting conclusion. Schuff is a professor of radiology at the University of California, San Francisco, an investigator at the San Francisco VA Medical Center, lead physicist at that facility's Center for Imaging of Neurodegenerative Diseases, and a researcher focusing on the development of new MRI methods and concepts to identify markers of neurodegenerative diseases, including AD.
Dávila-García, an Associate Professor at Howard University College of Medicine, opined that the statements in the Application concerning the reliability study were material to the NIH's decision to fund the Grant.
Dávila-García also stated that Albert's inquiry into the alleged misconduct was insufficient. Under 42 C.F.R. § 50.103(d)(8)-(9) (2001), she maintained, the Defendants were required to document that inquiry and report its results.
Teitelbaum has a Ph.D. in engineering and has experience working with "statistics, data analysis, predictive modeling, building computer simulation, mathematical optimization, [and] logistics and operations research." He stated that in his opinion, "the altered data points undo the validity of the study's conclusions" because "[r]ather than being a systematic visitation upon the study data, Killiany conducted the second set of measurements by cherry-picking the study subjects in a non-random fashion." Moreover, Teitelbaum stated that "the changes themselves were responsible for the significance of the results Killiany claimed to achieve. . . . Had the original data been used, Killiany could not have reported his findings in published scientific journals or to the NIH in support of an application for a Program Project Grant."
Regarding the reliability study, Teitelbaum explained that the reliability coefficient reported in the Application reflected only Killiany's original measurements, not the revised data on which the reported results were based.
When Jones filed his claim in 2006,
The district court found that Jones had not generated genuine issues of material fact on any of the elements at issue: falsity, materiality, and knowledge. Jones argues to the contrary. He notes that the Application relies on Killiany's research and claims that Killiany "fraudulently altered the MRI study data prior to 1998 to produce false results of a statistically significant correlation between conversion to AD and volume of the EC," and did so "after the scientists had conducted a blinded reliability study."
We review the district court's evidentiary determinations, namely, its decisions to admit or exclude expert testimony, for abuse of discretion. Alt. Sys. Concepts, Inc. v. Synopsys, Inc., 374 F.3d 23, 31 (1st Cir.2004); see also Gen. Elec. Co. v. Joiner, 522 U.S. 136, 142-43, 118 S.Ct. 512, 139 L.Ed.2d 508 (1997) (noting that abuse of discretion review applies to threshold evidentiary determinations made in connection with summary judgment motions). "Evidentiary rulings have the potential to shape and winnow the scope of the summary judgment inquiry, and a trial court should have as much leeway in dealing with those matters at the summary judgment stage as at trial." Alt. Sys. Concepts, Inc., 374 F.3d at 31-32. A court abuses its discretion if it commits "a material error of law," or if it "ignores a material factor deserving significant weight, relies upon an improper factor, or assesses only the proper mix of factors but makes a serious mistake in evaluating them." Downey v. Bob's Disc. Furniture Holdings, 633 F.3d 1, 5 (1st Cir.2011).
After reviewing the district court's evidentiary determinations and thereby settling the scope of the summary judgment record, we review the court's grant of summary judgment de novo. Schubert v. Nissan Motor Corp. in U.S.A., 148 F.3d 25, 29 (1st Cir.1998); see also Sch. Union No. 37 v. United Nat'l Ins. Co., 617 F.3d 554, 558 (1st Cir.2010). "[W]e will reverse a grant of summary judgment only if, making all factual inferences in favor of the non-moving party, a rational fact-finder could resolve the legal issue for either side." D & H Therapy Assocs., LLC v. Boston Mut. Life Ins. Co., 640 F.3d 27, 34 (1st Cir.2011). Where, as here, the parties have filed cross-motions for summary judgment, we must "determine whether either of the parties deserves judgment as a matter of law on facts that are not disputed." Sch. Union No. 37, 617 F.3d at 559 (quoting Littlefield v. Acadia Ins. Co., 392 F.3d 1, 6 (1st Cir.2004)).
The Defendants filed a motion in limine to preclude Relator from offering certain testimony and evidence, including aspects of the proposed testimony from each of his three expert witness reports. Defendants argued, among other things, that as a statistician, Teitelbaum was unqualified to assess inter-rater reliability or to opine when a reliability study should be conducted or what would be expected on the basis of reliability results. The Defendants also challenged the admissibility of various statements in Teitelbaum's report, arguing that his opinions lacked sufficient support or were ambiguous, misleading, or otherwise inadmissible.
Jones notes that the district court did not directly address the motion in limine in the memorandum accompanying its summary judgment ruling, and, indeed, did not mention Teitelbaum's report or its admissibility at all. Although a district court is afforded great discretion in deciding whether to admit or exclude opinion evidence, Crowe v. Marchand, 506 F.3d 13, 16 (1st Cir.2007), it cannot abdicate that responsibility altogether. In this case, the district court could not properly conduct its summary judgment analysis without determining the admissibility of Teitelbaum's report, which speaks directly to issues at the heart of Jones's claims. See, e.g., Cruz-Vázquez v. Mennonite Gen. Hosp., 613 F.3d 54, 57 (1st Cir.2010) (noting that it is the district court's responsibility to "determin[e] whether to admit or exclude expert testimony" based on an evaluation of whether "the expert's testimony both rests on a reliable foundation and is relevant to the task at hand" (internal quotation marks omitted)). By failing to exercise its discretion, namely, failing to admit or exclude Teitelbaum's report, the district court committed an error of law and, thereby, abused its discretion. See Downey, 633 F.3d at 5.
That error, however, does not necessarily mean that we must vacate the district court's summary judgment ruling. In the absence of the district court's analysis regarding the dispute over Teitelbaum's qualifications, we will make our own determination on the admissibility of Teitelbaum's testimony in order to determine the scope of the summary judgment record. See Boston Duck Tours, LP v. Super Duck Tours, LLC, 531 F.3d 1, 15 (1st Cir.2008) (stating that an appellate court may make a determination on "a relevant and required issue" where remanding "would be a waste of judicial resources and incompatible with the urgency of the issue before us").
According to the curriculum vitae submitted with his expert report, Teitelbaum has a Ph.D. in Engineering and Public Policy. Since his graduation in 1998, he has worked in a variety of settings with duties related to statistics, data analysis, predictive modeling, computer simulation, mathematical optimization, and logistics and operations research. In rendering his opinions, Teitelbaum stated that he analyzed a variety of materials including, among other things, Killiany's original and revised data sets, Killiany's 2000 paper, Albert's deposition testimony, Relator's Table, a data chart illustrating the revisions' effect on reliability, and excerpts from the Application.
We see no bar to the admission of Teitelbaum's testimony in light of this dispute over his qualifications. In our judgment, that dispute only goes to the weight of his opinion testimony before a fact-finder. For purposes of summary judgment, we therefore treat Teitelbaum's report as part of the record.
Jones alleged that Defendants made three different misrepresentations in conjunction with the NIH Application. First, Jones claimed that the Defendants described and relied on Killiany's research without reflecting the alleged fraudulent manipulation of the EC tracings, specifically, the unblinded, selective enlargement of certain tracings to produce results that exaggerated group differences.
Second, Jones alleged that the Defendants made false statements regarding reliability methodologies in the Application by reporting a 0.96 Pearson Correlation coefficient. Jones argued that this coefficient corresponded to the first set of tracings that Killiany made, even though the second set was the data that produced the statistically significant result that the Defendants relied upon in the Application. Jones argued that when a reliability study was conducted comparing Gomez-Isla's tracings and Killiany's second set of measurements, the correlation coefficient dropped to 0.54.
Finally, Jones alleged that the Defendants violated the FCA by falsely certifying that they were in compliance with Public Health Service ("PHS") terms and conditions and NIH Scientific Misconduct Regulations. Jones alleged that Defendants failed to meet their obligations under 42 C.F.R. § 50.103(c)(3), which requires applicant institutions to take "immediate and appropriate action as soon as misconduct on the part of employees or persons within the organization's control is suspected or alleged." Jones claimed that Albert's inquiry into Jones's allegations was insufficient and that the Defendants were obligated to conduct a full investigation and report the results of that investigation to the NIH.
The district court considered the parties' claims under an FCA falsity framework that we have since rejected. The court stated:
(Footnotes omitted.) After the parties briefed this appeal, we had occasion to clarify the proper framework for analyzing FCA claims. See generally Hutcheson, 647 F.3d 377; see also New York v. Amgen, Inc., 652 F.3d 103 (1st Cir.2011), cert. dismissed, ___ U.S. ___, 132 S.Ct. 993, 181 L.Ed.2d 570 (2011). In Hutcheson, we rejected rigid divisions between factual and legal falsity, and express and implied certification, noting that the text of the FCA does not make such distinctions. The use of such categories, in "our view[,]. . . may do more to obscure than clarify the issues before us." Hutcheson, 647 F.3d at 385-86. Instead, we take a broad view of what may constitute a false or fraudulent statement to avoid "foreclos[ing] FCA liability in situations that Congress intended to fall within the Act's scope." Id. at 387 (quoting United States v. Sci. Applications Int'l Corp., 626 F.3d 1257, 1268 (D.C.Cir.2010)) (internal quotation marks omitted). Taking this broad view does not, however, create limitless liability. Indeed, FCA liability continues to be circumscribed by "strict enforcement of the Act's materiality and scienter requirements."
The district court found that Jones "failed to articulate how the supposedly false data relates to a false statement in the Application," as "[t]here is no evidence that the EC data itself was submitted as part of the Application." Moreover, the district court found that "the act of tracing the boundaries of the EC is subjective and requires the exercise of scientific judgment" such that "two scientists who use the same protocol manually to trace the EC may nevertheless obtain different results." In dismissing the significance of the disagreement of the experts, the district court noted that such disputes over the exercise of scientific judgment may not form the proper basis for an FCA claim and cannot "yield a resolution where one can state with reasonable certainty that one conclusion is true and the other false."
Although it is true that the allegedly false EC volumetric data was not itself included in the Application, that fact is not determinative of the false claim allegation. The statute makes it a violation to "use[ ]. . . a false record or statement to get a false or fraudulent claim paid or approved by the Government." 31 U.S.C. § 3729(a)(2). A number of statements in the Application demonstrate reliance on the study's conclusions and therefore necessarily implicate the allegedly false data. For example, the Application contains the following statements:
These statements rely on the data challenged by Jones as false. In the language of the FCA, they "use . . . a false record."
We agree with the district court that "[e]xpressions of opinion, scientific judgments, or statements as to conclusions about which reasonable minds may differ cannot be false." (citing United States ex rel. Roby v. Boeing Co., 100 F.Supp.2d 619, 625 (S.D.Ohio 2000)). However, we disagree that the creation of the data in question was necessarily a matter of scientific judgment. The district court relied on "the undisputed fact that tracing the EC is highly subjective and thus two scientists who use the same protocol manually to trace the EC may nevertheless obtain different results." This reliance, however, misses the point that the various results produced in this case were obtained by one scientist purportedly using the same protocol. Although the decision as to which measurement method to employ was a question of scientific judgment, that is not the issue here. As Schuff noted, Killiany and Gomez-Isla "had reached a conceptual understanding [about] how to trace the [EC]" and used that protocol in their initial measurements, which demonstrated high inter-rater reliability. The real issue is whether, as Schuff opines, Killiany's revisions "substantially deviate[d] . . . from the initial protocol for [the EC] that [he and Gomez-Isla] had established to the point that the initial and new markings [were] no longer consistent" or capable of meaningful comparison. Indeed, Killiany himself explained that one aim of the project was to substantiate whether "two knowledgeable individuals" could "apply a definition of the [EC] . . . across a large number of MRI scans" and "actually even agree on where the [EC] would be."
Killiany suggested that over the course of measuring 103 participants' brain scans, he went through a learning curve during which he discovered various anatomical anomalies that required modifications of his measurement technique. He explained that after he came across this type of anomaly, he would review previously outlined scans and evaluate them in accordance with his now-modified technique. Although the Defendants' expert Saykin stated that such re-measurements to improve accuracy were not "unusual or inappropriate. . . as long as [Killiany] remained blinded to the clinical status of the participants," the record raises questions about Killiany's explanation.
The distribution of altered data among and within participant groups raises the greatest concern. Of the 103 participants in the final study population, 30 participants' scans were re-traced. Teitelbaum suggested that if the changes were in fact revisions for accuracy, one would expect some re-tracings to be smaller than the corresponding original tracing, resulting in a lower volume measurement, and some re-tracings to be larger, resulting in a higher volume measurement. Moreover, Teitelbaum stated, one would expect to see such enlargements and reductions occurring randomly in all groups of participants. Instead, the most frequent and dramatic changes occur in the normal group. Measurements were changed for 13 of 24 normals (54.2 percent), with volume increases from 0.3 to 283.3 percent.
In essence, moderate revisions occurred seemingly randomly in the converter and questionable groups, but relatively large revisions were concentrated in the smallest half of the control group. As Schuff stated, most of Killiany's revisions were "quite extensive," "biased toward normal subjects," frequently "inconsistent with the initially adapted protocol agreed to by the raters," and "not founded on scientific reason." Schuff further explained that "[i]f Killiany had made the second set of measurements as part of his `learning curve,'" sound scientific practice required him "to generate documentation and work papers in connection with his corrections . . . and share[ ] what he learned with his colleagues." Killiany testified that he does not recall discussing his decision to perform re-measurements with anyone on the PPG, and that after he initially discussed the EC boundaries with Gomez-Isla, the two did not have subsequent discussion on the topic. The upward revisions in the control group were critical to the study's predictive value; if normal subjects' EC volumes were distinctly larger than those of subjects in the converter or questionable groups, the result was physical evidence of potentially great predictive significance.
The revisions in question do not implicate questions of scientific judgment as the district court suggested, because all the measurements in question were purportedly generated by a single protocol that Killiany and Gomez-Isla agreed to before beginning the measurements. Indeed, as Schuff noted, whether Killiany's measurements were more or less accurate than the initial measurements is not at issue. Even if Killiany's re-measurements fall within an accepted range of scientific accuracy, a question remains as to whether the data was falsified by intentionally exaggerating the EC boundaries of normal subjects to achieve a desired result. We conclude that the distribution of revisions presents a genuine issue of material fact as to whether, as Teitelbaum put it, Killiany cherry-picked measurements to revise in a non-random fashion "in order to produce data that would support his hypothesis on the role of EC volume and the prediction of prodromal AD."
Using data provided by the Defendants during discovery, Jones created a data table (the "Relator's Table") illustrating the effect of Killiany's revisions on the study data.
In his deposition, Killiany stated that he was blinded to the group status of study participants when he was making his tracings and transmitting the volumetric data generated from those tracings. Albert stated that she believed that Killiany had stayed blinded because he did not have access to participants' diagnoses and because he told her that he had been blinded. Johnson stated that it was "[his] understanding. . . that the operator who is implementing the protocol . . . would be blinded to the classification of the subject being [traced]." Saykin concluded that "[b]ased on all the information [he] reviewed, [he] believed that Dr. Killiany did remain blind to the clinical status of the cases he was analyzing or re-analyzing anatomically, as would be standard and appropriate in this type of research."
Relying on Albert and Johnson's depositions, as well as Jones's acknowledgment that he had no evidence that Killiany was not blinded, the district court concluded that there was no genuine dispute that Killiany remained blinded while tracing the scans.
On appeal, Jones maintains that he referred only to his direct personal knowledge when he indicated that he had no evidence that Killiany was not blinded. Moreover, Jones insists that his personal knowledge is not determinative. Rather, Jones argues, Teitelbaum's independent analysis of the raw data and the Relator's Table each created a genuine issue of fact as to whether Killiany remained blinded when he made his re-measurements.
Jones alleged that the Application contained misrepresentations about the pertinent reliability study—specifically, that the study cited in the Application was conducted on the first set of data, not the second set, which was the data ultimately used. The Application stated that the methodology used to manually draw image maps had demonstrated an inter-rater reliability coefficient between 0.94 and 0.99. According to Albert, such reliability numbers are "very high" and demonstrate consistency in results between raters, here Killiany and Gomez-Isla. Relator claimed that when the second set of data was tested for reliability, the Pearson Correlation coefficient dropped from 0.96 to 0.54.
The district court disregarded Jones's testimony on this issue. Based on Jones's experience as lead statistician for Core B, the district court thought that Jones was "likely . . . qualified to provide expert testimony regarding a reliability study," but nonetheless rejected his testimony because (1) "it is not clear . . . that the Relator has put himself forth as an expert consistent with [Federal Rule of Civil Procedure] 26(a)(2)" and (2) the Relator failed to provide a proper foundation upon which to accept his conclusions. The district court found Jones similarly deficient as a lay witness, finding that he did "not provide sufficient competent evidence of his personal knowledge," because (1) much of the data he received from Killiany's research came by way of another statistician on the project, and (2) he provided no evidence conveying "when and how the reliability study was conducted, who randomly selected the twenty-five subjects for the study, and who actually conducted the study."
Jones did, in fact, list himself as a non-retained expert in his Rule 26(a)(2) expert disclosure, see Fed.R.Civ.P. 26(a)(2), and, to form his opinion, relied on personal knowledge obtained as leader of the statistical Core and information provided by the Defendants during discovery, see Fed. R.Evid. 702. The district court abused its discretion in excluding from consideration Jones's reliability study testimony. See Alt. Sys. Concepts, Inc., 374 F.3d at 32. Moreover, the district court did not account for the apparently undisputed fact that no reliability study was conducted on the re-measurements. For example, Albert was asked during her deposition whether "anybody . . . conducted any reliability studies on the remeasurements?" She replied, "No. . . . We didn't have reliability data for—we didn't have another rater available. We would have had to redo the reliability studies, and to me the critical thing . . . was that [the measurements] were accurate." As Teitelbaum points out, however, the question was not only one of accuracy, but also one of reliability, specifically whether another reliability study was necessary after Killiany's re-measurements. Albert thought not, saying, "I thought that we were following the guidelines, that they were the same guidelines established in the reliability study,
Teitelbaum, on the other hand, opined that "it was inappropriate to claim that a blinded reliability study had been used in the generation of preliminary data when the data reported was not generated pursuant to the reported reliable methodologies" and had not been subject to a reliability study. Similarly, Jones maintains that after he "analyzed the impact of the altered data on both the reliability study and the reported volumetric data results," it became clear that the changes that Killiany made were responsible for the statistical significance of the reported data and that the altered data resulted in a vastly lower Pearson Correlation coefficient. Any technique modifications that Killiany made after discovering anatomical anomalies meant that he was no longer using the precise method Gomez-Isla had employed when she previously outlined ECs under the original methodology. The reliability numbers published in the Application conveyed the reliability between Killiany and Gomez-Isla under the original, unmodified approach to outlining the EC. Relator and his experts suggest that once Killiany deviated from this methodology, another reliability study should have been conducted and its results included in the Application.
There are substantial disputes here on the veracity of the reliability study data included in the Application. The district court erred in concluding otherwise.
Jones alleged that the Defendants misrepresented their compliance with the Public Health Service ("PHS") terms and conditions, which outline investigation and reporting requirements when scientific misconduct is reported. Regulation 42 C.F.R. § 50.103(d)(1) requires each institution to "inquir[e] immediately into an allegation or other evidence of possible misconduct." The institution must contact outside authorities and report any situation in which, based on the initial inquiry, the institution determines that an investigation is warranted. 42 C.F.R. § 50.103(d). Jones claims that the inquiry Moss conducted was patently inadequate to satisfy the requirements of § 50.103(d). Relator also argues that Defendants were required to create a written record of the inquiry conducted into Killiany's alleged misconduct and report the results to the NIH Office of Research Integrity.
Noting that Relator had not properly pled his PHS terms and conditions certification claim in his Second Amended Complaint, the district court stated that Relator made this claim for the first time in his motion for summary judgment. Further, the district court stated that even if the claim were considered on the merits, it could not withstand summary judgment. The district court acknowledged that the "Applicant Organization" certification signed by the MGH Director of Grants and Contracts promised that MGH would comply with PHS terms and conditions, including 42 C.F.R. § 50.103.
We focus on the district court's procedural critique of Relator's pleading. In so doing, we agree with the district court that this claim was not properly pled.
In an effort to avoid this conclusion, Jones points to three paragraphs in the Second Amended Complaint that he argues demonstrate that his PHS terms and conditions compliance claim was properly pled:
These statements do not contain any references to the Defendants' alleged violations of the PHS terms and conditions, namely, the failure to adequately investigate Jones's allegations of misconduct or report the Moss inquiry that was conducted in 2001. Moreover, read in context, paragraph 27's reference to "false and fraudulent statements" does not refer to a certification of compliance with the PHS terms and conditions. Rather, it refers to statements set forth in the complaint (see ¶¶ 24-26) about the validity of the data, the blinding protocols used, and the inter-rater reliability coefficient applicable to the reported data.
The Second Amended Complaint also generally alleged that the Defendants "knowingly failed to take corrective action or disavow the false and fraudulent data after learning that their representations to NIH were false." But that allegation, too, read in context, does not aver an independent PHS terms and conditions certification claim. Instead, it relates to Albert's knowledge regarding the validity of Killiany's data and the claims made in the Application based on that data. Thus, as the district court did, we conclude that any distinct, independent FCA claim resting on a statement of adequate investigation or an omission of proper reporting was not originally pled in the Second Amended Complaint. We affirm the district court's grant of summary judgment on this claim.
A false statement is material if it has "a natural tendency to influence, or [is] capable of influencing, the decision of the decisionmaking body to which it was addressed." Loughren, 613 F.3d at 307 (alteration in original) (quoting Neder v. United States, 527 U.S. 1, 16, 119 S.Ct. 1827, 144 L.Ed.2d 35 (1999)) (internal quotation marks omitted). Jones claimed that the allegedly manipulated data relied upon and the reliability coefficient reported were material misrepresentations that would have a natural tendency to influence the Application reviewers.
Because the district court failed to address Teitelbaum's and Jones's statements and other relevant record evidence, it did not make a materiality determination with regard to Relator's data manipulation claims. On the record before us, we conclude that it is likely that Relator's Table, Jones's testimony, and Teitelbaum's opinions about the manipulation of data—if credited—would be deemed material by a fact-finder. The allegedly false data produced the preliminary research results relied upon in the Project 3 proposal in the Application. If established, the notion that Killiany had been unblinded and had selectively manipulated data to produce a statistically significant result would certainly have had "a natural tendency to influence" the reviewers' evaluations. See id.
Having excluded portions of the reports from experts Schuff and Dávila-García, who alleged that the statements regarding the reliability study were material to NIH's decision to fund the Grant, the district court found that Jones "fail[ed] to satisfy the materiality element with respect to the statements concerning the reliability study."
The district court found Schuff unqualified to "testify as to the materiality of a statement regarding the NIH review process" because he did not "list any qualifications regarding the NIH application review process or the peer editing process." To the contrary, Schuff's curriculum vitae—submitted with his report—listed four NIA/NIH grant proposals on which he served as a reviewer between 2006 and 2009, as well as numerous other experiences as a grant and peer reviewer for other institutions. Schuff's curriculum vitae also stated that he has specialized knowledge and training in relevant topics such as neuroimaging and neurodegeneration, has published more than 150 peer-reviewed articles in those and related fields, and has acted as the Principal Investigator on several clinical trials that utilized MRI to examine ROIs, including the EC. In light of the information in Schuff's curriculum vitae, we conclude that the district court abused its discretion by excluding Schuff's opinions regarding the materiality of the Application statements discussing the reliability study.
The district court similarly excluded Dávila-García's statement "that the reliability study was material to NIH's decision to fund the Grant." Although the court stated that Dávila-García "appears qualified to opine" on the materiality of statements in the Application concerning the reliability study, it rejected her report because it found that she "[did] not support her opinion with any evidence from the record." Specifically, the district court found that although Dávila-García stated that the reliability analysis was material because it was a required element of the application, it excluded her opinion because she "[did] not . . . provide any support for [her] statement from a statute, regulation, instruction manual, or . . . personal experience[,]. . . [and did not] cite any of the reviewers' comments from the Pink Sheets regarding the strengths and weaknesses of the Application." Further, the district court found that the record contradicted Dávila-García's testimony, because the Pink Sheets stated that "[t]he use of the Pearson correlation coefficients and Student t-tests to assess reliability, as proposed, is inadequate." Despite the reviewers' disapproval of the proposed reliability methodology, they favorably evaluated the Application. Therefore, the Defendants argued, the reliability study could not have been material to the NIH's determination. The district court agreed.
The district court abused its discretion by concluding that Dávila-García did not sufficiently rely on personal experience in formulating her opinion about the materiality of the reliability study. If a witness relies primarily on experience, she must "explain how that experience leads to the conclusion reached, why that experience is a sufficient basis for the opinion, and how that experience is reliably applied to the facts." Fed.R.Evid. 702 advisory committee's note. Dávila-García had personal experience as a peer reviewer and was familiar with the NIH grant application process.
Moreover, we note that the Pink Sheet statement cited by the Defendants and the district court addressed Section D of the Application, entitled "Research Design and Methods." Section D outlined the methodologies that would be employed in future studies to be conducted if grant funds were awarded. In contrast, reliability study results from past studies were presented in Section C, "Progress Report/Preliminary Studies." Although we agree with the district court that the methodology concerns described in the Pink Sheets indicate that the Defendants' proposed method of future reliability testing was immaterial to the reviewers' decision, that conclusion does not resolve the materiality question as to the results of the reliability study conducted in previous research. The alleged falsity in this case does not rest upon the adequacy of the reliability method to be employed in the future, but rather on results allegedly already obtained in a past reliability study and relied upon by the applicants in their proposal.
In sum, the evidence brought forth by Relator on the reliability issue generates an issue of fact regarding materiality, specifically, whether providing a reliability coefficient of 0.54, or stating that no reliability study was conducted on the measurements that gave rise to the statistically significant results, would be capable of influencing the reviewers' decision.
The text of the FCA and our case law make clear that liability cannot arise under the FCA unless a defendant acted knowingly. See 31 U.S.C. § 3729(a); Hutcheson, 647 F.3d at 388. In the district court, Jones pointed to Relator's Table and Teitelbaum's expert report, among other record evidence, as proof that the defendants knowingly created falsified data and used that data to support statements in the Application espousing the promise of research demonstrating the significant predictive value of the volume of the EC in determining who will later develop AD.
Jones also alleged that the parties knowingly submitted a reliability coefficient from Killiany's study that did not incorporate his second set of data, the set from which the study's conclusions were drawn. The district court found that Jones failed to establish that the parties knew that they were submitting a "statement regarding the Pearson Coefficient [that] was inaccurate." The district court found that
As noted, the district court's conclusion on this matter ignores Defendant Albert's statement that she did not have an additional reliability study performed on the re-measurements because she was focused on the accuracy and did not have another rater available for re-measurements. See supra Part II.B.2. Furthermore, Killiany testified that he reviewed Gomez-Isla's tracings sometime in 1996 or 1997 after the reliability study had been conducted. Killiany recognized that in tracing the scans upon which the reliability study was based, Gomez-Isla was "trying to apply the same working definition [that he] was trying to apply at the time." At the same time, Killiany testified that he continued tracing scans into 1998 or 1999, making revisions when necessary, and acknowledged that it is possible that he re-measured scans that were part of the initial reliability study well after that study had concluded.
In light of the foregoing arguments, and construing all facts in favor of Relator, we conclude that Jones generated a genuine issue of material fact as to whether the Defendants acted knowingly when allegedly making false representations in the Application.
The dispute at the heart of this case is not about resolving which scientific protocol produces results that fall within an acceptable range of "accuracy." Nor is it about whether Killiany's re-measurements, the basis for the preliminary scientific conclusions reported in the Application, are "accurate" insofar as they fall within a range of results accepted by qualified experts. Rather, the essential dispute is about whether Killiany falsified scientific data by intentionally exaggerating the re-measurements of the EC to cause proof of a particular scientific hypothesis to emerge from the data, and whether statements made in the Application about having used blinded, reliable methods to produce those results were true. If the jury finds that statements in the Application are false, it must also determine whether those statements were material and whether the Defendants acted knowingly in violating the FCA.
Because we conclude that genuine issues of material fact remain on these central issues, we vacate the district court's order and remand for further proceedings consistent with this opinion. Costs are awarded to the appellant.
So ordered.
MGH representative Marcia L. Smith signed the following certification:
The district court found—as the name of the regulation suggests—that the Post-Award Requirements are forward-looking and apply only once a grant has been awarded. Thus, because the grant in question was not funded until 2002, the Post-Award Requirements did not apply to records pre-dating 2002. We agree with the district court that Jones's spoliation claim fails on this basis. The district court also noted that applicant institutions are required to retain documents related to misconduct inquiries for three years, a time span that terminated in 2004 in this case. Jones gives us no basis to doubt this determination; indeed, we note that Jones first brought suit in 2006.