DADE COUNTY SCHOOL BOARD vs. EDMOND G. TORELLI, 86-002017 (1986)
Division of Administrative Hearings, Florida Number: 86-002017 Latest Update: Oct. 09, 1986

Findings Of Fact

At all times pertinent to the allegations in the Notice of Charges, the Respondent was employed as an Assistant Principal in the Dade County School System and held a continuing contract as a teacher. In December 1985, while employed as Assistant Principal at the Westview Middle School, he applied for placement on the roster of eligible candidates for appointment to positions as Principal or Assistant Principal in the Dade County School System.

The Respondent's application was forwarded through appropriate channels to the Office of Management Selection, where it was reviewed by Mr. Coleman, the Director. Mr. Coleman determined that the application did not include the three performance evaluations rendered on the Respondent immediately prior to its submission. Because the school board rule on this subject had been changed in December 1984 to require "exceeds performance standards" ratings on the three prior evaluations before an individual could be considered for principal/assistant principal positions, Mr. Coleman telephoned the Respondent and spoke to him about this. At that point, Mr. Coleman already knew of an investigation, conducted shortly before the submission of the application, into an allegation that the Respondent had used excessive force in disciplining a student, and he was satisfied that the application was not likely to be approved. He therefore attempted to dissuade the Respondent from submitting the application, but was unable to do so.

When the application was received, it had only one evaluation form attached. Ms. Mendez, Mr. Coleman's employee, accordingly contacted the Respondent again by telephone and requested that he submit the other two evaluations. It is at this point that the Respondent claims he went to his personal file, extracted the two pertinent evaluation forms by date alone, and submitted them to the school board without looking to see what ratings appeared on them. When received by the school board, the three ratings in question, covering the periods August 1982 through June 1983, August 1983 through June 1984, and August 1984 through June 1985, all reflected an overall assessment that the Respondent's performance was either above or exceeded performance expectations or standards.

The three evaluations in question were prepared by Ms. Jerkins (August 1982 through June 1983) and Mr. Berteaux (August 1983 through June 1984 and August 1984 through June 1985). Ms. Jerkins categorically denies ever having rendered an annual performance evaluation on the Respondent with an "exceeds performance standards" rating, notwithstanding what appears on the rating form bearing her signature contained in Petitioner's Composite Exhibit 4, dated June 20, 1983. That form reflects an "exceeds expected performance standards" rating; she rated him for the period as "meets expected performance standards." She did, on March 2, 1983, rate the Respondent "outstanding" in each listed category on a reference evaluation form relating to his application for a position of Supervisor II in Computer Education. She considers a rating of outstanding appropriate for that purpose, but she did not then and would not now rate him as exceeding the performance standards of an Assistant Principal.
It is this Assistant Principal position to which the performance evaluation form submitted by the Respondent with his application for placement on the principal's roster relates. The Respondent's contention that the reference evaluation of outstanding equates to an "exceeds performance standards" rating is not supported by the facts.

With respect to the 1983/84 rating, Mr. Berteaux evaluated the Respondent at the end of the school year and admittedly first evaluated him as having exceeded performance standards. A copy of this performance report, reflecting the "exceeded standards" rating, was forwarded to the Respondent. Before being finalized through channels, however, the rating was changed by Mr. Berteaux as a result of his receipt of a report of investigation into an allegation that the Respondent had used excessive force against a student. When the report of investigation, which apparently indicated that the allegation of excessive force was well founded, was given to Mr. Berteaux, he advised the Respondent by telephone that the evaluation, which previously indicated that the Respondent "exceeds" performance standards, would be lowered to a rating that the Respondent "meets" performance standards. This was done, and the lowered rating constitutes the official and final evaluation of the Respondent for that period of time. Mr. Berteaux cannot say with any certainty whether a copy of the amended evaluation form was furnished to the Respondent. However, he is certain that he personally advised the Respondent of the lowering of the performance evaluation by telephone, because the Respondent had already gone on summer vacation when the evaluation was completed. It is most likely that a copy of the lowered evaluation was not given to the Respondent; in fact, the form that appears in the school board's records bears a signature of the Respondent which does not appear to be his bona fide signature. No evidence was presented by the Petitioner to establish that the 1984/85 evaluation, which bears a rating of above performance expectations, was inaccurate, and there is no allegation in the notice of charges that any impropriety exists with regard to that evaluation form.

On February 24, 1986, the Respondent appeared with counsel before Judge Norman C. Rotteger, Jr., in the United States District Court for the Southern District of Florida, and entered a plea of guilty to the charge of forging a U.S. Treasury check in violation of Title 18, United States Code, Section 495. A finding of guilty was entered, but imposition of a sentence of confinement was withheld, and the Respondent was placed on probation for a period of three years.

Mr. Torelli does not deny having placed his mother's name on the Social Security check made payable to her, even though she had been deceased for more than a year at the time he did so. He contends, however, that a representative of the Social Security Administration office in Hollywood, Florida, to whom he spoke in regard to the disposition of the check, advised him that this was the appropriate thing to do. The Respondent presented no evidence other than his own testimony to that effect. The Petitioner presented the testimony of the two Social Security Administration employees with whom the Respondent allegedly spoke; both denied having told him to sign or cash his mother's Social Security check, both stated that doing so is contrary to Social Security policy, and neither has ever advised a client to sign or cash a Social Security check not made out to that client.
Absent any evidence to the contrary other than the testimony of the Respondent, therefore, it is found that the Respondent did forge his mother's name to the check and cash it; that such action was without proper authority and was unlawful; and that he did so of his own volition. The Respondent indicates that he has presented evidence to the U.S. Attorney which, he believes, will result in the finding of guilty being vacated. Such evidence was not presented at this hearing, and for the purposes of this hearing it is found that the conviction was proper and properly entered.

Both Mr. Coleman and Dr. Gray indicated that the actions of the Respondent outlined above, including the misrepresentation of his qualifications in regard to his application for placement on the principal's roster and his conviction in federal district court, would have a substantial impact on the Respondent's fitness to serve within the school system. Because it is imperative that a principal be able to have and place trust in his employees, Dr. Gray concluded that the Respondent's actions in both regards pose a substantial question as to his integrity and have a serious bearing on his capability to function as an educator. They have affected his ability to serve as a role model and as an example to his students.

Recommendation

In light of the foregoing Findings of Fact and Conclusions of Law, it is therefore recommended that the Respondent, Edmond G. Torelli, be dismissed from employment with the School Board of Dade County, effective as of the date of the final order of dismissal.

RECOMMENDED in Tallahassee, this 9th day of October, 1986.

ARNOLD H. POLLOCK
Hearing Officer
Division of Administrative Hearings
The Oakland Building
2009 Apalachee Parkway
Tallahassee, Florida 32301
(904) 488-9675

Filed with the Clerk of the Division of Administrative Hearings this 9th day of October, 1986.

APPENDIX TO RECOMMENDED ORDER, CASE NO. 86-2017

The following constitutes my specific rulings, pursuant to Section 120.59(2), Florida Statutes, on all of the Proposed Findings of Fact submitted in this case by Petitioner. Respondent failed to submit Proposed Findings of Fact in a timely fashion.

1. Accepted and incorporated in Finding of Fact 1.
2. Accepted but not specifically related.
3. Accepted and incorporated in Finding of Fact 7.
4. Accepted and incorporated in Findings of Fact 8 and 9.
5. Incorporated in Findings of Fact 2 and 3.
6. Incorporated in Findings of Fact 2, 4 and 5.
7. Incorporated in Finding of Fact 4.
8. Incorporated in Finding of Fact 16.
9. Incorporated in Finding of Fact 11.
10. Incorporated in Finding of Fact 16.

COPIES FURNISHED:

Phyllis O. Douglas, Esquire
School Board of Dade County
Suite 301, 1450 N.E. Second Avenue
Miami, Florida 33132

Edmond G. Torelli
3905 N.W. 76 Terrace
Davie, Florida 33319

Dr. Leonard Britton, Superintendent
Dade County Public Schools
Board Administration Building
1410 Northeast Second Avenue
Miami, Florida 33132

Honorable Ralph D. Turlington
Commissioner of Education
The Capitol
Tallahassee, Florida 32301

NATURE'S WAY NURSERY OF MIAMI, INC. vs FLORIDA DEPARTMENT OF HEALTH, AN EXECUTIVE BRANCH AGENCY OF THE STATE OF FLORIDA, 17-005801RE (2017)
Division of Administrative Hearings, Florida Filed:Tallahassee, Florida Oct. 19, 2017 Number: 17-005801RE Latest Update: Apr. 23, 2019

The Issue

The issues to be decided are (i) whether Emergency Rule 64ER17-7(1)(b)-(d) constitutes an invalid exercise of delegated legislative authority, and (ii) whether Respondent's scoring methodology, which comprises several policies and procedures for determining the aggregate scores of the nurseries that applied for Dispensing Organization licenses in 2015, constitutes an unadopted rule.

Findings Of Fact

BACKGROUND AND PARTIES

Respondent Florida Department of Health (the "Department" or "DOH") is the agency responsible for administering and enforcing laws that relate to the general health of the people of the state. The Department's jurisdiction includes oversight of the state's medical marijuana program. Art. X, § 29, Fla. Const.; § 381.986, Fla. Stat.

Enacted in 2014, section 381.986, Florida Statutes (2015) (the "Noneuphoric Cannabis Law"), legalized the use of low-THC cannabis by qualified patients having specified illnesses, such as cancer and debilitating conditions that produce severe and persistent seizures and muscle spasms. The Noneuphoric Cannabis Law directed the Department to select one dispensing organization ("DO") for each of five geographic areas referred to as the northwest, northeast, central, southwest, and southeast regions of Florida. Once licensed, a regional DO would be authorized to cultivate, process, and sell medical marijuana, statewide, to qualified patients.

Section 381.986(5)(b), Florida Statutes (2015), prescribed various conditions that an applicant would need to meet to be licensed as a DO, and it required the Department to "develop an application form and impose an initial application and biennial renewal fee." DOH was, further, granted authority to "adopt rules necessary to implement" the Noneuphoric Cannabis Law. § 381.986(5)(d), Fla. Stat. (2015). Accordingly, the Department's Office of Compassionate Use ("OCU"), which is now known as the Office of Medical Marijuana Use, adopted rules under which a nursery could apply for a DO license. Incorporated by reference in these rules is a form of an Application for Low-THC Cannabis Dispensing Organization Approval ("Application"). See Fla. Admin. Code R. 64-4.002 (incorporating Form DH9008-OCU-2/2015). To apply for one of the initial DO licenses, a nursery needed to submit a completed Application, including the $60,063.00 application fee, no later than July 8, 2015.1/ See Fla. Admin. Code R. 64-4.002(5).

Petitioner Nature's Way Nursery of Miami, Inc. ("Nature's Way"), is a nursery located in Miami, Florida, which grows and sells tropical plants to big box retailers throughout the nation. Nature's Way timely applied to the Department in 2015 for licensure as a DO in the southeast region.

THE 2015 DO APPLICATION CYCLE

These rule challenges arise from the Department's intended denial of Nature's Way's October 19, 2017, application for registration as a medical marijuana treatment center ("MMTC"), which is the name by which DOs are now known. Nature's Way asserts that it qualifies for licensure as an MMTC because it meets the newly created "One Point Condition," which can be satisfied only by a nursery, such as Nature's Way, whose 2015 application for licensure as a DO was evaluated, scored, and not approved as of the enactment, in 2017, of legislation that substantially overhauled the Noneuphoric Cannabis Law. See Ch. 2017-232, Laws of Fla. The current iteration of section 381.986, in effect as of this writing, will be called the "Medical Marijuana Law."

The One Point Condition operates retroactively in that it establishes a previously nonexistent basis for licensure that depends upon pre-enactment events. This is analogous to the legislative creation of a new cause of action, involving as it does the imposition of a new duty (to issue licenses) on the Department and the bestowal of a new right (to become licensed) on former applicants based on their past actions.
Facts surrounding the inaugural competition under the Noneuphoric Cannabis Law for regional DO licenses are material, therefore, to the determination not only of whether an applicant for licensure as an MMTC under the Medical Marijuana Law meets the One Point Condition, but also of the (in)validity of the emergency rule at issue, and the (il)legality of the agency statements alleged to be rules by definition, upon which the Department relies in applying the One Point Condition. To understand the issues at hand, it is essential first to become familiar with the evaluation and scoring of, and the agency actions with respect to, the applications submitted during the 2015 DO application cycle.

The Competitive, Comparative Evaluation

As stated in the Application, OCU viewed its duty to select five regional DOs as requiring OCU to choose "the most dependable, most qualified" applicant in each region "that can consistently deliver high-quality" medical marijuana. For ease of reference, such an applicant will be referred to as the "Best" applicant for short. Conversely, an applicant not chosen by OCU as "the most dependable, most qualified" applicant in a given region will be called, simply, "Not Best."

Given the limited number of available DO licenses under the Noneuphoric Cannabis Law, the 2015 application process necessarily entailed a competition. As the Application explained, applicants were not required to meet any "mandatory minimum criteria set by the OCU," but would be evaluated comparatively in relation to the "other Applicants" for the same regional license, using criteria "drawn directly from the Statute." Clearly, the comparative evaluation would require the item-by-item comparison of competing applicants, where the "items" being compared would be identifiable factors drawn from the statute and established in advance.

Contrary to the Department's current litigating position, however, it is not an intrinsic characteristic of a comparative evaluation that observations made in the course thereof must be recorded using only comparative or superlative adjectives (e.g., least qualified, qualified, more qualified, most qualified).2/ Moreover, nothing in the Noneuphoric Cannabis Law, the Application, or Florida Administrative Code Rule 64-4.002 stated expressly, or necessarily implied, that in conducting the comparative evaluation, OCU would not quantify (that is, express numerically an amount denoting) the perceived margins of difference between competing applications. Quite the opposite is true, in fact, because, as will be seen, rule 64-4.002 necessarily implied, if it did not explicitly require, that the applicants would receive scores expressing their relative merit in interpretable intervals.

Specifically, the Department was required to "substantively review, evaluate, and score" all timely submitted and complete applications. Fla. Admin. Code R. 64-4.002(5)(a). This evaluation was to be conducted by a three-person committee (the "Reviewers"), each member of which had the duty to independently review and score each application. See Fla. Admin. Code R. 64-4.002(5)(b). The applicant with the "highest aggregate score" in each region would be selected as the Department's intended licensee for that region.

A "score" is commonly understood to be "a number that expresses accomplishment (as in a game or test) or excellence (as in quality) either absolutely in points gained or by comparison to a standard." See "Score," Merriam-Webster.com, http://www.merriam-webster.com (last visited May 30, 2018).
Scores are expressed in cardinal numbers, which show quantity, e.g., how many or how much. When used as a verb in this context, the word "score" plainly means "to determine the merit of," or to "grade," id., so that the assigned score should be a cardinal number that tells how much quality the graded application has as compared to the competing applications. The language of the rule leaves little or no doubt that the Reviewers were supposed to score the applicants in a way that quantified the differences between them, rather than with superlatives such as "more qualified" and "most qualified" (or numbers that merely represented superlative adjectives).

By rule, the Department had identified the specific items that the Reviewers would consider during the evaluation. These items were organized around five subjects, which the undersigned will refer to as Topics. The five Topics were Cultivation, Processing, Dispensing, Medical Director, and Financials. Under the Topics of Cultivation, Processing, and Dispensing were four Subtopics (the undersigned's term): Technical Ability; Infrastructure; Premises, Resources, Personnel; and Accountability. In the event, the 12 Topic-Subtopic combinations (e.g., Cultivation-Technical Ability, Cultivation-Infrastructure), together with the two undivided Topics (i.e., Medical Director and Financials), operated as 14 separate evaluation categories. The undersigned refers to these 14 categories as Domains.

The Department assigned a weight (by rule) to each Topic, denoting the relative importance of each in assessing an applicant's overall merit. The Subtopics, in turn, were worth 25% of their respective Topics' scores, so that a Topic's raw or unadjusted score would be the average of its four Subtopics' scores, if it had them. The 14 Domains and their associated weights are shown in the following table:

CULTIVATION (30%)
1. Cultivation – Technical Ability: 25% out of 30%
2. Cultivation – Infrastructure: 25% out of 30%
3. Cultivation – Premises, Resources, Personnel: 25% out of 30%
4. Cultivation – Accountability: 25% out of 30%

PROCESSING (30%)
5. Processing – Technical Ability: 25% out of 30%
6. Processing – Infrastructure: 25% out of 30%
7. Processing – Premises, Resources, Personnel: 25% out of 30%
8. Processing – Accountability: 25% out of 30%

DISPENSING (15%)
9. Dispensing – Technical Ability: 25% out of 15%
10. Dispensing – Infrastructure: 25% out of 15%
11. Dispensing – Premises, Resources, Personnel: 25% out of 15%
12. Dispensing – Accountability: 25% out of 15%

13. MEDICAL DIRECTOR (5%)

14. FINANCIALS (20%)

If there were any ambiguity in the meaning of the word "score" as used in rule 64-4.002(5)(b), the fact of the weighting scheme removes all uncertainty, because in order to take a meaningful percentage (or fraction) of a number, the number must signify a divisible quantity, or else the reduction of the number, x, to say, 20% of x, will not be interpretable.

Some additional explanation here might be helpful. If the number 5 is used to express how much of something we have, e.g., 5 pounds of flour, we can comprehend the meaning of 20% of that value (1 pound of flour). On the other hand, if we have coded the rank of "first place" with the number 5 (rather than, e.g., the letter A, which would be equally functional as a symbol), the meaning of 20% of that value is incomprehensible (no different, in fact, than the meaning of 20% of A).
To be sure, we could multiply the number 5 by 0.20 and get 1, but the product of this operation, despite being mathematically correct (i.e., true in the abstract, as a computational result), would have no contextual meaning. This is because 20% of first place makes no sense. Coding the rank of first place with the misleading symbol of "5 points" would not help, either, because the underlying referent (still a position, not a quantity) is indivisible no matter what symbol it is given.3/

We can take this analysis further. The weighting scheme clearly required that the points awarded to an applicant for each Topic must contribute a prescribed proportionate share both to the applicant's final score per Reviewer and to its aggregate score. For example, an applicant's score for Financials had to be 20% of its final Reviewer scores and 20% of its aggregate score, fixing the ratio of unweighted Financials points to final points (both Reviewer and aggregate) at 5:1. For this to work, a point scale having fixed boundaries had to be used, and the maximum number of points available for the final scores needed to be equal to the maximum number of points available for the raw (unweighted) scores at the Topic level. In other words, to preserve proportionality, if the applicants were scored on a 100-point scale, the maximum final score had to be 100, and the maximum raw score for each of the five Topics needed to be 100, too.

The reasons for this are as follows. If there were no limit to the number of points an applicant could earn at the Topic level (like a baseball game), the proportionality of the weighting scheme could not be maintained; an applicant might run up huge scores in lower-weighted Topics, for example, making them proportionately more important to its final score than higher-weighted Topics. Similarly, if the maximum number of points available at the Topic level differed from the maximum number of points available as a final score, the proportionality of the weighting scheme (the prescribed ratios) would be upset, obviously, because, needless to say, 30% of, e.g., 75 points is not equal to 30% of 100 points.

If a point scale is required to preserve proportionality, and it is, then so, too, must the intervals between points be the same, for all scores, in all categories, or else the proportionality of the weighting scheme will fail. For a scale to be uniform and meaningful, which is necessary to maintain the required proportionality, the points in it must be equidistant from each other; that is, the interval between 4 and 5, for example, needs to be the same as the interval between 2 and 3, and the distance between 85 and 95 (if the scale goes that high) has to equal that between 25 and 35.4/ When the distances between values are known, the numbers are said to express interval data.5/ Unless the distances between points are certain and identical, the prescribed proportions of the weighting scheme established in rule 64-4.002 will be without meaning. Simply stated, there can be no sense of proportion without interpretable intervals. We cannot say that a 5:1 relationship exists between two point totals (scores) if we have no idea what the distance is between 5 points and 1 point.
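The proportionality point can be made concrete with a bit of arithmetic. The following is a minimal sketch (in Python, with entirely hypothetical numbers chosen by the undersigned for illustration): when every Topic raw score and the final score share the same 100-point ceiling, Financials contributes exactly its prescribed 20% share; give one Topic a different ceiling, and the prescribed ratio silently fails.

```python
# Hypothetical raw Topic scores on a common 0-100 scale (invented numbers).
weights = {"cultivation": 0.30, "processing": 0.30,
           "dispensing": 0.15, "medical_director": 0.05, "financials": 0.20}

raw = {t: 100.0 for t in weights}  # an applicant earning the maximum everywhere

final = sum(weights[t] * raw[t] for t in weights)
print(final)                       # 100.0 -- the final score shares the ceiling

# Financials' share of the final score equals its prescribed weight:
print(weights["financials"] * raw["financials"] / final)   # 0.2 exactly

# If Financials were instead graded on a 0-75 scale, the prescribed 5:1
# ratio of raw Financials points to final points would be upset:
raw["financials"] = 75.0
final = sum(weights[t] * raw[t] for t in weights)
print(weights["financials"] * raw["financials"] / final)   # ~0.158, not 0.20
```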
The weighting system thus necessarily implied that the "scores" assigned by the Reviewers during the comparative evaluation would be numerical values (points) that (i) expressed quantity; (ii) bore some rational relationship to the amount of quality the Reviewer perceived in an applicant in relation to the other applicants; and (iii) constituted interval data. In other words, the rule unambiguously required that relative quality be counted (quantified), not merely coded.

The Scoring Methodology: Interval Coding

In performing the comparative evaluation of the initial applications filed in 2015, the Reviewers were required to use Form DH8007-OCU-2/2015, "Scorecard for Low-THC Cannabis Dispensing Organization Selection" (the "Scorecard"), which is incorporated by reference in rule 64-4.002(5)(a). There are no instructions on the Scorecard. The Department's rules are silent as to how the Reviewers were supposed to score applications using the Scorecard, and they provide no process for generating aggregate scores from Reviewer scores. To fill these gaps, the Department devised several policies that governed its free-form decision-making in the run-up to taking preliminary agency action on the applications.

Regarding raw scores, the Department decided that the Reviewers would sort the applications by region and then rank the applications, from best to worst, on a per-Domain basis, so that each Reviewer would rank each applicant 14 times (the "Ranking Policy"). An applicant's raw Domanial score would be its position in the ranking, from 1 to x, where x was both (i) equal to the number of applicants within the region under review and (ii) the number assigned to the rank of first place (or Best). In other words, the Reviewer's judgments as to the descending order of suitability of the competing applicants, per Domain, were symbolized or coded with numbers that the Department called "rank scores," and which were thereafter used as the applicants' raw Domanial scores. To be more specific, in a five-applicant field such as the southeast region, the evaluative judgments of the Reviewers were coded as follows:

Evaluative Judgment: Symbol ("Rank Score")
Best qualified applicant ("Best"): 5 points
Less qualified than the best qualified applicant, but better qualified than all other applicants ("Second Best"): 4 points
Less qualified than two better qualified applicants, but better qualified than all other applicants ("Third Best"): 3 points
Less qualified than three better qualified applicants, but better qualified than all other applicants ("Fourth Best"): 2 points
Less qualified than four better qualified applicants ("Fifth Best"): 1 point

The Department's unfortunate decision to code the Reviewers' qualitative judgments regarding positions in rank orders with symbols that look like quantitative judgments regarding amounts of quality led inexorably to extremely misleading results. The so-called "rank scores" give the false impression of interval data, tricking the consumer (and evidently the Department, too) into believing that the distance between scores is certain and the same; that, in other words, an applicant with a "rank score" of 4 is 2 points better than an applicant with a "rank score" of 2. If this deception had been intentional (and, to be clear, there is no evidence it was), we could fairly call it fraud. Even without bad intent, the decision to code positions in ranked series with "scores" expressed as "points" was a colossal blunder that turned the scoring process into a dumpster fire.
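The coding problem can be illustrated with a short sketch, using invented numbers. Because the ranks are stored as integers, arithmetic on them runs without complaint, even though the "distances" it reports correspond to nothing the Reviewers ever measured:

```python
# Hypothetical underlying quality judgments (never recorded by the Reviewers)
# versus the "rank scores" that coded them. All values invented.
underlying_quality = {"A": 4.9, "B": 4.8, "C": 3.1, "D": 3.0, "E": 1.2}

# Ranking Policy: order the applicants from best to worst ...
ranking = sorted(underlying_quality, key=underlying_quality.get, reverse=True)
# ... Interval Coding Policy: code first place as 5 "points", last as 1.
rank_score = {app: len(ranking) - i for i, app in enumerate(ranking)}
print(rank_score)   # {'A': 5, 'B': 4, 'C': 3, 'D': 2, 'E': 1}

# The coded "points" report a uniform 1-point gap between neighbors,
# although the unrecorded gaps were 0.1, 1.7, 0.1, and 1.8.
print(rank_score["A"] - rank_score["B"])  # 1 -- yet A and B were nearly equal
print(rank_score["B"] - rank_score["C"])  # 1 -- yet B and C were far apart
```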
Before proceeding, it must be made clear that an applicant's being ranked Best in a Domain meant only that, as the highest-ranked applicant, it was deemed more suitable, by some unknown margin, than all the others within the group. By the same token, to be named Second Best meant only that this applicant was less good, in some unknown degree, than the Best applicant, and better, in some unknown degree, than the Third Best and remaining, lower-ranked applicants. The degree of difference in suitability between any two applicants in any Domanial ranking might have been a tiny sliver or a wide gap, even if they occupied adjacent positions, e.g., Second Best and Third Best. The Reviewers made no findings with respect to degrees of difference. Moreover, it cannot truthfully be claimed that the interval between, say, Second Best and Third Best is the same as that between Third Best and Fourth Best, for there exists no basis in fact for such a claim.

In sum, the Department's Domanial "rank scores" merely symbolized the applicants' positions in sets of ordered applications. Numbers which designate the respective places (ranks) occupied by items in an ordered list are called ordinal numbers. The type of non-metric data that the "rank scores" symbolize is known as ordinal data, meaning that although the information can be arranged in a meaningful order, there is no unit or meter by which the intervals between places in the ranking can be measured. Because it is grossly misleading to refer to positions in a ranking as "scores" counted in "points," the so-called "rank scores" will hereafter be referred to as "Ordinals," a constant reminder that we are working with ordinal data. This is important to keep in mind because, as will be seen, there are limits on the kinds of mathematical manipulation that can appropriately be carried out with ordinal data. The Department's policy of coding positions in a rank order with "rank scores" expressed as "points" will be called the "Interval Coding Policy." In conducting the evaluation, the Reviewers followed the Ranking Policy and the Interval Coding Policy (collectively, the "Rank Scores Policies").

The Computational Methodology: Interval Statements and More

Once the Reviewers finished evaluating and coding the applications, the evaluative phase of the Department's free-form process was concluded. The Reviewers had produced a dataset of Domanial Ordinals (42 Domanial Ordinals for each applicant, to be exact) that collectively comprised a compilation of information, stored in the scorecards. This universe of Domanial Ordinals will be called herein the "Evaluation Data." The Department would use the Evaluation Data in the next phase of its free-form process as grounds for computing the applicants' aggregate scores.

Rule 64-4.002(5)(b) provides that "scorecards from each reviewer will be combined to generate an aggregate score for each application. The Applicant with the highest aggregate score in each dispensing region shall be selected as the region's Dispensing Organization." Notice that the rule here switches to the passive voice. The tasks of (i) "combin[ing]" scorecards to "generate" aggregate scores and of (ii) "select[ing]" regional DOs were not assigned to the Reviewers, whose work was done upon submission of the scorecards. As mentioned previously, the rule does not specify how the Evaluation Data will be used to generate aggregate scores.
The Department formulated extralegal policies6/ for this purpose, which can be stated as follows: (i) the Ordinals, which in actuality are numeric code for uncountable information content, shall be deemed real (counted) points, i.e., equidistant units of measurement on a 5-point interval scale (the "Deemed Points Policy"); (ii) in determining aggregate scores, the three Reviewer scores will be averaged instead of added together, so that "aggregate score" means "average Reviewer score" (the "Aggregate Definition"); and (iii) the results of mathematical computations used to determine weighted scores at the Reviewer level and, ultimately, the aggregate scores themselves will be carried out to the fourth decimal place (the "Four Decimal Policy"). Collectively, these three policies will be referred to as the "Generation Policies." The Department's "Scoring Methodology" comprises the Rank Scores Policies and the Generation Policies.

The Department's computational process for generating aggregate scores operated like this. For each applicant, a Reviewer final score was derived from each Reviewer, using that Reviewer's 14 Domanial Ordinals for the applicant. For each of the subdivided Topics (Cultivation, Processing, and Dispensing), the mean of the Reviewer's four Domanial Ordinals for the applicant (one Domanial Ordinal for each Subtopic) was determined by adding the four numbers (which, remember, were whole numbers, as discussed above) and dividing the sum by 4. The results of these mathematical operations were reported to the second decimal place. (The Reviewer raw score for each of the subdivided Topics was, in other words, the Reviewer's average Subtopic Domanial Ordinal.) For the undivided Topics of Medical Director and Financials, the Reviewer raw score was simply the Domanial Ordinal, as there was only one Domanial Ordinal per undivided Topic.

The five Reviewer raw Topic scores (per Reviewer) were then adjusted to account for the applicable weight factor. So, the Reviewer raw scores for Cultivation and Processing were each multiplied by 0.30; raw scores for Dispensing were multiplied by 0.15; raw scores (Domanial Ordinals) for Medical Director were multiplied by 0.05; and raw scores (Domanial Ordinals) for Financials were multiplied by 0.20. These operations produced five Reviewer weighted-Topic scores (per Reviewer), carried out (eventually) to the fourth decimal place. The Reviewer final score was computed by adding the five Reviewer weighted-Topic scores. Thus, each applicant wound up with three Reviewer final scores, each reported to the fourth decimal place pursuant to the Four Decimal Policy.

The computations by which the Department determined the three Reviewer final scores are reflected (but not shown) in a "Master Spreadsheet"7/ that the Department prepared. Comprising three pages (one for each Reviewer), the Master Spreadsheet shows all of the Evaluation Data, plus the 15 Reviewer raw Topic scores per applicant, and the three Reviewer final scores for each applicant. Therein, the Reviewer final scores of Reviewer 2 and Reviewer 3 were not reported as numbers having five significant digits, but were rounded to the nearest hundredth.

To generate an applicant's aggregate score, the Department, following the Aggregate Definition, computed the average Reviewer final score by adding the three Reviewer final scores and dividing the sum by 3. The result, under the Four Decimal Policy, was carried out to the ten-thousandth decimal place.
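The chain of operations just described can be restated as a short computational sketch. The Ordinals below are invented for illustration; the arithmetic, however, tracks the process described above: average the four Subtopic Ordinals per subdivided Topic, weight the five Topic scores, sum them to a Reviewer final score, and average the three Reviewer finals under the Aggregate Definition and the Four Decimal Policy.

```python
# Invented Domanial Ordinals for one hypothetical applicant (one dict per
# Reviewer). Subdivided Topics carry four Subtopic Ordinals; Medical
# Director and Financials carry one apiece.
reviewers = [
    {"cultivation": [5, 4, 5, 4], "processing": [3, 4, 4, 4],
     "dispensing": [5, 5, 4, 5], "medical_director": [4], "financials": [5]},
    {"cultivation": [4, 4, 4, 3], "processing": [4, 4, 3, 4],
     "dispensing": [4, 4, 4, 4], "medical_director": [5], "financials": [4]},
    {"cultivation": [5, 5, 4, 4], "processing": [4, 3, 4, 4],
     "dispensing": [5, 4, 5, 4], "medical_director": [4], "financials": [5]},
]
weights = {"cultivation": 0.30, "processing": 0.30,
           "dispensing": 0.15, "medical_director": 0.05, "financials": 0.20}

def reviewer_final(card):
    # Raw Topic score = mean of that Topic's Ordinals; weighted Topic
    # score = raw score times the Topic weight; final = sum of the five.
    return sum(w * (sum(card[t]) / len(card[t])) for t, w in weights.items())

finals = [reviewer_final(card) for card in reviewers]
aggregate = round(sum(finals) / 3, 4)  # Aggregate Definition + Four Decimal Policy
print(finals, aggregate)               # e.g., [4.3875, 3.9, 4.35] 4.2125

# The same figure falls out of the divisor shortcut tabulated later in this
# Order: pool all three Reviewers' Ordinals per Topic and divide by a fixed
# divisor (the Topic's Ordinal count, 12 or 3, divided by its weight).
divisors = {"cultivation": 40, "processing": 40, "dispensing": 80,
            "medical_director": 60, "financials": 15}
shortcut = sum(sum(o for card in reviewers for o in card[t]) / d
               for t, d in divisors.items())
assert abs(shortcut - sum(finals) / 3) < 1e-9
```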
The Department referred to the aggregate score as the "final rank" in its internal worksheets. The Department further assigned a "regional rank" to each applicant, which ordered the applicants, from best to worst, based on their aggregate scores. Put another way, the regional rank was an applicant's Ultimate Ordinal. The Reviewer final scores and the "final ranks" (all carried out to the fourth decimal place), together with the "regional ranks," are set forth in a table the Department has labeled its November 2015 Aggregated Score Card (the "Score Card"). The Score Card does not contain the Evaluation Data.

Preliminary Agency Actions

Once the aggregate scores had been computed, the Department was ready to take preliminary agency action on the applications. As to each application, the Department made a binary decision: Best or Not Best. The intended action on the applications of the five Best applicants (one per region), which were identified by their aggregate scores (highest per region), would be to grant them. Each of the Not Best applicants, so deemed due to their not having been among the highest scored applicants, would be notified that the Department intended to deny its application. The ultimate factual determination that the Department made for each application was whether the applicant was, or was not, the most dependable, most qualified nursery as compared to the alternatives available in a particular region.

Clear Points of Entry

Letters dated November 23, 2015, were sent to the applicants informing them either that "your application received the highest score" and thus is granted, or that because "[you were] not the highest scored applicant in [your] region, your application . . . is denied," whichever was the case. The letters contained a clear point of entry, which concluded with the usual warning that the "[f]ailure to file a petition within 21 days shall constitute a waiver of the right to a hearing on this agency action."8/ (Emphasis added).

Nature's Way decided not to request a hearing in 2015, and therefore it is undisputed that the Department's proposed action, i.e., the denial of Nature's Way's application because the applicant was not deemed to be the most dependable, most qualified nursery for purposes of selecting a DO for the southeast region, became final agency action without a formal hearing, the right to which Nature's Way elected to waive.

The Department argues that Nature's Way thereby waived, forever and for all purposes, the right to a hearing on the question of whether its aggregate score of 2.8833 and Costa's aggregate score of 4.4000 (highest in the southeast region), which the Department generated using the Scoring Methodology, are, in fact, true as interval statements of quantity. (Note that if these scores are false as interval data, as Nature's Way contends, then the statement that Costa's score exceeds Nature's Way's score by 1.5167 points is false, also, because it is impossible to calculate a true, interpretable difference (interval) between two values unless those values are expressions of quantified data. Simply put, you cannot subtract Fourth Best from Best.) The Department's waiver argument, properly understood, asserts that Nature's Way is barred by administrative finality from "relitigating" matters, such as the truth of the aggregate scores as quantifiable facts, which were supposedly decided conclusively in the final agency action on its DO application in 2015.
To successfully check Nature's Way with the affirmative defense of administrative finality, the Department needed to prove that the truth of the aggregate scores, as measurable quantities, was actually adjudicated (or at least judicable) in 2015, so that the numbers 2.8833 and 4.4000 are now incontestably true interval data, such that one figure can meaningfully be subtracted from the other for purposes of applying the One Point Condition. The Department's affirmative defense of collateral estoppel/issue preclusion was rejected in the related disputed-fact proceeding, which is the companion to this litigation, based on the undersigned's determination that the truth of the aggregate scores as statements of fact expressing interval data had never been previously adjudicated as between the Department and Nature's Way. See Nature's Way Nursery of Miami, Inc. v. Dep't of Health, Case No. 18-0721 (Fla. DOAH June 15, 2018).

The Ambiguity of the Aggregate Scores

There is a strong tendency to look at a number such as 2.8833 and assume that it is unambiguous, and, indeed, the Department is unquestionably attempting to capitalize on that tendency. But numbers can be ambiguous.9/ The aggregate scores are, clearly, open to interpretation.

To begin, however, it must be stated up front that there is no dispute about the existence of the aggregate scores. It is an undisputed historical fact, for example, that Nature's Way had a final ranking (aggregate score) of 2.8833 as computed by the Department in November 2015. There is likewise no dispute that Costa's Department-computed aggregate score was 4.4000. In this sense, the scores are historical facts, and relevant ones, too, since an applicant needed to have had an aggregate score in 2015 to take advantage of the One Point Condition enacted in 2017.

The existence of the scores, however, is a separate property from their meaning. Clearly, the aggregate scores that exist from history purport to convey information about the applicants; in effect, they are statements. The ambiguity arises from the fact that each score could be interpreted as having either of two different meanings. On the one hand, an aggregate score could be understood as a numerically coded non-quantity, namely a rank. In other words, the aggregate scores could be interpreted reasonably as ordinal data. On the other hand, an aggregate score could be understood as a quantified measurement taken in units of equal value, i.e., interval data.

In 2015, the Department insisted (when it suited its purposes) that the aggregate scores were numeric shorthand for its discretionary value judgments about which applicants were best suited, by region, to be DOs, reflecting where the applicants, by region, stood in relation to the best suited applicants and to each other. The Department took this position because it wanted to limit the scope of the formal hearings requested by disappointed applicants to reviewing its decisions for abuse of discretion. Yet, even then, the Department wanted the aggregate scores to be seen as something more rigorously determined than a discretionary ranking. Scores such as 2.8833 and 3.2125 plainly connote a much greater degree of precision than "these applicants are less qualified than others."
Indeed, in one formal hearing, the Department strongly implied that the aggregate scores expressed interval data, arguing that they showed "the [Department's position regarding the] order of magnitude" of the differences in "qualitative value" between the applicants, so that a Fourth Best applicant having a score of 2.6458 was asserted to be "far behind" the highest-scored applicant whose final ranking was 4.1042.10/ A ranking, of course, expresses order but not magnitude; interval data, in contrast, expresses both order and magnitude, and it is factual in nature, capable of being true or false. In short, as far as the meaning of the aggregate scores is concerned, the Department has wanted to have it both ways.

Currently, the Department is all-in on the notion that the aggregate scores constitute precise interval data, i.e., quantified facts. In its Proposed Recommended Order in Case No. 18-0721,11/ on page 11, the Department argues that "Nature's Way does not meet the within-one-point requirement" because "Nature's Way's Final Rank [aggregate score of 2.8833] is 1.5167 points less than the highest Final Rank [Costa's aggregate score, 4.4000] in its region." This is a straight-up statement of fact, not a value judgment or policy preference. Moreover, it is a statement of fact which is true only if the two aggregate scores being compared (2.8833 and 4.4000), themselves, are true statements of quantifiable fact about the respective applicants.

The Department now even goes so far as to claim that the aggregate score is the precise and true number (quantity) of points that an applicant earned as a matter of fact. On page 6 of its Proposed Final Order, the Department states that Costa "earned a Final Rank of 4.4000" and that Nature's Way had an "earned Final Rank of 2.8833." In this view, the scores tell us not that, in the Department's discretionary assignment of value, Costa was better suited to be the DO for the southeast region, but rather that (in a contest, it is insinuated, the Department merely refereed) Costa outscored Nature's Way by exactly 1.5167 points, and that the points have meaning as equidistant units of measurement. The Department is plainly using the aggregate scores, today, as interval statements of quantifiable fact, claiming that Nature's Way "earned" exactly 2.8833 points on a 5-point scale where each point represents a standard unit of measurement, while Costa "earned" 4.4000 points; this, again, is the only way it would be correct to say that Costa was 1.5167 points better than Nature's Way. Indeed, Emergency Rule 64ER17-7 (the "Emergency Rule") purports to codify this interpretation of the aggregate scores, and to declare that the 2015 aggregate scores are true as interval data.

ENACTMENT OF THE MEDICAL MARIJUANA LAW

Effective January 3, 2017, Article X of the Florida Constitution was amended to include a new section 29, which addresses medical marijuana production, possession, dispensing, and use. Generally speaking, section 29 expands access to medical marijuana beyond the framework created by the Florida Legislature in 2014. To implement the newly adopted constitutional provisions and "create a unified regulatory structure," the legislature enacted the Medical Marijuana Law, which substantially revised section 381.986 during the 2017 Special Session. Ch. 2017-232, § 1, Laws of Fla. Among other things, the Medical Marijuana Law establishes a licensing protocol for ten new MMTCs.
The relevant language of the new statute states:

(8) MEDICAL MARIJUANA TREATMENT CENTERS.—
(a) The department shall license medical marijuana treatment centers to ensure reasonable statewide accessibility and availability as necessary for qualified patients registered in the medical marijuana use registry and who are issued a physician certification under this section.
* * *
The department shall license as medical marijuana treatment centers 10 applicants that meet the requirements of this section, under the following parameters:
As soon as practicable, but no later than August 1, 2017, the department shall license any applicant whose application was reviewed, evaluated, and scored by the department and which was denied a dispensing organization license by the department under former s. 381.986, Florida Statutes 2014; which had one or more administrative or judicial challenges pending as of January 1, 2017, or had a final ranking within one point of the highest final ranking in its region under former s. 381.986, Florida Statutes 2014; which meets the requirements of this section; and which provides documentation to the department that it has the existing infrastructure and technical and technological ability to begin cultivating marijuana within 30 days after registration as a medical marijuana treatment center.

§ 381.986, Fla. Stat. (Emphasis added: The underscored provision is the One Point Condition).

The legislature granted the Department rulemaking authority, as needed, to implement the provisions of section 381.986(8). § 381.986(8)(k), Fla. Stat. In addition, the legislature authorized the Department to adopt emergency rules pursuant to section 120.54(4), as necessary to implement section 381.986, without having to find an actual emergency, as otherwise required by section 120.54(4)(a). Ch. 2017-232, § 14, at 45, Laws of Fla.

IMPLEMENTATION OF THE ONE POINT CONDITION AND ADOPTION OF THE EMERGENCY RULE

The One Point Condition went into effect on June 23, 2017. Ch. 2017-232, § 20, Laws of Fla. Thereafter, the Department issued a license to Sun Bulb Nursery (a 2015 DO applicant in the southwest region), because the Department concluded that Sun Bulb's final ranking was within one point of the highest final ranking in the southwest region.12/

Keith St. Germain Nursery Farms ("KSG"), like Nature's Way a 2015 DO applicant for the southeast region, requested MMTC registration pursuant to the One Point Condition in June 2017. In its request for registration, KSG asserted that the One Point Condition is ambiguous and proposed that the Department either calculate the one-point difference based on the regional ranks set forth in the Score Card (KSG was the regional Second Best, coded as Ultimate Ordinal 4) or round off the spurious decimal points in the aggregate scores when determining the one-point difference.

The Department preliminarily denied KSG's request for MMTC registration in August 2017. In its notice of intent, the Department stated in part:

The highest-scoring entity in the Southeast Region, Costa Nursery Farms, LLC, received a final aggregate score of 4.4000. KSG received a final aggregate score of 3.2125. Therefore, KSG was not within one point of Costa Farms.

KSG requested a disputed-fact hearing on this proposed agency action and also filed with DOAH a Petition for Formal Administrative Hearing and Administrative Determination Concerning Unadopted Rules, initiating Keith St. Germain Nursery Farms v. Florida Department of Health, DOAH Case No.
17-5011RU ("KSG's Section 120.56(4) Proceeding"). KSG's Section 120.56(4) Proceeding, which Nature's Way joined as a party by intervention, challenged the legality of the Department's alleged unadopted rules for determining which of the 2015 DO applicants were qualified for licensure pursuant to the One Point Condition. Faced with the KSG litigation, the Department adopted Emergency Rule 64ER17-3, which stated in relevant part: For the purposes of implementing s. 381.986(8)(a)2.a., F.S., the following words and phrases shall have the meanings indicated: Application – an application to be a dispensing organization under former s. 381.986, F.S. (2014), that was timely submitted in accordance with Rule 64- 4.002(5) of the Florida Administrative Code (2015). Final Ranking – an applicant's aggregate score for a given region as provided in the column titled "Final Rank" within the November 2015 Aggregated Score Card, incorporated by reference and available at [hyperlink omitted], as the final rank existed on November 23, 2015. Highest Final Ranking – the final rank with the highest point value for a given region, consisting of an applicant's aggregate score as provided in the column titled "Final Rank" within the November 2015 Aggregated Score Card, as the final rank existed on November 23, 2015. Within One Point – one integer (i.e., whole, non-rounded number) carried out to four decimal points (i.e., 1.0000) by subtracting an applicant's final ranking from the highest final ranking in the region for which the applicant applied. Qualified 2015 Applicant – an individual or entity whose application was reviewed, evaluated, and scored by the department and that was denied a dispensing organization license under former s. 381.986, F.S. (2014) and either: (1) had one or more administrative or judicial challenges pending as of January 1, 2017; or had a final ranking within one point of the highest final ranking in the region for which it applied, in accordance with Rule 64-4.002(5) of the Florida Administrative Code (2015). The Department admits that not much analysis or thought was given to the development of this rule, which reflected the Department's knee-jerk conclusion that the One Point Condition's use of the term "final ranking" clearly and unambiguously incorporated the applicants' "aggregate scores" (i.e., "final rank" positions), as stated in the Score Card, into the statute. In any event, the rule's transparent purpose was to adjudicate the pending licensing dispute with KSG and shore up the Department's ongoing refusal (in Department of Health Case No. 2017-0232) to grant KSG a formal disputed-fact hearing on the proposed denial of its application. Naturally, the Department took the position that rule 64ER17-3 had settled all possible disputes of material fact, once and for all, as a matter of law. In a surprising about-face, however, on October 26, 2017, the Department entered into a settlement agreement with KSG pursuant to which the Department agreed to register KSG as an MMTC. The Department issued a Final Order Adopting Settlement Agreement with KSG on October 30, 2017. That same day (and in order to effectuate the settlement with KSG), the Department issued the Emergency Rule. The Emergency Rule amends former rule 64ER17-3 to expand the pool of Qualified 2015 Applicants by exactly one, adding KSG——not by name, of course, but by deeming all the regional Second Best applicants to be Within One Point. 
Because KSG was the only 2015 applicant ranked Second Best in its region that did not have an aggregate score within one point of its region's Best applicant in accordance with rule 64ER17-3, KSG was the only nursery that could take advantage of the newly adopted provisions. As relevant, the Emergency Rule provides as follows:

This emergency rule supersedes the emergency rule 64ER17-3 which was filed and effective on September 28, 2017. For the purposes of implementing s. 381.986(8)(a)2.a., F.S., the following words and phrases shall have the meanings indicated:

Application – an application to be a dispensing organization under former s. 381.986, F.S. (2014), that was timely submitted in accordance with Rule 64-4.002(5) of the Florida Administrative Code (2015).

Final Ranking – an applicant's aggregate score for a given region as provided in the column titled "Final Rank" or the applicant's regional rank as provided in the column titled "Regional Rank" within the November 2015 Aggregated Score Card, incorporated by reference and available at [hyperlink omitted], as the final rank existed on November 23, 2015.

Highest Final Ranking – the final rank with the highest point value for a given region, consisting of an applicant's aggregate score as provided in the column titled "Final Rank" or the applicant's regional rank as provided in the column titled "Regional Rank" within the November 2015 Aggregated Score Card, as the final rank existed on November 23, 2015.

Within One Point – for the aggregate score under the column "Final Rank" one integer (i.e., whole, non-rounded number) carried out to four decimal points (i.e., 1.0000) or for the regional rank under the column "Regional Rank" one whole number difference, by subtracting an applicant's final ranking from the highest final ranking in the region for which the applicant applied.

Qualified 2015 Applicant – an individual or entity whose application was reviewed, evaluated, and scored by the department and that was denied a dispensing organization license under former s. 381.986, F.S. (2014) and either: (1) had one or more administrative or judicial challenges pending as of January 1, 2017; or (2) had a final ranking within one point of the highest final ranking in the region for which it applied, in accordance with Rule 64-4.002(5) of the Florida Administrative Code (2015).

(Emphasis added).

In a nutshell, the Emergency Rule provides that an applicant meets the One Point Condition if either (i) the difference between its aggregate score and the highest regional aggregate score, as those scores were determined by the Department effective November 23, 2015, is less than or equal to 1.0000; or (ii) its regional rank, as determined by the Department effective November 23, 2015, is Second Best. A number of applicants satisfy both criteria, e.g., 3 Boys, McCrory's, Chestnut Hill, and Alpha (northwest region). Some, in contrast, meet only one or the other. Sun Bulb, Treadwell, and Loop's, for example, meet (i) but not (ii). KSG, alone, meets (ii) but not (i), as the sketch below illustrates.
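Expressed as a decision procedure, the Emergency Rule's disjunctive test looks like the following sketch. The aggregate scores are the figures quoted in this Order; the regional-rank codes follow the Department's convention (Best coded as the number of applicants in the region, here 5). Nature's Way's regional-rank code is assumed for illustration only, and the function names are the undersigned's, not the Department's.

```python
def within_one_point(score, best_score):
    # Criterion (i): aggregate-score difference of 1.0000 or less.
    return best_score - score <= 1.0000

def second_best(rank_code, best_rank_code):
    # Criterion (ii): regional-rank code one whole number below the Best's.
    return best_rank_code - rank_code == 1

southeast = {                     # (aggregate score, regional-rank code)
    "Costa":        (4.4000, 5),  # Best
    "KSG":          (3.2125, 4),  # Second Best
    "Nature's Way": (2.8833, 3),  # rank code assumed, not in the record
}
best_score, best_rank = southeast["Costa"]
for name, (score, rank) in southeast.items():
    qualifies = within_one_point(score, best_score) or second_best(rank, best_rank)
    print(name, qualifies)
# Costa True; KSG True (via criterion (ii) only, since 4.4000 - 3.2125 =
# 1.1875 > 1.0000); Nature's Way False under both criteria.
```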
The Department has been unable to come up with a credible, legally cohesive explanation for the amendments that distinguish the Emergency Rule from its predecessor. On the one hand, Christian Bax testified that KSG had persuaded the Department that "within one point" meant, for purposes of the One Point Condition, Second Best (or "second place"), and that this reading represented a reasonable interpretation of a "poorly crafted sentence" using an "unartfully crafted term," i.e., "final ranking." On the other hand, the Department argues in its Proposed Final Order (on page 17) that the One Point Condition's "plain language reflects the legislature's intent that the 'second-best' applicant in each region (if otherwise qualified) be licensed as an MMTC." (Emphasis added).

Logically, of course, the One Point Condition cannot be both "poorly crafted" (i.e., ambiguous) and written in "plain language" (i.e., unambiguous); legally, it must be one or the other. Put another way, the One Point Condition either must be construed, which entails a legal analysis known as statutory interpretation that is governed by well-known canons of construction and results in a legal ruling declaring the meaning of the ambiguous terms, or it must be applied according to its plain language, if (as a matter of law) it is found to be unambiguous. Obviously, as well, the One Point Condition, whether straightforward or ambiguous, cannot mean both within one point and within one place, since these are completely different statuses. If the statute is clear and unambiguous, only one of the alternatives can be correct; if ambiguous, either might be permissible, but not both simultaneously.

By adopting the Emergency Rule, the Department took a position in direct conflict with the notion that the One Point Condition is clear and unambiguous; its reinterpretation of the statute is consistent only with the notion that the statute is ambiguous, and its present attempt to disown that necessarily implicit conclusion is rejected. The irony is that the Department surrendered the high ground of statutory unambiguity, which it initially occupied and stoutly defended, to take up an indefensible position, where, instead of choosing between two arguably permissible, but mutually exclusive, interpretations, as required, it would adopt both interpretations. The only reasonable inference the undersigned can draw from the Department's bizarre maneuver is that the Emergency Rule is not the product of high-minded policy making but rather a litigation tactic, which the Department employed as a necessary step to resolve the multiple disputes then pending between it and KSG. The Emergency Rule was adopted to adjudicate the KSG disputes in KSG's favor, supplanting the original rule that was adopted to adjudicate the same disputes in the Department's favor.

THE IRRATIONALITY OF THE SCORING METHODOLOGY

The Department committed a gross conceptual error when it decided to treat ordinal data as interval data under its Interval Coding and Deemed Points Policies. Sadly, there is no way to fix this problem retroactively; no formula exists for converting or translating non-metric data such as rankings (which, for the most part, cannot meaningfully be manipulated mathematically) into quantitative data. Further, the defect in the Department's "scoring" process has deprived us of essential information, namely, actual measurements.

A Second Look at the Department's Scoring Methodology

The Department's Scoring Methodology was described above. Nevertheless, for purposes of explicating just how arbitrary and capricious were the results of this process, and to shed more light on the issues of fact which the Department hopes the Emergency Rule has resolved before they can ever become grounds for a disputed-fact hearing, the undersigned proposes that the way the Department arrived at its aggregate scores be reexamined.

It will be recalled that each applicant received 14 Ordinals from each reviewer, i.e., one Ordinal per Domain.
These will be referred to as Domanial Ordinals. Thus, each applicant received, collectively, 12 Domanial Ordinals apiece for the Main Topics of Cultivation, Processing, and Dispensing; and three Domanial Ordinals apiece for the Main Topics of Medical Director and Financials, for a total of 42 Domanial Ordinals. These five sets of Domanial Ordinals will be referred to generally as Arrays, and specifically as the Cultivation Array, the Processing Array, the Dispensing Array, the MD Array, and the Financials Array. Domanial Ordinals that have been sorted by Array will be referred to, hereafter, as Topical Ordinals. So, for example, the Cultivation Array comprises 12 Topical Ordinals per applicant. A table showing the Arrays of the southeast region applicants is attached as Appendix A. Keeping our attention on the Cultivation Array, observe that if we divide the sum of the 12 Topical Ordinals therein by 12, we will have calculated the mean (or average) of these Topical Ordinals. This value will be referred to as the Mean Topical Ordinal or "MTO." For each applicant, we can find five MTOs, one apiece for the five Main Topics. So, each applicant has a Cultivation MTO, a Processing MTO, and so forth. As discussed, each Main Topic was assigned a weight, e.g., 30% for Cultivation, 20% for Financials. These five weights will be referred to generally as Topical Weights, and specifically as the Cultivation Topical Weight, the Processing Topical Weight, etc. If we reduce, say, the Cultivation MTO to its associated Cultivation Topical Weight (in other words, take 30% of the Cultivation MTO), we will have produced the weighted MTO for the Main Topic of Cultivation. For each applicant, we can find five weighted MTOs ("WMTO"), which will be called specifically the Cultivation WMTO, the Processing WMTO, etc. The sum of each applicant's five WMTOs equals what the Department calls the applicant's aggregate score or final rank. In other words, in the Department's scoring methodology, an MTO is functionally a "Topical raw score" and a WMTO is an "adjusted Topical score" or, more simply, a "Topical subtotal." Thus, we can say, alternatively, that the sum of an applicant's five Topical subtotals equals its DOH-assigned aggregate score. For those in a hurry, an applicant's WMTOs (or Topical subtotals) can be computed quickly by dividing the sum of the Topical Ordinals in each Array by the respective divisors shown in the following table:

Sum of the Topical Ordinals in the CULTIVATION Array ÷ 40 = Cultivation WMTO
Sum of the Topical Ordinals in the PROCESSING Array ÷ 40 = Processing WMTO
Sum of the Topical Ordinals in the DISPENSING Array ÷ 80 = Dispensing WMTO
Sum of the Topical Ordinals in the MD Array ÷ 60 = MD WMTO
Sum of the Topical Ordinals in the FINANCIALS Array ÷ 15 = Financials WMTO
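To make the Department's arithmetic concrete, the method just described can be expressed in a few lines of code. The following Python sketch is purely illustrative: the Ordinal values are hypothetical, not any applicant's actual rankings (they happen to reproduce the 2.8833 aggregate score that appears later in this discussion, but the full Arrays in evidence are not reproduced here); the Topical Weights and Array sizes are those described above.

    # Illustrative sketch of the Department's aggregate-score arithmetic.
    # The Ordinals below are hypothetical; the weights and Array sizes
    # follow the methodology described in this Order.

    WEIGHTS = {
        "Cultivation": 0.30,  # 12 Topical Ordinals
        "Processing": 0.30,   # 12 Topical Ordinals
        "Dispensing": 0.15,   # 12 Topical Ordinals
        "MD": 0.05,           #  3 Topical Ordinals
        "Financials": 0.20,   #  3 Topical Ordinals
    }

    def wmto(array, weight):
        """Weighted Mean Topical Ordinal: (sum / count) times the weight."""
        return (sum(array) / len(array)) * weight

    def aggregate_score(arrays):
        """Sum of the five WMTOs, i.e., the DOH 'aggregate score.'"""
        return sum(wmto(arrays[topic], w) for topic, w in WEIGHTS.items())

    # Hypothetical applicant:
    arrays = {
        "Cultivation": [3, 4, 3, 2, 4, 3, 3, 4, 2, 3, 3, 3],
        "Processing": [2, 3, 3, 2, 3, 2, 3, 3, 2, 3, 2, 3],
        "Dispensing": [4, 3, 4, 4, 3, 4, 4, 3, 4, 4, 3, 4],
        "MD": [3, 4, 3],
        "Financials": [2, 2, 3],
    }
    print(round(aggregate_score(arrays), 4))  # prints 2.8833

Dividing each Array's sum by the divisors tabulated above (40, 40, 80, 60, and 15) yields the same WMTOs, since, for example, (sum ÷ 12) × 0.30 = sum ÷ 40.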
To advance the discussion, it is necessary to introduce some additional concepts. We have become familiar with the Ordinal, i.e., a number that the Department assigned to code a particular rank (5, 4, 3, 2, or 1).13/ From now on, the symbol ω will be used to represent the value of an Ordinal as a variable. There is another value, which we can imagine as a concept, namely the actual measurement or observation, which, as a variable, we will call x. For our purposes, x is the value that a Reviewer would have reported if he or she had been asked to quantify (to the fourth decimal place) the amount of an applicant's suitability vis-à-vis the attribute in view on a scale of 1.0000 to 5.0000, with 5.0000 being "ideal" and 1.0000 meaning, roughly, "serviceable." This value, x, is a theoretical construct only because no Reviewer actually made any such measurements; such measurements, however, could have been made, had the Reviewers been required to do so. Indeed, some vague idea, at least, of x must have been in each Reviewer's mind every time he or she ranked the applicants, or else there would have been no grounds for the rankings. Simply put, a particular value x can be supposed to stand behind every Topical Ordinal because every Topical Ordinal is a function of x. Unfortunately, we do not know x for any Topical Ordinal. Next, there is the true value of x, for which we will use the symbol µ. This is a purely theoretical notion because it represents the value that would be obtained by a perfect measurement, and there is no perfect measurement of anything, certainly not of relative suitability to serve as an MMTC.14/ Finally, measurements are subject to uncertainty, which can be expressed in absolute or relative terms. The absolute uncertainty expresses the size of the range of values in which the true value is highly likely to lie. A measurement given as 150 ± 0.5 pounds tells us that the absolute uncertainty is 0.5 pounds, and that the true value is probably between 149.5 and 150.5 pounds (150 – 0.5 and 150 + 0.5). This uncertainty can be expressed as a percentage of the measured value, i.e., 150 pounds ± .33%, because 0.5 is .33% of 150. With that background out of the way, let's return to the concept of the mean. The arithmetic mean is probably the most commonly used operation for determining the central tendency (i.e., the average or typical value) of a dataset. No doubt everyone reading this Order, on many occasions, has found the average of, say, four numbers by adding them together and dividing by 4. When dealing with interval data, the mean is interpretable because the interval is interpretable. Where the distance between 4 and 5, for example, is the same as that between 5 and 6, everyone understands that 4.5 is halfway between 4 and 5. As long as we know that 4.5 is exactly halfway between 4 and 5, the arithmetic mean of 4 and 5 (i.e., 4.5) is interpretable. The mean of a set of measurement results gives an estimate of the true value of the measurement, assuming there is no systematic error in the data. The greater the number of measurements, the better the estimate. Therefore, if, for example, we had in this case an Array of xs, then the mean of that dataset (x̄) would approximate µ, especially for the Cultivation, Processing, and Dispensing Arrays, which have 12 observations apiece. If the Department had used x̄ as the Topical raw score instead of the MTO, then its scoring methodology would have been free of systematic error. But the Department did not use x̄ as the Topical raw score. In the event, it had only Arrays of ωs to work with, so when the Department calculated the mean of an Array, it got the average of a set of Ordinals (ω̄), not x̄. Using the mean as a measure of the central tendency of ordinal data is highly problematic, if not impermissible, because the information is not quantifiable.
In this case, the Department coded the rankings with numbers, but the numbers (i.e., the Ordinals), not being units of measurement, were just shorthand for content that must be expressed verbally, not quantifiably. The Ordinals, that is, translate meaningfully only as words, not as numbers, as can be seen in the table at paragraph 27, supra. Because these numbers merely signify order, the distances between them have no meaning; the interval, it follows, is not interpretable. In such a situation, 4.5 does not signify a halfway point between 4 and 5. Put another way, the average of Best and Second Best is not "Second-Best-and-a-half," for the obvious reason that the notion is nonsensical. To give a real-life example, the three Topical Ordinals in Nature's Way's MD Array are 5, 3, and 2. The average of Best, Third Best, and Fourth Best is plainly not "Third-Best-and-a-third," any more than the average of Friday, Wednesday, and Tuesday is Wednesday-and-a-third. For these reasons, statisticians and scientists ordinarily use the median or the mode to measure the central tendency of ordinal data, generally regarding the mean of such data to be invalid or uninterpretable. The median is the middle number, which is determined by arranging the data points from lowest to highest, and identifying the one having the same number of data points on either side (if the dataset contains an odd number of data points) or taking the average of the two data points in the middle (if the dataset contains an even number of data points). The mode is the most frequently occurring number. (If no number repeats, then there is no mode, and if two or more numbers recur with the same frequency, then there are multiple modes.) We can easily compute the medians, modes, and means of the Topical Ordinals in each of the applicants' Arrays. They are set forth in the following table (entries are median / mode / mean; "NA" indicates no mode):

Bill's: Cultivation (30%) 1 / 1 / 1.8333; Processing (30%) 2 / 2 / 1.7500; Dispensing (15%) 1 / 1 / 1.1667; Medical Director (5%) 2 / NA / 2.0000; Financials (20%) 1 / 1 / 1.0000

Costa: Cultivation (30%) 5 / 5 / 4.6667; Processing (30%) 4.5 / 5 / 4.1667; Dispensing (15%) 4 / 4 / 4.0000; Medical Director (5%) 4 / 4 / 4.3333; Financials (20%) 5 / 5 / 4.6667

Keith St. Germain: Cultivation (30%) 4 / 4 / 3.4167; Processing (30%) 4 / 4 / 3.2500; Dispensing (15%) 2 / 2 / 2.4167; Medical Director (5%) 4 / NA / 3.6667; Financials (20%) 3 / 3 / 3.3333

Nature's Way: Cultivation (30%) 3 / 4 / 3.0833; Processing (30%) 3 / 3 / 2.5833; Dispensing (15%) 3.5 / 3 / 3.6667; Medical Director (5%) 3 / NA / 3.3333; Financials (20%) 2 / 2 / 2.3333

Redland: Cultivation (30%) 2 / 2 / 2.2500; Processing (30%) 3.5 / modes 3, 4, 5 / 3.4167; Dispensing (15%) 5 / 5 / 4.1667; Medical Director (5%) 2 / NA / 2.3333; Financials (20%) 4 / NA / 3.6667

It so happens that the associated medians, modes, and means here are remarkably similar——and sometimes the same. The point that must be understood, however, is that the respective means, despite their appearance of exactitude when drawn out to four decimal places, tell us nothing more (if, indeed, they tell us anything) than the medians and the modes, namely whether an applicant was typically ranked Best, Second Best, etc. The median and mode of Costa's Cultivation Ordinals, for example, are both 5, the number which signifies "Best." This supports the conclusion that "Best" was Costa's average ranking under Cultivation. The mean of these same Ordinals, 4.6667, appears to say something more exact about Costa, but, in fact, it does not.
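The central tendencies in the foregoing table can be reproduced mechanically. In the following Python sketch, the 12 Cultivation Ordinals attributed to Costa are hypothetical (the individual rankings are not reproduced in this discussion), but they are constructed to be consistent with Costa's reported median (5), mode (5), and mean (4.6667):

    import statistics

    # Hypothetical 12-Ordinal Cultivation Array consistent with Costa's
    # reported statistics (median 5, mode 5, mean 4.6667).
    costa_cultivation = [5, 5, 5, 5, 5, 5, 5, 5, 4, 4, 4, 4]

    print(statistics.median(costa_cultivation))          # 5
    print(statistics.mode(costa_cultivation))            # 5
    print(round(statistics.mean(costa_cultivation), 4))  # 4.6667

    # The median and mode translate back to a rank ("Best"); the
    # fractional mean corresponds to no cognizable rank.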
At most, the mean of 4.6667 tells us only that Costa was typically rated "Best" in Cultivation. (Because there is no cognizable position of rank associated with the fraction 0.6667, the number 4.6667 must be rounded if it is to be interpreted.) To say that 4.6667 means that Costa outscored KSG by 1.2500 "points" in Cultivation, therefore, or that Costa was 37% more suitable than KSG, would be a serious and indefensible error, for these are, respectively, interval and ratio statements, which are never permissible to make when discussing ordinal data. As should by now be clear, ω̄ is a value having limited usefulness, if any, which cannot ever be understood, properly, as an estimate of µ. The Department, regrettably, treated ω̄ as if it were the same as x̄ and, thus, a reasonable approximation of µ, making the grievous conceptual mistakes of using ordinal data to make interval-driven decisions, e.g., whom to select for licensure when the "difference" between applicants was as infinitesimal as 0.0041 "points," and of making interval representations about the differences between applicants, such as, "Costa's aggregate score is 1.5167 points greater than Nature's Way's aggregate score." Due to this flagrant defect in the Department's analytical process, the aggregate scores which the Department generated are hopelessly infected with systematic error, even though the mathematical calculations behind the flawed scores are computationally correct.

Dr. Cornew's Solution

Any attempt to translate the Ordinals into a reasonable approximation of interval data is bound to involve a tremendous amount of inherent uncertainty. If we want to ascertain the x behind a particular ω, all we can say for sure is that:

[(ω – n) + 0.000n] ≤ x ≤ [(ω + a) – 0.000a],

where n represents the number of places in rank below ω, and a symbolizes the number of places in rank above ω. The Ordinals of 1 and 5 are partial exceptions, because 1 ≤ x ≤ 5. Thus, when ω = 5, we can say [(ω – n) + 0.000n] ≤ x ≤ 5, and when ω = 1, we can say 1 ≤ x ≤ [(ω + a) – 0.000a]. The table below should make this easier to see.

Lowest Possible Value of x    Ordinal ω    Highest Possible Value of x
1.0004                        5            5.0000
1.0003                        4            4.9999
1.0002                        3            4.9998
1.0001                        2            4.9997
1.0000                        1            4.9996

As will be immediately apparent, all this tells us is that x could be, effectively, any score from 1 to 5——which ultimately tells us nothing. Accordingly, to make any use of the Ordinals in determining an applicant's satisfaction of the One Point Condition, we must make some assumptions, to narrow the uncertainty. Nature's Way's expert witness, Dr. Ronald W. Cornew,15/ offers a solution that the undersigned finds to be credible. Dr. Cornew proposes (and the undersigned agrees) that, for purposes of extrapolating the scores (values of x) for a given applicant, we can assume that the Ordinals for every other applicant are true values (µ) of x, in other words, perfectly measured scores expressing interval data——a heroic assumption in the Department's favor. Under this assumption, if the subject applicant's Ordinal is the ranking of, say, 3, we shall assume that the adjacent Ordinals of the other applicants, 2 and 4, are true quantitative values. This, in turn, implies that the true value of the subject applicant's Ordinal, as a quantified score, is anywhere between 2 and 4, since all we know about the subject applicant is that the Reviewer considered it to be, in terms of relative suitability, somewhere between the applicants ranked Fourth Best (2) and Second Best (4).
If we make the foregoing Department-friendly assumption that the other applicants' Ordinals are µ, then the following is true for the unseen x behind each of the subject applicant's ωs:

[(ω – 1) + 0.0001] ≤ x ≤ [(ω + 1) – 0.0001].

The Ordinals of 1 and 5 are, again, partial exceptions. Thus, when ω = 5, we can say 4.0001 ≤ x ≤ 5, and when ω = 1, we can say 1 ≤ x ≤ 1.9999. Dr. Cornew sensibly rounds off the insignificant ten-thousandths of points, simplifying what would otherwise be tedious mathematical calculations, so that:

Lowest Possible Value of x    Ordinal ω    Highest Possible Value of x
4                             5            5
3                             4            5
2                             3            4
1                             2            3
1                             1            2

We have now substantially, albeit artificially, reduced the uncertainty involved in translating ωs to xs. Our assumption allows us to say that x = ω ± 1 except where only negative uncertainty exists (because x cannot exceed 5) and where only positive uncertainty exists (because x cannot be less than 1). It is important to keep in mind, however, that (even with the very generous, pro-Department assumption about other applicants' "scores") the best we can do is identify the range of values within which x likely falls, meaning that the highest values and lowest values are not alternatives; rather, the extrapolated score comprises those two values and all values in between, at once. In other words, if the narrowest statement we can reasonably make is that an applicant's score could be any value between l and h inclusive, where l and h represent the low and high endpoints of the range, then what we are actually saying is that the score is all values between l and h inclusive, because none of those values can be excluded. Thus, in consequence of the large uncertainty about the true values of x that arises from the low-information content of the data available for review, Ordinal 3, for example, translates, from ordinal data to interval data, not to a single point or value, but to a score-set, ranging from 2 to 4 inclusive. Thus, to calculate Nature's Way's aggregate score-set using Dr. Cornew's method, as an example, it is necessary to determine both the applicant's highest possible aggregate score and its lowest possible aggregate score, for these are the endpoints of the range that constitutes the score-set. Finding the high endpoint is accomplished by adding 1 to each Topical Ordinal other than 5, and then computing the aggregate score-set using the mathematical operations described in paragraphs 74 and 75. The following WMTOs (Topical subtotals) are obtained thereby: Cultivation, 1.2250; Processing, 1.0500; Dispensing, 0.6625; MD, 0.2000; and Financials, 0.6667. The high endpoint of Nature's Way's aggregate score-set is the sum of these numbers, or 3.8042.16/ Finding the low endpoint is accomplished roughly in reverse, by subtracting 1 from each Topical Ordinal other than 1, and then computing the aggregate score-set using the mathematical operations described in paragraphs 74 and 75. The low endpoint for Nature's Way works out to 1.9834. Nature's Way's aggregate score-set, thus, is 1.9834-3.8042.17/ This could be written, alternatively, as 2.8938 ± 0.9104 points, or as 2.8938 ± 31.46%. The low and high endpoints of Costa's aggregate score-set are found the same way, and they are, respectively, 3.4000 and 4.8375.18/ Costa's aggregate score-set is 3.4000-4.8375, which could also be written as 4.1188 ± 0.7187 points or 4.1188 ± 17.45%.
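Dr. Cornew's endpoint arithmetic is likewise mechanical: add 1 to every Topical Ordinal other than 5 and recompute the aggregate score to obtain the high endpoint; subtract 1 from every Topical Ordinal other than 1 and recompute to obtain the low endpoint. A Python sketch, reusing the hypothetical arrays and the aggregate_score function from the earlier illustration:

    def shift(arrays, delta):
        """Add delta to each Ordinal, clamped to the 1-to-5 scale."""
        return {topic: [min(5, max(1, o + delta)) for o in arr]
                for topic, arr in arrays.items()}

    low = aggregate_score(shift(arrays, -1))    # low endpoint
    high = aggregate_score(shift(arrays, +1))   # high endpoint
    print(round(low, 4), round(high, 4))

    # The applicant's aggregate "score-set" is every value from low to
    # high inclusive, not a single point.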
We can now observe that a score of 2.4000 or more is necessary to satisfy the One Point Condition, and that any score between 2.4000 and 3.8375, inclusive, is both necessary and sufficient to satisfy the One Point Condition. We will call this range (2.4000-3.8375) the Proximity Box. A score outside the Proximity Box on the high end, i.e., a score greater than 3.8375, meets the One Point Condition, of course; however, a score that high, being more than sufficient, is not necessary.

Rounding Off the Spurious Digits

Remember that the Ordinal 5 does not mean 5 of something that has been counted but the position of 5 in a list of five applicants that have been put in order——nothing more. Recall, too, that there is no interpretable interval between places in a ranking because the difference between 5 and 4 is not the same as that between 4 and 3, etc., and that there is no "second best-and-a-half," which means that taking the average of such numbers is a questionable operation that could easily be misleading if not properly explained. Therefore, as discussed earlier, if the mean of ordinal data is taken, the result must be reported using only as many significant figures as are consistent with the least accurate number, which in this case is one significant figure (whose meaning is only Best, Second Best, Third Best, and so forth). The Department egregiously violated the rule against reliance upon spurious digits, i.e., numbers that lack credible meaning and impart a false sense of accuracy. The Department took advantage of meaningless fractions obtained not by measurement but by mathematical operations, thereby compounding its original error of treating ordinal data as interval data. When the Department says that Nature's Way's aggregate score is 2.8833, it is reporting a number with five significant figures. This number implies that all five figures make sense as increments of a measurement; it implies that the Department's uncertainty about the value is around 0.0001 points——an astonishing degree of accuracy. The trouble is that the aggregate scores, as reported without explanation, are false and deceptive. There is no other way to put it. The Department's reported aggregate scores cannot be rationalized or defended, either, as matters of policy or opinion. This point would be obvious if the Department were saying something more transparent, e.g., that 1 + 1 + 1 + 0 + 0 = 2.8833, for everyone would see the mistake and understand immediately that no policy can change the reality that the sum of three 1s is 3. The falsity at issue is hidden, however, because, to generate each applicant's "aggregate score," the Department started with 42 whole numbers (of ordinal data), each of which is a value from 1 to 5. It then ran the applicant's 42 single-digit, whole number "scores" through a labyrinth of mathematical operations (addition, division, multiplication), none of which improved the accuracy or information content of the original 42 numbers, to produce "aggregate scores" such as 2.8833. This process lent itself nicely to the creation of spreadsheets and tables chock-full of seemingly precise numbers guaranteed to impress.19/ Lacking detailed knowledge (which few people have) about how the numbers were generated, a reasonable person seeing "scores" like 2.8833 points naturally regards them as having substantive value at the microscopic level of ten-thousandths of a point——that's what numbers like that naturally say.
He likely believes that these seemingly carefully calibrated measurements are very accurate; after all, results as finely-tuned as 2.8833 are powerful and persuasive when reported with authority. But he has been fooled. The only "measurement" the Department ever took of any applicant was to rank it Best, Second Best, etc.——a "measurement" that was not, and could not have been, fractional. The reported aggregate scores are nothing but weighted averages of ordinal data, dressed up to appear to be something they are not. Remember, the smallest division on the Reviewers' "scale" (using that word loosely here) was 1 rank. No Reviewer used decimal places to evaluate any portion of any application. The aggregate scores implying precision to the ten-thousandth place were all derived from calculations using whole numbers that were code for a value judgment (Best, Second Best, etc.), not quantifiable information. Therefore, in the reported "aggregate scores," none of the digits to the right of the first (tenths place) decimal digit has any meaning whatsoever; they are nothing but spurious digits introduced by calculations carried out to greater precision than the original data. The first decimal, moreover, being immediately to the right of the one (and only) significant figure in the aggregate score, is meaningful (assuming that the arithmetic mean of ordinal data even has interpretable meaning, which is controversial) only as an approximation of 1 (whole) rank. Because there is no meaningful fractional rank, the first decimal must be rounded off to avoid a misrepresentation of the data. Ultimately, the only meaning that can be gleaned from the "aggregate score" of 2.8833 is that Nature's Way's typical (or mean) weighted ranking is 2.8833. Because there is no ranking equivalent to 2.8833, this number, if sense is to be made of it, must be rounded to the nearest ranking, which is 3 (because 2.8833 ≈ 3), or Third Best. To report this number as if it means something more than that is to mislead. To make decisions based on the premise that 0.8833 means something other than "approximately one whole place in the ranking" is, literally, irrational——indeed, the Department's insistence that its aggregate scores represent true and meaningful quantities of interval data is equivalent, as a statement of logic, to proclaiming that 1 + 1 = 3, the only difference being that the latter statement is immediately recognizable as a delusion. An applicant could only be ranked 1, 2, 3, 4, or 5——not 2.8833 or 4.4000. Likewise, the only meaning that can be taken from the "aggregate score" of 4.4000 is that Costa's average weighted ranking is 4.4000, a number which, for reasons discussed, to be properly understood, must be rounded to the nearest ranking, i.e., 4. The fraction, four-tenths, representing less than half of a position in the ranking, cannot be counted as approximately one whole (additional) place (because 4.4 ≉ 5). And to treat 0.4000 as meaning four-tenths of a place better than Second Best is absurd. There is no mathematical operation in existence that can turn a number which signifies where in order something is into one that counts how much of that thing we have. To eliminate the false precision, the spurious digits must be rounded off, which is the established mathematical approach to dealing with numbers that contain uncertainty, as Dr. Cornew credibly confirmed. Rounding to the nearest integer value removes the meaningless figures and eliminates the overprecision manifested by those digits.
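The rounding operation that strips the spurious digits is, finally, the simplest step of all: each reported aggregate score is rounded to the nearest whole rank. In the Python sketch below, the scores are the figures reported by the Department; the rounding is the only operation added:

    # Round reported aggregate scores to the nearest whole rank, the only
    # level of precision the underlying ordinal data can support.
    reported = {"Nature's Way": 2.8833, "Costa": 4.4000}

    for applicant, score in reported.items():
        print(applicant, "->", round(score))  # 3 and 4, respectively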

Florida Laws (9) 120.52, 120.536, 120.54, 120.56, 120.569, 120.57, 120.595, 120.68, 381.986
# 2
KNAUS SYSTEMS, INC. OF FLORIDA vs DEPARTMENT OF CHILDREN AND FAMILY SERVICES, 99-001230BID (1999)
Division of Administrative Hearings, Florida Filed: Tallahassee, Florida Mar. 19, 1999 Number: 99-001230BID Latest Update: Sep. 23, 1999

The Issue
The issue is whether Respondent's proposed decision to award a computer-maintenance contract to Intervenor is clearly erroneous, contrary to competition, arbitrary, or capricious.

Findings Of Fact

On November 20, 1998, Respondent issued a Request for Proposals titled "The Maintenance of Network Terminal Equipment" (RFP). The purpose of the RFP is to obtain a three-year maintenance service contract for video display terminals, printers, microcomputers, and related components located throughout the State of Florida. The RFP seeks a three-year, labor-intensive contract projected at the hearing to be worth between $3 million and $3.5 million. RFP Section 6.1 promises a "comprehensive, fair, and impartial evaluation" of all timely submitted offers by an "Evaluation Committee," which is an undefined term. Nothing in the RFP describes the Evaluation Committee, in terms of number or qualifications, except that repeated references to "each evaluator" imply the existence of more than one member. Section 6.1.A identifies four evaluation categories: Corporate Experience (100 points), Project Staff (200 points), Minimum Maintenance Service Requirements (200 points), and Cost (500 points). The category at issue in this case is Corporate Experience. Section 6.1.B states that the Procurement Officer will evaluate whether each offer meets the "fatal criteria." The only relevant fatal criterion is 10, which states: "Are there three (3) years of financial statements for the proposer and any proposed subcontractors, TAB 6?" RFP, Section 6.3.A.10. The RFP does not define "financial statements," nor does it require audited financial statements. The Procurement Officer bore the responsibility for determining whether offers complied with the fatal criteria, and he testified that he applied this fatal criterion by checking for a balance sheet, income statement, and statement of changes in financial position. Tr., p. 84. However, the Procurement Officer, acknowledging the absence of any definition of "financial statements," testified that he would accept "even a balance sheet and income statement," which is exactly what he received from Intervenor. Tr., p. 99. The Procurement Officer added: "I didn't throw out anyone for lack of submitting any other financial statements that are commonly included in audited financial statements." Id. Section 6.1.B also provides that offers meeting the "fatal criteria" will be scored by the Evaluation Committee, which will score each responsive offer "based on the evaluation criteria provided in Section 6.3." Regarding Corporate Experience, Section 6.1.C.3 states: "The criteria, which will be used in evaluating Corporate Experience, are listed in the Rating Sheet, see Section 6.3.B." Section 6.3 states that the non-fatal criteria for each of the four categories are listed on the Rating Sheet, which is part of the RFP. Each evaluator must assign a score from 0-4 for each of these criteria. The meaning of each point value is as follows:

0 = no value; proposer has no capability or has ignored this area
1 = poor; proposer has little or no direct capability or has not covered this area, but there is some indication of marginal capability
2 = acceptable; proposer has adequate capability
3 = good; proposer has a good approach with above average capability
4 = superior; proposer has excellent capability and an outstanding approach

Section 6.3.B lists 40 evaluation criteria divided among three categories. (The fourth category is Cost; its scoring methodology is irrelevant to this case.) Project Staff and Minimum Maintenance Service Requirements contain a total of 37 criteria. Corporate Experience contains only three criteria.
The three criteria of Corporate Experience are:

1. Does the proposal present financial information that supports the proposer's ability to perform the work required by this Request for Proposal? (RFP section 5.6.B)
- Is the ratio of current assets to current liabilities at least 2:1?
- Is the debt to net worth ratio (total liabilities/net worth) equal to or less than 1?
- Has the cash/operating capital exceeded projected monthly operating expenses over the past three years?
- Does the proposer have sufficient financial resources to complete the project?

2. Does the proposal document the proposer's experience, organization, technical qualifications, skills, and facilities? (RFP section 5.6.B)
- Is the experience supplied (including subcontractor experience) relevant?
- Has the proposer (including any subcontractors) previously provided the maintenance services required by the department?
- Have the proposer and any subcontractors previously worked together?
- Does the proposer[-]supplied organization chart demonstrate the capability to perform well on this project?
- Have the projects supplied by the proposer or for any subcontractors been performed recently enough to be relevant?
- What percentage of the work is to be done by the proposer and each subcontractor?

3. Does the proposal present maintenance projects similar to the requirements of this RFP as references? (RFP section 5.6.B)
- Is each project described in sufficient detail so that the department is able to judge its complexity and relevance?
- Are projects similar or greater in scope?
- How broad is the range of equipment that was serviced?
- How current is the project?

The challenge focuses exclusively on the first criterion under Corporate Experience. On this criterion, the evaluators gave Intervenor an average of 3.0 and Petitioner an average of 2.0. The Procurement Officer prepared an Evaluation Manual for the evaluators. The Evaluation Manual states: Scoring should reflect the evaluator's independent evaluation of the proposal's response to each evaluation criterion. Following each evaluation criterion are considerations each evaluator may use in determining an evaluation score. These considerations are only suggestions. The considerations provided are not intended to be an all-inclusive list and will not be scored independently for the criterion that they address. Joint Exhibit 8, page 4. Nothing among the documents given prospective offerors informed them explicitly that the evaluators were not required to consider any of the bulleted items listed under each of the criteria. However, the Procurement Officer conducted a Proposers' Conference, at which he stated that the bullets under all of the criteria were strictly suggestions that the evaluators were free to ignore. Tr., p. 115. The Procurement Officer provided this information in response to a question asked by a representative of Intervenor. Joint Exhibit 23, pp. 63-64. The RFP did not require attendance at the Proposers' Conference, nor did Respondent publish the response following the conference. The three bullets under the first criterion under Corporate Experience appear in Respondent's manual titled "Developing a Request for Proposal (RFP)." The exhibit in evidence is a copy of the manual issued on April 1, 1998, but this manual has been in existence well prior to that. The manual suggests that the RFP include a criterion for evaluating the adequacy of the offeror's financial resources.
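The bulleted financial tests under the first criterion are simple balance-sheet arithmetic, as the following Python sketch illustrates. The figures are hypothetical and are not drawn from either offeror's statements:

    # Hypothetical balance-sheet figures (in dollars); illustrative only.
    current_assets = 500_000
    current_liabilities = 200_000
    total_liabilities = 350_000
    net_worth = 400_000

    current_ratio = current_assets / current_liabilities   # 2.5
    debt_to_net_worth = total_liabilities / net_worth       # 0.875

    print(current_ratio >= 2.0)      # at least 2:1? True
    print(debt_to_net_worth <= 1.0)  # equal to or less than 1? True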
Under the category of reviewing financial statements, the manual lists the first three bullets, as well as other considerations. However, nothing in the manual requires the inclusion of these bulleted items as scoring criteria or the consideration of these bulleted items within one or more scoring criteria. The rating sheets contain a space for comments. The following are the scores and comments from each of the five evaluators for the challenged criterion regarding the financial resources of Petitioner and Intervenor. Evaluator 1 assigned Intervenor a 2, noting "high debt, loss in income 1998." Evaluator 1 assigned Petitioner a 1, noting "financial information limited. Total assets less than value of contract." Evaluators 2 and 4 each assigned Intervenor a 3 and Petitioner a 2 without any comments. Evaluator 3 assigned Intervenor a 3, noting "Exceeds all requirements." Evaluator 3 assigned Petitioner a 3, noting "financials appear to meet this requirement. However, the replacement parts-inventory [sic] dollars seem very low in relations [sic] to the mentioned state contracts that are currently existing [sic]-[.]" Evaluator 5 assigned Intervenor a 4 without any comments, but citing the presence of a 10-K report in response to where he found the financial information. Evaluator 5 assigned Petitioner a 1 originally, noting "asset/liabilities 1:1." However, he changed his score to a 2 and lined out his comment. In general, the five evaluators have technical backgrounds in telecommunications or information management. They do not have significant backgrounds in business or financial matters. Evaluator 1 has a limited financial background, having taken a couple of accounting courses in college. His testimony during his deposition was evasive. Unwilling or unable at the deposition to discuss substantively the financial statements, Evaluator 1 claimed not to recall nearly all material aspects of the evaluation that had taken place about four months earlier. Evaluators 2 and 3 testified at the hearing. Evaluator 2 owns a company, although he has never read the financial statements of any company besides his own. However, he believes that he can read financial statements to determine if a corporation is profitable. On the other hand, Evaluator 2 admits that he does not know how to calculate the ratio of current assets to liabilities from the financial statements or the difference between a balance sheet and an income statement. Evaluator 2 also admits that he does not know the value of determining whether the ratio of debt to net worth is less than 1. Evaluator 2 concedes that he does not know how to determine if an offeror had sufficient cash to complete the contract. However, during his deposition, Evaluator 2 testified that he checked the financial statements for cash on hand and monthly income, although he admitted that he did not know how much cash a company would need to perform the contract. Evaluator 2 also admitted in his deposition that, in giving Intervenor a 3 and Petitioner a 2, he did not compare the net worth or ratio of cash to operating expenses of the two offerors. Evaluator 3 testified that he has some relevant education in college, but he has not previously examined financial statements for Respondent. Like Evaluator 2, Evaluator 3 testified that he did not compute any of the bulleted ratios and was incapable of calculating the current ratio described in the first bullet or the other ratios described in the second and third bullets.
Evaluator 3 conceded that he did not determine whether the offerors had sufficient resources to complete the project. In his deposition, Evaluator 3 admitted that his review of the financial criterion was largely confined to checking to see if an offeror's assets exceeded its liabilities. Evaluator 3 conceded that he did not compare debt loads. In two respects, Evaluator 3 approached the evaluation differently from his counterparts. First, he assumed that someone had already determined that the offerors were financially able to service the contract. Second, evidently relying on information not contained in the offers or RFP, Evaluator 3 determined that Petitioner's parts inventory was too low. In his deposition, Evaluator 4 stated that he felt that it was optional whether he had to consider whether the financial information supported an offeror's ability to perform the contract. In rating Intervenor, Evaluator 4 admitted that he was unaware of its debt load. Evaluator 4 testified in his deposition that he did not feel qualified to decide whether an offeror could perform financially under the RFP. In his deposition, Evaluator 5 testified that he did not know what financial resources an offeror must possess to be able to complete the contract. He also admitted that he never determined if Intervenor had operated at a loss for the past two years. In addressing the qualifications of the evaluators to score the financial criterion, it is useful to compare their evaluations to what was being evaluated. The Administrative Law Judge rejects Petitioner's implicit invitation to assess the qualifications of the evaluators without regard to the extent to which their evaluations corresponded with, or failed to correspond with, that which they were evaluating. It is impossible to perform much of a comparative analysis of the financial resources of Petitioner and Intervenor because of the paucity of financial information supplied by Petitioner. Petitioner did not submit audited, reviewed, or even compiled financial statements, so that a credibility issue attaches to its owner-generated statements. Also, Petitioner did not submit a statement of changes in financial position, which is the first financial document that the Procurement Officer testified that he would consult in assessing a corporation's financial resources. Tr., p. 88. Absent this data concerning cash flow, it is not possible to identify reliably the information necessary to consider the third bullet, which asks the evaluator to compare historic cash flow from operations (which is derived from the statement of changes in financial position) with the "projected monthly operating expenses" (which is derived from the income statement). Subject to these important qualifications concerning Petitioner's financial statements, Petitioner's balance sheet reveals a current ratio of 5:1 and a ratio of total liabilities to net worth of well under 1. By contrast, Intervenor's audited financial statements (for DecisionOne Corporation and Subsidiaries) reveal a current ratio of barely 1:1, total liabilities in excess of total assets, and a negative shareholder's equity of $204,468,000. Intervenor's income statement discloses a net loss of $171,641,000 in fiscal year ending 1998 with a note suggesting that $69,000,000 of this loss is attributable to nonrecurring merger expenses. 
If interest is included, as it should be (given its impact on real-world cash flow), Intervenor's statement of changes in financial position reports negative cash flows for the past three years. Counting interest and taxes, the negative cash flow in 1998 is $37,298,000. This negative cash flow is attributable to the payment of a $244,000,000 dividend to Intervenor's parent, but negative cash flows of $13,144,000 and $11,961,000 in 1997 and 1996, respectively, do not include any dividend payments. Perhaps partly due to the already-discussed problems in ascertaining the role, at hearing, of the accuracy of the scoring, Intervenor did not elicit explanatory testimony concerning its relatively complicated financial statements, although Intervenor's forbearance seems directed more toward not developing the evidentiary record concerning the formal and substantive deficiencies of Petitioner's financial statements. However, it is clear that, except for Evaluator 1, Respondent's evaluators could not and did not understand much more of Intervenor's financial statements than that they were professionally prepared and contain large numbers. Turning to the extent to which the scores correspond to what the evaluators were scoring, Petitioner's financial statements are incomplete and owner-generated. Given these facts, the evaluators could legitimately give Petitioner a 2, which is an "acceptable" score, reflective of "adequate capability." The evaluators could also have legitimately given Petitioner a 1, indicative of a "poor" score with "some indication of marginal capability." The evaluators could not have given Petitioner a 0 because its financial statements are at least partly present in the offer and reflect some financial capability. By contrast, Intervenor's financial statements are complete and audited. However, they portray a company that is in financial distress with substantial losses, a negative shareholder's equity, and ongoing negative cash flows. Although much better in form than the financial statements of Petitioner, Intervenor's financial statements raise at least one question as to form because, although disclosing interest and tax payments, they attempt to stress a modified cash flow without regard to these substantial cost items. Given the sizeable losses suffered recently by Intervenor, the evaluators could not rationally assign Intervenor a 3, which is "good" and reflective of "above average capability." Without dealing with Intervenor's losses and specifically identifying cash flow that would be available, after debt service and other expenditures, to service the contract, the evaluators could not rationally assign Intervenor even a 2. Except for Evaluator 1, the evaluators never identified the financial condition of Intervenor and thus never considered it in their scoring. Undermined from the start by a lack of knowledge of roughly how much financial capacity would be necessary to service the three-year contract, the scoring process, as applied to Intervenor, is further undermined by the near-total absence in the record of any informed reason for the scoring of Intervenor's offer. Evaluator 3 erroneously believed that someone not on the evaluation team had already determined that the offerors were financially capable of performing the contract. Evaluator 4 erroneously believed that evaluating the financial condition of the offerors was optional, and admitted that he was unqualified to perform this task in any event.
Evaluator 2 claimed to be able to identify losses on a financial statement, but, if he did so as to Intervenor's statements, there is no evidence in the record that he gave the matter any thought. Evaluator 5 expressly admitted that he never made this determination. The only informed bases in the record, either contemporaneous with the scoring process or at any later time through the hearing, for the scoring of the subject criterion in the offers of Petitioner and Intervenor are the evaluation forms of Evaluator 1. In these forms, Evaluator 1 correctly noted the loss suffered by Intervenor in 1998 and the already-mentioned formal deficiencies of Petitioner's financial statements. However, the sole contribution of Evaluator 1 to this case is in the comments on his forms. He was unwilling and unable to discuss any aspect of his scoring when questioned at his deposition. The case of the financial qualifications of the evaluators thus comes down to four evaluators who had no idea what they were doing and one evaluator who offers only two spare, handwritten notes suggestive of a rational basis for distinguishing between the financial capabilities of the two offerors. This is insufficient. The RFP promised an informed evaluation by more than one evaluator. Even if the RFP did not so promise, the promising comments of Evaluator 1 are not indicative of his qualifications when, for no good reason, he could not recall the recently completed evaluation process or could not or would not respond meaningfully to questions concerning the financial materials that he was evaluating. For the purpose of assessing the qualifications of Evaluator 1, the hint of rationality present in his two comments is overwhelmingly offset by the actual financial condition of Intervenor. Rejecting a chance to discuss his evaluation, Evaluator 1 has chosen to let his evaluation be judged on the strength of its correspondence to the subject matter of the evaluation, Intervenor's financial statements. Under all of the circumstances, Evaluator 1's evaluation of the subject criterion in Intervenor's offer was clearly erroneous and contrary to competition. The remaining evaluators' evaluations of this criterion were clearly erroneous, contrary to competition, arbitrary, and capricious. However, Petitioner has elected not to make a direct issue of the accuracy of the scores. Addressing the qualifications of the evaluators, then, their evident lack of qualifications, coupled with the already-described grave deficiencies in the results of their scoring the first criterion of Intervenor's offer and the material impact on the outcome of the relative scoring of the offers of Intervenor and Petitioner, has rendered the evaluation process clearly erroneous, contrary to competition, arbitrary, and capricious.

Recommendation

It is RECOMMENDED that the Department of Children and Family Services enter a final order rejecting all offers. DONE AND ENTERED this 3rd day of September, 1999, in Tallahassee, Leon County, Florida. ___________________________________ ROBERT E. MEALE Administrative Law Judge Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-3060 (850) 488-9675 SUNCOM 278-9675 Fax Filing (850) 921-6847 www.doah.state.fl.us Filed with the Clerk of the Division of Administrative Hearings this 3rd day of September, 1999. COPIES FURNISHED: Gregory D. Venz, Agency Clerk Department of Children and Family Services Building 2, Room 204B 1317 Winewood Boulevard Tallahassee, Florida 32399-0700 John S. Slye, General Counsel Department of Children and Family Services Building 2, Room 204 1317 Winewood Boulevard Tallahassee, Florida 32399-0700 William E. Williams Andrew Berton, Jr. Huey Guilday Post Office Box 1794 Tallahassee, Florida 32302-1794 R. Beth Atchison Assistant General Counsel Department of Children and Family Services Building 2, Room 204 1317 Winewood Boulevard Tallahassee, Florida 32399-0700 Gregory P. Borgognoni Kluger Peretz 17th Floor, Miami Center 201 South Biscayne Boulevard Miami, Florida 33131

Florida Laws (3) 120.57, 287.001, 287.057
# 3
GREAT AMERICAN FINANCIAL RESOURCES, INC. vs BROWARD COUNTY SCHOOL BOARD, 03-000614BID (2003)
Division of Administrative Hearings, Florida Filed: Fort Lauderdale, Florida Feb. 24, 2003 Number: 03-000614BID Latest Update: Jan. 09, 2004

The Issue
The issue is whether, in connection with a procurement of vendors of tax-sheltered annuity programs for Respondent's employees, Respondent's failure to select Petitioner's proposal instead of, or in addition to, the proposals of ten offerors that were accepted is contrary to the agency's governing statutes, the agency's rules or policies, or the solicitation specifications, in violation of Section 120.57(3)(f), Florida Statutes.

Findings Of Fact

On October 7, 2002, Respondent issued RFP 23-089V--Tax Sheltered Annuity Program for School Board Employees (RFP). The purpose of the procurement is to select multiple vendors to sell tax-sheltered annuities to Respondent's 26,000 full-time employees through a payroll-deduction plan. In general, Respondent sought to improve the existing tax-sheltered annuity program that it offered its employees by selecting vendors whose products would improve the quality of investment products, decrease expenses, establish service standards, increase participation, improve the dissemination of program information, maintain consistent communications, and ensure compliance with the applicable provisions of the Internal Revenue Code of 1986, as amended (IRC). Under the existing tax-sheltered annuity program, 31 percent of Respondent's eligible employees make payroll deductions totalling about $36 million annually to 21 vendors. If Respondent does not select an existing vendor in the procurement that is the subject of this case, the vendor's enrollment will be frozen, and the vendor may not enroll new members. RFP Section 2.0 seeks annuities and mutual funds, under IRC Sections 403(b)(1) and 403(b)(7), respectively (collectively, TSAs), but not life insurance unless directly connected to an annuity. Section 2.0 states that Respondent will not contract with "independent agents or brokers," but will contract "directly with one or more financial organizations independently or . . . with multiple financial organizations from the same vendor(s)." Section 2.0 explains that it will be the "carrier's/company's responsibility to appoint, supervise, and maintain properly licensed and trained agents to offer these products." Respondent imposed this requirement on vendors so that they would be directly responsible for the persons who undertook the marketing of TSAs to Respondent's employees. RFP Section 1.0 includes a certification from the offeror. Part of the Required Response Form, the certification states: I hereby certify that: I am submitting the following information as my firm's (proposer) proposal and am authorized by proposer to do so; proposer agrees to complete and unconditional acceptance of the contents of Pages 1 through 19 inclusive of this Request for Proposals, and all appendices and the contents of any Addenda released hereto; proposer agrees to be bound to any and all specifications, terms and conditions contained in the Request for Proposals, and any released Addenda and understand [sic] that the following are requirements of this RFP and failure to comply will result in disqualification of proposal submitted; proposer has not divulged, discussed, or compared the proposal with other proposers and has not colluded with any other proposer or party to any other proposal; proposer acknowledges that all information contained herein is part of the public domain as defined by the State of Florida Sunshine and Public Records Laws; all responses, data and information contained in this proposal are true and accurate. RFP Section 3.4 warns: "Any modifications or alterations to this form shall not be accepted and proposal will be rejected. The enclosed original Required Response Form will be the only acceptable form." RFP Section 3.7 sets forth the "minimum eligibility" criteria, which an offeror must meet "[i]n order to be considered for award . . .." The criteria are: Insurance carriers must be licensed in the State of Florida and provide a copy of your license and/or certificate.
Insurance carriers must have, and maintain, a minimum size category of VI and a financial rating of A- from A.M. Best. If proposer is proposing a fixed or variable annuity program, the proposer must be licensed as a life insurance carrier within the State of Florida. If the proposer is not an insurance carrier, it shall represent and warrant that it is a broker-dealer registered with the U.S. Securities and Exchange Commission under the Securities Exchange Act of 1934 and with any applicable State securities commission, and also is a member of the National Association of Securities Dealers, Inc. Proposer must be a direct provider of the product(s) offered versus a marketing unit for the product(s) offered. RFP Section 3.8 requires each offeror to complete the 14-page questionnaire attached as Attachment A. Section 3.8 states: "If you are unable to answer a question, indicate why. If you are unable or unwilling to disclose particular information asked in a question, indicate why." Question 3 of the "Company Information" part of the questionnaire asks: "How long has your company (not parent company) been licensed to do business?" Question 4 of the "Company Information" part of the questionnaire asks: "How long has your company been licensed to do business in the State of Florida?" Question 5 of the "Company Information" part of the questionnaire asks: "In how many states is your company licensed to do business?" Question 6 of the "Company Information" part of the questionnaire asks: "Do you currently have all the necessary licenses and registration to perform the activities proposed?" Question 7 of the "Company Information" part of the questionnaire asks: "What is your company's (not parent company) total assets under management for 403(b) Programs including the number of plans and number of participants as of December 31, 2001?" Question 8 of the "Company Information" part of the questionnaire asks: "If applicable, what was your company's ratings during each of the most recent three years?" This question contains a matrix with the years 2000-2002 and four sources of ratings: "A.M. Best," "Moody's," "S&P," and "Duff & Phelps." Question 13 of the "Company Information" part of the questionnaire asks: "Provide the name, address, and telephone number of three public entities (preferably public schools) 403(b) clients we may contact as references." Questions 1 and 2 of the "Contract Overview" part of the questionnaire ask the offeror to identify its investment options. Questions 7-19 of the "Contract Overview" part of the questionnaire ask the offeror detailed questions about the expenses associated with its TSA programs. Question 16 of the "Variable Annuity" part of the questionnaire asks the offeror to identify each variable-annuity fund, by name, that it offers participants. Question 19 of this part asks for the rate of return for each investment option for the period ending June 30, 2002. Question 21 of the "Mutual Fund" part of the questionnaire asks the offeror to identify each mutual fund, by name, that it offers participants. Question 25 of this part asks for the total cumulative rate of return for each investment option for the period ending June 30, 2002. Question 26 of this part asks for the Morningstar Rating for each investment option, as of June 30, 2002, if applicable.
Question 29 of the "Contract Overview" part of the questionnaire asks if the offeror will provide a toll-free and local number for participants to conduct specified financial transactions, including changing investment mixes and beneficiaries. Question 30 of the "Contract Overview" part of the questionnaire asks the same question regarding internet access for participants. Question 22 of the "Administration" part of the questionnaire asks: "Do you offer electronic investment advice to program participants? . . ." Question 23 of the "Administration" part of the questionnaire asks: "Do you offer an asset allocation program? . . ." Question 10 of the "Enrollment Procedures and Services" part of the questionnaire asks: "Do you provide any communications to participants on a regular basis (e.g., newsletters)? Please provide examples." Question 11 of the "Enrollment Procedures and Services" part of the questionnaire asks: "How will you restrict your representatives from selling other products to [Respondent's] employees?" Question 12 of the "Enrollment Procedures and Services" part of the questionnaire asks: "Has your organization had a SAS 70 internal controls review? Please attach." RFP Section 3.10 states that the "following services are requested by" Respondent and asks each offeror to "[c]learly describe how [it] can accomplish each of the following[.]" Section 3.10.1 provides a matrix with three columns: "Yes, Can Comply," "Yes, Can Comply But With Deviations," and "No, Cannot Comply." The rows describe specific services, such as providing customer service telephone numbers that are local calls, a toll-free telephone number for employees who reside outside the local area codes, and videotapes or websites educating participants about TSAs. RFP Section 3.11 requires each offeror to complete Attachment C, which is a Financial Response form. This document requires each proposal to list the "annual participant account charge," "wrap fees," "mortality, expense, [and] administrative charges," "total fund management or separate account charges for each fund offered," "front loads," "CDSC or surrender charges & terms," "other fees & expenses," and "4th quarter interest rate." Section 3.11 also requires each offeror to calculate the "cumulative account balance," "average compound annual net rate of return," and "cash surrender value" for a specified sum over a specified period. RFP Section 3.12 requires each offeror to provide the following information to receive points for minority or women business enterprises: Proposers must provide information regarding diversity of proposer's company. Complete and submit Attachment F. Proposer must provide information regarding diversity of the proposer's local . . . agents and/or representatives. Complete and submit Attachment G. Proposer must provide information and[/]or documentation of the Proposing Company's outreach program for employment and/or contracting of local agents. Proposer shall submit information of its involvement in the minority community. Such evidence may include, but not be limited to, minority-sponsored events, purchases made from minority companies, funds targeting minority students, financial considerations and/or providing other corporate resources for minority community projects. Attachment G provides a chart to be completed by the offeror.
The chart lists "Broker/Agents" and "% of Total Workforce" and requests the following data for each category: total, non-Hispanic white males, non-Hispanic white females, non-Hispanic black males, non-Hispanic black females, Hispanic males, Hispanic females, Asian males, Asian females, American Indian/Alaskan Native males, and American Indian/Alaskan Native females. RFP Section 5.0 requires offerors to submit their proposals by 2:00 p.m. on November 20, 2002. The events calendar states that, on January 13, 2003, the Evaluation Committee will review proposals and recommend awards, and, on January 21, 2003, Respondent will post the recommendation. RFP Section 6.1 states: The Superintendent's Insurance Advisory Committee (hereinafter referred to as "Committee"), [Respondent], or both reserve the right to ask questions of a clarifying nature once proposals have been opened, interview any or all proposers that respond to the RFP, or make their recommendations based solely on the information contained in the proposals submitted. RFP Section 6.2 states: The Committee shall evaluate all proposals received, which meet or exceed Section 3.7, Minimum Eligibility Requirements. The Committee reserves the right to ask questions of a clarifying nature and interview any or all proposers that meet or exceed Section 3.7. RFP Section 6.3 states that the Insurance Advisory Committee (Committee) shall evaluate "[p]roposals that meet or exceed Section 3.7," pursuant to the following criteria (with maximum points indicated in parentheses): "Experience and Qualifications" (30 points), "Scope of Services Provided" (30 points), "Minority/Women Business Enterprise--Diversity of Proposer's Company" (5 points), "Documentation of the Proposing Company's Minority/Women Business Enterprise Outreach Programs" (5 points), and "Cost of Services Provided" (30 points). RFP 6.3 warns: "Except for those requirements stated in Section 3.7 and Section 9.0, the failure to respond, provide detailed information or to provide requested proposal elements may result in the reduction of points in the evaluation process." RFP Section 6.4 states: Based upon the results of Section 6.3, the Committee, at its sole discretion, may: interview; recommend award to the top ranked proposer; may recommend award to more than one top ranked proposer; may short-list the top ranked proposers (short-list number to be determined by the Committee) for further consideration or, may reject all proposals received. RFP Section 6.5 states: In the event that the Committee chooses to short-list proposers, the list of short-listed proposers may be further considered by the Committee, [Respondent], or both. The Committee, [Respondent], or both may re-interview the short-listed proposers in order to make an award recommendation (by the Committee) or an award by [Respondent]. During the interview process, no submissions made, after the proposal due date, amending or supplementing the proposal shall be considered. RFP Section 6.6 states: In the event that an Agreement between the Committee, [Respondent] or both, and the selected proposer(s) is deemed necessary, at the sole discretion of the Committee, [Respondent] or both, the Committee will begin negotiations with the selected proposer(s). The Committee reserves the right to negotiate any term, condition, specification or price with the selected proposer(s). . . . RFP Section 7.4 provides that the term of the contract will extend through December 31, 2010, plus possible renewals through December 31, 2015. 
RFP Section 8.11 states: The award of this RFP is subject to the provisions of Chapter 112, Florida Statutes, as currently enacted or amended from time to time. All proposers must disclose with their proposal the name of any officer, director or agent who is also an employee of [Respondent]. In addition, Gallagher Benefit Services, Inc. will be providing consulting services to [Respondent] in relation to this RFP. All proposers must disclose with their proposal the name of any officer, director or agent, who is also an employee of Gallagher Benefit Services, Inc. or any subsidiaries of Gallagher Benefit Services, Inc. RFP Section 8.13 provides that, in the event of a conflict among documents, the order of priority is as follows: the agreement between offeror and Respondent, RFP addenda (the latest receiving the highest priority), RFP, and offeror's proposal. RFP Section 8.22 specifies the procedure under which a person may protest the specifications of the RFP and warns that the failure to timely protest the RFP specifications "shall constitute a waiver of proceedings . . . ." RFP Section 8.23 provides that Respondent will post the Committee's recommendations and tabulations on January 21, 2003. Section 8.23 provides that any person seeking to protest the "decision or intended decision" shall file a notice of protest and formal written protest within certain time limits. Citing School Board Policy 3320 and Chapter 120, Florida Statutes, Section 8.23 warns that the failure to timely protest the Committee's recommendations "shall constitute a waiver of proceedings . . . ." RFP Section 8.24 states that Respondent "reserves the right . . . to directly negotiate/purchase per School Board policy and/or State Board Rule 6A-1.012, as currently enacted or as amended from time to time, in lieu of any offer received or award made as a result of this RFP if it is in its best interest to do so." RFP Section 8.28 states: By [Respondent]: [Respondent] agrees to be fully responsible for its acts of negligence, or its agents' acts or negligence when acting within the scope of their employment and agrees to be liable for any damages resulting from said negligence. Nothing herein is intended to serve as a waiver of sovereign immunity by [Respondent]. Nothing herein shall be construed as consent by [Respondent] to be sued by third parties in any manner arising out of any contract. By VENDOR: VENDOR agrees to indemnify, hold harmless and defend [Respondent], its agents, servants and employees from any and all claims, judgments, costs and expenses including, but not limited to, reasonable attorney's fees, reasonable investigative and discovery costs, court costs and all other sums which [Respondent], its agents, servants and employees may pay or become obligated to pay on account of any, all and every claim or demand, or assertion of liability, or any claim or action founded thereon, arising or alleged to have arisen out of the products, goods or services furnished by the VENDOR, its agents, servants or employees; the equipment of the VENDOR, its agents, servants or employees while such equipment is on premises owned or controlled by [Respondent]; or the negligence of VENDOR or the negligence of VENDOR's agents when acting within the scope of their employment, whether such claims, judgments, costs and expenses be for damages, damage to property including [Respondent]'s property, and injury or death of any person whether employed by the VENDOR, [Respondent] or otherwise. 
RFP Section 8.35.1 provides that Respondent "reserves the right to request additional information [or] reject any or all proposals that do not meet mandatory requirements . . . ." RFP Section 8.35.2 provides that Respondent "reserves the right to waive minor irregularities in any proposal," although "such a waiver shall in no way modify the RFP requirements or excuse the proposer from full compliance with the RFP specification and other contract requirements if the proposer is awarded the contract." RFP Section 8.35.3 states that Respondent "may" reject a proposal "if it does not conform to the rules or requirements contained in this RFP." Section 8.35.3 cites as "[e]xamples for rejection" such nonconformities as the failure to file the proposal by the deadline, the failure to execute and return the Required Response Form described in RFP Section 1.0, the failure to respond to all subsections of the RFP, or the addition of provisions by an offeror that reserve the right to accept or reject an award or to enter into a contract or add provisions contrary to those in the RFP. RFP Section 8.42 states: "No submissions made after the proposal opening, amending or supplementing the proposal shall be considered." RFP Section 8.43 states: "The Committee and/or [Respondent] reserves the right to waive minor irregularities or technicalities in proposals received." RFP Section 9.0 states: Proposer agrees, by submission of their [sic] proposal, that any Agreement resulting from this RFP will include the following provisions, which are not subject to negotiation. Proposer agrees to the following: --Obtain and maintain insurance with coverage limits in Special Conditions 7.6 for the term of any Agreement. * * * Twenty-three offerors timely submitted offers in response to the RFP. In addition to Petitioner, the offerors were: American Express Financial Advisors (American Express), Americo Financial Life and Annuity Insurance Company (Americo), CitiStreet Associates LLC (CitiStreet), Equitable Life Assurance Society of the United States and AXA Advisors LLC (Equitable), First Investors Corporation (First Investors), Horace Mann Life Insurance Company (Horace Mann), ING, Life Insurance Company of the Southwest (Southwest), Lincoln Financial Group (Lincoln), MassMutual, MetLife, Inc. (MetLife), Nationwide Life Insurance Company (Nationwide), New York Life Insurance & Annuity Corp. (New York Life), PFS Investments, Inc. (PFS), Pioneer Funds Distributor, Inc. (Pioneer), Putnam Investments (Putnam), Security Benefit Group of Companies (Security Benefit), The Hartford (Hartford), The Legend Group (Legend Group), The Variable Annuity Life Insurance Company (VALIC), TIAA-CREF, and Veritrust Financial, LLC (Veritrust). The responsibility of the Committee is to score the proposals, adopt a scoring threshold, negotiate agreements with offerors scoring at and above the threshold, and recommend to the Superintendent those offerors with which the Committee successfully negotiates agreements. The responsibility of the Superintendent is to recommend to the School Board the offerors that it should accept as vendors of TSAs to its eligible employees. The Committee comprises 15 members, who represent Respondent's administrators and nonmanagerial employees, as well as three of the School Board's nine members. The purpose of the Committee is to provide the Superintendent with advice regarding insurance matters. The Superintendent rarely overrides the recommendations of the Committee. 
In the present procurement, Respondent makes no financial contributions to any vendors, so Respondent's sole interest is the satisfaction of its employees. Thus, it is highly unlikely that the Superintendent or even the School Board would override the recommendations of the Committee. However, as authorized by RFP Section 8.23, Petitioner has protested the recommendations of the Committee. Neither the Superintendent nor the School Board has yet considered the Committee's recommendations, which are the sole subject of this bid protest. Members of the Committee received copies of the proposals shortly after they were submitted. At the same time, the Committee chair assigned to Respondent's insurance consultant, Gallagher Benefit Services, Inc. (Gallagher), the task of examining and evaluating the proposals. Ultimately owned by Arthur J. Gallagher & Company, Gallagher is part of a large family of corporations involved in the financial-services industry. Arthur J. Gallagher's annual revenues exceed $1 billion. Gallagher is the third largest insurance broker in the United States and the fourth largest in the world. A fee-based consultant, Gallagher employs 1,000 persons, and the Gallagher family of corporations employs over 7,000 persons. GBS Retirement Services, Inc., which is a broker-dealer and part of the Gallagher family of corporations, manages over $3 billion in retirement plan assets. In examining and evaluating the proposals, Gallagher prepared an Analysis of Proposals, which it supplied to each member of the Committee one week prior to its meeting on January 13, 2003. The Analysis of Proposals contains a useful glossary of terms, a "Summary of Returns," and an "Individual Fund Expenses Handout," as well as 507 pages of materials consisting almost exclusively of analysis of offers by way of matrices reflecting various provisions of the RFP. The Analysis of Proposals comprises four parts: "Minimum Eligibility," "Experience & Qualifications," "Scope of Services," and "Cost." The first part corresponds to the Minimum Eligibility criteria set forth in RFP Section 3.7. The second, third, and fourth parts correspond to the three 30-point scoring categories set forth in RFP Section 6.3--omitting only the two five-point categories for minority or women business enterprises, which Gallagher did not consider or analyze. As discussed below, Gallagher prepared scoring sheets for the parts of the Analysis of Proposals corresponding to the three 30-point scoring categories. Except for the area of references, as discussed below, these scoring sheets were derived from the more extensive information contained in the Analysis of Proposals. However, Gallagher did not prepare any synopsis of the Minimum Eligibility analysis contained in the Analysis of Proposals. Gallagher did not analyze the proposals of New York Life and Putnam, which submitted proposals but failed to sign the Required Response Form. Respondent's Purchasing Department determined that these unsigned proposals were nonresponsive and did not forward them to Gallagher for evaluation. Gallagher determined that each of the 21 remaining proposals met the Minimum Eligibility criteria. However, as stated in the Executive Summary of the Analysis of Proposals, Gallagher relayed the doubts of Respondent's Purchasing Department that the proposal of MassMutual was signed by an authorized representative and deferred for the Committee's determination whether the proposals of PFS, Legend Group, and Veritrust complied with RFP Section 3.7.5. 
In general, Gallagher treated the Minimum Eligibility criteria as requirements of substance, not form. Thus, if an offeror neglected to provide the specified documentation in its proposal, Gallagher researched readily available sources to determine if the offeror satisfied the criterion. For RFP Section 3.7.1, which requires that offerors that are insurers must be licensed in Florida and provide a copy of their license, Gallagher checked online with the Florida Department of Financial Services, a publicly available website, to determine whether each such offeror was licensed in Florida. Gallagher determined that each such offeror was properly licensed. Gallagher did not recommend the disqualification of vendors that failed to provide a copy of their current insurance licenses. By these means, Gallagher ensured that no unlicensed offeror would be deemed compliant merely by providing an apparent copy of a license and that no properly licensed offeror would be deemed noncompliant merely for omitting a copy of its license or including a copy of an old license. For RFP Section 3.7.2, which requires that offerors that are insurers have an A.M. Best minimum size category of VI and financial rating of A-, Gallagher again checked online with A.M. Best, a subscriber-available website, to determine whether each such offeror was so rated by A.M. Best. Gallagher determined that each such offeror met the minimum A.M. Best ratings. For RFP Section 3.7.3, which requires that offerors that are proposing fixed or variable annuity programs must be licensed in Florida as life insurers, Gallagher relied on its verification under RFP Section 3.7.1. Gallagher reasoned that Section 3.7.3 was redundant because the sale of such programs requires licensure only as an insurer, not as a life insurer. For RFP Section 3.7.4, which requires that offerors that are not insurers must warrant that they are registered broker-dealers, Gallagher checked other online resources to verify the status of noninsurer offerors as registered broker-dealers. For RFP Section 3.7.5, which requires that offerors must be "direct provider[s] of the product(s) offered," Gallagher largely deferred to the Committee because this criterion is not an industry standard. Finding some doubt as to two offerors, Gallagher required them to provide letters clarifying their status as direct providers, but did not analyze any of the proposals for compliance with this criterion. Gallagher's treatment of the Minimum Eligibility criteria as requirements imposed upon an offeror's actual status, and not merely formal requirements imposed upon an offeror's proposal, is the correct interpretation of RFP Sections 3.7.1, 3.7.2, and 3.7.3. These sections require that an offeror, in fact, meet certain requirements. The additional provision in Section 3.7.1 concerning a copy of an insurance certificate serves the convenience of Respondent and does not transform the requirement from one of fact to one of fact and clerical competence in assembling a proposal. RFP Section 3.7.4 seems to require only a representation by noninsurer offerors that they are registered broker-dealers. These types of provisions often raise issues in bid challenges when the procuring agency attempts to verify a bidder's response. The issue is somewhat simpler in this case, though, because only four offerors were not insurers: Pioneer, PFS, Legend Group, and Veritrust. None of these offerors scored sufficient points to be designated for negotiations. 
Also, these four offerors did not compete with Petitioner, which, as an insurer, was not required to comply with Section 3.7.4, so that Gallagher's interpretation of Section 3.7.4, which effectively recommended that these four offerors proceed to scoring, was not especially significant to Petitioner. Under the circumstances, Gallagher's interpretation of Section 3.7.4, which is a reasonable interpretation, if not the better interpretation, is moot. RFP Section 3.7.5, which requires that the offeror be a direct provider, was designed by Respondent to ensure accountability among vendors of TSAs. Gallagher's determination, in effect, to defer to the Committee any close determinations concerning an offeror's compliance with this unusual criterion was entirely reasonable. This decision was preferable to Gallagher's attempting to construe this requirement, with which it had no experience, and possibly recommend the exclusion of an offeror that Respondent would not have excluded. Gallagher also prepared "GBS Scoring Sheets" for each proposal and an overall "Gallagher Benefit Services TSA Evaluation." Although based on the RFP, the GBS Scoring Sheets are products of Gallagher's design and are not a comprehensive restatement of all of the RFP provisions applicable to each category. The GBS Scoring Sheets identify five scoring items for Experience and Qualifications, nine scoring items for Scope of Services Provided, and four scoring items for Cost of Services Provided. The GBS Scoring Sheets assign a maximum of ten points for each of the three categories and deduct a specific number of points from a proposal's score if the proposal fails to satisfy certain items. For ease of reference, at the suggestion of Mr. Weintraub, Gallagher tripled its raw scores, so that the maximum possible score for each of the three categories is 30 points, which is the maximum possible score for each of these categories in the RFP. The GBS Scoring Sheets gave each offeror two raw points in Experience and Qualifications, one raw point in Scope of Services Provided, and two raw points in Cost of Services. Petitioner correctly contends in its proposed recommended order that this feature of the scoring in the GBS Scoring Sheets is unexplained, but it is also harmless. For each of the three categories, Gallagher identified items for scoring based in part on the variability of the proposals concerning their responses to such items. It appears that Gallagher also drew on its financial expertise in identifying critical features of the RFP. For Experience and Qualifications, the scoring items are: "403(b) Assets," "Number of Participants," "Rating," "Years," and "References." For "403(b) Assets," the GBS Scoring Sheets reduce a score by two points if the value is under $1 billion, by one point if over $1 billion but not over $10 billion, and by zero points if over $10 billion. For "Number of Participants," the GBS Scoring Sheets reduce the score by two points if less than 100,000 participants or the proposal did not provide the information, by one point if over 100,000 participants but not over 1,000,000 participants, and by zero points if over 1,000,000 participants. For "Rating," the GBS Scoring Sheets reduce the score by one point if the A.M. Best Rating is A or A- and by zero points if the A.M. Best Rating is A+. For "Years," the GBS Scoring Sheets reduce the score by one point if less than 10 years. For "References," the GBS Scoring Sheets reduce the score by two points for "Negative Comments." 
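To make the deduction mechanics concrete, here is a minimal sketch, in Python, of the Experience and Qualifications scoring just described. It is an illustration only, not Gallagher's actual worksheet: the function and parameter names are hypothetical, and the unexplained automatic raw points noted above are not modeled.

```python
# Hypothetical sketch of the GBS "Experience and Qualifications" deductions
# described in the findings; not Gallagher's actual scoring sheet.
def experience_and_qualifications_score(assets_403b, participants, am_best_rating,
                                        years_licensed, negative_reference):
    raw = 10  # each category is scored on a 10-point raw scale
    # "403(b) Assets": -2 if under $1 billion; -1 if over $1 billion
    # but not over $10 billion; no deduction if over $10 billion
    if assets_403b < 1_000_000_000:
        raw -= 2
    elif assets_403b <= 10_000_000_000:
        raw -= 1
    # "Number of Participants": -2 if under 100,000 or not reported;
    # -1 if over 100,000 but not over 1,000,000
    if participants is None or participants < 100_000:
        raw -= 2
    elif participants <= 1_000_000:
        raw -= 1
    # "Rating": -1 for an A.M. Best rating of A or A-; none for A+
    if am_best_rating in ("A", "A-"):
        raw -= 1
    # "Years": -1 if licensed for fewer than 10 years
    if years_licensed < 10:
        raw -= 1
    # "References": -2 for negative comments from a reference check
    if negative_reference:
        raw -= 2
    return raw * 3  # tripled to map the 10-point raw scale onto the RFP's 30 points
```

On this sketch, a proposal drawing every deduction would score (10 - 8) x 3 = 6 points, while a proposal drawing none would score the full 30.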
For Scope of Services Provided, the scoring items are: "VRS" [Voice Response System], "Internet," "Routine Communication," "Electronic Investment Advice," "Allocation Program," "SAS 70," "Restrict Sale of Other Products," "954 Area Code," and "Video or Website." For "VRS" and "Internet," the GBS Scoring Sheets reduce the score by one point each for a limited, rather than full, voice response system or Internet access in terms of the ability of a participant to use this medium to change his or her beneficiary. For the remainder of the items, the GBS Scoring Sheets reduce the score by one point if the proposal fails to comply with the item. For Cost of Services Provided, the scoring items are: "Fees," "Investment Performance," "Morningstar Ratings," and "Morningstar Category Rank." For "Fees," the GBS Scoring Sheets reduce a score by two points if "poor," by one point if "reasonable expenses relative to universe," and by zero points if "no front/back end [charge] and competitive total." For "Investment Performance," the GBS Scoring Sheets reduce a score by two points if the proposal did not report performance, by one point if "wrong date or data," and by zero points if "as requested." For "Morningstar Ratings," the GBS Scoring Sheets reduce the score by two points if the average rating is less than three stars, by one point if the average rating is three stars, and by zero points if the average rating is four or five stars. For "Morningstar Category Rank," the GBS Scoring Sheets reduce the score by two points if less than 20 percent of the funds are in the top half, by one point if more than 20 percent but not more than 40 percent of the funds are in the top half, and by zero points if more than 40 percent of the funds are in the top half. 
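The Cost of Services Provided deductions follow the same pattern. A sketch under the same caveats, with simplified one-word labels standing in for Gallagher's qualitative "Fees" and "Investment Performance" judgments:

```python
# Hypothetical sketch of the GBS "Cost of Services Provided" deductions
# described in the findings; the category labels are simplified stand-ins.
def cost_of_services_score(fees, performance_reporting, avg_morningstar_stars,
                           pct_funds_in_top_half):
    raw = 10
    # "Fees": -2 if "poor"; -1 if merely reasonable relative to the universe;
    # no deduction if no front/back-end charge and a competitive total
    raw -= {"poor": 2, "reasonable": 1, "competitive": 0}[fees]
    # "Investment Performance": -2 if performance was not reported;
    # -1 for a wrong date or wrong data; no deduction if reported as requested
    raw -= {"not_reported": 2, "wrong_date_or_data": 1, "as_requested": 0}[performance_reporting]
    # "Morningstar Ratings": -2 if the average rating is under three stars;
    # -1 at three stars; no deduction at four or five stars
    if avg_morningstar_stars < 3:
        raw -= 2
    elif avg_morningstar_stars < 4:
        raw -= 1
    # "Morningstar Category Rank": deductions keyed to the share of funds
    # ranking in the top half of their category
    if pct_funds_in_top_half < 20:
        raw -= 2
    elif pct_funds_in_top_half <= 40:
        raw -= 1
    return raw * 3
```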
References are requested in question 13 of the "Company Information" part of the questionnaire. Gallagher's scoring of Experience and Qualifications ranged from a low of 9 points for Americo to a high of 30 points for Legend Group and TIAA-CREF. Petitioner received a 15, which is the third lowest score for this section on the GBS Scoring Sheets. In its proposed recommended order, Petitioner complains primarily about two aspects of its scoring. First, Petitioner contends that Gallagher incorrectly deducted two points for the alleged failure of Petitioner's proposal to state the number of participants. This contention is correct. Petitioner's proposal contained information from which Gallagher should have determined that the proper score for Petitioner on this item was -1 point, not the -2 points that Gallagher assigned. When tripled, this deficiency amounts to three points. However, this scoring anomaly invites consideration of the relationship of Gallagher's scoring to the Committee's scoring. As already noted, Gallagher did not attempt to preempt the Committee's responsibility for scoring. The GBS Scoring Sheets assigned Petitioner 15 points for Experience and Qualifications, but the Committee average score for Petitioner on Experience and Qualifications was a 19. This was the largest difference in scoring between the GBS Scoring Sheets and the Committee average score in Experience and Qualifications. For only one other offeror, Southwest, did the Committee average score in this category differ from the GBS Scoring Sheet score by four points, and again the Committee raised Gallagher's score by this amount. Overall, though, the average scores that the Committee assigned each offeror in Experience and Qualifications tracked the GBS Scoring Sheets scores. Cumulatively, the differences amounted to only 18 points, so the increases assigned to Petitioner and Southwest amount to nearly half of the total difference between Gallagher and the Committee members. This fact suggests that the Committee members exercised independence and may have generated a more reliable score than Gallagher did for Petitioner's proposal for the category of Experience and Qualifications. Second, Petitioner contends that Gallagher incorrectly deducted one raw point for references. References was the only item among the three main scoring categories for which Respondent's employees collected the data. One of Respondent's employees in the Benefits Department contacted references for all the offerors and carefully noted their responses to three basic questions. The employee consistently applied a simple, but fair, test for the question at issue, so that an offeror received credit only if the reference answered "yes" to a question regarding its satisfaction with the offeror. One reference of Petitioner answered "somewhat," and Petitioner lost credit. Gallagher merely applied this data to its scoring matrix when it deducted one raw point from Petitioner's score for this item in Experience and Qualifications. Petitioner argues that other offerors would have suffered a reduction in points if Gallagher had used another feature to measure customer satisfaction, such as exclusive reliance on 1-800 telephone numbers for service. 
However, Petitioner has not demonstrated that reducing credit for the absence of an affirmative response to a reference check, even in isolation, is unreasonable, especially when service issues, apart from overall customer satisfaction, receive considerable attention in the items cited in the GBS Scoring Sheets for the category of Scope of Services Provided. In conjunction with its attack on the reference item in the GBS Scoring Sheet, Petitioner argues in its proposed recommended order that investment performance was already covered in three of four items in the category of Cost of Services Provided in the GBS Scoring Sheets. However, Gallagher's decision to emphasize performance in its evaluation of TSAs to be sold to Respondent's employees is entirely reasonable. The nine scoring items for Scope of Services Provided in the GBS Scoring Sheets are fair issues on which to differentiate proposals. Telephone and voice response systems and Internet, as means to change beneficiaries or perform other transactions, are the subjects of Questions 29 and 30 of the "Contract Overview" part of the questionnaire. Electronic investment advice and an asset allocation program are set forth in Questions 22 and 23 of the "Administration" part of the questionnaire. Routine communications with participants, the means by which sales representatives will be restricted from selling other products, and SAS 70 internal controls review are stated in Questions 10, 11, and 12, respectively, in the "Enrollment Procedures and Services" part of the questionnaire. The availability of a local area code and educational video or website are cited in RFP Section 3.10.1. Gallagher's scoring of the Scope of Services Provided ranged from a low of 9 points for Americo to a high of 30 points for Hartford and TIAA-CREF. The next highest score was 27, which was assigned to CitiStreet, MetLife, Security Benefit, and VALIC. The next highest score was 24, which was assigned to five offerors, including Petitioner. Petitioner challenges Gallagher's selection of criteria, but, for the reasons already noted, they fairly reflect important features of the RFP concerning Scope of Services Provided. Petitioner identifies some scoring anomalies in which other offerors received no point reductions for omissions, or in which Petitioner received a point reduction when another offeror that handled an item the same way did not. However, at best, Petitioner showed minor imperfections in Gallagher's scoring, but did not prove that any such minor imperfections misinformed the actual scoring by the Committee. Although the average score assigned by the Committee for Petitioner in Scope of Services Provided was the same as that assigned by Gallagher, the difference between the Committee's average scores and Gallagher's scores was 30 points, as compared to merely 18 points separating them in Experience and Qualifications. Also, the difference in Experience and Qualifications between the Committee and Gallagher was the result of nine increases and one decrease by the Committee. In Scope of Services Provided, the difference between them was the result of 12 increases and 4 decreases. Of the five offerors cited in Petitioner's proposed recommended order as improperly failing to receive reductions in the GBS Scoring Sheets, the Committee reduced the score of one offeror by one point, did not change the scores of two offerors, increased the score of one offeror by one point, and increased the score of one offeror by two points. 
As Petitioner argued in its proposed recommended order, Gallagher culled a limited number of items from the RFP for scoring Scope of Services Provided in the GBS Scoring Sheets. It is as likely as not that the Committee members, many of whom would be using these vendor services, independently scrutinized a wider range of services than the nine items included in the GBS Scoring Sheets. The four scoring items for Cost of Services Provided in the GBS Scoring Sheets are fair issues on which to differentiate proposals. "Cost--Fees" is specified in Questions 7-19 of the "Contract Overview" part of the questionnaire and Attachment C. "Investment Performance" is covered in Question 19 of the "Variable Annuity" part of the questionnaire and Question 25 of the "Mutual Fund" part of the questionnaire. The "Morningstar Ratings" is the subject of Question 26 of the "Mutual Fund" part of the questionnaire. Although the RFP did not request any Morningstar Ratings information about variable annuities or any Morningstar Category Ranks about any mutual or variable annuity funds, these sources of information about such investments are readily available and reliable. Gallagher's use of such information as scoring items was reasonable. For "Investment Performance," Gallagher deviated from its general approach of evaluating actual facts and instead evaluated formal compliance with the RFP. Points for this item reflect the extent to which an offeror reported the information requested, not the actual performance of the funds. Although Petitioner argues for a more formal approach elsewhere, it contends that Gallagher overemphasized form over substance by making this formal item one of only four items that it scored for Cost of Services. Petitioner's argument is not without its appeal. However, the RFP amply warned that Respondent might award points based on formal compliance with the RFP provisions. Misstated or omitted data in financial performance is especially pernicious and, from Gallagher's perspective, probably vexing, because it impedes analysis of one of the more important features of the proposed TSAs--their rates of return. Also, the record does not suggest that the Committee members reduced their scoring exercise to the items used by Gallagher. The three attachments to the Analysis of Proposals all address cost and performance issues and provide the Committee members with ample bases on which to score the proposals in terms of cost and performance, without regard to the four items selected by Gallagher. The glossary explains common terms, nearly all of which involve cost and performance. The Individual Fund Expenses Handout comprises a series of tables setting forth specific expenses of specific funds offered by the offerors. The Summary of Returns lists each fund identified by each offeror and provides one-, three-, five-, and ten-year returns. Evidencing perhaps a keen interest in the cost and performance of the TSAs in which many Committee members would be investing, the Committee assigned average scores in Cost of Services Provided that varied from Gallagher's scores by the largest amount--a total of 44 points. The Committee increased the scores of 15 offerors in Cost of Services and decreased no scores. Two offerors, TIAA-CREF and Veritrust, received increases of five points, one offeror, Nationwide, received an increase of four points, and six offerors, including Petitioner, received increases of three points. 
Still, though, Petitioner received an average Committee score of only 15 points for Cost of Services Provided. One offeror received the same score, and one offeror received 11 points; the rest of the offerors received more points, with the highest score being 26 points. The extensive record on the cost and performance of the TSAs that offerors proposed to sell to Respondent's employees provides a rational basis for the low score that Petitioner received in Cost of Services Provided. The Individual Fund Expenses Handout reveals that the range of Petitioner's total fees was from 1.72 percent to 2.69 percent, with all but three of the funds bearing fees of at least 2.0 percent. Security Benefit, which received the highest score for Cost of Services, imposed fees ranging from 0.50 percent to 1.94 percent. TIAA-CREF, which received the second highest score for Cost of Services, imposed fees ranging from 0.34 percent to 0.63 percent. Horace Mann, which received the same score as Petitioner, imposed fees ranging from 1.30 percent to 3.14 percent--in general, comparable to Petitioner's fees. Likewise, Petitioner imposes the highest back-end load or surrender charge--14 percent--on its fixed annuity product, tapering off to 4 percent in the seventh year that the policy has been in effect. Only three offerors had longer periods during which they imposed surrender charges. The Summary of Returns reveals the performance among Petitioner's funds over the last ten years. A scorer might reasonably decide that the high cost of Petitioner's TSAs is not offset sufficiently by their performance, so as to warrant more than 15 points. The Summary of Returns lists 33 funds of Petitioner with total returns for the past ten years. In percentages, these cumulative returns, over ten years, are: -18.17, -12.11, -10.35, -7.54, -4.82, -3.85, -2.64, -0.24, 0.41, 1.28, 1.4, 1.54, 4.12, 4.38, 5.2, 5.36, 5.63, 5.92, 5.94, 6.06, 6.37, 6.73, 7.1, 7.87, 8.01, 9.05, 9.21, 9.96, 9.99, 10.56, 10.6, 10.87, and 13.48. Gallagher did not provide the individual GBS Scoring Sheets to the Committee. At a Committee meeting on January 13, 2003, Gallagher discussed these individual scoring sheets with the Committee and presented the Committee with a two-page synopsis of the overall scores of each of the 21 offerors that it scored for each of the three categories. The two-page scoring synopsis concludes with the following warning: This information represents Gallagher Benefit Services summary comparison of the proposals and is provided solely to assist in the evaluation and scoring process. It is not intended nor should it be construed as direct guidance as to how these firms should be scored. As a committee member, it is within your pervue [sic] to score the proposals as you deem appropriate using all of the information and guidance provided to you. Should you feel based on the information provided that someone deserves a significantly greater or lesser score than might be indicated through our process, you should rely on your own judgment. Our ranking was based on a 10 point system. For illustrative purposes we have multiplied our rankings by three to more closely reflect the range on a 30 point system. 
Overall, given the presence of School Board members and Respondent's management, as well as the personal attention that a procurement of this type would generate among Respondent's nonmanagement employees on the Committee, it is highly unlikely that Committee members would give undue weight to the GBS Scoring Sheets or the two-page evaluation. It is far more likely, given the nature of the procurement and membership of the Committee, that individual members scored these proposals based on their independent examinations of the proposals. These factors also undermine Petitioner's argument concerning undisclosed conflicts of interest. As Petitioner states in its proposed recommended order, no offeror disclosed the name of any officer, director, or agent who was an employee of Gallagher or its subsidiaries. Employees of Gallagher serve as agents for many financial service providers, including some of the offerors in this case. One of Gallagher's employees is an agent of Petitioner. In its proposed recommended order, Respondent states that the last two sentences of RFP Section 8.11, which impose the relevant conflict-of-interest provisions, are "ineffectual and appear to be misplaced." The use of "agent" was ill-advised because it extends the reach of the conflict-of-interest provision to a vast number of employees of Gallagher, which is part of a very large organization, and thus captures mostly persons who would be unaware of, and uninvolved in, this procurement. Petitioner argues in its proposed recommended order that Gallagher failed to disclose the conflicts. RFP Section 8.11 imposes the responsibility to disclose on the offerors, not Gallagher, and Gallagher was under no responsibility to discharge an obligation of the financial service providers which it represents. Nondisclosing offerors risked the displeasure of the Committee, but the failure of the Committee to penalize such offerors is consistent with the immateriality of these conflicts, which are the product of an overly broad definition. As already noted, Gallagher had no part in the evaluation of the Minority/Women Business Enterprise--Diversity of Proposer's Company (Diversity) and Documentation of the Proposing Company's Minority/Women Business Enterprise Outreach Programs (Outreach) categories. Respondent's Minority and Business Enterprise Contract Compliance Administrator, Michelle-Bryant Wilcox, initially evaluated the proposals under the Diversity and Outreach categories. In its proposed recommended order, Petitioner notes that the Diversity and Outreach provisions do not reflect School Board Policy 7007, which was incorporated by reference into the RFP. However, no prospective offeror challenged the specifications of the RFP, which clearly identified the Diversity and Outreach scoring categories. Other challenges of Petitioner concerning Ms. Wilcox's scoring of the proposals for Diversity and Outreach are more meritorious. RFP Section 3.12.1 requires offerors to provide diversity information and outreach information. Section 3.12.2 requires offerors to provide information of their involvement in the minority community. Respondent's contention in its proposed recommended order that the flush language on outreach beneath Section 3.12.1, but above Section 3.12.2, is somehow part of Section 3.12.2 is incorrect. Thus, the scoring categories identified in RFP Section 6.3 clearly draw upon the two elements of RFP Section 3.12.1, and not upon any part of Section 3.12.2. 
Most likely, the language about involvement in the minority community was borrowed from another procurement. In any event, Ms. Wilcox decided to count outreach twice, under both categories, and to count involvement in the minority community under the Outreach category. These decisions cannot be characterized as refinements of the relevant provisions of the RFP, which already suffered from poor draftsmanship, but these decisions do not distort or undermine the procurement process. Outreach is obviously important in maintaining and increasing the diversity of a workforce, and involvement in the minority community may assist in this important effort. Ms. Wilcox fared more poorly in her construction of outreach, for which she unduly emphasized internal recruitment efforts and, thus, the mere existence of antidiscrimination and affirmative action statements of policy. Also, her counting of women was unreliable, leaving the impression that, at times, black females might generate double credit and, at other times, black females might generate single credit, or white females might not generate any credit at all. Notwithstanding the shortcomings in Ms. Wilcox's work, again, the Committee was able to evaluate the proposals independently when it met on January 13, 2003. At that meeting, the Committee understood that Ms. Wilcox's analysis was merely staff analysis. Given the membership of the Committee, each of the 15 members undoubtedly understood his or her duty to examine the proposals, not merely Ms. Wilcox's analysis, for scoring under the Diversity and Outreach categories. Eventually, the Committee assigned Petitioner 3.5 points for Diversity and 3.7 points for Outreach. These were, respectively, the sixth- and fourth-highest scores for these two categories. The higher scores for Diversity were 4.5 for VALIC, 4.3 for TIAA-CREF, 4.1 for MetLife, 3.9 for Southwest, and 3.7 for CitiStreet. The higher scores for Outreach were 4.4 for MetLife, 4.2 for VALIC, and 3.9 for CitiStreet and ING. Even if Petitioner had received the maximum scores for Diversity and Outreach, its total score would have increased by only 2.8 points, which would still leave it under the 70-point threshold. Petitioner has not demonstrated scoring irregularities for itself or other vendors with respect to these two categories of such a magnitude as to require such an adjustment. Given the resolution of Petitioner's challenge to the three main scoring categories, Petitioner's challenge to the Diversity and Outreach categories is immaterial. On January 6, 2003, Respondent sent a letter advising all offerors that the Committee would meet, as disclosed in RFP Section 5.0, on January 13, 2003, to review proposals and recommend awards. The letter states that the Committee decided to interview offerors, so each offeror should have an authorized representative available to speak with the Committee. At the January 13 meeting, the Committee decided to reject the MassMutual proposal because its Required Response Form had not been executed by an authorized representative. With the prior elimination of New York Life and Putnam, the Committee proceeded to score the remaining 20 proposals. 
The Committee's average scores were as follows:

TIAA-CREF: 90
VALIC: 88
ING: 86
MetLife: 86
CitiStreet: 82
Security Benefit: 80
Lincoln: 78
Hartford: 75
Equitable: 75
Southwest: 70
Petitioner: 65
Legend Group: 65
American Express: 64
Nationwide: 64
Horace Mann: 63
PFS: 61
First Investors: 60
Pioneer: 56
Veritrust: 51
Americo: 42

After examining the scores, the Committee decided to negotiate contracts with the ten offerors that received at least 70 points. Three days later, the Committee successfully completed negotiations with all ten offerors, and it recommended that the Superintendent approve these negotiated agreements and refer them to the School Board for final approval. Pursuant to the provisions of the RFP, which provided a point of entry to protest these actions of the Committee, Petitioner timely filed a notice of intent to protest and formal written protest. Pursuant to Respondent's policy, Respondent and Petitioner presented their dispute to Respondent's Bid Protest Committee on February 13, 2003. By a two-to-one vote, the Bid Protest Committee initially decided to lower the scoring threshold to 65 points, which would include Petitioner and Legend Group. However, after receiving advice of Respondent's counsel concerning the ability of this committee to lower the scoring threshold set by the Committee, the Bid Protest Committee rescinded its earlier vote and unanimously voted to reject Petitioner's protest. The earlier vote was designed entirely to mollify Petitioner and was not based on any determination of a deficiency in the procurement found by the Bid Protest Committee.

Recommendation It is RECOMMENDED that, on behalf of the Insurance Advisory Committee, Respondent enter a final order dismissing the protest of Great American Financial Resources, Inc., and directing the Insurance Advisory Committee to proceed to recommend to the Superintendent the ten offerors of tax-sheltered annuity programs with which it has negotiated tentative contracts. DONE AND ENTERED this 2nd day of October, 2003, in Tallahassee, Leon County, Florida. S ROBERT E. MEALE Administrative Law Judge Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-3060 (850) 488-9675 SUNCOM 278-9675 Fax Filing (850) 921-6847 www.doah.state.fl.us Filed with the Clerk of the Division of Administrative Hearings this 2nd day of October, 2003. COPIES FURNISHED: Dr. Franklin L. Till, Jr. Superintendent Broward County School Board 600 Southeast Third Avenue Fort Lauderdale, Florida 33301-3125 Daniel J. Woodring, General Counsel Department of Education 325 West Gaines Street 1244 Turlington Building Tallahassee, Florida 32399-0400 William G. Salim, Jr. Michael W. Moskowitz Moskowitz, Mandell, Salim & Simowitz, P.A. 800 Corporate Drive, Suite 510 Fort Lauderdale, Florida 33334 Robert Paul Vignola Assistant General Counsel Office of the School Board Attorney Kathleen C. Wright Administration Building 600 Southeast Third Avenue, 11th Floor Fort Lauderdale, Florida 33301

Florida Laws (3) 120.57, 6.06, 7.54
# 4
JAMES CHAMPION vs DEPARTMENT OF CHILDREN AND FAMILY SERVICES, 97-000040 (1997)
Division of Administrative Hearings, Florida Filed: Longwood, Florida Jan. 06, 1997 Number: 97-000040 Latest Update: Oct. 17, 1997

The Issue Whether the Petitioner is eligible for services offered by Respondent to the developmentally disabled under Chapter 393, Florida Statutes (1995).

Findings Of Fact James Champion is a nineteen-year-old male, born January 22, 1978, who is a permanent resident of the State of Florida. Petitioner currently lives with his natural mother, Susan Champion, who provides him food, shelter and assistance. Petitioner had a normal developmental history until the onset of seizures at the age of four, coinciding with a DPT inoculation. Since then he has had several types of seizures, and has been treated with multiple anti-epileptic medications without success. Currently, Petitioner experiences seizures on an almost daily basis. Petitioner has been oppositional, defiant, and at times volatile in his moods, and can be verbally aggressive. Due to his epilepsy and behavioral difficulties, while in school, Petitioner was placed in a special needs program with small class size and a one-on-one aide. Petitioner graduated from MacArthur North High School in Hollywood, Florida in 1996, with a special diploma. As a child, Petitioner had been given IQ tests. When he was twelve years old, a psychological assessment was performed, yielding a verbal IQ of 100, performance IQ of 88, and full scale IQ of 93. At the age of fourteen, he was tested again, using the Wechsler Intelligence Scale for Children-Third Edition (WISC-III). Intelligence testing yielded a verbal IQ of 71, performance IQ of 74, and a full scale IQ of 70. This testing revealed functioning in the Borderline range (second percentile rank) with a six-point margin of error. This level of intellectual functioning reflected a 23-point IQ loss from previous testing. A few months past his eighteenth birthday, Petitioner was tested using the Wechsler Adult Intelligence Scale, Revised (WAIS-R) and other tests. On the WAIS-R, Petitioner yielded a Verbal IQ of 74, performance IQ of 70, and a full scale IQ of 71. Petitioner was diagnosed as having [Axis I] Dysthymic Disorder (300.4); [Axis II] Borderline Intellectual Functioning (V62.89) and Personality Disorder Due to Medical Condition (310.1); and [Axis III] Epilepsy. This test confirmed that Petitioner was functioning in the Borderline range of intellectual functioning. This drop in test results is accounted for as a result of brain damage caused by Petitioner's continuing episodes of epilepsy. Applying the margin of error to the lower spectrum, the 70 and 71 test results become 67 and 68, respectively. Taking the totality of the circumstances, it is persuasive that Petitioner has shown that he has tested at an IQ level of approximately 70 or below. The accepted criteria used for determining mental retardation and used by Respondent to determine eligibility for its Developmental Services Program are significantly subaverage intellectual functioning (an IQ approximately 70 or below on an individually administered IQ test); concurrent deficits or impairments in present adaptive functioning in at least two of the following areas: communication, self-care, home living, social/interpersonal skills, use of community resources, self-direction, functional academic skills, work, leisure, health, and safety; and the onset is before 18 years. In determining an individual's eligibility for its Developmental Services Program, Respondent has a two-step process. First, it determines whether the individual meets the IQ requirement for mental retardation. If, and only if, the individual satisfies this first step, does Respondent proceed to the second step, which is determining whether the individual meets the adaptive functioning requirements. 
Respondent’s evaluator determined that Petitioner failed to satisfy the IQ requirements and, therefore, it was not necessary to examine Petitioner’s adaptive functioning. Petitioner’s IQ results in his teens should be evaluated from the lower tested result, i.e., at 70, and the margin of error should be placed at the lower, not the higher, spectrum (-3). The lower tested result becomes 67, placing Petitioner in the mild mental retardation category. There was some evidence that Petitioner has deficits in adaptive functioning in communication, home living, social/interpersonal skills, self-direction, work, and safety. However, Respondent’s evaluator did not evaluate Petitioner in this area and the testimony of Petitioner’s mother is insufficient to meet the burden of proof necessary in this forum. The onset of Petitioner’s condition occurred prior to his eighteen birthday.

Recommendation Based on the foregoing findings of fact and conclusions of law, it is RECOMMENDED that the Respondent issue an order determining that, prior to his eighteenth birthday, Petitioner suffered from "significantly subaverage general intellectual functioning." However, the evidence is insufficient to presently establish whether it exists concurrently with deficits in adaptive behavior. It is further RECOMMENDED that this matter be remanded to Respondent's evaluator to determine if Petitioner has deficits in adaptive behavior in two or more areas and would, therefore, be eligible for developmental services offered by Respondent. DONE AND ENTERED this 11th day of June, 1997, in Tallahassee, Leon County, Florida. DANIEL M. KILBRIDE Administrative Law Judge Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-3060 (904) 488-9675 SUNCOM 278-9675 Fax Filing (904) 921-6847 Filed with the Clerk of the Division of Administrative Hearings this 11th day of June, 1997. COPIES FURNISHED: Susan C. Champion, Parent 104 Lake Gem Drive Longwood, Florida 32750 Eric Dunlap, Esquire District 7 Legal Office Department of Children and Families 400 West Robinson Street, Suite S-1106 Orlando, Florida 32801 Gregory D. Venz, Agency Clerk Department of Children and Families 1317 Winewood Boulevard, Room 204-X Tallahassee, Florida 32399-0700 Richard Doran, General Counsel Department of Children and Families 1317 Winewood Boulevard, Room 204 Tallahassee, Florida 32399-0700

Florida Laws (3) 120.569, 120.57, 393.063
# 5
ECKERD YOUTH ALTERNATIVES, INC. vs DEPARTMENT OF JUVENILE JUSTICE, 10-000535BID (2010)
Division of Administrative Hearings, Florida Filed: Tallahassee, Florida Feb. 05, 2010 Number: 10-000535BID Latest Update: May 20, 2010

The Issue The issue in this case is whether the intended contract award to Intervenor pursuant to Request for Proposals P2056 for a Community Based Intervention Services Program in Brevard County, Florida, is contrary to Respondent’s governing statutes, Respondent’s policies and rules, and the request for proposals.

Findings Of Fact The Department is an agency of the State of Florida and is the procuring agency for the RFP at issue in this proceeding. Eckerd is a not-for-profit corporation duly-organized under the laws of the State of Florida. White is a not-for-profit corporation duly-organized under the laws of the State of Florida. On September 4, 2009, the Department issued the RFP to select a provider to operate a 44-slot Community Based Intervention Services Program for youth ages ten through 21 in Brevard County, Florida. Eckerd did not protest the specifications of the RFP nor the methodology that the Department had historically used in scoring proposals for similar services within 72 hours of the issuance of the RFP. Eckerd and White submitted timely responses to the RFP on or before October 14, 2009. Under the RFP, one of the categories that the Department evaluates is the “Evaluation of the Past Performance for Non-Residential Programs.” One of the three components of the past performance standard is: Part I—Evaluation for Past Performance in Florida. This includes, as a subcomponent, the provider’s “Combined Success Rate” (CSR), with an assigned value of 200 points. The RFP defines CSR as “Percentage of youth who do not recidivate,” and further provides, “Points are awarded based on the combination of successful youth program completions, and the percentage of youth who do not recidivate.” Each proposer was required to complete and submit with its proposal Attachment C to the RFP entitled “Data Sheet: Past Performance of Non-Residential Programs” (Data Sheet). The Data Sheet was to provide certain information for non-residential programs that the proposer had operated in Fiscal Year (FY) 2006-2007, including program name, contract number, number of completions during FY 2006-2007, and FY 2006-2007 Recidivism Rates. Some of the information, such as the completions and the recidivism rates, was to be based on information found in the Department’s 2008 Florida Comprehensive Accountability Report [CAR].1 The CAR is prepared by the Department and includes program outcomes, including total releases, number of completions, completion rates, and success rates for all types of probation and community intervention programs that released youth in FY 2006-2007. The information is reported by judicial circuit. The CAR may report information on different programs in a judicial circuit, and some of the programs may be included in one contract with a provider. For example, White has one contract in the Second Judicial Circuit, contract number P2028, but the CAR reports information for two programs under contract number P2028. In the Fourth Judicial Circuit, White has one contract, contract number D7102, under which services are provided in Duval and Nassau Counties. The CAR treats the counties as being separate programs and provides separate data for the services provided in Duval County and for the services provided in Nassau County. As set forth in the Data Sheet, the number of completions is defined as “[t]he number of youth completing the program during FY 2006/2007 documented in the Department’s 2008 Florida Comprehensive Accountability Report.” In the CAR, the column titled “N4” provides the number of youth who successfully completed a specific program. The recidivism rate is the percentage of youth who later offended. 
The Data Sheet provides that the recidivism rate is found in the “2006-2007 Recidivism Column as reported in the Department’s 2008 Florida Comprehensive Accountability Report.” The CAR, however, does not report recidivism rates; it reports success rates. Instead of providing the percentage of youth who completed the program and reoffended, the CAR reports the percentage of youth who did not reoffend. Thus, the recidivism rate is calculated by subtracting the success rate from 100. The Department relies on data from the CAR in determining the percentage of recidivism because the success rates reported in the CAR have already been calculated, making it easy to derive the recidivism percentages from them. Paul Hatcher, senior management analyst for the Department, is the individual responsible for determining the CSR for providers who have submitted proposals in response to requests for proposals issued by the Department. Mr. Hatcher is the only individual who performs this function for the Department and has been in this position, performing this task, for over nine years. Mr. Hatcher processes the proposals through a standard procedure. The RFP provides that the information submitted in the Data Sheet “will be verified by the Department [and] [a]ny inaccurate or omitted information will be corrected.” After receiving the proposals, Mr. Hatcher verifies the accuracy of the information provided, including the number of completions and the recidivism rate reported on the Data Sheets submitted with each proposal, against the information provided in the corresponding CAR. If the information regarding a program is reported incorrectly, Mr. Hatcher corrects it to conform to the information in the appropriate CAR. The information on the Data Sheet is submitted by contract number. The contract number is how the Department identifies quality assurance reviews, as well as fiscal and other data sources. For example, for contract number P2028, White submitted the completions for both programs in the Second Judicial Circuit. One program had 19 completions and the other program had 29 completions, for a total of 48. White intended to combine the completions for placement under Column 9 of the Data Sheet but erroneously used the combined number of releases. Pursuant to the RFP, Mr. Hatcher corrected the data to reflect the combined completions as reported in the CAR.2 The CAR reported the success rate of one program as 63% and the success rate of the other program as 69%, which equated to recidivism rates of 37% and 31%. White recorded the recidivism rates for the contract on the Data Sheet as 37%/31%. The same approach was used in reporting the information on contract number D7102 for the services provided in Duval County and Nassau County in the Fourth Judicial Circuit. The services provided in Duval and Nassau Counties were considered by the Department to be one program; however, the CAR reported the information by county as if the counties were separate programs. The completions for both counties were intended to be combined for reporting on the Data Sheet, but White recorded the combined number of releases on the Data Sheet.3 Mr. Hatcher corrected the data to reflect the combined completions as reported in the CAR. The CAR reported the success rate of the Duval County program as 62% and the success rate of the Nassau County program as 100%. These success rates equated to recidivism rates of 38% and 0%.
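To make the conversion concrete, here is a brief Python sketch using the rates quoted in these findings. The code itself is illustrative and not part of the record; only the subtract-from-100 rule comes from the findings:

```python
# Recidivism rate = 100 - success rate, per the findings above.
# The success rates are the ones quoted for White's two contracts.
success_rates = {"P2028 program 1": 63, "P2028 program 2": 69,
                 "D7102 Duval": 62, "D7102 Nassau": 100}
recidivism_rates = {name: 100 - rate for name, rate in success_rates.items()}
print(recidivism_rates)
# {'P2028 program 1': 37, 'P2028 program 2': 31, 'D7102 Duval': 38, 'D7102 Nassau': 0}
```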
Because the Department is looking for the recidivism rate for each contract, and the CAR reports the success rates used to calculate recidivism rates by program, as in the Second Judicial Circuit, or by county, as in the Fourth Judicial Circuit, Mr. Hatcher averages the combined recidivism rates to arrive at one recidivism rate for each contract in the Second and Fourth Judicial Circuits. Thus, the recidivism rates for contract number P2028 for the Second Judicial Circuit were averaged, resulting in one recidivism rate of 34%. The same method was applied to the recidivism rates for the Fourth Judicial Circuit, resulting in one recidivism rate of 19%. After checking the reported numbers and making all necessary changes, including correcting the data to match the data reported in the CAR and averaging the recidivism rates for contracts encompassing more than one program or more than one county, Mr. Hatcher inputs the number of completions and the recidivism rate for each contract into a standardized Microsoft Excel spreadsheet (Spreadsheet), which performs the actual calculations and computes the total CSR for each individual proposal. The Spreadsheet uses fixed formulas to perform the mathematical calculations necessary to determine the CSR for each proposal. The last two columns on the right-hand side of the Spreadsheet relate to the CSR, and the numbers shown therein are generated by the fixed formulas. The Spreadsheet performs several calculations. It multiplies the number of completions by the recidivism rate for each contract to obtain the number of youth recidivating. Then, for each contract, the number of youth recidivating is subtracted from the total number of completions to obtain the number of successful youth for that contract. The Spreadsheet then adds these successful youth figures together and divides the total by the combined total number of completions, resulting in the total CSR. The Department awarded Eckerd a score of 129 points based on a 64.5% Combined Success Rate. The Department awarded White a score of 160 points based on an 80% Combined Success Rate. On December 11, 2009, the Department posted its Notice of Agency Action, which indicated its intent to award the contract to White. The Department awarded White the highest overall score of 1554.49 points. The Department awarded Eckerd the second highest overall score of 1544.49 points. On December 28, 2009, Eckerd filed the Petition pursuant to Subsection 120.57(3), Florida Statutes (2009),4 and Florida Administrative Code Rule 28-110.004. The same Spreadsheet had been used by the Department for several years in calculating the CSR for proposals submitted in response to requests for proposals. Additionally, the Department’s practice of averaging scores for single-contract programs with more than one set of data was not a new scoring concept for the procurement at issue.
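The fixed-formula arithmetic described above can be sketched in a few lines of Python. This is a minimal illustration of the calculation as the findings describe it, not the Department's actual Spreadsheet; the function name and the contract figures are hypothetical placeholders, not data from the record:

```python
# Minimal sketch of the Spreadsheet's fixed formulas as described in the
# findings: completions x recidivism rate gives recidivating youth; the
# remainder are successful youth; the CSR is total successful youth divided
# by total completions. All figures below are hypothetical.
def combined_success_rate(contracts):
    """contracts: list of (completions, recidivism_rate) pairs, one per contract."""
    total_completions = sum(c for c, _ in contracts)
    total_successful = sum(c - c * r for c, r in contracts)
    return total_successful / total_completions

# Hypothetical proposer with two contracts: 48 completions at a 34% averaged
# rate, and 120 completions at a 20% rate.
print(round(combined_success_rate([(48, 0.34), (120, 0.20)]) * 100, 1))  # 76.0
```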
In 2007, Eckerd submitted a response to Request for Proposal P2303 (RFP P2303) issued by the Department and was awarded the contract by achieving the highest score, which was calculated in the same manner as the scores for the procurement at issue.5 In the Data Sheet submitted by Eckerd for RFP P2303, under program name, it entered, in one cell, a single-contract program (contract number P7044) operated by Eckerd in the Tenth and Twelfth Judicial Circuits as “Circuit 10, 12, West/EYDC.” In its Data Sheet for RFP P2303, Eckerd took the total number of completions from the 2006 CAR for the Tenth Judicial Circuit and the Twelfth Judicial Circuit for contract number P7044, 19 and 31, respectively, and added them together for a total of 50 completions, which it entered under the “Number of Completions” column. The 2006 CAR reported recidivism rates for the Tenth and Twelfth Judicial Circuits as 26% and 23%, respectively, for contract number P7044. Eckerd listed both recidivism rates in its Data Sheet for RFP P2303 under the “2004-2005 Recidivism Rate.” Mr. Hatcher averaged the recidivism rates for contract number P7044, resulting in a single recidivism rate of 25%. This figure was used in the Spreadsheet to calculate the CSR. The Data Sheet submitted by Eckerd for RFP P2303 also contains two boxes at the bottom of the page with statements indicating that each circuit was reported separately and that the cell contains both circuits. The boxes have arrows that point to the relevant combined data cells in the “Number of Completions” and “2004-2005 Recidivism Rate” columns. The information contained in the data cells was derived from the 2006 CAR, which listed separate data for the Tenth and Twelfth Judicial Circuits even though the services provided were through a single contract. Eckerd has also submitted responses to other requests for proposals, RFP P2028, RFP P2032, and RFP P2034, using the same data for each Data Sheet as it used for the Data Sheet submitted for RFP P2303. On February 15, 2010, the Department changed its policy on the scoring methodology to be used in procurements such as the one at issue. The change in policy was expressed in an addendum to RFP P2062. The addendum stated in part: If the 2008 CAR Report lists a program with more than one recidivism percent, list all of the percentages and the number of completions for the program on Attachment C [Data Sheet], and the Department will be treating a Provider’s program with more than one recidivism rate as separate programs for the purposes of calculating success rate and will not be averaging the programs. The Department verifies all program information from the CAR Report. This change in policy was in response to the anticipated changes to the 2009 CAR, which would report and identify multiple areas of information, including more programs with several separately reported recidivism rates. The change in policy was implemented upon evaluation of, and in anticipation of the release of, the 2009 CAR. Eckerd claims that the policy of averaging recidivism percentages for contracts in which the CAR lists more than one recidivism rate resulted in an inaccurate recidivism percentage for White’s contracts for the Second and Fourth Judicial Circuits. For example, in the Fourth Judicial Circuit, the recidivism rate for Duval County was 38%, and the recidivism rate for Nassau County was 0%.
Eckerd contends that the multiple recidivism rates as calculated from the CAR should have been used in the Spreadsheet rather than an average of the multiple recidivism rates for a single contract. When the recidivism rate calculated from the CAR report for Duval County is used, the number of youth reoffending in Duval County is 87.4, and the number of youth reoffending in Nassau County is 0. When the average recidivism rate of 19% is used for Duval and Nassau Counties, the number of youth reoffending drops to 44.08, which is not an accurate accounting of the actual number of youth who reoffended. When the recidivism rate is lowered, the success rate rises. Therefore, if the method espoused by Eckerd had been used, White would have received a CSR of 71.9%, resulting in a decrease of 16 points in the CSR points awarded to White and a corresponding decrease in the total points awarded to White. Using Eckerd’s methodology, Eckerd would have received the highest number of points.
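These figures can be reconciled arithmetically. The completion counts used below (roughly 230 for Duval and 2 for Nassau) are not stated in the findings; they are back-solved here from the 87.4 and 44.08 figures and are offered only as an illustrative check:

```python
# Back-solved check: the completion counts are inferred from the stated
# results (87.4 = Duval completions x 0.38; 44.08 = combined completions
# x 0.19), not taken from the record.
duval_completions, nassau_completions = 230, 2
print(round(duval_completions * 0.38, 2))   # 87.4 reoffenders at Duval's own 38% rate
print(round(nassau_completions * 0.00, 2))  # 0.0 reoffenders at Nassau's own 0% rate
print(round((duval_completions + nassau_completions) * 0.19, 2))  # 44.08 under the 19% average
```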

Recommendation Based on the foregoing Findings of Fact and Conclusions of Law, it is RECOMMENDED that a final order be entered dismissing the Petition filed by Eckerd. DONE AND ENTERED this 28th day of April, 2010, in Tallahassee, Leon County, Florida. S SUSAN B. HARRELL Administrative Law Judge Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-3060 (850) 488-9675 Fax Filing (850) 921-6847 www.doah.state.fl.us Filed with the Clerk of the Division of Administrative Hearings this 28th day of April, 2010.

Florida Laws (2) 120.569, 120.57 Florida Administrative Code (1) 28-110.004
# 6
JOSEPH BRAXTON vs. DEPARTMENT OF HEALTH AND REHABILITATIVE SERVICES, 83-003612 (1983)
Division of Administrative Hearings, Florida Number: 83-003612 Latest Update: Mar. 20, 1984

Findings Of Fact Respondent, Department of Health and Rehabilitative Services (HRS), operates the Developmental Services Program for qualified individuals in the State of Florida through its Diagnostic and Evaluation Services (DES). DES operates through a team of professionals who make appraisals of applicants for diagnostic services. In the operation of this program, an application is forwarded along with supporting documents to the DES office by the social worker who takes it in. Upon receipt, the package is scanned for a preliminary determination of eligibility. If not obviously ineligible, the applicant is then given a series of evaluations, including nursing, educational, and psychological assessments. Wherever possible, existing evaluations are utilized. When all the evaluations have been completed, the package is given to a team of experts to develop a treatment plan for the individual. However, if after review of the evaluations it is determined that the applicant is not eligible for the service for some reason, then that individual is notified and no plan is developed. That was the scenario in this case. Criteria applied in evaluating an individual's eligibility for service include: an IQ of 69 or below; defects in adaptive behavior; existence of the condition before the applicant turned 18; and the presence of such conditions as cerebral palsy, epilepsy, or autism without retardation. Petitioner, Joseph Braxton, currently a resident of a foster home in Orlando, through the Legal Aid Society, applied for placement in Respondent's Developmental Services Program. He is an individual who withdrew from high school in January 1961, after failing to progress successfully even in remedial classes, as indicated in the records of his former school in West Virginia. At the hearing, Petitioner indicated he did "pretty good" in school, a definite inconsistency with his record, and feels that, while he is a slow learner, he is not retarded. His IQ, however, when tested during the evaluation process, was determined to be 66. Petitioner quit school before the age of 16 to work at odd jobs and as a house painter for a lady who owned several houses, to help support his mother. Each time he got paid, he would give all but $20.00 to her. She would pay all his bills, do all his shopping, do all the cooking, and take care of his clothes and his room. He admits to being shy and tends to do whatever is asked of him by others. He is unmarried and has no family in this state. Petitioner came to Florida several years ago and thereafter held several unskilled jobs, the last of which was as a migrant farm worker, earning between $50.00 and $60.00 per week. In March 1983, while drinking with friends, he fell off the brick wall on which he was sitting and suffered a spinal cord injury for which he was hospitalized until September 10, 1983. At that time, he was transferred to the foster home where he now lives. As a result of the spinal cord injury, he cannot walk without a cane and has lost the full use of his hands--one more so than the other. He is unable to do more than care for his own basic personal needs, but is desirous of being productive and wants to be trained. Respondent produced nursing and academic assessments, psychosocial evaluations, and the medical records on Petitioner from the hospital where he had been treated. Upon review of all the information available, the committee determined Petitioner was ineligible because there was no proof his deficiency existed before age 18.
To arrive at this conclusion, Respondent relied heavily on two sources: (1) the intellectual evaluation performed by Dr. Robert T. Edelman, a clinical psychologist, while Petitioner was in the hospital; and (2) the academic evaluation performed by Bonnie Burke, a developmental disabilities consultant, after he got out of the hospital. At the time of both evaluations, Petitioner was 39 years old. The academic evaluation by Ms. Burke, using the Peabody Individual Achievement Test among others, showed Petitioner was functioning at the sixth grade level overall. That evaluation is broken down as follows:

Mathematics: 3.8
Reading Recognition: 4.7
Reading Comprehension: 6.8
Spelling: 8.0
General Information: 7.5
Average: 6.16

The Picture Vocabulary Test showed his Receptive Language Age to be 10.8, and his Visual-Motor Integration Age Equivalent was 5.7. This latter area, of course, may well be attributed to his injury, as Respondent claims, but is not, of itself, determinative. Respondent also claims that the test scores show that if Petitioner had been retarded while going to school, he would not have been able to achieve test scores this high. This position has merit, and it is so found. The previously mentioned high school record showing unsatisfactory performance up to withdrawal is also not persuasive to Respondent's witness, Mr. Carpenter. He contends there are many reasons, other than retardation, for doing poorly in school. Since the criteria cutoff for IQ is 69, and Petitioner's IQ tested at 66, this is a borderline case showing "mild" mental retardation. However, there are other criteria as well, as was seen before. Of equal importance is the question of whether Petitioner has any adaptive behavior defects. These would affect his ability to function in the environment in which he is placed. Respondent, while admitting current adaptive behavior defects, contends they came after, and as a result of, his injury. In support of that position, Mr. Carpenter cites the fact that Petitioner survived for many years and was totally self-sufficient after leaving school and before his injury. Adaptive behavior defects can be mental as well as physical--in fact, they usually are mental. Though Respondent contends Petitioner can adapt well and is not deficient in that area, citing Dr. Edelman's reference to the Lie Scale, which indicates generally that Petitioner would try to please or answer as he thought was wanted, Petitioner was not interviewed by the committee to see how he would react, nor is there any indication that his case worker got into the question of his ability to handle funds. Respondent contends that Petitioner does not fall within the criteria for enrollment because: (1) Petitioner has mild mental retardation at present; (2) there is no evidence of retardation prior to age 18; and (3) there is no evidence of severe adaptive behavior problems. It is Respondent's position that Petitioner would not benefit from developmental services because: (1) he needs a residential placement; (2) if Petitioner were to be placed in a group home with mentally retarded residents, it would most likely make him very unhappy and could cause him to regress; and (3) Petitioner needs the stimulation of normal people in his own age group to help him develop, and a residential setting in Respondent's program would not fulfill this need. Respondent contends, through Mr.
Carpenter, that Petitioner should be in an Adult Congregate Living Facility and enrolled in vocational rehabilitative schooling with the potential for him to progress to a sole living situation in the future. In Petitioner's case, the factors other than the pre-18 year condition (the program's potential for benefiting Petitioner) did not enter into the original decision to deny Petitioner enrollment. It is quite conceivable that if Petitioner could prove retardation prior to age 18 and were to reapply, he might be accepted. Mr. Carpenter indicated he would be disposed to grant the eligibility under those circumstances, but he could not speak for the rest of the team. With that in mind and recognizing that Petitioner had the school records not available to the team at the time of the original evaluation, the Hearing Officer recessed the hearing to allow the team to reconsider in light of this additional evidence. On February 28, 1984, the original diagnostic and evaluation team which took the action complained of by Petitioner met and considered the evidence from Valley High School. It thereafter determined it could not retreat from its original position.

Florida Laws (1) 393.063
# 7
GWENDOLYN SALTER vs INTERNATIONAL PAPER, 06-000339 (2006)
Division of Administrative Hearings, Florida Filed:Pensacola, Florida Jan. 26, 2006 Number: 06-000339 Latest Update: Jan. 30, 2007

The Issue The issues to be resolved in this proceeding concern whether the Petitioner was the victim of an unlawful employment practice by allegedly being discriminated against as to a demotion and pay decision on the basis of race and sex, in purported violation of Section 760.10, Florida Statutes.

Findings Of Fact The Petitioner, Gwendolyn Salter, is an African-American female who was initially employed by International Paper in June 2000 as an Operator. Shortly thereafter she was promoted to the position of Lumber Grader and, on December 15, 2001, was promoted to the hourly position of Lead Grader. The Respondent, International Paper Company, is a forest product company. At its McDavid, Florida facility it operates a sawmill which produces lumber and other building products for sale to forest product dealers, lumber yards, and dealers in the construction industry. The sawmill opened in the year 2000. It is very important to determine the value of every piece of lumber a sawmill produces. Variance in the grade of a given board can mean a difference of several dollars in value per board. In order to determine and set the value or price of a piece of lumber, the sawmill must employ Lumber Graders. The Graders inspect lumber to determine the type and number of defects, and therefore the grade of a given board, including whether a board should be trimmed to eliminate some defects. The Southern Pine Inspection Bureau (SPIB) promulgates standard lumber grading rules, which are accepted and applied by all lumber producers that are members of the Southern Pine Association. The rules govern how each board is graded. Boards are basically graded one, two, three, four, or MSR. Number one is the best grade, and a board with the most knots or other defects would be graded a four. An MSR board is generally a grade two board that is particularly strong. Such boards are primarily used for structural members. When grading lumber, the graders determine in the grading process whether a board should be trimmed in order to remove defects to enhance its grade and value. If there is a defect at the end of the board, for example, the board can be trimmed to the next shorter standard length, which would actually increase the value of that board. Since a board's value can vary several dollars per piece, depending on its grade, the integrity of the grading system is integral to the successful operation and profitability of the Respondent's sawmill. The McDavid Mill operated with four shifts. There were about four to five graders working on each shift. They worked on only the "dry end" of the mill. That means that they graded lumber after it had been sawed in the sawmill, kiln dried to remove excess moisture, and dressed in the planer mill. Then it was graded, including any necessary final trimming. The graders have approximately two seconds to observe a board, flip it to look for defects on all four sides, and grade it. They look for natural defects, including knots, and make a mark or a symbol on a board indicating its grade. Additionally, the McDavid Mill has a machine vision grader (MVG) that automatically grades the wane and the size of each board. Wane is a defect involving a tapering or lessening of a board's proper dimension, generally caused by the board being sawn near the outside margin of a log, so that the log's curvature, natural taper, and bark tend to reduce the size and square dimension along the edge of the board. The Petitioner was promoted to the position of "Lead Grader" on December 15, 2001. It thus became her responsibility to review the performance of each of the graders at the facility. The McDavid Mill, through its operations manager, Alan Orcutt, instituted a new Grader Performance System in November 2003.
The new system rated graders every eight weeks based on their grade decision accuracy, their trim decision accuracy, and their knowledge of grading rules. A grader's pay could vary every eight-week period depending on his or her performance during the previous eight-week period. The Lead Grader, the Petitioner, was charged with implementing this system. It was the responsibility of the Lead Grader to ensure that at least 1200 boards were reviewed for each grader, each eight-week period, either by the Lead Grader, by the MVG operator, or by SPIB reviewers. Essentially, the SPIB reviewers or inspectors would select a "pack" of boards and review them to ensure that the graders had graded those boards properly. The reviewers would record whether a board was above grade, below grade, or properly graded. Secondly, the Lead Grader was responsible for reviewing the graders' trim decision accuracy. The Lead Grader was required to review at least 100 "trim boards" for each grader for each period, to determine if the graders made the correct trim decisions. The SPIB inspectors would record the percentage of boards trimmed accurately. For board trimming decisions, the board is not processed, but is placed into a pack where it is viewable in its entirety by the reviewer. The reviewer sees exactly what the grader saw in looking at the board, and thus can determine whether or not the grader made the correct decision about whether to trim the board and, if so, how much, and where. In other words, the reviewer can determine whether it was appropriate to make a two-foot cut on one end, whether or not a knot should be cut out of the board, or whether it was under-trimmed or over-trimmed. The Lead Grader was also responsible for monitoring the graders' knowledge of lumber grading rules. The Lead Grader was thus required to give two 25-question written examinations (tests) every eight-week period to graders concerning the written grading rules. The Lead Grader was required to administer the test twice per eight-week period on a crew-by-crew basis, correct the answers, and return a copy to the grader with the correct answers and an overall score. In order to ensure the integrity of the testing process, the tests were only allowed to be given in group settings. Tests were not allowed to be given to individual graders. There had to be more persons present in the testing room than just the Lead Grader and one individual grader being tested. The graders were ranked B, A, or AA, and their pay would be adjusted accordingly. AA was the highest rating and was paid the highest salary rate. The rankings were based on a combination of grading accuracy, trim decision accuracy, and scores on the grading examination. In order to be ranked AA, for example, a grader would be required to have no more than two percent of boards above grade and two and a half percent below grade, with 95 percent trim decision accuracy and 90 percent correct answers on the written test of grading rules knowledge. Depending on the scores, he or she could change ranks each eight-week period and thus change salary level. The McDavid Mill's grader performance system was thus implemented in November 2003. Mr. Orcutt discussed the Lead Grader performance expectations with the Petitioner during a meeting with all graders. Essentially, Mr.
Orcutt explained that the Petitioner was responsible for implementing the new performance system, specifically: re-grading 1200 boards per grader per period, reviewing 100 trim boards per grader per period, and providing at least two grading exams to each grader each period. When the Grader Performance System was implemented in November 2003, the manager, Mr. Orcutt, intended that the first eight-week period would be a "dry run" in which the results of the grading of the various graders would have no effect on pay rates. The second eight-week period, which ran from January to February 2004, was supposed to be "for the record" and would affect pay rates. Ultimately, Mr. Orcutt determined that the Petitioner's data on the graders was inaccurate and incomplete, and he therefore decided to extend the dry run until the third eight-week period, during which pay rates would be affected by the graders' performance ratings. On February 6, 2004, Mr. Orcutt provided the Petitioner her performance review. In that review, Mr. Orcutt stated that the Petitioner had "not met expectations." He explained that this referred to the Petitioner's failure to keep track of the performance of all the graders, as well as deficiency issues regarding the grade rule test being administered inappropriately. On February 27, 2004, Mr. Orcutt issued a 30-day performance improvement plan to the Petitioner. In it he put her on notice that she must improve her performance in the execution of her role as a Lead Grader. He explained that during the first two months of 2004, the Petitioner had failed to meet the minimum expectations of the Lead Grader performance standards provided to her in November 2003. Specifically, this referred to Mr. Orcutt's finding that the Petitioner had failed to review the requisite number of boards during the first two months of 2004. The Performance Improvement Plan also explained that if the Petitioner failed to meet the expectations that had been explained to her in November 2003, she would be removed from her position as Lead Grader and demoted to a Shift Grader position. Mr. Orcutt also decided to transfer the responsibility for inputting the grader data into the computer to the accounting department. Mr. Orcutt explained that he had received complaints from graders to the effect that the Petitioner was failing to accurately keep records of the number of boards being reviewed, as entered into the computer, which could affect the pay rate of the graders. Mr. Orcutt believed that this change would allow the Petitioner to focus on monitoring the graders. Jessie Ford is an African-American male. He was hired by International Paper at the McDavid Mill in March 2004 as a Dry-End Superintendent. He was hired to replace Mr. Orcutt, who had been promoted. Mr. Ford was responsible for safety, production, and quality of the dry-end production portion of the mill, which included supervision of the graders. During his first few months, he monitored the Petitioner's performance and determined that the Petitioner appeared to be complying with the Lead Grader performance expectations. He did, however, verbally counsel the Petitioner about giving tests to individual graders instead of administering them to the group of graders simultaneously, as required. In August 2004, Mr. Ford asked the Petitioner if she had completed the requisite number of board re-grades in accordance with the Lead Grader performance expectations.
Although the Petitioner indicated to him that she had completed the re-grades, a review of the data by Mr. Ford and Mr. Orcutt indicated that the Petitioner was under the required board count for re-grading as to several of the graders. Mr. Ford and Mr. Orcutt met with the Petitioner to ask her about the missing boards and also about the discrepancy in what she had told Mr. Ford. The Petitioner explained that she believed she had reviewed 1200 boards. She claimed that she had reached 1200 by combining the boards reviewed for trim tests with the boards reviewed for grading. This explanation revealed both that the Petitioner had failed to meet her minimum expectations and that the Petitioner appeared not to understand the program almost nine months after it had been implemented. Further, there were a couple of graders for whom the re-grading count remained low, even if one (wrongly) counted their trim test boards in the aggregate total. Mr. Ford and Mr. Orcutt also spoke to the Petitioner about giving tests to graders on an individual basis, as was prohibited by the performance evaluation system that had been implemented in November 2003. That system required that the test be given only to a group of people, or more than one person at a time, in order to ensure the integrity of the test and of the performance evaluation system. When confronted with the question of whether she had given a test to an individual alone, the Petitioner responded that there was "someone else" in the room during the test. This again demonstrated to Mr. Ford and Mr. Orcutt that the Petitioner did not really understand the requirements of the performance evaluation or testing system. Following that meeting with the Petitioner, Mr. Ford and Mr. Orcutt met with the human resources manager, Karen Rutherford, as well as the mill manager, Alan Smith. They discussed the issues and possible solutions regarding the Petitioner's performance. Mr. Ford explained in his testimony that the group determined that it was his decision whether or not to discipline the Petitioner. Mr. Ford therefore reviewed the November 2003 performance expectations and the February 2004 Performance Improvement Plan directed at the Petitioner. Mr. Ford determined that the Petitioner had been properly advised of her responsibilities as Lead Grader and of the consequences of inadequate performance after imposition of the improvement plan, and that she had failed to meet expectations. On August 27, 2004, he demoted the Petitioner from Lead Grader to a Shift Grader position, in accordance with the February Performance Improvement Plan. Mr. Ford explained to the Petitioner that she had failed to review the proper number of boards in her re-counts, and that she had improperly given tests to graders on an individual basis, as prohibited. The Petitioner claims that she was discriminatorily demoted from the Lead Grader position to a grader and was discriminatorily denied a raise. She grounds this position on the contention that similarly situated employees outside her protected class were treated differently and more favorably in similar situations, and that her temporary supervisor in the fall of 2003, Mr. Garrett, had a discriminatory attitude toward her and against women. This contention is based upon an alleged discriminatory statement he made and upon the fact that he also required her, in addition to her normal Lead Grader duties, to work on the MVG machine when its regular operator had been fired, and after she had trained his replacement.
In essence, the Petitioner complains that Jamey Garrett was prejudiced against her and once made a comment that he "really did not care for working with women." Mr. Garrett had temporarily been placed in partial supervision of the Petitioner as Acting Dry-End Superintendent in the late summer and fall of 2003. At about this time, Mr. Orcutt and/or Mr. Garrett, acting singly or in concert, directed the Petitioner to assume operation of the MVG machine when its normal operator was fired. She also was required to train a replacement operator for the machine. That effort took about three weeks. Thereafter she asked that her temporary assignment to the machine operation be ended. Mr. Garrett instead told her that he needed her to operate it through the rest of 2003 (approximately two to three months). She maintains, in her own testimonial opinion, that Mr. Garrett and/or Mr. Orcutt "loaded her up" with this extra duty in order to intentionally cause her to fail at her duties as Lead Grader. There is no evidence, other than the Petitioner's unsupported opinion, that Mr. Garrett or Mr. Orcutt had this intent in requiring her to perform the extra duty, which incidentally began well before the implementation of the new November 2003 performance and evaluation standards for graders, which the Petitioner, as Lead Grader, was required to learn and implement. The only evidence the Petitioner provided concerning Mr. Garrett's discriminatory animus toward women is the alleged statement referenced above. Mr. Garrett denied making that comment. He did admit, however, regarding concerns he had about working as a supervisor (which had not been his permanent assignment), that he asked a promotion board to help him work better with women. This was because he feared that his size (he is 6'5") was intimidating to women. This statement does not indicate any discriminatory intent toward women, nor does the alleged statement about not caring for working with women indicate any such intent, especially because of its isolated nature. Moreover, the persuasive evidence shows that Mr. Garrett actually worked well with women and that he promoted several women during his tenure in a supervisory role. The Petitioner herself recalled a conversation with Mr. Garrett in about March 2004 in which he stated that he thought he and the Petitioner were getting along a lot better. Mr. Garrett's only supervisory authority over the Petitioner was as a set-up supervisor near the end of 2003 and the beginning of 2004, during which time he did not have actual disciplinary authority over the Petitioner. That responsibility remained with Mr. Orcutt. He did apparently have the ability to make recommendations concerning employee matters, including discipline, to Mr. Orcutt. In fact, the evidence reveals that the only disciplinary issue concerning the Petitioner in which Mr. Garrett was actually involved occurred on or about March 2004. Mr. Garrett had been instructed by Mr. Orcutt to issue disciplinary sanctions to the Petitioner. Mr. Garrett therefore met with the Petitioner and allowed her to explain her version of the situation. After listening to her side of the story, he accepted her explanation as correct, tore up the disciplinary memo, and imposed no discipline. Therefore, although she received a less satisfactory performance evaluation in February 2004 and was placed upon a Performance Improvement Plan in late February 2004, no formal discipline was imposed upon the Petitioner until her demotion in August 2004.
The Petitioner contends that she was denied a raise because of her sex. The only evidence related to a raise was testimony provided by Mr. Garrett, who indicated that the Petitioner could not receive a raise because she was already receiving the maximum pay for her grade level as a Lead Grader. The raise in question was given to the other graders but not to the Lead Grader, the Petitioner, because she was already making the maximum of her pay range. Indeed, the Petitioner admitted that the raise was given to all graders, including black graders and female graders. The Petitioner acknowledges that she was the only individual denied a raise at the time in question. Mr. Garrett's explanation as to the reason she was not given a raise, when the others of both races and sexes were, is accepted as accurate. Further, the Petitioner admitted that, once she was demoted out of the Lead Grader position, she was given the same rate of pay as the highest-ranking, AA graders. The Petitioner's contention, based upon her own opinion, that she was denied a raise because of her race or sex is not deemed credible or persuasive under these circumstances. On June 8, 2005, the Petitioner filed her charge of discrimination with the Commission. In the charge she claimed that she had been discriminated against between August 2004 and December 2004, based upon her race and sex. She claimed discriminatory demotion as well as being discriminatorily denied a raise. The Commission, after its investigation, issued a Determination of No Reasonable Cause to believe that an unlawful practice had occurred. That determination was issued on December 13, 2005, and the Petition for Relief was filed January 26, 2006, initiating this proceeding. The Petitioner claimed in her Petition for Relief that, in addition to being demoted and denied a raise because of her race and sex, the Respondent maintained a hostile work environment based upon issues of sex and race. The Petitioner also maintained that, when demoted, she was replaced by a white male, who took over the position of Lead Grader. She contends that the white male, Mr. LePage, was allowed to maintain a count of his own boards or pieces of lumber that he had reviewed while monitoring the graders, while the Petitioner's numbers of reviewed or inspected boards were maintained in the computer record by the company receptionist. She also maintained that Mr. LePage gave a non-proctored skill test to graders but was not demoted for it, whereas the Petitioner was demoted for allegedly giving a non-proctored skill test to a grader where no one else was present in the test room. The persuasive evidence shows that the Petitioner was not similarly situated with her replacement, Mr. LePage. Although she contends that Mr. LePage also provided a test to a grader individually, instead of giving the test only in a group setting, and yet was not demoted, their circumstances are not comparable. Mr. LePage had only held the Lead Grader position for a few months when the allegation against him was raised. When it was raised, his first disciplinary incident in that position, Mr. Ford counseled him and admonished him that he was only to give tests in group settings. The Petitioner, however, had been in the Lead Grader position for a number of years and had been warned about the testing issue at least twice previously.
Moreover, she had been admonished about her performance in conjunction with her February 2004 performance evaluation and had already been placed on a performance improvement plan at that time, in part for that same issue concerning individualized testing. Thus she was not similarly situated as an employee to Mr. LePage, who was disciplined less harshly because it was his first such transgression and warning. In a similar context, it is inferred that Mr. LePage was allowed to input his own board counts into the computer system because, unlike the Petitioner, he had not told management that he had performed and reported the proper board counts when that was proved not to be the case.

Recommendation Having considered the foregoing findings of fact, conclusions of law, the evidence of record, the candor and demeanor of the witnesses and the pleadings and arguments of the parties, it is, therefore, RECOMMENDED: That a final order be entered by the Florida Commission on Human Relations dismissing the Petition for Relief in its entirety. DONE AND ENTERED this 3rd day of November, 2006, in Tallahassee, Leon County, Florida. S P. MICHAEL RUFF Administrative Law Judge Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-3060 (850) 488-9675 SUNCOM 278-9675 Fax Filing (850) 921-6847 www.doah.state.fl.us Filed with the Clerk of the Division of Administrative Hearings this 3rd day of November, 2006. COPIES FURNISHED: Denise Crawford, Agency Clerk Florida Commission on Human Relations 2009 Apalachee Parkway, Suite 100 Tallahassee, Florida 32301 Cecil Howard, General Counsel Florida Commission on Human Relations 2009 Apalachee Parkway, Suite 100 Tallahassee, Florida 32301 Frederick J. Gant, Esquire Albritton & Gant Post Office Box 12322 322 West Cervantes Street Pensacola, Florida 32581 Vincent J. Miraglia, Esquire International Paper Company 6400 Poplar Avenue, Tower II Memphis, Tennessee 38197

USC (1) 42 U.S.C. 2000e Florida Laws (4) 120.569, 120.57, 760.10, 760.11
# 8
PSYCHOTHERAPEUTIC SERVICES OF FLORIDA, INC. vs DEPARTMENT OF CHILDREN AND FAMILY SERVICES, 05-002800BID (2005)
Division of Administrative Hearings, Florida Filed:Tallahassee, Florida Aug. 03, 2005 Number: 05-002800BID Latest Update: May 25, 2006

The Issue Whether the Department of Children and Families' (DCF's) intent to award nine contracts for Florida Assertive Community Treatment (FACT), as set forth in Request for Proposal No. 01H05FP3 (RFP), to the Intervenors herein was contrary to that Agency's governing statutes, its rules or policies, or the specifications of the RFP, thereby being clearly erroneous, contrary to competition, arbitrary, or capricious.1/

Findings Of Fact General Facts On April 6, 2005, Respondent DCF's Mental Health Program Office issued the 215-page RFP 01H05FP3 for "Florida Assertive Community Treatment (FACT) Programs for Persons with Severe and Persistent Mental Illnesses Procurement of February and September 2001 Awards." The FACT program is Florida's version of a nationally known model of community mental health intervention for individuals with severe and persistent mental illnesses known as the Program for Assertive Community Treatment (PACT). The PACT model intervention manual published by the National Association for the Mentally Ill was the basis for developing Florida's adherence to the PACT model. The RFP specifies that proposers commit to PACT's evidence-based team approach. This RFP is not a statewide procurement. It is a single document seeking proposals for 17 separate agency districts/regions, in seven of which Petitioner PSFI was then operating as the incumbent FACT provider. The April 6, 2005, RFP contemplated that DCF would contract in each district for an initial three-year term, with a potential three-year renewal provision. The total cost for these contracts, if renewed, is in excess of $100,000,000.00, making this a procurement of substantial size for DCF's Mental Health Program Office. The April 6, 2005, RFP is DCF's second attempt to procure FACT contracts. DCF previously posted and withdrew an RFP for the same 17 contracts, due to concerns that its questions could give certain vendors an unfair advantage. All vendors receiving the RFP had an opportunity to submit written questions about the RFP's contents. Several vendors submitted written questions. The questions and DCF's answers became part of the RFP and were published to all potential vendors prior to the submission of responses. No potential vendor or proposer protested the written specifications and terms of the instant RFP. Therefore, the specifications are not at issue herein. On May 23, 2005, DCF opened the proposals. On June 27, 2005, DCF posted the results of its evaluation(s) in a document entitled "Proposal Tabulation and Notice of Intent to Award," indicating each applicant's score in each of the 17 districts/regions; indicating it would award to the proposer with the highest score; and providing a mechanism to resolve ties. PSFI has protested DCF's Notice of Intent to Award for the following districts/regions in the April 6, 2005, RFP. The respective scores and intents to award are indicated:

District 4 - Jacksonville (highest score - MHRC)
Suncoast Region - New Port Richey (MHRC and Harbor tied for the highest score. Harbor is to be awarded the contract based on a tie-breaker procedure)
Suncoast Region - Pinellas (highest score - MHRC)
Suncoast Region - Hillsborough (highest score - MHRC)
District 7 - Rockledge (highest score - MHRC)
District 8 - North Fort Myers (highest score - Coastal)
District 8 - Naples (highest score - MHRC)
District 11 (south) - Miami (highest score - Bayview)
District 15 - Stuart (highest score - MHRC)

PSFI was the incumbent provider in seven of the foregoing nine protested districts. Bayview was the incumbent provider in the southern region of District 11. PSFI did not protest the District 3 - Gainesville Notice of Intent to Award, wherein PSFI was the successful responder/proposer. Therefore, despite rhetoric to the contrary at hearing, that region, where the same RFP and evaluation procedures accrued to PSFI's benefit, is not at issue herein.
Section 2 of the RFP provided: The department reserves the right to reject any and all proposals, withdraw this RFP or to waive minor irregularities when to do so would be in the best interest of the State of Florida. Minor irregularities are defined as variations from the request for proposal terms and conditions that do not affect the price of the proposal, or give the prospective vendor an advantage or benefit not enjoyed by other prospective vendors, or do not adversely impact the interest of the agency. At its option, the department may correct minor irregularities but is under no obligation to do so whatsoever. Correction or waiver of minor irregularities shall in no way modify the RFP requirements. Stephen Poole is Senior Management Analyst and Supervisor of DCF's Mental Health Program Office. Mr. Poole has been involved with the FACT program since 2000, the first year DCF engaged in a statewide procurement of the program. At all times material to the instant RFP, Mr. Poole's main responsibility was to oversee the FACT initiative. He principally authored the RFP at issue. He had drafted three to four RFPs before this one. In developing the instant RFP, Mr. Poole followed DCF's established internal review procedure. He was the sole point of contact for the instant RFP. After review of an Agency for Health Care Administration RFP, the subject of which was closely aligned in the mental health subject area, Mr. Poole selected a 0-10 scoring range for the instant RFP, instead of DCF's historical 0-4 range, to allow individual reviewers more flexibility to score each item in a way that reflected that individual's assessment of each proposal. PSFI's objection to this 0-10 scoring range amounts to an argument that, "It's not the way we've always done it before," and is without merit. DCF used its expertise and discretion to design the 0-10 rating methodology to give qualified, unbiased scorers latitude to use their own expertise while scoring the proposals. The Agency intended evaluators to have "great latitude," based on their own individualized background and experience, to score each response to each question within each proposal. It was a goal of this RFP that each evaluator would exercise his or her specialized professional education, training, and experience, thereby getting the best result for the Agency. By averaging the scores of three evaluators for each district/region, DCF intended to blend areas of expertise and minimize any irregularities of an individual evaluator that might turn up. All responsive proposals were to be reviewed and rated for Fatal Criteria and Qualitative Requirements by a review panel of DCF personnel. Only proposals meeting the threshold test of Fatal Criteria were reviewed for Qualitative Requirements. Because the basic requirements for a FACT team are the same from area to area, proposers filing in multiple districts/regions submitted the same or almost identical answers to many of the questions asked of them in the RFP. For instance, PSFI submitted ten proposals in response to the RFP. These had identical text and appendices for RFP issues that were not district- or region-specific. Identical text applied to PSFI's responses to Qualitative Requirements 1, 2, 5, 7-14, 16-17, 19-23, 25, and 27-29. Other answers were tailored to the specific districts/regions.
MHRC's proposals for the FACT contracts in District 7 - Rockledge, District 15 - Stuart, Suncoast Region - Hillsborough, Suncoast Region - New Port Richey, Suncoast Region - Pinellas, and District 8 - Naples were identical, with the exception of the identification of the district/region number and the name of the FACT contract that is the subject of each proposal. PSFI also submitted a proposal for each of those contracts. MHRC's proposal to retain its FACT contract in District 4 - Jacksonville is essentially the same as the other six proposals it submitted, except that the District 4 proposal describes aspects of MHRC's current FACT team in the present tense, whereas the other six proposals describe aspects of the proposed FACT teams in the future tense. PSFI also submitted a proposal for that contract/district. Bayview submitted only one proposal, and that was for renewal of its FACT team in District 11 (South) - Miami. Coastal and PSFI submitted proposals for the FACT contract in District 8 - North Fort Myers. Coastal was the only proposer for District 8 - Charlotte. DCF appointed an Evaluation Team to review the 34 proposals received. The review was conducted pursuant to the time line set forth in the RFP, as amended by an addendum issued by DCF. The team of three evaluators always included two of DCF's Central Office employees, Kim Munt and Jane Streit, who each reviewed all 34 proposals for all 17 FACT contracts. For each contract, the third member of the evaluation team was a DCF employee selected by the DCF FACT program supervisor in the district/region office where the respective contract would be carried out. The final score for each of 47 questions was the average score of the three evaluators. Ultimately, the district/region office evaluators were Diovelis Stone (District 1), Ken Birtman (District 2), Lisa Cue (District 3), Gene Costlow (District 4), Robert Parkinson (Suncoast Region - Pinellas), Michael Wade (Suncoast Region - New Port Richey), Geovanna Dominguez (District 7), Linda Pournaras and Marcie Gillis (District 8) (see Findings of Fact 40-42 and 95-98), Joanna Cardwell (District 11), and Carol Eldeen-Todesco (District 15). PSFI complains that all, or most, of the foregoing evaluators had never worked on an RFP before and were insufficiently trained to evaluate this particular RFP, or at least not trained for it with mathematical precision. On the contrary, all the evaluators received the training specifically designed for this RFP; many had prior FACT experience; and some had prior RFP experience, as related infra. As to the several evaluators' individual abilities to analyze problems associated with FACT, no competent, credible evidence demonstrated that any reviewer was deficient in cognitive ability, thought processes, or reason; nor was it demonstrated that there was any specific bias or favoritism practiced by any evaluator. Specifically, although DCF evaluator Jane Streit's DCF employment did not deal directly with FACT teams, Ms. Streit has earned a Ph.D. in clinical psychology and has 12 years' experience working in the mental health field. DCF evaluator Kim Munt holds a Master of Science degree and was an Operations Review Specialist in the combined contract management unit for DCF's program offices of Substance Abuse and Mental Health, where she reviewed the model contract attached to the RFP as Attachment One. She also had participated in three previous RFPs.
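As an illustration of the averaging step just described, here is a minimal Python sketch. The 0-10 scores shown are invented for illustration; only the rule that each question's final score is the mean of the three evaluators' scores comes from the findings:

```python
# Minimal sketch of the scoring rule described in the findings: the final
# score for each question is the mean of the three evaluators' 0-10 scores.
# The scores below are illustrative, not from the record.
evaluator_scores = [
    [7, 9, 5],   # evaluator 1, three sample questions (of the 47)
    [8, 8, 6],   # evaluator 2
    [6, 10, 7],  # evaluator 3
]
final_scores = [sum(question) / len(question) for question in zip(*evaluator_scores)]
print(final_scores)  # [7.0, 9.0, 6.0]
```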
Other individual qualifications of the district evaluators specifically challenged in this proceeding are described infra. Although there is testimony that the evaluators' reading of the RFP before the Initial Meeting, when the evaluators received their specific instructions, followed by formalized training in how to evaluate the proposals, with feedback and testing of the evaluators' understanding of that training, might have been a more desirable approach than the one used, there is no legal requirement for such institutionalized training of bid evaluators, nor is there any other requirement that agencies use "professionalized" bid evaluators. Qualitative Requirement 26 related to the financial stability of the proposing vendors, and was scored by three other evaluators: Cindy Grammas, Janet Holley, and Phyllis McMillman. Petitioner has not challenged any of the scores given for Question 26. On or about May 23, 2005, Mr. Poole read instructions on the proper procedures for reviewing and scoring the proposals to the evaluators at an Initial Meeting of Evaluators. The attending evaluators had an opportunity to ask questions. There was an opportunity for discussion, but no detailed discussion ensued. Afterwards, the evaluators returned to their work locations and independently reviewed the proposals assigned to them. At the Initial Meeting, each evaluator certified that the instructions had been reviewed and discussed as follows: Instructions and Certification for Evaluation Team For Request for Proposal (RFP) 01H05FP3, Released 4/06/05 I agree to read and apply the following list of instructions detailing my responsibilities as an evaluator for RFP 01H05FPH3: ? I will read RFP #01H05FP3 and any addenda in preparation for scoring proposals in response to this RFP. ? I will review the scoring methodology contained in subsection 6.3 entitled, "RFP Rating Methodology," specifically the definition of values attributed to the scores, "0", "1-3", "4-6", "7-9", and "10" as applicable in the scoring of all questions. ? I understand a mandatory review of any scoring variance of more than a value of "7" will take place when it is reported. ? I understand the RFP is the sole source of evaluating all proposals. ? I understand the use of the term "Considerations" is to be used as a guide to assist the evaluator. ? I understand that vendors not currently operating a FACT team will respond to questions as if they will be operating a FACT team in the future or that they will give information about other programs they provide to demonstrate their responsiveness and understanding of the question at issue even though it may not be directly linked to the operation of a FACT team. ? I will not use any personal or professional opinions, knowledge or perceptions either positively or negatively that I may possess about any of the vendors submitting proposals that I am evaluating. ? I understand I have the authority to cease searching any proposal for responses to questions if the response is not in the section indicated. ? I understand that I am to evaluate ALL questions in the RFP with the exception of question number 26 that will be scored by auditors and/or accountants. ? I understand I must record a justification for each score that is to be included in the "Note to Evaluator" section of the Scoring Protocol, and to minimally include a page number reference and/or a brief, written rationale. ? 
- I understand that I must sign a Conflict of Interest Questionnaire/Statement indicating that I have no conflict of interest with any of the vendors submitting a proposal.
- I understand that if a conflict of interest exists, I am required by these instructions to disclose such conflict and excuse myself from scoring any proposals in which a conflict exists.
- I understand that I must sign each and every scoring sheet, known as the Scoring Protocol.
- I understand that I am to begin the scoring of each proposal with a base score of "0" and build a higher score, such as a "1", "2", "3", "4", "5", "6", "7", "8", "9", or "10", as applicable, to be awarded based on the merits of the response.
- I understand the definitions of each of the values used in the scoring of the Protocol and acknowledge that a copy of those definitions was provided to me.
- I understand that I am to score the proposals independent of other proposals.
- I understand that I am not to discuss my scoring with other evaluators and that I am not to ask questions of other evaluators during the review and scoring of proposals.
- I understand I am permitted to direct questions to Stephen Poole, FACT procurement officer for this RFP, concerning the scoring of the proposals, and that I was provided the following phone numbers to call should I have a question: Office phone: (850) 410-1188 or SC 210-1188; home phone: (850) 422-1109.
- I understand that I must attend the Debriefing Meeting scheduled for June 23 and June 24, 2005, in person.
- I understand that ALL written documents that I have in my possession concerning RFP #01H05FP3 must be returned at the Debriefing Meeting. These documents include any and all copies of RFPs, any and all proposals, and any and all Scoring Protocols and notes that may have been made separately but not included on the Scoring Protocols.

I certify that these instructions were discussed openly in a publicly scheduled meeting and that I affix my signature to this certification indicating my understanding of and compliance with the instructions.

Signature / Date / Representing

Each evaluator also had to sign a Conflict of Interest Form designed to assure that he or she had no conflict of interest with any of the vendors submitting proposals that he or she would evaluate. Robert (Rob) Parkinson, the district evaluator for the Suncoast Region - Pinellas, inadvertently failed to check "yes" or "no" to the question, "Are there any other conditions that may cause a conflict of interest?" Mr. Poole did not notice Mr. Parkinson's omission when he collected the conflict of interest statements. However, by signing the Instructions and Certification of the evaluation team, Mr. Parkinson certified that if a conflict of interest existed, he was required to disclose such conflict and to excuse himself from scoring any proposals in which a conflict existed. There is no affirmative evidence that Mr. Parkinson had any conflict of interest in his performance as an evaluator or was biased or prejudiced for or against any of the competing proposers. Therefore, the missing check mark is a minor irregularity which does not evidence bias, prejudice, or preference, and its absence should not discount Mr. Parkinson's participation as an evaluator or any scores he rendered. (See also Findings of Fact 79-84.)
Linda Pournaras and Marcie Gillis discussed the RFP, which they had downloaded from the DCF website, during their car trip to Tallahassee for the Initial Meeting of the evaluators, before they received any instructions or signed their certifications at the Initial Meeting with Mr. Poole. Ms. Pournaras related her prior RFP experiences, but was clear that Ms. Gillis should listen carefully at the Initial Meeting, follow only those instructions, score independently, and consult no one but the authorized contact person (Mr. Poole) after the Initial Meeting closed. Despite PSFI's characterization of this conversation and its speculation as to the conversation on Ms. Pournaras' and Ms. Gillis' return trip, the depositions of both women show that Ms. Gillis was not instructed to rely on Ms. Pournaras' interpretations of any RFP and did not do so, even though Ms. Pournaras was her supervisor. The evidence shows that what she was permitted to score, Ms. Gillis scored independently.

However, Ms. Gillis, the originally-assigned district evaluator for her district, checked "yes" to the question, "Have you been employed by any of the potential bidders/entities listed within the last 24 months?" By way of full disclosure, she also wrote, "I worked for PSF[I] Naples from 2-04 to 2-05. I do not have a conflict of interest however." Mr. Poole did not replace Ms. Gillis with someone who had not worked for one of the proposers and did not substitute another District 8 representative. Ms. Pournaras testified that she thought she spoke with Mr. Poole about replacing Ms. Gillis, but considering the evidence as a whole, all that is clear is that Ms. Pournaras and her own supervisor decided that Ms. Gillis would not review any PSFI proposals and that Ms. Gillis would review the Coastal proposals. As a result, Ms. Gillis reviewed the unopposed Coastal proposal for the Charlotte County contract and the Coastal proposal for North Fort Myers. Ms. Pournaras reviewed the PSFI proposal competing with the Coastal proposal for North Fort Myers and the PSFI and MHRC proposals for Naples.

At the Initial Meeting, all the evaluators were given blank Scoring Protocol sheets to use in recording their scores for each of the Qualitative Requirements. Each Scoring Protocol was 48 pages long. Page One contained a reprint of the Rating Methodology set forth in Section 6.3 of the RFP. Each of the remaining pages provided a scoring sheet for an individual Qualitative Requirement from Sections 6.3.1 to 6.3.10 of the RFP, comprised of (a) a reprint of the question and related considerations; (b) a place to record a numerical score; and (c) the "Note to Evaluator" section, which required the evaluator to state: "(2) Where in the proposal you relied upon for the score: (cite page number & paraphrase rationale for score)." (Emphasis supplied). Because each evaluator carried this level of detail into each scoring, the fact that not every evaluator kept a copy of the Instructions and Certification at hand for reference while scoring the proposals is without any practical significance. The evaluators were also instructed in the Instructions and Certification to "record a justification for each score that is to be included in the 'Note to Evaluator' section of the Scoring Protocol, and to minimally include a page number reference and/or a brief, written rationale." (Emphasis supplied).
In fact, some evaluators provided a comprehensive written justification; some provided a page number only; some provided both; occasionally, someone slipped up and provided no justification; and one district evaluator provided justifications only where he felt a particular scoring range required them. The last was Mr. Costlow in District 4 - Jacksonville. Mr. Costlow felt encouraged to present as much detail as possible, but he also believed he was required to justify only the scores he assigned below and above the average range of 4-6, and that the midway range signified an adequate or satisfactory response, so that justifications there were optional. Nonetheless, his comments in justification mostly relate to a proposal's being satisfactory. His justifications were not considered significant, nor were his scores at great variance with other scores during the Debriefing, when adjustments were made to resolve any irregularities in the scoring system.

Mr. Poole testified that the Note to Evaluator section should have included the same "and/or" language contained in the Instructions and Certification, but he did not explain this to the evaluators, nor did he interpret what "and/or" meant in the Instructions and Certification. He felt, at the Initial Meeting and at the Debriefing (see Findings of Fact 82, 105, 110, and 112-113), that it was up to the evaluators to justify their answers as they saw fit within the options given. There is no meaningful difference between the "Instructions and Certification" and the "Note to Evaluator" sufficient to invalidate the actual scoring of the several evaluators, despite one item using "&" and the other using "and/or." Therefore, it is also determined that the inconsistencies and occasional omissions of some evaluators on the "justification" portions are minor and waivable irregularities.

The RFP, in SECTION 6: PROPOSAL EVALUATION CRITERIA AND RATING SHEET, required DCF to review and rate each responsive proposal for Fatal Criteria and Qualitative Requirements in accordance with the evaluation criteria set forth in the RFP. The RFP Rating Methodology for the proposals was set forth in SECTION 6.3 of the RFP, in pertinent part as follows:

When vendors' proposals are screened and meet fatal criteria requirements, the qualitative requirements will be scored based on the factors listed below:
- No Capability = No or little capability to meet RFP requirements. (Point Value = 0)
- Poor Capability = Poor or marginal capability to meet RFP requirements. (Point Values = 1 through 3)
- Average Capability = Average capability to meet RFP requirements. (Point Values = 4 through 6)
- Above Average Capability = Above average capability to meet RFP requirements. (Point Values = 7 through 9)
- Excellent Capability = Excellent capability to meet RFP requirements. (Point Value = 10)

The maximum number of points that can be scored is 470. Proposals failing to achieve at least 75 percent, or 353 points, of the 470 total points will not be eligible for a FACT contract.

One of the flaws PSFI assigns to this RFP and the bidding process is that Mr. Poole provided the evaluators no definitions of the foregoing terms. Yet this seems to be precisely where the flexible nature of the RFP was intended to be addressed by each individual evaluator's specialized education, training, and experience. The Qualitative Requirements of the RFP are set forth in paragraph 6.3.1, sub-paragraphs 1-47, of the RFP.
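For illustration only, the arithmetic of the foregoing rating methodology can be sketched in code. This is a minimal sketch, not evidence in the record: the evaluator names and per-question scores below are hypothetical, and only the 0-to-10 scale, the three-evaluator average for each of the 47 questions, the 470-point maximum, and the 353-point (75 percent of 470) eligibility floor are drawn from the findings above.

```python
# Illustrative sketch only -- not part of the record. It assumes, per the
# findings above: 47 questions scored 0-10 by three evaluators, a final
# per-question score equal to the three evaluators' average, a 470-point
# maximum, and a 353-point eligibility floor (75% of 470 is 352.5).

NUM_QUESTIONS = 47
MAX_POINTS = NUM_QUESTIONS * 10      # 470
THRESHOLD = 353

def final_score(scores_by_evaluator):
    """Average the evaluators question by question, then sum the averages."""
    per_question = zip(*scores_by_evaluator.values())
    return sum(sum(scores) / len(scores) for scores in per_question)

# Hypothetical evaluators and scores, for arithmetic only.
hypothetical = {
    "central_office_1": [8] * NUM_QUESTIONS,
    "central_office_2": [7] * NUM_QUESTIONS,
    "district_office":  [9] * NUM_QUESTIONS,
}

score = final_score(hypothetical)        # (8 + 7 + 9) / 3 * 47 = 376.0
print(f"{score:.2f} of {MAX_POINTS} points")
print("eligible" if score >= THRESHOLD else "not eligible for a FACT contract")
```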
Those Qualitative Requirements required DCF to determine the existence and quality of "evidence" in each responsive proposal of the matters set forth in the RFP. PSFI faults the RFP and the bidding process because the RFP contained no definition of "evidence." However, at the Initial Meeting of Evaluators, Mr. Poole had instructed the evaluators to review the Responses to Written Inquiries, which had also become part of the RFP (see Finding of Fact 6), and which contained the following explanation of "evidence" to be used by evaluators to score any question:

Written Inquiry No. 25: What would you consider "evidence" in cases where statistics or documents can't be provided? For example, in the case of "evidence that the individual is the focal point of all activity generated by the team?"

Response: Evidence does not necessarily need to be statistics or documents but a detailed explanation about the vendor's vision, values, policies, procedures, and how they directly relate to the individual being the focal point of all activity generated by the team. Any statistics or documents directly related to the issue would strengthen the response.

On its face, the first Consideration under Qualitative Requirements 33-38 and 40-41 sought evidence that the proposer could meet each respective performance measure. Some of the evaluators interpreted the thrust of these items to request that the vendor propose a plan for meeting the performance measure. Others looked in the proposals for evidence that the proposer had a good past performance record and a plan for performance of the present RFP. Although PSFI elicited a variety of explanations of how different evaluators' respective thought processes worked, no inconsistency by a single evaluator among proposals was affirmatively demonstrated. No inconsistency within a single district or region (except for North Fort Myers, see infra) was affirmatively demonstrated. No favoritism toward, or prejudice against, any proposer was affirmatively demonstrated. No scorer preference for incumbent providers with a "history" was demonstrated.

PSFI also alleges as a flaw in this RFP and its bidding process that multiple evaluators created scoring scales for themselves by restricting themselves to narrow portions of the 0-to-10 scale. Each of the 47 questions provided several delineated "Considerations" for the evaluator to use as guidelines in determining whether the evidence offered by the proposer demonstrated the specific information requested. None of the 47 questions could be answered "yes" or "no." Each of the questions required a narrative response, except for Question 26. (See Finding of Fact 33.) One of the RFP instructions (see Finding of Fact 35) stated that the evaluators were to begin scoring each question with a base score of zero and build up to 10, based on the merits of the response. PSFI established that Ms. Streit, one of the two central office evaluators, did not do this. Ms. Streit described her scoring as "narrow": she believed an experienced vendor would likely merit at least an average score, so although she did not begin scoring each question with a base score of zero, she started each proposal's analysis in the average range (4-6) and adjusted her score based on the strength of the response, grading higher where the response merited more points. The scores she assigned ranged from six to nine. Her scores for PSFI's proposals ranged from 365 to 368, and for the MHRC proposals from 367 to 370.
This amounts to no more than about a three-point difference between these competitors. Ms. Streit's rationale was consistently applied for each of the 46 questions she reviewed. Her approach was technically contrary to the instructions, but it did not unbalance the scores or tilt the playing field. It was a distinction without a significant difference.5/ Assuming, arguendo, but not finding, that PSFI were entitled to three additional points across the board, it would not alter the final tabulation in any of the districts/regions where MHRC was the high scorer. MHRC and PSFI were the only proposers scored in District 4 - Jacksonville, District 7 - Rockledge, District 8 - Naples, and District 15 - Stuart. In Suncoast - New Port Richey, Harbor and MHRC were tied for first place against PSFI. The scoring in District 8 - North Fort Myers was an anomaly. (See Findings of Fact 95-98 and Conclusion of Law 135.)

Kim Munt, the other central office reviewer, conceded that she might have become more lenient in her scoring over time and that, due to the sheer size of the proposals and the need to adhere to the scoring process, she may have had a different "focus" from time to time. Ms. Munt initially reviewed proposals randomly, but halfway through the 36 proposals she began scoring by district. Ms. Munt scored the following PSFI and MHRC proposals:

    MHRC                              PSFI
    June 20 (Stuart)           380    June 21 (Stuart)           346
    June 20 (Naples)           380    June 17 (District 4)       338
    June 18 (District 4)       379    June 13 (Hillsborough)     345
    June 16 (Pinellas)         388    June 10 (New Port Richey)  361
    June 12 (New Port Richey)  379    June 9 (Pinellas)          357
    June 11 (Hillsborough)     379    June 9 (Naples)            358
    May 24 (Rockledge)         362    May 31 (Rockledge)         314

The above table indicates that Ms. Munt gave her lowest score for MHRC on May 24 for Rockledge (362) and her lowest score for PSFI on May 31 for Rockledge (314). While she may not have been "lenient" in scoring the Rockledge proposals, it appears that Ms. Munt did not become "more lenient" as she evaluated the proposals. She consistently scored the MHRC proposals higher than the PSFI proposals for the numerous districts for which services were sought by DCF. In fact, MHRC received its lowest score (362) from Ms. Munt in the Rockledge competition, and even that score was higher than any score received by PSFI, even though every PSFI proposal was scored later, when Ms. Munt was allegedly "more lenient." Rearranged, the foregoing information displays a scoring pattern that shows no consistent correlation to which proposer's proposal was scored first. Also, Harbor, which tied with MHRC in Suncoast - New Port Richey, was scored on June 11, 2005, in between Ms. Munt's scorings of MHRC and PSFI. (See Findings of Fact 73-78.)

    MHRC-Pinellas         June 16   388   (higher, scored 7 days later)
    PSFI-Pinellas         June 9    357
    MHRC-New Pt. Richey   June 12   379   (higher, scored 2 days later)
    PSFI-New Pt. Richey   June 10   361
    MHRC-Stuart           June 20   380
    PSFI-Stuart           June 21   346   (lower, scored 1 day later)
    MHRC-Naples           June 20   380   (higher, scored 11 days later)
    PSFI-Naples           June 9    358
    MHRC-District 4       June 18   379   (higher, scored 1 day later)
    PSFI-District 4       June 17   338
    MHRC-Hillsborough     June 11   379
    PSFI-Hillsborough     June 13   345   (lower, scored 2 days later)
    MHRC-Rockledge        May 24    362
    PSFI-Rockledge        May 31    314   (lower, scored 6 days later)

PSFI was scored before MHRC five times out of the seven contracts in dispute. Between them, and in each of those instances, MHRC scored better.
However, in three of these instances, and in four out of the seven districts, there was only one to two days' difference in the scoring dates. Ms. Munt believed, and it is only logical, that any loss of focus or margin for inconsistency would be less where there is less time between scorings (see Finding of Fact 106, n.6); additionally, Ms. Munt consistently scored MHRC higher than PSFI, whether her rating date for MHRC preceded, or was subsequent to, her rating date for PSFI. MHRC's scores by Ms. Munt are compact, varying only from 362 to 388; of these, five MHRC scores fall at 379 to 380. PSFI's scores by Ms. Munt are less compact: they vary from 314 to 361 points, and PSFI received its lowest score on the last day of Ms. Munt's scoring. The narrow range of Ms. Munt's MHRC scores from June 11 to June 20, 379 to 388 (nine points), is reasonable, given that MHRC's proposals were virtually identical. Her PSFI scores, June 9-21, range from 338 to 361 (a greater spread of 23 points) and present some cause for concern. Nonetheless, given the evidence as a whole and the fact that PSFI's proposals contained identical responses to only 22 questions, the diversity in her scores cannot be determined to be unfair, capricious, or arbitrary. PSFI's theory of bias via the increased leniency of central office evaluator Ms. Munt is not proven.

Some evaluators' scores show variations in scoring for identical proposals with similar provisions, but these are explainable by the reasons stated above, by other differences among districts, and by innocent human error or confusion in an evaluation as complex as this one. In the absence of some direct evidence of arbitrariness, capriciousness, or bias, or some clear demonstration that these variables could have altered the final tabulation in any district/region, these minor irregularities are of no practical significance and may be waived.

Facts Limited to District 4 - Jacksonville

MHRC and PSFI were the only providers who had proposals scored by DCF for District 4 - Jacksonville. For District 4 - Jacksonville, MHRC received an averaged score of 377.00 for its proposal, and PSFI received an averaged score of 346.00. DCF's District 4 - Jacksonville evaluator was Gene Costlow. Mr. Costlow had no preference for any vendor, scored all competitors similarly, and believed MHRC provided more information than had been requested in the RFP. The scores for MHRC and PSFI are as follows:

                   MHRC   PSFI
    Kim Munt       379    338
    Jane Streit    370    365
    Gene Costlow   355    325

Facts Limited to Suncoast Region - New Port Richey

DCF scored three proposals for the Suncoast Region - New Port Richey FACT contract, with Intervenor Harbor receiving a score of 379.00, MHRC receiving a score of 379.00, and PSFI receiving a score of 362.67. The scoring thus reflects a first-place tie between Harbor and MHRC, making PSFI the third-place proposer. DCF notified the providers in the Proposal Tabulation and Notice of Intent to Award that it intended to post the results of the tiebreaker evaluation on Monday, July 11, 2005. DCF broke the tie and later noticed its intent to award the Suncoast Region - New Port Richey FACT contract to Harbor. DCF's Suncoast Region - New Port Richey evaluator was Mike Wade. Ms. Munt scored PSFI on June 10, 2005 (361); Harbor on June 11, 2005 (393); and MHRC on June 12, 2005 (379). This is a tight period of "focus," and the scores do not get progressively higher each day, so no "more lenient" trend is evident in her scores.
In Suncoast Region - New Port Richey, the scores for MHRC, Harbor, and PSFI are as follows:

                   MHRC   Harbor   PSFI
    Kim Munt       379    393      361
    Jane Streit    367    369      370
    Mike Wade      364    363      347

Facts Limited to Suncoast Region - Pinellas

DCF scored five provider proposals for the Suncoast Region - Pinellas FACT contract. MHRC received a first place score of 396.00. A provider known as "Suncoast Center" received a second place score of 370.00. PSFI received a third place score of 352.00. A fourth place score of 349.00 was assigned to "Northside," and a fifth place score of 315.67 was assigned to "Directions for Mental Health." As such, DCF has noticed its intent to award the Suncoast Region - Pinellas FACT contract to MHRC, and PSFI is the third place proposer for that FACT contract. Neither Suncoast Center nor Northside has intervened.

DCF's Suncoast Region - Pinellas evaluator was Robert (Rob) Parkinson. Mr. Parkinson had worked with PSFI FACT teams, but he had no preference among vendors. He read the RFP several times before the Initial Meeting, and he evaluated consistently. Mr. Parkinson independently scored all the proposals he reviewed on June 17, 2005, except for Question 27. At the Debriefing Meeting, he discovered that the Question 27 Protocol Sheet was missing from his initial scoring packet. He obtained the necessary sheet, took time to review the relevant portions of each proposal, and then left the meeting room to score Question 27 on all the proposals. He took about 25 minutes to score that one question on the several proposals assigned to him, and he did this before any debriefing of scores began for his area of the state. He did not discuss his, or anyone else's, scores at any time other than as permitted during the portion of the Debriefing Meeting devoted to his part of the state, and he did not hear any other person's scores for his part of the state called out before he had scored all the proposals for Question 27. He is found to have scored independently. For this part of the state, Ms. Munt scored all the ranked proposers between June 8, 2005, and June 16, 2005, so that she was scoring every one to three days in this area, and her "focus" was therefore fairly tight. The scores for MHRC and PSFI are as follows:

                     MHRC   PSFI
    Kim Munt         388    357
    Jane Streit      368    370
    Rob Parkinson    405    319

Facts Limited to Suncoast Region - Hillsborough

DCF scored four proposals for the Suncoast Region - Hillsborough FACT contract: MHRC with a score of 376.00, an entity known as "Mental Health Care" with a score of 362.00, PSFI with a score of 358.00, and an entity known as "Northside" with a score of 344.67. DCF noticed its intent to award the Suncoast Region - Hillsborough FACT contract to MHRC, and PSFI is the third place proposer for that FACT contract. Neither Mental Health Care nor Northside has intervened. DCF's Suncoast Region - Hillsborough evaluator was Mike Wade. Ms. Munt scored PSFI after MHRC, and Northside after PSFI, so no increasing leniency is shown by her scores in this locale. The scores for MHRC and PSFI are as follows:

                   MHRC   PSFI
    Kim Munt       372    345
    Jane Streit    367    371
    Mike Wade      362    348

Facts Limited to District 7 - Rockledge

Intervenor MHRC and PSFI were the only providers that had proposals scored by DCF for District 7 - Rockledge, with MHRC receiving a first place score of 395.42 and PSFI receiving a second place score of 378.67. DCF has noticed its intent to award the District 7 FACT contract to MHRC, and PSFI is the second place proposer for that FACT contract.
DCF's District 7 - Rockledge evaluator was Geovanna Dominguez, an adult mental health specialist in District 7, where she acts as a FACT team liaison. The scores for MHRC and PSFI are as follows:

                         MHRC   PSFI
    Kim Munt             361    314
    Jane Streit          367    369
    Geovanna Dominguez   431    443

Facts Limited to District 8 - North Fort Myers

Intervenor Coastal and PSFI were the only providers that had proposals scored by DCF for the District 8 - North Fort Myers FACT contract, with Coastal receiving a first place score of 399.67 and PSFI receiving a second place score of 350.00. As such, DCF has noticed its intent to award the District 8 - North Fort Myers FACT contract to Coastal Behavioral, and PSFI is the second place proposer for that FACT contract. DCF's District 8 - North Fort Myers evaluators were Linda Pournaras, who evaluated and scored PSFI's proposal, and Marcie Gillis, who evaluated and scored Coastal's proposal. The scores for Coastal and PSFI are as follows:

                      Coastal   PSFI
    Kim Munt          386       341
    Jane Streit       373       370
    Marcie Gillis     413       --
    Linda Pournaras   --        332

Facts Limited to District 8 - Naples

Intervenor MHRC and PSFI were the only providers who had proposals scored by DCF for the District 8 - Naples FACT contract, with MHRC receiving a first place score of 381.33 and PSFI receiving a second place score of 356.67. As such, DCF has noticed its intent to award the District 8 - Naples FACT contract to MHRC, and PSFI is the second place proposer for that FACT contract. DCF's District 8 - Naples evaluator was Linda Pournaras. The scores for MHRC and PSFI are as follows:

                      MHRC   PSFI
    Kim Munt          380    358
    Jane Streit       367    369
    Linda Pournaras   370    333

Facts Limited to District 11 - Miami

Intervenor Bayview and PSFI were the only providers who had proposals scored by DCF for the District 11 FACT contract, with Bayview receiving a first place score of 393.33 and PSFI receiving a second place score of 377.67. As such, DCF has noticed its intent to award the District 11 FACT contract to Bayview, and PSFI is the second place proposer for that FACT contract. DCF's District 11 evaluator was Joanna Cardwell. Like Mr. Parkinson in Suncoast - Pinellas, Ms. Cardwell was also missing a Question 27 protocol sheet and discovered the omission upon her arrival at the Debriefing Meeting. She obtained the necessary sheets from Mr. Poole while in the room set aside for the debriefing and then independently reviewed and scored the proposals assigned to her on that question during a break, before any scores for her part of the state were called out. She is found to have scored independently. Ms. Streit and Ms. Munt had never previously dealt with PSFI or Bayview. Ms. Munt reviewed the Bayview and PSFI proposals back-to-back on the last two days of the evaluation period: June 21, 2005, for PSFI and June 22, 2005, for Bayview. She did not believe there could be much change in her focus in that short period, and it is found that there was not.6/ The scores for Bayview (June 22, 2005) and PSFI (June 21, 2005), excluding Criteria 26, are as follows:

                      Bayview   PSFI
    Kim Munt          372       348
    Jane Streit       366       365
    Joanna Cardwell   411       410

Facts Limited to District 15 - Stuart

Intervenor MHRC and PSFI were the only providers who had proposals scored by DCF for the District 15 FACT contract, with MHRC receiving a first place score of 379.69 and PSFI receiving a second place score of 366.00. As such, DCF has noticed its intent to award the District 15 FACT contract to MHRC, and PSFI is the second place proposer for that FACT contract.
DCF's District 15 evaluator was Carol Eldeen-Todesco. Ms. Eldeen-Todesco had some problems scoring all the proposals and even started over once. Like Ms. Streit, she started scoring in the middle range but was consistent. She considered PSFI's proposals harder to read than MHRC's proposals. She could not find one answer concerning daily nursing staffing in the format of the PSFI proposal and therefore gave PSFI a score of "one" on that question. Because her score on that question deviated so far from those of the other two evaluators on her team, the process described in the RFP's instructions for scoring variances greater than seven was used during the Debriefing Meeting. After a team caucus, Ms. Eldeen-Todesco changed her score from "one" to "eight" in favor of PSFI. Petitioner has suffered no inequity in this bid procedure through the foregoing process. The scores for MHRC and PSFI are as follows:

                           MHRC   PSFI
    Kim Munt               380    346
    Jane Streit            367    372
    Carol Eldeen-Todesco   365    370

Debriefing, Totaling-up, and Expert Testimony

After the evaluators finished scoring their proposals, they met again for a Debriefing Meeting in Tallahassee. Mr. Poole tabulated the scores and averaged them to produce a final score for each proposal. The Agency's methodology for averaging the three independent scores had, as intended, effectively leveled and blended the divergent independent opinions. The following results were posted by the Respondent for the nine districts/regions that Petitioner is challenging:

    District 4: MHRC 377; PSFI 346 (below the 353-point threshold)
    Suncoast Region - Hillsborough: MHRC 376; Mental Healthcare 362; PSFI 358; Northside 344 (below the 353-point threshold)
    Suncoast Region - New Port Richey: Harbor 379; MHRC 379; PSFI 362 (under a tie-breaker evaluation process, Harbor was declared the winner)
    Suncoast Region - Pinellas: MHRC 396; Suncoast 370; PSFI 352; Northside 349; Directions for Mental Health 315 (the last three vendors were below the 353-point threshold)
    District 7 - Rockledge: MHRC 395.42; PSFI 378.67
    District 8 - North Fort Myers: Coastal Behavioral 399.67; PSFI 350 (below the 353-point threshold)
    District 8 - Naples: MHRC 381.33; PSFI 356.67
    District 11: Bayview 393.33; PSFI 377.67
    District 15: MHRC 379.69; PSFI 366

The foregoing scores include the scores given for Question 26, which asked about the financial resources required to successfully operate a FACT team. PSFI was permitted to present the opinion of an expert statistician concerning the divergences of all the independent evaluators' scores. However, statistical analysis of divergent bid scoring is not generally accepted as probative of anything.7/ Herein, Petitioner's expert applied a concept called the Intraclass Correlation Coefficient (ICC), which purports to measure agreement among all the independent raters in this case. It does not measure capriciousness, unfairness, arbitrariness, or any other deficiency of the public entity bid process recognized by custom, rule, policy, or statute. Previously, it has been applied mostly to psychiatric diagnoses and studies, and it has never been tested as to public procurement.
Petitioner's expert acknowledged that ICC rests on the assumption that when all reviewers' scores can be brought close to a mean, so that they are "repeatable," the scores suggest what a "true score" might be; that a "true score" is a purely theoretical concept; and that divergence of scores between reviewers does not necessarily indicate unfair competition. His process does not even determine whether the outcome of the scoring would be different if measurement error were as he represented it. Therefore, Petitioner's expert's calculations and testing are discredited for this case.
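The record does not disclose which form of ICC the expert computed. Purely as a hedged illustration of the concept the expert invoked, the sketch below computes one common variant, ICC(2,1) under the Shrout-Fleiss two-way random-effects model, from a proposals-by-evaluators matrix; the ratings matrix shown is hypothetical and does not reproduce the expert's actual calculations.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): Shrout & Fleiss two-way random effects, absolute agreement.

    ratings: an (n targets x k raters) array -- here, proposal totals by
    evaluator. Values near 1 indicate close inter-rater agreement.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # one mean per proposal
    col_means = ratings.mean(axis=0)   # one mean per evaluator

    # Two-way ANOVA mean squares.
    ms_rows = k * ((row_means - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((col_means - grand) ** 2).sum() / (k - 1)
    resid = ratings - row_means[:, None] - col_means[None, :] + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical 4-proposal x 3-evaluator matrix (totals on the 470-point scale).
ratings = [
    [377, 370, 368],
    [346, 365, 325],
    [396, 368, 405],
    [352, 370, 319],
]
print(round(icc_2_1(ratings), 3))
```

As the findings note, a coefficient of this kind speaks only to how tightly raters cluster around one another; it says nothing about arbitrariness, capriciousness, or the fairness of the procurement.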

Recommendation

Based on the foregoing Findings of Fact and Conclusions of Law, it is RECOMMENDED that the Department of Children and Family Services enter a Final Order that discards all bids in District 8 - North Fort Myers and awards a FACT team contract to the declared highest scorer in each of the other districts challenged in this case.

DONE AND ENTERED this 21st day of February, 2006, in Tallahassee, Leon County, Florida.

S
ELLA JANE P. DAVIS
Administrative Law Judge
Division of Administrative Hearings
The DeSoto Building
1230 Apalachee Parkway
Tallahassee, Florida 32399-3060
(850) 488-9675 SUNCOM 278-9675
Fax Filing (850) 921-6847
www.doah.state.fl.us

Filed with the Clerk of the Division of Administrative Hearings this 21st day of February, 2006.

Florida Laws (3): 120.57, 287.001, 287.012
MIAMI-DADE COUNTY SCHOOL BOARD vs ROSE DAVIDSON, 16-007495TTS (2016)
Division of Administrative Hearings, Florida Filed: Miami, Florida Dec. 20, 2016 Number: 16-007495TTS Latest Update: Nov. 08, 2019

The Issue

Whether Rose Davidson committed the acts alleged in the Miami-Dade County School Board's Notice of Specific Charges dated April 7, 2017; and, if so, what discipline should be imposed against her.

Findings Of Fact

Based on the evidence presented and the record as a whole, the undersigned makes the following findings of fact: Petitioner is the properly constituted School Board charged with the duty to operate, control, and supervise all public schools within the School District of Miami-Dade County, Florida. In the 2015-2016 school year, Respondent was employed, under a professional services contract, as a first-grade teacher at RPES, a public school in Miami-Dade County. Respondent's employment, and any disciplinary action proposed to be taken against her, is governed by a collective bargaining agreement between the School Board and the United Teachers of Dade, as well as by policies of the School Board and Florida law. Respondent has been employed by the School Board since 1990, nearly 27 years. She spent the first ten years of her career teaching at Westview Elementary and subsequently taught high school for approximately 15 years. She was transferred to the Graham Center in the 2011-2012 school year, where she taught second grade for that school year and the 2012-2013 school year. Respondent was out of work on a period of suspension from the School Board for the 2013-2014 school year. She was reinstated by the School Board based on a Recommended Order issued by an Administrative Law Judge at DOAH in Case No. 13-3418TTS, which found in her favor. She has been at RPES since June 2014 and, at the time of the incident in 2016, was a first-grade teacher there.

Classroom Testing Incident on April 4, 2016

On April 4, 2016, Respondent administered a standardized math test to her first-grade class.1/ It was undisputed that the math test required Respondent to read the questions out loud to the class, who then answered the questions on their individual test sheets. Respondent was assisted during the math testing by a reading coach at the school, Tedria Saunders. Saunders had been employed by the School Board for approximately 12 years and was a certified reading teacher for grades kindergarten through 12. Saunders was acting as a proctor and was expected to observe the students and provide support to Respondent. She stood or sat in the classroom during the course of the math exam and had the freedom, like the teacher, to move around to observe the testing. She testified that her relationship with Respondent had been professional and friendly, and that they had done some curriculum planning together.2/

Count I--Misconduct in Office

During the course of the math test, Saunders observed Respondent engage in several testing irregularities. She saw Respondent providing direct assistance and "giving answers" to several students on the examination. More descriptively, she saw Respondent physically point out the correct answer to several students, stating "you need to fix the answer." Saunders also heard Respondent give verbal answers, prompts, or cues to several students as Respondent walked around the classroom and stood near their desks. As she walked around, Respondent would periodically touch or point to the student test booklet on the desk in front of the student, while making sounds and hand motions directing the student to the correct answer. For example, when a student pointed to an answer, Respondent would give a verbal cue or signal that the proposed answer was either right or wrong. Saunders observed Respondent help approximately six to seven students using these methods.
Significantly, after making these observations, Saunders decided to depart immediately from the classroom, while the testing was still going on, to ask the security guard to summon the appropriate administrator or to report the event herself. After going outside, Saunders made her way across the grassy area outside the classroom to the front office, where she met with the assistant principal and test chairperson, Ines Diaz. She reported to Diaz that Respondent was improperly assisting the students and giving them answers to the standardized math questions. When Diaz pressed Saunders on the plausibility of her observations, Saunders told her that she was "sure of" what she had seen and reported. Diaz did not recall for certain whether she went to the classroom herself, but was certain that Saunders was directed to return to the classroom, continue her observations, and allow the math testing to be completed. The principal, Robin Armstrong, was present briefly during Saunders' initial visit with Diaz and after Saunders returned to the administrative office when the testing was concluded. She too overheard Saunders report testing irregularities by Respondent. After the incident, on April 7, 2016, Armstrong delivered a letter to Respondent warning her not to discuss the matter with any witnesses, students, or other staff members. Pet. Ex. 17.

On May 3, 2016, the administrative investigation was assigned to Detective Sofie Shakir. Among other things, the detective interviewed several of Respondent's students and staff members. Pet. Ex. 7. Her investigation and subsequent findings resulted in the invalidation of the standardized math test for several of Respondent's first-grade students due to test irregularities and improper assistance by Respondent on April 4, 2016. Pet. Ex. 15.3/

A conference-for-the-record (a meeting which may lead to disciplinary action) was held with Respondent on August 26, 2016, nearly five months after the incident. The meeting included her union representative and Helen Pina from the Office of Professional Standards, as well as several other members of the school administration. Pina recorded the results of the meeting in a memorandum prepared pursuant to her duties. Pet. Ex. 4. Pina documented in the memorandum that when she formally confronted Respondent with Saunders' allegations, Respondent stated: "I want to say it was first grade, not 2nd. I performed the tests very professionally. I followed all the directions and no one helped any kids. I followed the directions from the booklet and that is all that I did."4/ More significant was a written statement prepared by Respondent and submitted to the principal just days after the classroom incident. Pet. Ex. 16. Although Respondent wrote that she administered the test "the proper way," again she did not take the opportunity to firmly and positively deny Saunders' allegations or to respond in more detail. This was significant to the undersigned.5/ Rather than offering an outright and emphatic denial of the accusations in her first written response, she instead accused Saunders of misconduct during the math testing. The undersigned found this unusual and an attempt by Respondent to deflect the allegation and steer the blame to Saunders rather than address it head-on.6/ The testimony of Student D.B., called during the hearing, was uncertain at best and lacked any persuasive details to support a finding either way. As a result, his testimony was discounted and given little weight.
The evidence from Principal Armstrong and Assistant Principal Diaz concerning the prompt and contemporaneous reporting by Saunders is consistent with, and corroborates, Saunders' testimony concerning the classroom incident.7/ There was no evidence presented to indicate that Saunders had given any prior inconsistent or conflicting statements, nor was her version of the classroom irregularities impeached or discredited in any material fashion. The undersigned carefully read, studied, and compared a collection of deposition transcripts from seven students who were in Respondent's class the day of the incident. Pet. Exs. 8-14. From those transcripts, only one of the seven students testified that Respondent directly helped or assisted him or her during the standardized math test. See Dep. of J.M., Pet. Ex. 11.8/ The other six testified that Respondent did not help them, nor did they see Respondent help other students answer any test questions. Similarly, only one of the seven students deposed stated that Saunders raised her voice or yelled at anyone during the math examination. See Dep. of S.D., Pet. Ex. 9.9/

In evaluating the weight to be given to the seven student depositions, the undersigned notes several key points regarding the students' ability to accurately recall what occurred and to know what they saw. Initially, all of the students were very young at the time of the incident; while age is not controlling, it should be considered along with other factors. More significantly, none of these very young students was charged with the responsibility to watch or observe the conduct of the teacher, other students, or the proctor during the testing. Rather, they were instructed to concentrate and focus on their own tests, not their surroundings.10/ In fact, a reasonable inference from the circumstances surrounding this incident, or any other standardized classroom testing for that matter, is that during regulated testing of this nature, students would not be looking or turning around to observe what others are doing. Based on the private nature of classroom testing and the warnings that typically precede it, students have a natural inclination to avoid being accused of having "wandering eyes" during classroom testing. On balance, the undersigned is unable to credit the testimony of those students who claim they did not see anything untoward or improper during the testing. Under these circumstances, the fact that the students did not see anything improper does not persuade the undersigned that the incident did not happen the way Proctor Saunders persuasively testified, distinctly recalled, and contemporaneously reported to the assistant principal. As a result of the testimony adduced at the hearing and the reasonable inferences drawn from the evidence, the undersigned concludes that there was sufficient evidence to prove Count I, Misconduct in Office.

Count II--Gross Insubordination

Regarding whether Respondent instructed students to be untruthful if questioned about her assisting them during the testing, five out of the seven deposed students denied this occurred.11/ One student said Respondent told them not to tell anyone she had "helped" them on the test. However, to put this comment in proper context, this student went on to clarify that "helping" them meant just reading the questions to them. Pet. Ex. 8.
As a consequence, the testimony from this student is insignificant.12/ The remaining student, when asked directly whether the teacher told him or her not to tell the truth, responded in deposition only that Respondent said, "don't tell your momma I helped you a little on the test." The description by this student was unclear and conflicting as well. Pet. Ex. 11. In sum, the testimony from this student was not persuasive. The undersigned is persuaded to give some weight and credence to the deposition transcripts of the five students who denied that Respondent told them not to tell the truth if asked. Contrary to the allegation in paragraph 14 of the Notice of Specific Charges, there was no persuasive evidence that Respondent verbally told the students to be untruthful if asked. On Count II, Gross Insubordination, the undersigned finds that the charge was not proven.

Recommendation

Based on the foregoing Findings of Fact and Conclusions of Law, it is RECOMMENDED that the Miami-Dade County School Board enter a final order adopting the Findings of Fact and Conclusions of Law contained in this Recommended Order. It is FURTHER RECOMMENDED that the final order impose a significant period of unpaid suspension against Rose Davidson and require retraining by her on standardized testing protocol.

DONE AND ENTERED this 19th day of July, 2017, in Tallahassee, Leon County, Florida.

S
Robert L. Kilbride
Administrative Law Judge
Division of Administrative Hearings
The DeSoto Building
1230 Apalachee Parkway
Tallahassee, Florida 32399-3060
(850) 488-9675
Fax Filing (850) 921-6847
www.doah.state.fl.us

Filed with the Clerk of the Division of Administrative Hearings this 19th day of July, 2017.

Florida Laws (7): 1012.33, 1012.34, 1012.39, 1012.56, 120.569, 120.57, 90.803
