KPMG CONSULTING, INC. vs DEPARTMENT OF REVENUE, 02-001719BID (2002)
Division of Administrative Hearings, Florida Filed:Tallahassee, Florida May 01, 2002 Number: 02-001719BID Latest Update: Oct. 15, 2002

The Issue The issue to be resolved in this proceeding concerns whether the Department of Revenue (Department or DOR) acted clearly erroneously, contrary to competition, arbitrarily, or capriciously when it evaluated the Petitioner's submittal in response to an Invitation to Negotiate (ITN) for a child support enforcement automated management system - compliance enforcement (CAMS CE), awarding the Petitioner a score of 140 of a possible 230 points and disqualifying the Petitioner from further consideration in the invitation to negotiate process.

Findings Of Fact Procurement Background: The Respondent, the Department of Revenue (DOR), is a state agency charged with the responsibility of administering the Child Support Enforcement Program (CSE) for the State of Florida, in accordance with Section 20.21(h), Florida Statutes. The DOR issued an ITN for the CAMS Compliance Enforcement implementation on February 1, 2002. This procurement is designed to give the Department a "state of the art system" that will meet all Federal and State Regulations and Policies for Child Support Enforcement, improve the effectiveness of collections of child support and automate enforcement to the greatest extent possible. It will automate data processing and other decision-support functions and allow rapid implementation of changes in regulatory requirements resulting from revised Federal and State Regulation Policies and Florida initiatives, including statutory initiatives.

CSE services suffer from dependence on an inadequate computer system known as the "FLORIDA System," which was not originally designed for CSE and is housed and administered in another agency. The current FLORIDA System cannot meet the Respondent's needs for automation, does not meet the Respondent's management and reporting requirements, and does not provide the flexibility the Respondent needs. The DOR needs a system that will ensure the integrity of its data, will allow the Respondent to consolidate some of the "stand-alone" systems it currently has in place to remedy certain deficiencies of the FLORIDA System, and will help the Child Support Enforcement system and program secure needed improvements. The CSE is also governed by Federal Policy, Rules and Reporting requirements concerning performance. In order to improve its effectiveness in responding to its business partners in the court system, the Department of Children and Family Services, the Sheriff's Departments, employers, financial institutions and workforce development boards, as well as to the Federal requirements, it has become apparent that the CSE agency and system needs a new computer system with the flexibility to respond to the complete requirements of the CSE system.

In order to accomplish its goal of acquiring a new computer system, the CSE began the procurement process. The Department hired a team from the Northrup Grumman Corporation, led by Dr. Edward Addy, to head the procurement development process. Dr. Addy began a process of defining CSE needs and then developing an ITN which reflected those needs. The process included many individuals in CSE who would be the daily users of the new system. These individuals included Andrew Michael Ellis, Revenue Program Administrator III for Child Support Enforcement Compliance Enforcement; Frank Doolittle, Process Manager for Child Support Enforcement Compliance Enforcement; and Harold Bankirer, Deputy Program Director for the Child Support Enforcement Program.

There are two alternative strategies for implementing a large computer system such as CAMS CE: a customized system developed especially for CSE, or a Commercial Off The Shelf/Enterprise Resource Plan (COTS/ERP) solution. A COTS/ERP system is a pre-packaged software program which is implemented as a system-wide solution. Because there is no existing COTS/ERP for child support programs, the team recognized that customization would be required to make the product fit its intended use. 
The team recognized that other system attributes were also important, such as the ability to convert "legacy data" and to address such factors as data base complexity and data base size.

The Evaluation Process: The CAMS CE ITN put forth a tiered process for selecting vendors for negotiation. The first tier involved an evaluation of key proposal topics. The key topics were each vendor's past corporate experience (past projects) and its key staff. A vendor was required to score 150 out of a possible 230 points to enable it to continue to the next stage or tier of consideration in the procurement process. The evaluation team wanted to remove, at an early stage, vendors who did not have a serious chance of becoming the selected vendor. This would prevent an unnecessary expenditure of time and resources by both the CSE and the vendor. The ITN required that the vendors provide three corporate references showing their past corporate experience for evaluation. In other words, the references involved past jobs they had done for other entities which showed relevant experience in relation to the ITN specifications. The Department provided forms to the vendors, who in turn provided them to the corporate references that they themselves selected. The vendors also included in their proposals a summary of their corporate experience, drafted by the vendors themselves. Table 8.2 of the ITN provided positive and negative criteria by which the corporate references would be evaluated. The list in Table 8.2 is not meant to be exhaustive and is in the nature of an "included but not limited to" standard. The vendors had the freedom to select references whose projects the vendors believed best fit the criteria upon which each proposal was to be evaluated. For the key staff evaluation standard, the vendors provided summary sheets as well as résumés for each person filling a lead role as a key staff member on their proposed project team. Having a competent project team was deemed by the Department to be critical to the success of the procurement and implementation of a large project such as the CAMS CE. Table 8.2 of the ITN provided the criteria by which the key staff would be evaluated.

The Evaluation Team: The CSE selected an evaluation team which included Dr. Addy, Mr. Ellis, Mr. Bankirer, Mr. Doolittle and Mr. Esser. Although Dr. Addy had not previously performed the role of an evaluator, he has responded to several procurements for Florida government agencies. He is familiar with Florida's procurement process and has a doctorate in Computer Science as well as seventeen years of experience in information technology. Dr. Addy was the leader of the Northrup Grumman team which primarily developed the ITN with the assistance of personnel from the CSE program itself. Mr. Ellis, Mr. Bankirer and Mr. Doolittle participated in the development of the ITN as well. Mr. Bankirer and Mr. Doolittle had been evaluators in other procurements for Federal and State agencies prior to joining the CSE program. Mr. Esser is the Chief of the Bureau of Information Technology at the Department of Highway Safety and Motor Vehicles and has experience in similar large computer system procurements at that agency. The evaluation team selected by the Department thus has extensive experience in computer technology, as well as knowledge of the requirements of the subject system. 
The Department provided training regarding the evaluation process to the evaluators as well as a copy of the ITN, the Source Selection Plan and the Source Selection Team Reference Guide. Section 6 of the Source Selection Team Reference Guide entitled "Scoring Concepts" provided guidance to the evaluators for scoring proposals. Section 6.1 entitled "Proposal Evaluation Specification in ITN Section 8" states: Section 8 of the ITN describes the method by which proposals will be evaluated and scored. SST evaluators should be consistent with the method described in the ITN, and the source selection process documented in the Reference Guide and the SST tools are designed to implement this method. All topics that are assigned to an SST evaluator should receive at the proper time an integer score between 0 and 10 (inclusive). Each topic is also assigned a weight factor that is multiplied by the given score in order to place a greater or lesser emphasis on specific topics. (The PES workbook is already set to perform this multiplication upon entry of the score.) Tables 8-2 through 8-6 in the ITN Section 8 list the topics by which the proposals will be scored along with the ITN reference and evaluation and scoring criteria for each topic. The ITN reference points to the primary ITN section that describes the topic. The evaluation and scoring criteria list characteristics that should be used to affect the score negatively or positively. While these characteristics should be used by each SST evaluator, each evaluator is free to emphasize each characteristic more or less than any other characteristic. In addition, the characteristics are not meant to be inclusive, and evaluators may consider other characteristics that are not listed . . . (Emphasis supplied). The preponderant evidence demonstrates that all the evaluators followed these instructions in conducting their evaluations and none used a criterion that was not contained in the ITN, either expressly or implicitly. Scoring Method: The ITN used a 0 to 10 scoring system. The Source Selection Team Guide required that the evaluators use whole integer scores. They were not required to start at "7," which was the average score necessary to achieve a passing 150 points, and then to score up or down from 7. The Department also did not provide guidance to the evaluators regarding a relative value of any score, i.e., what is a "5" as opposed to a "6" or a "7." There is no provision in the ITN which establishes a baseline score or starting point from which the evaluators were required to adjust their scores. The procurement development team had decided to give very little structure to the evaluators as they wanted to have each evaluator score based upon his or her understanding of what was in the proposal. Within the ITN the development team could not sufficiently characterize every potential requirement, in the form that it might be submitted, and provide the consistency of scoring that one would want in a competitive environment. This open-ended approach is a customary method of scoring, particularly in more complex procurements in which generally less guidance is given to evaluators. 
Providing precise guidance regarding the relative value of any score, regarding the imposition of a baseline score or starting point, from which evaluators were required to adjust their scores, instruction as to weighing of scores and other indicia of precise structure to the evaluators would be more appropriate where the evaluators themselves were not sophisticated, trained and experienced in the type of computer system desired and in the field of information technology and data retrieval generally. The evaluation team, however, was shown to be experienced and trained in information technology and data retrieval and experienced in complex computer system procurement. Mr. Barker is the former Bureau Chief of Procurement for the Department of Management Services. He has 34 years of procurement experience and has participated in many procurements for technology systems similar to CAMS CE. He established that the scoring system used by the Department at this initial stage of the procurement process is a common method. It is customary to leave the numerical value of scores to the discretion of the evaluators based upon each evaluator's experience and review of the relevant documents. According wider discretion to evaluators in such a complex procurement process tends to produce more objective scores. The evaluators scored past corporate experience (references) and key staff according to the criteria in Table 8.2 of the ITN. The evaluators then used different scoring strategies within the discretion accorded to them by the 0 to 10 point scale. Mr. Bankirer established a midrange of 4 to 6 and added or subtracted points based upon how well the proposal addressed the CAMS CE requirements. Evaluator Ellis used 6 as his baseline and added or subtracted points from there. Dr. Addy evaluated the proposals as a composite without a starting point. Mr. Doolittle started with 5 as an average score and then added or subtracted points. Mr. Esser gave points for each attribute in Table 8.2, for key staff, and added the points for the score. For the corporate reference criterion, he subtracted a point for each attribute the reference lacked. As each of the evaluators used the same methodology for the evaluation of each separate vendor's proposal, each vendor was treated the same and thus no specific prejudice to KPMG was demonstrated. Corporate Reference Evaluation: KPMG submitted three corporate references: Duke University Health System (Duke), SSM Health Care (SSM), and Armstrong World Industries (Armstrong). Mr. Bankirer gave the Duke reference a score of 6, the SSM reference a score of 5 and the Armstrong reference a score of 7. Michael Strange, the KPMG Business Development Manager, believed that 6 was a low score. He contended that an average score of 7 was required to make the 150-point threshold for passage to the next level of the ITN consideration. Therefore, a score of 7 would represent minimum compliance, according to Mr. Strange. However, neither the ITN nor the Source Selection Team Guide identified 7 as a minimally compliant score. Mr. Strange's designation of 7 as a minimally compliant score is not provided for in the specifications or the scoring instructions. Mr. James Focht, Senior Manager for KPMG testified that 6 was a low score, based upon the quality of the reference that KPMG had provided. However, Mr. 
Bankirer found that the Duke reference was actually a small-sized project, with little system development attributes, and that it did not include information regarding a number of records, the data base size involved, the estimated and actual costs and attributes of data base conversion. Mr. Bankirer determined that the Duke reference had little similarity to the CAMS CE procurement requirements and did not provide training or data conversion as attributes for the Duke procurement which are attributes necessary to the CAMS CE procurement. Mr. Strange and Mr. Focht admitted that the Duke reference did not specifically contain the element of data conversion and that under the Table 8.2, omission of this information would negatively affect the score. Mr. Focht admitted that there was no information in the Duke Health reference regarding the number of records and the data base size, all of which factors diminish the quality of Duke as a reference and thus the score accorded to it. Mr. Strange opined that Mr. Bankirer had erred in determining that the Duke project was a significantly small sized project since it only had 1,500 users. Mr. Focht believed that the only size criterion in Table 8.2 was the five million dollar cost threshold, and, because KPMG indicated that the project cost was greater than five million dollars, that KPMG had met the size criterion. Mr. Focht believed that evaluators had difficulty in evaluating the size of the projects in the references due to a lack of training. Mr. Focht was of the view that the evaluator should have been instructed to make "binary choices" on issues such as size. He conceded, however, that evaluators may have looked at other criteria in Table 8.2 to determine the size of the project, such as database size and number of users. However, the corporate references were composite scores by the evaluators, as the ITN did not require separate scores for each factor in Table 8.2. Therefore, Mr. Focht's focus on binary scoring for size, to the exclusion of other criteria, mis-stated the objective of the scoring process. The score given to the corporate references was a composite of all of the factors in Table 8.2, and not merely monetary value size. Although KPMG apparently contends that size, in terms of dollar value, is the critical factor in determining the score for a corporate reference, the vendor questions and answers provided at the pre-proposal conference addressed the issue of relevant criteria. Question 40 of the vendor questions and answers, Volume II, did not single out "project greater than five million dollars" as the only size factor or criterion. QUESTION: Does the state require that each reference provided by the bidder have a contract value greater than $5 million; and serve a large number of users; and include data conversion from a legacy system; and include training development? ANSWER: To get a maximum score for past corporate experience, each reference must meet these criteria. If the criteria are not fully met, the reference will be evaluated, but will be assigned a lower score depending upon the degree to which the referenced project falls short of these required characteristics. Therefore, the cost of the project is shown to be only one component of a composite score. Mr. Strange opined that Mr. Bankirer's comment regarding the Duke reference, "little development, mostly SAP implementation" was irrelevant. Mr. 
Strange's view was that the CAMS CE was not a development project and Table 8.2 did not specifically list development as a factor on which proposals would be evaluated. Mr. Focht stated that in his belief Mr. Bankirer's comment suggested that Mr. Bankirer did not understand the link between the qualifications in the reference and the nature of KPMG's proposal. Both Strange and Focht believe that the ITN called for a COTS/ERP solution. Mr. Focht stated that the ITN references a COTS/ERP approach numerous times. Although many of the references to COTS/ERP in the ITN also refer to development, Mr. Strange also admitted that the ITN was open to a number of approaches. Furthermore, both the ITN and the Source Selection Team Guide stated that the items in Table 8.2 are not all inclusive and that the evaluators may look to other factors in the ITN. Mr. Bankirer noted that there is no current CSE COTS/ERP product on the market. Therefore, some development will be required to adapt an off-the-shelf product to its intended use as a child support case management system. Mr. Bankirer testified that the Duke project was a small-size project with little development. Duke has three sites while CSE has over 150 sites. Therefore, the Duke project is smaller than CAMS. There was no information provided in the KPMG submittal regarding data base size and number of records with regard to the Duke project. Mr. Bankirer did not receive the information he needed to infer a larger sized-project from the Duke reference. Mr. Esser also gave the Duke reference a score of 6. The reference did not provide the data base information required, which was the number of records in the data base and the number of "gigabytes" of disc storage to store the data, and there was no element of legacy conversion. Dr. Addy gave the Duke reference a score of 5. He accepted the dollar value as greater than five million dollars. He thought that the Duke Project may have included some data conversion, but it was not explicitly stated. The Duke customer evaluated training so he presumed training was provided with the Duke project. The customer ratings for Duke were high as he expected they would be, but similarity to the CAMS CE system was not well explained. He looked at size in terms of numbers of users, number of records and database size. The numbers that were listed were for a relatively small-sized project. There was not much description of the methodology used and so he gave it an overall score of 5. Mr. Doolittle gave the Duke reference a score of 6. He felt that it was an average response. He listed the number of users, the number of locations, that it was on time and on budget, but found that there was no mention of data conversion, database size or number of records. (Consistent with the other evaluators). A review of the evaluators comments makes it apparent that KPMG scores are more a product of a paucity of information provided by KPMG corporate references instead of a lack of evaluator knowledge of the material being evaluated. Mr. Ellis gave a score of 6 for the Duke reference. He used 6 as his baseline. He found the required elements but nothing more justifying in his mind raising the score above 6. Mr. Focht and Mr. Strange expressed the same concerns regarding Bankirer's comment, regarding little development, for the SSM Healthcare reference as they had for the Duke Health reference. However, both Mr. Strange and Mr. Focht admitted that the reference provided no information regarding training. Mr. 
Strange admitted that the reference had no information regarding data conversion. Training and data conversion are criteria contained in Table 8.2. Mr. Strange also admitted that KPMG had access to Table 8.2 before the proposal was submitted and could have included the information in the proposal. Mr. Bankirer gave the SSM reference a score of 5. He commented that the SAP implementation was not relevant to what the Department was attempting to do with the CAMS CE system. CAMS CE does not have any materials management or procurement components, which was the function of the SAP components and the SSM reference procurement or project. Additionally, there was no training indicated in the SSM reference. Mr. Esser gave the SSM reference a score of 3. His comments were "no training provided, no legacy data conversion, project evaluation was primarily for SAP not KPMG". However, it was KPMG's responsibility in responding to the ITN to provide project information concerning a corporate reference in a clear manner rather than requiring that an evaluator infer compliance with the specifications. Mr. Focht believed that legacy data conversion could be inferred from the reference's description of the project. Mr. Strange opined that Mr. Esser's comment was inaccurate as KPMG installed SAP and made the software work. Mr. Esser gave the SSM reference a score of 3 because the reference described SAP's role, but not KPMG's role in the installation of the software. When providing information in the reference SSM gave answers relating to SAP to the questions regarding system capability, system usability, system reliability but did not state KPMG's role in the installation. SAP is a large enterprise software package. This answer created an impression of little KPMG involvement in the project. Dr. Addy gave the SSM reference a score of 6. Dr. Addy found that the size was over five million dollars and customer ratings were high except for a 7 for usability with reference to a "long learning curve" for users. Data conversion was implied. There was no strong explanation of similarity to CAMS CE. It was generally a small-sized project. He could reason some similarity into it, even though it was not well described in the submittal. Mr. Doolittle gave the SSM reference a score of 6. Mr. Doolittle noted, as positive factors, that the total cost of the project was greater than five million dollars, that it supported 24 sites and 1,500 users as well "migration from a mainframe." However, there were negative factors such as training not being mentioned and a long learning curve for its users. Mr. Ellis gave a score of 6 for SSM, feeling that KPMG met all of the requirements but did not offer more than the basic requirements. Mr. Strange opined that Mr. Bankirer, Dr. Addy and Mr. Ellis (evaluators 1, 5 and 4) were inconsistent with each other in their evaluation of the SSM reference. He stated that this inconsistency showed a flaw in the evaluation process in that the evaluators did not have enough training to uniformly evaluate past corporate experience, thereby, in his view, creating an arbitrary evaluation process. Mr. Bankirer gave the SSM reference a score of 5, Ellis a score of 6, and Addy a score of 6. Even though the scores were similar, Mr. Strange contended that they gave conflicting comments regarding the size of the project. Mr. 
Ellis stated that the size of the project was hard to determine as the cost was listed as greater than five million dollars and the database size given, but the number of records was not given. Mr. Bankirer found that the project was low in cost and Dr. Addy stated that over five million dollars was a positive factor in his consideration. However, the evaluators looked at all of the factors in Table 8.2 in scoring each reference. Other factors that detracted from KPMG's score for the SSM reference were: similarity to the CAMS system not being explained, according to Dr. Addy; no indication of training (all of the evaluators); the number of records not being provided (evaluator Ellis); little development shown (Bankirer) and usability problems (Dr. Addy). Mr. Strange admitted that the evaluators may have been looking at other factors besides the dollar value size in order to score the SSM reference. Mr. Esser gave the Armstrong reference a score of 6. He felt that the reference did not contain any database information or cost data and that there was no legacy conversion shown. Dr. Addy also gave Armstrong a score of 6. He inferred that this reference had data conversion as well as training and the high dollar volume which were all positive factors. He could not tell, however, from the project description, what role KPMG actually had in the project. Mr. Ellis gave a score of 7 for the Armstrong reference stating that the Armstrong reference offered more information regarding the nature of the project than had the SSM and Duke references. Mr. Bankirer gave KPMG a score of 7 for the Armstrong reference. He found that the positive factors were that the reference had more site locations and offered training but, on the negative side, was not specific regarding KPMG's role in the project. Mr. Focht opined that the evaluators did not understand the nature of the product and services the Department was seeking to obtain as the Department's training did not cover the nature of the procurement and the products and services DOR was seeking. However, when he made this statement he admitted he did not know the evaluators' backgrounds. In fact, Bankirer, Ellis, Addy and Doolittle were part of a group that developed the ITN and clearly knew what CSE was seeking to procure. Further, Mr. Esser stated that he was familiar with COTS and described it as a commercial off-the-shelf software package. Mr. Esser explained that an ERP solution or Enterprise Resource Plan is a package that is designed to do a series of tasks, such as produce standard reports and perform standard operations. He did not believe that he needed further training in COTS/ERP to evaluate the proposals. Mr. Doolittle was also familiar with COTS/ERP and believed, based on the amount of funding, that it was a likely response to the ITN. Dr. Addy's doctoral dissertation research was in the area of software re-use. COTS is one of the components that comprise a development activity and re-use. He became aware during his research of how COTS packages are used in software engineering. He has also been exposed to ERP packages. ERP is only one form of a COTS package. In regard to the development of the ITN and the expectations of the development team, Dr. Addy stated that they were amenable to any solution that met the requirements of the ITN. They fully expected the compliance solutions were going to be comprised of mostly COTS and ERP packages. Furthermore, the ITN in Section 1.1, on page 1-2 states, ". . . 
FDOR will consider an applicable Enterprise Resource Planning (ERP) or Commercial Off the Shelf (COTS) based solution in addition to custom development." Clearly, this ITN was an open procurement, and to train evaluators on only one of the alternative solutions would have biased the evaluation process.

Mr. Doolittle gave each of the KPMG corporate references a score of 6. Mr. Strange and Mr. Focht questioned the appropriateness of these scores as the corporate references themselves gave KPMG average ratings of 8.3, 8.2 and 8.0. However, Mr. Focht admitted that Mr. Doolittle's comments regarding the corporate references were a mixture of positive and negative comments. Mr. Focht believed, however, that because the reference corporations considered the same factors in providing ratings on the reference forms, it was inconsistent for Mr. Doolittle to separately evaluate the same factors that the corporations had already rated. However, there is no evidence in the record that KPMG provided Table 8.2 to the companies completing the reference forms or that the companies consulted the table when completing their reference forms. Therefore, KPMG did not prove that it had taken all measures available to it to improve its scores. Moreover, Mr. Focht's criticism would impose a requirement on Mr. Doolittle's evaluation which was not supported by the ITN. Mr. Focht admitted that there were no criteria in the ITN which limited the evaluators' discretion in scoring to the ratings given to the corporate references by those corporate reference customers. All of the evaluators used Table 8.2 as their guide for scoring the corporate references.

As part of his evaluation, Dr. Addy looked at the methodology used by the proposers in each of the corporate references to implement the solution for that reference company. He was looking at methodology to determine its degree of similarity to CAMS CE. While methodology is not specifically listed in Table 8.2 as a measure of similarity to CAMS, Table 8.2 states that the list is not all inclusive. Clearly, methodology is a measure of similarity and therefore is not an arbitrary criterion. Moreover, as Dr. Addy used the same process and criteria in evaluating all of the proposals, there was no prejudice to KPMG by use of this criterion since all vendors were subjected to it.

Mr. Strange stated that KPMG appeared to receive lower scores for SAP applications than other vendors. For example, evaluator 1 gave a score of 7 to Deloitte's reference for Suntax. Suntax is an SAP implementation. It is difficult to draw comparisons across vendors, yet the evaluators consistently found that KPMG references lacked key elements such as data conversion, information on starting and ending costs, and information on database size. All of these missing elements contributed to a reduction in KPMG's scores. Nevertheless, KPMG received average scores of 5.5 for Duke, 5.7 for SSM and 6.3 for Armstrong, compared with the score of 7 received by Deloitte for Suntax. There is a gap of only 0.7 to 1.5 points between Deloitte's and KPMG's scores for SAP implementations, despite the deficient information within KPMG's corporate references.

Key Staff Criterion: The proposals contain a summary of the experience of key staff and attached résumés. KPMG's proposed key staff person for Testing Lead was Frank Traglia. Mr. Traglia's summary showed that he had 25 years' experience, respectively, in the areas of child support enforcement, information technology, project management and testing. 
Strange and Focht admitted that Traglia's résumé did not specifically list any testing experience. Mr. Focht further admitted that it was not unreasonable for evaluators to give the Testing Lead a lower score due to the lack of specific testing information in Traglia's résumé. Mr. Strange explained that the résumé was from a database of résumés. The summary sheet, however, was prepared by those KPMG employees who prepared the proposal. All of the evaluators resolved the conflicting information between the summary sheet and the résumé by crediting the résumé as more accurate. Each evaluator thought that the résumé was more specific and expected to see specific information regarding testing experience on the résumé for someone proposed as the Testing Lead person. Evaluators Addy and Ellis gave scores to the Testing Lead criterion of 4 and 5. Mr. Ron Vandenberg (evaluator 8) gave the Testing Lead a score of 9. Mr. Vandenberg was the only evaluator to give the Testing Lead a high score. The other evaluators gave the Testing Lead an average score of 4.2. The Vandenberg score thus appears anomalous. All of the evaluators gave the Testing Lead a lower score as it did not specifically list testing experience. Dr. Addy found that the summary sheet listed 25-years of experience in child support enforcement, information technology, and project management and system testing. As he did not believe this person had 100 years of experience, he assumed those experience categories ran concurrently. A strong candidate for Testing Lead should demonstrate a combination of testing experience, education and certification, according to Dr. Addy. Mr. Doolittle also expected to see testing experience mentioned in the résumé. When evaluating the Testing Lead, Mr. Bankirer first looked at the team skills matrix and found it interesting that testing was not one of the categories of skills listed for the Testing Lead. He then looked at the summary sheet and résumé from Mr. Traglia. He gave a lower score to Traglia as he thought that KPMG should have put forward someone with demonstrable testing experience. The evaluators gave a composite score to key staff based on the criteria in Table 8.2. In order to derive the composite score that he gave each staff person, Mr. Esser created a scoring system wherein he awarded points for each attribute in Table 8.2 and then added the points together to arrive at a composite score. Among the criteria he rated, Mr. Esser awarded points for CSE experience. Mr. Focht and Mr. Strange contended that since the term CSE experience is not actually listed in Table 8.2 that Mr. Esser was incorrect in awarding points for CSE experience in his evaluation. Table 8.2 does refer to relevant experience. There is no specific definition provided in Table 8.2 for relevant experience. Mr. Focht stated that relevant experience is limited to COTS/ERP experience, system development, life cycle and project management methodologies. However, these factors are also not listed in Table 8.2. Mr. Strange limited relevance to experience in the specific role for which the key staff person was proposed. This is a limitation that also is not imposed by Table 8.2. CSE experience is no more or less relevant than the factors posited by KPMG as relevant experience. Moreover, KPMG included a column in its own descriptive table of key staffs for CSE experience. KPMG must have seen this information as relevant if it included it in its proposal as well. 
Inclusion of this information in its proposal demonstrated that KPMG must have believed CSE experience was relevant at the time it submitted its proposal. Mr. Strange held the view that, at the bidders' conference, in a reply to a vendor question, the Department representative stated that CSE experience was not required and that, therefore, Mr. Esser could not use such experience to evaluate key staff. Question 47 of the Vendor Questions and Answers, Volume 2 stated: QUESTION: In scoring the Past Corporate Experience section, Child Support experience is not mentioned as a criterion. Would the State be willing to modify the criteria to include at least three Child Support implementations as a requirement? ANSWER: No. However, a child support implementation that also meets the other characteristics (contract value greater than $5 million, serves a large number of users, includes data conversion from a legacy system and includes training development) would be considered "similar to CAMS CE." The Department's statement involved the scoring of corporate experience, not key staff. It was inapplicable to Mr. Esser's scoring system.

Mr. Esser gave the Training Lead a score of 1. According to Esser, the Training Lead did not have a ten-year résumé, for which he deducted one point. The Training Lead had no specialty certification or extensive experience, had no child support experience, and received no points for those attributes. Mr. Esser added one point for the minimum of four years of specific experience and one point for the relevance of his education. Mr. Esser gave the Project Manager a score of 5. The Project Manager had a ten-year résumé and required references and received a point for each. He gave two points for exceeding the minimum required information technology experience. The Project Manager had twelve years of project management experience, for a score of one point, but lacked certification, a relevant education and child support enforcement experience, for which he was accorded no points. Mr. Esser also scored the Project Liaison person. According to Mr. Focht, the Project Liaison should have received a higher score since she has a professional history of having worked for the state technology office. Mr. Esser, however, stated that she did not have four years of specific experience and did not have extensive experience in the field, although she had a relevant education. Mr. Esser gave the Software Lead person a score of 4. The Software Lead, according to Mr. Focht, had a long set of experiences with implementing SAP solutions for a wide variety of different clients and should have received a higher score. Mr. Esser gave a point each for having a ten-year résumé, four years of specific experience in software, extensive experience in this area and relevant education. According to Mr. Focht, the Database Lead had experience with database pools, including the Florida Retirement System, and should have received more points. Mr. Strange concurred with Mr. Focht in stating that Esser had given low scores to key staff and that the staff had good experience, which should have generated more points. Mr. Strange believed that Mr. Esser's scoring was inconsistent but provided no basis for that conclusion.

Other evaluators also gave key staff positions scores of less than 7. Dr. Addy gave the Software Lead person a score of 5. The Software Lead had 16 years of experience and SAP development experience as positive factors but had no development lead experience. 
He had a Bachelor of Science and a Master of Science in Mechanical Engineering and a Master's in Business Administration, which were not good matches in education for the role of a Software Lead person. Dr. Addy gave the Training Lead person a score of 5. The Training Lead had six years of consulting experience, a background in SAP consulting and some training experience, but did not have certification or education in training. His educational background also was electrical engineering, which is not a strong background for a training person. Dr. Addy gave the subcontractor managers a score of 5. Two of the subcontractors did not list managers at all, which detracted from the score. Mr. Doolittle also scored the Training Lead person; he believed that, based on his experience and training, it was an average response.

Table 8.2 contained an item under which a proposer could have points deducted from a score if the key staff person's references were not excellent. The Department did not check references at this stage in the evaluation process. As a result, the evaluators simply did not consider that item when scoring. No proposer's score was adversely affected thereby. KPMG contends that checking references would have given the evaluators greater insight into the work done by those individuals and their relevance and capabilities in the project team. Mr. Focht admitted, however, that any claimed effect on KPMG's score is conjectural. Mr. Strange stated that without reference checks, information in the proposals could not be validated, but he provided no basis for his opinion that reference checking was necessary at this preliminary stage of the evaluation process. Dr. Addy stated that the process called for checking references during the timeframe of oral presentations. They did not expect the references to change any scores at this point in the process. KPMG asserted that references should be checked to ascertain the veracity of the information in the proposals. However, even if the information in some other proposal was inaccurate, it would not change the outcome for KPMG. KPMG would still not have the required number of points to advance to the next evaluation tier.

Divergency in Scores: The Source Selection Plan established a process for resolving divergent scores. Any item receiving scores with a range of 5 or more was determined to be divergent. The plan provided that the Coordinator identify divergent scores and then report to the evaluators that there were divergent scores for that item. The Coordinator was precluded from telling an evaluator whether his score was the divergent score, i.e., the highest or lowest score. Evaluators would then review that item, but were not required to change their scores. The purpose of the divergent score process was to have evaluators review their scores to see if there were any misperceptions or errors that skewed the scores. The team wished to avoid having any influence on the evaluators' scores. Mr. Strange testified that the Department did not follow the divergent score process in the Source Selection Plan because the coordinator did not tell the evaluators why the scores were divergent. Mr. Strange stated that the evaluator should have been informed which scores were divergent. The Source Selection Plan merely instructed the coordinator to inform the evaluators of the reason why the scores were divergent. Scores were inherently divergent if there was a five-point score spread; the reason for the divergence was self-explanatory. 
The evaluators stated that they scored the proposals, submitted the scores, and each received an e-mail from Debbie Stephens informing him that there were divergent scores and that they should consider re-scoring. None of the evaluators ultimately changed their scores. Mr. Esser's scores were the lowest of the divergent scores, but he did not re-score his proposals as he had spent a great deal of time on the initial scoring and felt his scores to be valid. Neither of KPMG's witnesses, Focht and Strange, provided more than speculation regarding the effect of the divergent scores on KPMG's ultimate score and any role the divergent scoring process may have had in KPMG not attaining the 150-point passing score.

Deloitte - Suntax Reference: Susan Wilson, a Child Support Enforcement employee connected with the CAMS project, signed a reference for Deloitte Consulting regarding the Suntax System. Mr. Focht was concerned that the evaluators were influenced by her signature on the reference form. Mr. Strange further stated that having someone who is heavily involved in the project sign a reference did not appear to be fair. He was not able to state any positive or negative effect on KPMG of Wilson's reference for Deloitte, however. Evaluator Esser has met Susan Wilson but has had no significant professional interaction with her. He could not recall anything that he knew about Ms. Wilson that would favorably influence him in scoring the Deloitte reference. Dr. Addy also was not influenced by Wilson. Mr. Doolittle has only worked with Wilson for a very short time and did not know her well. He has also evaluated other proposals where department employees were a reference and was not influenced by that either. Mr. Ellis has only known Wilson for two to four months. Her signature on the reference form did not influence him either positively or negatively. Mr. Bankirer had not known Wilson for a long time when he evaluated the Suntax reference. He took the reference at face value and was not influenced by Wilson's signature. It is not unusual for someone within an organization to create a reference for a company that is competing for work to be done for the organization.

Recommendation Having considered the foregoing Findings of Fact, Conclusions of Law, the evidence of record and the pleadings and arguments of the parties, it is, therefore, RECOMMENDED that a final order be entered by the State of Florida Department of Revenue upholding the proposed agency action which disqualified KPMG from further participation in the evaluation process regarding the subject CAMS CE Invitation to Negotiate. DONE AND ENTERED this 26th day of September, 2002, in Tallahassee, Leon County, Florida. P. MICHAEL RUFF Administrative Law Judge Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-3060 (850) 488-9675 SUNCOM 278-9675 Fax Filing (850) 921-6847 www.doah.state.fl.us Filed with Clerk of the Division of Administrative Hearings this 26th day of September, 2002. COPIES FURNISHED: Cindy Horne, Esquire Earl Black, Esquire Department of Revenue Post Office Box 6668 Tallahassee, Florida 32399-0100 Robert S. Cohen, Esquire D. Andrew Byrne, Esquire Cooper, Byrne, Blue & Schwartz, LLC 1358 Thomaswood Drive Tallahassee, Florida 32308 Seann M. Frazier, Esquire Greenburg, Traurig, P.A. 101 East College Avenue Tallahassee, Florida 32302 Bruce Hoffmann, General Counsel Department of Revenue 204 Carlton Building Tallahassee, Florida 32399-0100 James Zingale, Executive Director Department of Revenue 104 Carlton Building Tallahassee, Florida 32399-0100

Florida Laws (3) 120.569, 120.57, 20.21
# 1
KETURA BOUIE | K. B. vs DEPARTMENT OF HEALTH AND REHABILITATIVE SERVICES, 96-004200 (1996)
Division of Administrative Hearings, Florida Filed:Tallahassee, Florida Sep. 04, 1996 Number: 96-004200 Latest Update: Jun. 09, 1997

The Issue Whether Ketura Bouie suffers from “retardation”, as that term is defined by Section 393.063(43), Florida Statutes, and therefore qualifies for developmental services offered by the Respondent agency under Chapter 393, Florida Statutes.

Findings Of Fact Ketura Bouie is 15 years old. She currently resides in Tallahassee, Florida. She is enrolled in a new school after transferring from Chatahoochee. Ketura has had several "social" promotions from grade to grade over the years. Her application for developmental services has been denied by the Respondent agency.

Wallace Kennedy, Ph.D., is a Board-certified and Florida-licensed clinical psychologist. He was accepted as an expert in clinical psychology and the testing of children. He conducted a psychological evaluation of Ketura on April 12, 1995, for which he provided a written narrative dated April 13, 1995. His narrative was admitted in evidence. Ketura was 13 years old at the time of Dr. Kennedy's evaluation. He administered three standardized tests which are recognized and accepted for determining applicants' eligibility for developmental services. These tests were a wide range achievement test, the Wechsler Intelligence Scale for Children-Revised (WISC-R), and the Vineland Adaptive Behavior Scale (Vineland).

The wide range achievement test generally measures literacy. Ketura recognized only half of the upper-case letters of the alphabet and only a few three-letter kindergarten words. Her results indicated that she has the achievement level expected of a five-and-a-half-year-old kindergarten student, even though she was then placed in the seventh grade. In Dr. Kennedy's view, there is "no chance Ketura will become functionally literate."

The WISC-R measures intellectual functioning and academic aptitude without penalizing the child for handicaps. The mean score on this test is 100. To score two or more standard deviations below this mean, a subject must score 70 or below. All of Ketura's WISC-R scores on the test administered by Dr. Kennedy in April 1995 were well below 70. They consisted of a verbal score of 46, a performance score of 46, and a full scale score of 40. Ketura's full scale IQ of 40 is in the lowest tenth of the first percentile and represents a low moderate level of mental retardation. Ketura's full scale score of 40 is the lowest result that the WISC-R can measure.

The Vineland measures communication, daily living skills, and socialization. Ketura's composite score for Dr. Kennedy on the Vineland was 42. In conducting the Vineland test, Dr. Kennedy relied on information obtained through his own observation of Ketura and information obtained from Ketura's mother. It is typical in the field of clinical psychology to rely on information supplied by parents and caregivers, provided they are determined to be reliable observers. Dr. Kennedy assessed Ketura's mother to be a reliable observer. Dr. Kennedy's Vineland test revealed that Ketura has a social maturity level of about six years of age. Her verbal and written communication skills are poor. Ketura has poor judgment regarding her personal safety. She cannot consistently remember to use a seatbelt and cannot safely use a knife. She has poor domestic skills. She has no concept of money or of dates. She does not help with the laundry or any other household task. She cannot use the phone. Ketura's socialization skills are also poor. She does not have basic social manners. Her table manners and social interactive skills are poor. She has no friends, and at the time of Dr. Kennedy's evaluation, she was unhappy due to classmates making fun of her for being unable to recite the alphabet. Dr. Kennedy rendered an ultimate diagnosis of moderate mental retardation and opined that Ketura's retardation is permanent. Although Dr. 
Kennedy observed that Ketura was experiencing low levels of depression and anxiety during his April 1995 tests and interview, he did not make a clinical psychological diagnosis to that effect. He attributed these emotional components to Ketura’s lack of confidence in being able to perform the tasks required during testing. In his opinion, Ketura did not have any behavioral or emotional problems which interfered with the reliability of the tests he administered. Also, there were no other conditions surrounding his evaluation which interfered with the validity or reliability of the test scores, his evaluation, or his determination that Ketura suffers from a degree of retardation which would qualify her for developmental services. In Dr. Kennedy’s expert opinion, even if all of Ketura's depression and anxiety were eliminated during testing, her WISC-R scores would not have placed her above the retarded range in April 1995. The retardation range for qualifying for developmental services is 68 or below. Ketura’s I.Q. was tested several times between 1990 and April 1995 with resulting full scale scores ranging from 40 to All or some of these tests and/or reports on the 1990 - 1995 tests were submitted to the agency with Ketura’s application for developmental services. Also included with Ketura’s application to the agency were mental health reports documenting depression, a recognized mental disorder. The most recent of these was one done as recently as May of 1996. However, none of these reports were offered or admitted in evidence at formal hearing. Respondent’s sole witness and agency representative, was Ms. JoAnne Braun. She is an agency Human Service Counselor III. Ms. Braun is not a Florida-licensed psychologist and she was not tendered as an expert witness in any field. As part of the application process, she visited with Ketura and her mother in their home. She also reviewed Petitioner’s application and mental health records described above. She reviewed the fluctuating psychological test scores beginning in 1990, one of which placed Ketura at 70 and another of which placed her at 74 on a scale of 100. Ms. Braun also reviewed a March 1995 psychological testing series that showed Ketura had a verbal 50, performance 60, and full scale 62 on the WISC-R test, one month before Dr. Kennedy’s April 1995 evaluation described above. However, none of these items which she reviewed was offered or admitted in evidence. The agency has guidelines for assessing eligibility for developmental services. The guidelines were not offered or admitted in evidence. Ms. Braun interpreted the agency's guidelines as requiring her to eliminate the mental health aspect if she felt it could depress Ketura's standard test scores. Because Ms. Braun "could not be sure that the mental health situation did not depress her scores," and because the fluctuation of Ketura’s test scores over the years caused Ms. Braun to think that Ketura’s retardation might not “reasonably be expected to continue indefinitely”, as required by the controlling statute, she opined that Ketura was not eligible for developmental services. Dr. 
Kennedy's assessment and expert psychological opinion was that if Ketura's scores were once higher and she now tests with lower scores, it might be the result of better testing today; it might be due to what had been required and observed of her during prior school testing situations; it might even be because she was in a particularly good mood on the one day she scored 70 or 74; but his current testing clearly shows she will never again do significantly better on standard tests than she did in April 1995. In his education, training, and experience, it is usual for test scores to deteriorate due to a retarded person's difficulties in learning as that person matures. I do not consider Ms. Braun's opinion, though in evidence, sufficient to rebut the expert opinion of Dr. Kennedy. This is particularly so since the items she relied upon are not in evidence and are not the sort of hearsay which may be relied upon for making findings of fact pursuant to Section 120.58(1)(a), Florida Statutes. See Bellsouth Advertising & Publishing Corp. v. Unemployment Appeals Commission and Robert Stack, 654 So.2d 292 (Fla. 5th DCA 1995); and Tenbroeck v. Castor, 640 So.2d 164 (Fla. 1st DCA 1994). In particular, there is no evidence that the "guidelines" (also not in evidence) she relied upon have any statutory or rule basis. Therefore, the only test scores and psychological evaluation upon which the undersigned can rely in this de novo proceeding are those of Dr. Kennedy. However, I do accept as binding on the agency Ms. Braun's credible testimony that the agency does not find that the presence of a mental disorder in and of itself precludes an applicant, such as Ketura, from qualifying to receive developmental services; that Ketura is qualified to receive agency services under another program for alcohol, drug, and mental health problems which Ketura also may have; and that Ketura's eligibility under that program and under the developmental services program, if she qualifies for both, are not mutually exclusive.

Recommendation Upon the foregoing findings of fact and conclusions of law, it is RECOMMENDED that the Department of Children and Families issue a Final Order awarding Ketura Bouie appropriate developmental services for so long as she qualifies under the statute. RECOMMENDED this 24th day of February, 1997, at Tallahassee, Florida. ELLA JANE P. DAVIS Administrative Law Judge Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-1550 (904) 488-9675 SUNCOM 278-9675 Fax FILING (904) 921-6847 Filed with the Clerk of the Division of Administrative Hearings this 24th day of February, 1997. COPIES FURNISHED: Gregory D. Venz, Agency Clerk Department of Children and Families Building 2, Room 204 1317 Winewood Blvd. Tallahassee, FL 32399-0700 Richard A. Doran General Counsel Building 2, Room 204 1317 Winewood Blvd. Tallahassee, FL 32399-0700 Marla Ruth Butler Qualified Representative Children's Advocacy Center Florida State University Tallahassee, FL 32302-0287 Marian Alves, Esquire Department of Health and Rehabilitative Services 2639 North Monroe Street Suite 100A Tallahassee, FL 32399-2949

Florida Laws (2) 120.57, 393.063
# 2
THE FLORIDA INSURANCE COUNCIL, INC.; THE AMERICAN INSURANCE ASSOCIATION; AND THE PROPERTY CASUALTY INSURERS ASSOCIATION OF AMERICA vs DEPARTMENT OF FINANCIAL SERVICES, OFFICE OF FINANCIAL REGULATION AND FINANCIAL SERVICES COMMISSION, 05-001012RP (2005)
Division of Administrative Hearings, Florida Filed:Tallahassee, Florida Mar. 18, 2005 Number: 05-001012RP Latest Update: May 17, 2007

The Issue At issue in this proceeding is whether proposed Florida Administrative Code Rule 69O-125.005 is an invalid exercise of delegated legislative authority.

Findings Of Fact Petitioners AIA is a trade association made up of 40 groups of insurance companies. AIA member companies annually write $6 billion in property, casualty, and automobile insurance in Florida. AIA's primary purpose is to represent the interests of its member insurance groups in regulatory and legislative matters throughout the United States, including Florida. NAMIC is a trade association consisting of 1,430 members, mostly mutual insurance companies. NAMIC member companies annually write $10 billion in property, casualty, and automobile insurance in Florida. NAMIC represents the interests of its member insurance companies in regulatory and legislative matters throughout the United States, including Florida. PCI is a national trade association of property and casualty insurance companies consisting of 1,055 members. PCI members include mutual insurance companies, stock insurance companies, and reciprocal insurers that write property and casualty insurance in Florida. PCI members annually write approximately $15 billion in premiums in Florida. PCI participated in the OIR's workshops on the Proposed Rule. PCI's assistant vice president and regional manager, William Stander, testified that if the Proposed Rule is adopted, PCI's member companies would be required either to withdraw from the Florida market or drastically reorganize their business model. FIC is an insurance trade association made up of 39 insurance groups that represent approximately 250 insurance companies writing all lines of insurance. All of FIC's members are licensed in Florida and write approximately $27 billion in premiums in Florida. FIC has participated in rule challenges in the past, and participated in the workshop and public hearing process conducted by OIR for this Proposed Rule. FIC President Guy Marvin testified that FIC's property and casualty members use credit scoring and would be affected by the Proposed Rule. A substantial number of Petitioners' members are insurers writing property and casualty insurance and/or motor vehicle insurance coverage in Florida. These members use credit-based insurance scoring in their underwriting and rating processes. They would be directly regulated by the Proposed Rule in their underwriting and rating methods and in the rate filing processes set forth in Sections 627.062 and 627.0651, Florida Statutes. Fair Isaac originated credit-based insurance scoring and is a leading provider of credit-based insurance scoring information in the United States and Canada. Fair Isaac has invested millions of dollars in the development and maintenance of its credit-based insurance models. Fair Isaac concedes that it is not an insurer and, thus, would not be directly regulated by the Proposed Rule. However, Fair Isaac would be directly affected by any negative impact that the Proposed Rule would have in setting limits on the use of credit-based insurance score models in Florida. Lamont Boyd, a manager in Fair Isaac's global scoring division, testified that if the Proposed Rule goes into effect Fair Isaac would, at a minimum, lose all of the revenue it currently generates from insurance companies that use its scores in the State of Florida, because Fair Isaac's credit-based insurance scoring model cannot meet the requirements of the Proposed Rule regarding racial, ethnic, and religious categorization. Mr. Boyd also testified that enactment of the Proposed Rule could cause a "ripple effect" of similar regulations in other states, further impairing Fair Isaac's business. 
The Statute and Proposed Rule During the 1990s, insurance companies' use of consumer credit information for underwriting and rating automobile and residential property insurance policies greatly increased. Insurance regulators expressed concern that the use of consumer credit reports, credit histories and credit-based insurance scoring models could have a negative effect on consumers' ability to obtain and keep insurance at appropriate rates. Of particular concern was the possibility that the use of credit scoring would particularly hurt minorities, people with low incomes, and young people, because those persons would be more likely to have poor credit scores. On September 19, 2001, Insurance Commissioner Tom Gallagher appointed a task force to examine the use of credit reports and develop recommendations for the Legislature or for the promulgation of rules regarding the use of credit scoring by the insurance industry. The task force met on four separate occasions throughout the state in 2001, and issued its report on January 23, 2002. The task force report conceded that the evidence supporting the negative impact of the use of credit reports on specific groups is "primarily anecdotal," and that the insurance industry had submitted anecdotal evidence to the contrary. Among its nine recommendations, the task force recommended the following: A comprehensive and independent investigation of the relationship between insurers' use of consumer credit information and risk of loss including the impact by race, income, geographic location and age. A prohibition against the use of credit reports as the sole basis for making underwriting or rating decisions. That insurers using credit as an underwriting or rating factor be required to provide regulators with sufficient information to independently verify that use. That insurers be required to send a copy of the credit report to those consumers whose adverse insurance decision is a result of their consumer credit information and a simple explanation of the specific credit characteristics that caused the adverse decision. That insurers not be permitted to draw a negative inference from a bad credit score that is due to medical bills, little or no credit information, or other special circumstances that are clearly not related to an applicant's or policyholder's insurability. That the impact of credit reports be mitigated by imposing limits on the weight that insurers can give to them in the decision to write a policy and limits on the amount the premium can be increased due to credit information. No evidence was presented that the "comprehensive and independent investigation" of insurers' use of credit information was undertaken by the Legislature. However, the other recommendations of the task force were addressed in Senate Bills 40A and 42A, enacted by the Legislature and signed by the governor on June 26, 2003. These companion bills, each with an effective date of January 1, 2004, were codified as Sections 626.9741 and 626.97411, Florida Statutes, respectively. Chapters 2003-407 and 2003-408, Laws of Florida. Section 626.9741, Florida Statutes, provides: The purpose of this section is to regulate and limit the use of credit reports and credit scores by insurers for underwriting and rating purposes. 
This section applies only to personal lines motor vehicle insurance and personal lines residential insurance, which includes homeowners, mobile home owners' dwelling, tenants, condominium unit owners, cooperative unit owners, and similar types of insurance. As used in this section, the term: "Adverse decision" means a decision to refuse to issue or renew a policy of insurance; to issue a policy with exclusions or restrictions; to increase the rates or premium charged for a policy of insurance; to place an insured or applicant in a rating tier that does not have the lowest available rates for which that insured or applicant is otherwise eligible; or to place an applicant or insured with a company operating under common management, control, or ownership which does not offer the lowest rates available, within the affiliate group of insurance companies, for which that insured or applicant is otherwise eligible. "Credit report" means any written, oral, or other communication of any information by a consumer reporting agency, as defined in the federal Fair Credit Reporting Act, 15 U.S.C. ss. 1681 et seq., bearing on a consumer's credit worthiness, credit standing, or credit capacity, which is used or expected to be used or collected as a factor to establish a person's eligibility for credit or insurance, or any other purpose authorized pursuant to the applicable provision of such federal act. A credit score alone, as calculated by a credit reporting agency or by or for the insurer, may not be considered a credit report. "Credit score" means a score, grade, or value that is derived by using any or all data from a credit report in any type of model, method, or program, whether electronically, in an algorithm, computer software or program, or any other process, for the purpose of grading or ranking credit report data. "Tier" means a category within a single insurer into which insureds with substantially similar risk, exposure, or expense factors are placed for purposes of determining rate or premium. An insurer must inform an applicant or insured, in the same medium as the application is taken, that a credit report or score is being requested for underwriting or rating purposes. An insurer that makes an adverse decision based, in whole or in part, upon a credit report must provide at no charge, a copy of the credit report to the applicant or insured or provide the applicant or insured with the name, address, and telephone number of the consumer reporting agency from which the insured or applicant may obtain the credit report. The insurer must provide notification to the consumer explaining the reasons for the adverse decision. The reasons must be provided in sufficiently clear and specific language so that a person can identify the basis for the insurer's adverse decision. Such notification shall include a description of the four primary reasons, or such fewer number as existed, which were the primary influences of the adverse decision. The use of generalized terms such as "poor credit history," "poor credit rating," or "poor insurance score" does not meet the explanation requirements of this subsection. A credit score may not be used in underwriting or rating insurance unless the scoring process produces information in sufficient detail to permit compliance with the requirements of this subsection. 
It shall not be deemed an adverse decision if, due to the insured's credit report or credit score, the insured continues to receive a less favorable rate or placement in a less favorable tier or company at the time of renewal except for renewals or reunderwriting required by this section. (4)(a) An insurer may not request a credit report or score based upon the race, color, religion, marital status, age, gender, income, national origin, or place of residence of the applicant or insured. An insurer may not make an adverse decision solely because of information contained in a credit report or score without consideration of any other underwriting or rating factor. An insurer may not make an adverse decision or use a credit score that could lead to such a decision if based, in whole or in part, on: The absence of, or an insufficient, credit history, in which instance the insurer shall: Treat the consumer as otherwise approved by the Office of Insurance Regulation if the insurer presents information that such an absence or inability is related to the risk for the insurer; Treat the consumer as if the applicant or insured had neutral credit information, as defined by the insurer; Exclude the use of credit information as a factor and use only other underwriting criteria; Collection accounts with a medical industry code, if so identified on the consumer's credit report; Place of residence; or Any other circumstance that the Financial Services Commission determines, by rule, lacks sufficient statistical correlation and actuarial justification as a predictor of insurance risk. An insurer may use the number of credit inquiries requested or made regarding the applicant or insured except for: Credit inquiries not initiated by the consumer or inquiries requested by the consumer for his or her own credit information. Inquiries relating to insurance coverage, if so identified on a consumer's credit report. Collection accounts with a medical industry code, if so identified on the consumer's credit report Multiple lender inquiries, if coded by the consumer reporting agency on the consumer's credit report as being from the home mortgage industry and made within 30 days of one another, unless only one inquiry is considered. Multiple lender inquiries, if coded by the consumer reporting agency on the consumer's credit report as being from the automobile lending industry and made within 30 days of one another, unless only one inquiry is considered. An insurer must, upon the request of an applicant or insured, provide a means of appeal for an applicant or insured whose credit report or credit score is unduly influenced by a dissolution of marriage, the death of a spouse, or temporary loss of employment. The insurer must complete its review within 10 business days after the request by the applicant or insured and receipt of reasonable documentation requested by the insurer, and, if the insurer determines that the credit report or credit score was unduly influenced by any of such factors, the insurer shall treat the applicant or insured as if the applicant or insured had neutral credit information or shall exclude the credit information, as defined by the insurer, whichever is more favorable to the applicant or insured. An insurer shall not be considered out of compliance with its underwriting rules or rates or forms filed with the Office of Insurance Regulation or out of compliance with any other state law or rule as a result of granting any exceptions pursuant to this subsection. 
A rate filing that uses credit reports or credit scores must comply with the requirements of s. 627.062 or s. 627.0651 to ensure that rates are not excessive, inadequate, or unfairly discriminatory. An insurer that requests or uses credit reports and credit scoring in its underwriting and rating methods shall maintain and adhere to established written procedures that reflect the restrictions set forth in the federal Fair Credit Reporting Act, this section, and all rules related thereto. (7)(a) An insurer shall establish procedures to review the credit history of an insured who was adversely affected by the use of the insured's credit history at the initial rating of the policy, or at a subsequent renewal thereof. This review must be performed at a minimum of once every 2 years or at the request of the insured, whichever is sooner, and the insurer shall adjust the premium of the insured to reflect any improvement in the credit history. The procedures must provide that, with respect to existing policyholders, the review of a credit report will not be used by the insurer to cancel, refuse to renew, or require a change in the method of payment or payment plan. (b) However, as an alternative to the requirements of paragraph (a), an insurer that used a credit report or credit score for an insured upon inception of a policy, who will not use a credit report or score for reunderwriting, shall reevaluate the insured within the first 3 years after inception, based on other allowable underwriting or rating factors, excluding credit information if the insurer does not increase the rates or premium charged to the insured based on the exclusion of credit reports or credit scores. The commission may adopt rules to administer this section. The rules may include, but need not be limited to: Information that must be included in filings to demonstrate compliance with subsection (3). Statistical detail that insurers using credit reports or scores under subsection (5) must retain and report annually to the Office of Insurance Regulation. Standards that ensure that rates or premiums associated with the use of a credit report or score are not unfairly discriminatory, based upon race, color, religion, marital status, age, gender, income, national origin, or place of residence. Standards for review of models, methods, programs, or any other process by which to grade or rank credit report data and which may produce credit scores in order to ensure that the insurer demonstrates that such grading, ranking, or scoring is valid in predicting insurance risk of an applicant or insured. Section 626.97411, Florida Statutes, provides: Credit scoring methodologies and related data and information that are trade secrets as defined in s. 688.002 and that are filed with the Office of Insurance Regulation pursuant to a rate filing or other filing required by law are confidential and exempt from the provisions of s. 119.07(1) and s. 24(a), Art. I of the State Constitution.3 Following extensive rule development workshops and industry comment, proposed Florida Administrative Code Rule 69O-125.005 was initially published in the Florida Administrative Weekly, on February 11, 2005.4 The Proposed Rule states, as follows: 69O-125.005 Use of Credit Reports and Credit Scores by Insurers. 
For the purpose of this rule, the following definitions apply: "Applicant", for purposes of Section 626.9741, F.S., means an individual whose credit report or score is requested for underwriting or rating purposes relating to personal lines motor vehicle or personal lines residential insurance and shall not include individuals who have merely requested a quote. "Credit scoring methodology" means any methodology that uses credit reports or credit scores, in whole or in part, for underwriting or rating purposes. "Data cleansing" means the correction or enhancement of presumed incomplete, incorrect, missing, or improperly formatted information. "Personal lines motor vehicle" insurance means insurance against loss or damage to any motorized land vehicle or any loss, liability, or expense resulting from or incidental to ownership, maintenance or use of such vehicle if the contract of insurance shows one or more natural persons as named insureds. The following are not included in this definition: Vehicles used as public livery or conveyance; Vehicles rented to others; Vehicles with more than four wheels; Vehicles used primarily for commercial purposes; and Vehicles with a net vehicle weight of more than 5,000 pounds designed or used for the carriage of goods (other than the personal effects of passengers) or drawing a trailer designed or used for the carriage of such goods. The following are specifically included, inter alia, in this definition: Motorcycles; Motor homes; Antique or classic automobiles; and Recreational vehicles. "Unfairly discriminatory" means that adverse decisions resulting from the use of a credit scoring methodology disproportionately affects persons belonging to any of the classes set forth in Section 626.9741(8)(c), F.S. Insurers may not use any credit scoring methodology that is unfairly discriminatory. The burden of demonstrating that the credit scoring methodology is not unfairly discriminatory is upon the insurer. An insurer may not request or use a credit report or credit score in its underwriting or rating method unless it maintains and adheres to established written procedures that reflect the restrictions set forth in the federal Fair Credit Reporting Act, Section 626.9741, F.S., and these rules. Upon initial use or any change in that use, insurers using credit reports or credit scores for underwriting or rating personal lines residential or personal lines motor vehicle insurance shall include the following information in filings submitted pursuant to Section 627.062 or 627.0651, F.S. A listing of the types of individuals whose credit reports or scores the company will use or attempt to use to underwrite or rate a given policy. For example: Person signing application; Named insured or spouse; and All listed operators. How those individual reports or scores will be combined if more than one is used. For example: Average score used; Highest score used. The name(s) of the consumer reporting agencies or any other third party vendors from which the company will obtain or attempt to obtain credit reports or scores. Precise identifying information specifying or describing the credit scoring methodology, if any, the company will use including: Common or trade name; Version, subtype, or intended segment of business the system was designed for; and Any other information needed to distinguish a particular credit scoring methodology from other similar ones, whether developed by the company or by a third party vendor. 
The effect of particular scores or ranges of scores (or, for companies not using scores, the effect of particular items appearing on a credit report) on any of the following as applicable: Rate or premium charged for a policy of insurance; Placement of an insured or applicant in a rating tier; Placement of an applicant or insured in a company within an affiliated group of insurance companies; Decision to refuse to issue or renew a policy of insurance or to issue a policy with exclusions or restrictions or limitations in payment plans. The effect of the absence or insufficiency of credit history (as referenced in Section 626.9741(4)(c)1., F.S.) on any items listed in paragraph (e) above. The manner in which collection accounts identified with a medical industry code (as referenced in Section 626.9741(4)(c)2., F.S.) on a consumer's credit report will be treated in the underwriting or rating process or within any credit scoring methodology used. The manner in which collection accounts that are not identified with a medical industry code, but which an applicant or insured demonstrates are the direct result of significant and extraordinary medical expenses, will be treated in the underwriting or rating process or within any credit scoring methodology used. The manner in which the following will be treated in the underwriting or rating process, or within any credit scoring methodology used: Credit inquiries not initiated by the consumer; Requests by the consumer for the consumer's own credit information; Multiple lender inquiries, if coded by the consumer reporting agency on the consumer's credit report as being from the automobile lending industry or the home mortgage industry and made within 30 days of one another; Multiple lender inquiries that are not coded by the consumer reporting agency on the consumer's credit report as being from the automobile lending industry or the home mortgage industry and made within 30 days of one another, but that an applicant or insured demonstrates are the direct result of such inquiries; Inquiries relating to insurance coverage, if so identified on a consumer's credit report; and Inquiries relating to insurance coverage that are not so identified on a consumer's credit report, but which an applicant or insured demonstrates are the direct result of such inquiries. The list of all clear and specific primary reasons that may be cited to the consumer as the basis or explanation for an adverse decision under Section 626.9741(3), F.S. and the criteria determining when each of those reasons will be so cited. A description of the process that the insurer will use to correct any error in premium charged the insured, or in underwriting decision made concerning the insured, if the basis of the premium charged or the decision made is a disputed item that is later removed from the credit report or corrected, provided that the insured first notifies the insurer that the item has been removed or corrected. A certification that no use of credit reports or scores in rating insurance will apply to any component of a rate or premium attributed to hurricane coverage for residential properties as separately identified in accordance with Section 627.0629, F.S. 
Insurers desiring to make adverse decisions for personal lines motor vehicle policies or personal lines residential policies based on the absence or insufficiency of credit history shall either: Treat such consumers or applicants as otherwise approved by the Office of Insurance Regulation if the insurer presents information that such an absence or inability is related to the risk for the insurer and does not result in a disparate impact on persons belonging to any of the classes set forth in Section 626.9741(8)(c), This information will be held as confidential if properly so identified by the insurer and eligible under Section 626.9711, F.S. The information shall include: Data comparing experience for each category of those with absent or insufficient credit history to each category of insureds separately treated with respect to credit and having sufficient credit history; A statistically credible method of analysis that concludes that the relationship between absence or insufficiency and the risk assumed is not due to chance; A statistically credible method of analysis that concludes that absence or insufficiency of credit history does not disparately impact persons belonging to any of the classes set forth in Section 626.9741(8)(c), F.S.; A statistically credible method of analysis that confirms that the treatment proposed by the insurer is quantitatively appropriate; and Statistical tests establishing that the treatment proposed by the insurer is warranted for the total of all consumers with absence or insufficiency of credit history and for at least two subsets of such consumers. Treat such consumers as if the applicant or insured had neutral credit information, as defined by the insurer. Should an insurer fail to specify a definition, neutral is defined as the average score that a stratified random sample of consumers or applicants having sufficient credit history would attain using the insurer's credit scoring methodology; or Exclude credit as a factor and use other criteria. These other criteria must be specified by the insurer and must not result in average treatment for the totality of consumers with an absence of or insufficiency of credit history any less favorable than the treatment of average consumers or applicants having sufficient credit history. Insurers desiring to make adverse decisions for personal lines motor vehicle or personal lines residential insurance based on information contained in a credit report or score shall file with the Office information establishing that the results of such decisions do not correlate so closely with the zip code of residence of the insured as to constitute a decision based on place of residence of the insured in violation of Section 626.9741(4)(c)(3), F.S. (7)(a) Insurers using credit reports or credit scores for underwriting or rating personal lines residential or personal lines motor vehicle insurance shall develop, maintain, and adhere to written procedures consistent with Section 626.9741(4)(e), F.S. providing appeals for applicants or insureds whose credit reports or scores are unduly influenced by dissolution of marriage, death of a spouse, or temporary loss of employment. (b) These procedures shall be subject to examination by the Office at any time. (8)(a)1. 
Insurers using credit reports or credit scoring in rating personal lines motor vehicle or personal lines residential insurance shall develop, maintain, and adhere to written procedures to review the credit history of an insured who was adversely affected by such use at initial rating of the policy or subsequent renewal thereof. These procedures shall be subject to examination by the Office at any time. The procedures shall comply with the following: A review shall be conducted: No later than 2 years following the date of any adverse decision, or Any time, at the request of the insured, but no more than once per policy period without insurer assent. The insurer shall notify the named insureds annually of their right to request the review in (II) above. Renewal notices issued 120 days or less after the effective date of this rule are not included in this requirement. The insurer shall adjust the premium to reflect any improvement in credit history no later than the first renewal date that follows a review of credit history. The renewal premium shall be subject to other rating factors lawfully used by the insurer. The review shall not be used by the insurer to cancel, refuse to renew, or require a change in the method of payment or payment plan based on credit history. (b)1. As an alternative to the requirements in paragraph (8)(a), insurers using credit reports or scores at the inception of a policy but not for re-underwriting shall develop, maintain, and adhere to written procedures. These procedures shall be subject to examination by the Office at any time. The procedures shall comply with the following: Insureds shall be reevaluated no later than 3 years following policy inception based on allowable underwriting or rating factors, excluding credit information. The rate or premium charged to an insured shall not be greater, solely as a result of the reevaluation, than the rate or premium charged for the immediately preceding policy term. This shall not be construed to prohibit an insurer from applying regular underwriting criteria (which may result in a greater premium) or general rate increases to the premium charged. For insureds that received an adverse decision notification at policy inception, no residual effects of that adverse decision shall survive the reevaluation. This means that the reevaluation must be complete enough to make it possible for insureds adversely impacted at inception to attain the lowest available rate for which comparable insureds are eligible, considering only allowable underwriting or rating factors (excluding credit information) at the time of the reevaluation. No credit scoring methodology shall be used for personal lines motor vehicle or personal lines residential property insurance unless that methodology has been demonstrated to be a valid predictor of the insurance risk to be assumed by an insurer for the applicable type of insurance. The demonstration of validity detailed below need only be provided with the first rate, rule, or underwriting guidelines filing following the effective date of this rule and at any time a change is made in the credit scoring methodology. Other such filings may instead refer to the most recent prior filing containing a demonstration. Information supplied in the context of a demonstration of validity will be held as confidential if properly so identified by the insurer and eligible under Section 626.9711, F.S. 
A demonstration of validity shall include: A listing of the persons that contributed substantially to the development of the most current version of the method, including resumes of the persons, if obtainable, indicating their qualifications and experience in similar endeavors. An enumeration of all data cleansing techniques that have been used in the development of the method, which shall include: The nature of each technique; Any biases the technique might introduce; and The prevalence of each type of invalid information prior to correction or enhancement. All data that was used by the model developers in the derivation and calibration of the model parameters. Data shall be in sufficient detail to permit the Office to conduct multiple regression testing for validation of the credit scoring methodology. Data, including field definitions, shall be supplied in electronic format compatible with the software used by the Office. Statistical results showing that the model and parameters are predictive and not overlapping or duplicative of any other variables used to rate an applicant to such a degree as to render their combined use actuarially unsound. Such results shall include the period of time for which each element from a credit report is used. A precise listing of all elements from a credit report that are used in scoring, and the formula used to compute the score, including the time period during which each element is used. Such listing is confidential if properly so identified by the insurer. An assessment by a qualified actuary, economist, or statistician (whether or not employed by the insurer) other than persons who contributed substantially to the development of the credit scoring methodology, concluding that there is a significant statistical correlation between the scores and frequency or severity of claims. The assessment shall: Identify the person performing the assessment and show his or her educational and professional experience qualifications; and Include a test of robustness of the model, showing that it performs well on a credible validation data set. The validation data set may not be the one from which the model was developed. Documentation consisting of statistical testing of the application of the credit scoring model to determine whether it results in a disproportionate impact on the classes set forth in Section 626.9741(8)(c), A model that disproportionately affects any such class of persons is presumed to have a disparate impact and is presumed to be unfairly discriminatory. Statistical analysis shall be performed on the current insureds of the insurer using the proposed credit scoring model, and shall include the raw data and detailed results on each classification set forth in Section 626.9741(8)(c), F.S. In lieu of such analysis insurers may use the alternative in 2. below. Alternatively, insurers may submit statistical studies and analyses that have been performed by educational institutions, independent professional associations, or other reputable entities recognized in the field, that indicate that there is no disproportionate impact on any of the classes set forth in Section 626.9741(8)(c), F.S. attributable to the use of credit reports or scores. Any such studies or analyses shall have been done concerning the specific credit scoring model proposed by the insurer. 
The Office will utilize generally accepted statistical analysis principles in reviewing studies submitted which support the insurer's analysis that the credit scoring model does not disproportionately impact any class based upon race, color, religion, marital status, age, gender, income, national origin, or place of residence. The Office will permit reliance on such studies only to the extent that they permit independent verification of the results. The testing or validation results obtained in the course of the assessment in paragraphs (d) and (f) above. Internal Insurer data that validates the premium differentials proposed based on the scores or ranges of scores. Industry or countrywide data may be used to the extent that the Florida insurer data lacks credibility based upon generally accepted actuarial standards. Insurers using industry or countrywide data for validation shall supply Florida insurer data and demonstrate that generally accepted actuarial standards would allow reliance on each set of data to the extent the insurer has done so. Validation data including claims on personal lines residential insurance policies that are the result of acts of God shall not be used unless such acts occurred prior to January 1, 2004. The mere copying of another company's system will not fulfill the requirement to validate proposed premium differentials unless the filer has used a method or system for less than 3 years and demonstrates that it is not cost effective to retrospectively analyze its own data. Companies under common ownership, management, and control may copy to fulfill the requirement to validate proposed premium differentials if they demonstrate that the characteristics of the business to be written by the affiliate doing the copying are sufficiently similar to the affiliate being copied to presume common differentials will be accurate. The credibility standards and any judgmental adjustments, including limitations on effects, that have been used in the process of deriving premium differentials proposed and validated in paragraph (i) above. An explanation of how the credit scoring methodology treats discrepancies in the information that could have been obtained from different consumer reporting agencies: Equifax, Experian, or TransUnion. This shall not be construed to require insurers to obtain multiple reports for each insured or applicant. 1. The date that each of the analyses, tests, and validations required in paragraphs (d) through (j) above was most recently performed, and a certification that the results continue to be applicable. 2. Any item not reviewed in the previous 5 years is unacceptable. Specific Authority 624.308(1), 626.9741(8) FS. Law Implemented 624.307(1), 626.9741 FS. History-- New . The Petition 1. Statutory Definitions of "Unfairly Discriminatory" The main issue raised by Petitioners is that the Proposed Rule's definition of "unfairly discriminatory," and those portions of the Proposed Rule that rely on this definition, are invalid because they are vague, and enlarge, modify, and contravene the provisions of the law implemented and other provisions of the insurance code. Section 626.9741, Florida Statutes, does not define "unfairly discriminatory." Subsection 626.9741(5), Florida Statutes, provides that a rate filing using credit reports or scores "must comply with the requirements of s. 627.062 or s. 627.0651 to ensure that rates are not excessive, inadequate, or unfairly discriminatory." 
Subsection 626.9741(8)(c), Florida Statutes, provides that the FSC may adopt rules, including standards to ensure that rates or premiums "associated with the use of a credit report or score are not unfairly discriminatory, based upon race, color, religion, marital status, age, gender, income, national origin, or place of residence." Chapter 627, Part I, Florida Statutes, is referred to as the "Rating Law." § 627.011, Fla. Stat. The purpose of the Rating Law is to "promote the public welfare by regulating insurance rates . . . to the end that they shall not be excessive, inadequate, or unfairly discriminatory." § 627.031(1)(a), Fla. Stat. The Rating Law provisions referenced by Subsection 626.9741(5), Florida Statutes, in relation to ensuring that rates are not "unfairly discriminatory" are Sections 627.062 and 627.0651, Florida Statutes. Section 627.062, Florida Statutes, titled "Rate standards," provides that "[t]he rates for all classes of insurance to which the provisions of this part are applicable shall not be excessive, inadequate, or unfairly discriminatory." § 627.062(1), Fla. Stat. Subsection 627.062(2)(e)6., Florida Statutes, provides: A rate shall be deemed unfairly discriminatory as to a risk or group of risks if the application of premium discounts, credits, or surcharges among such risks does not bear a reasonable relationship to the expected loss and expense experience among the various risks. Section 627.0651, Florida Statutes, titled "Making and use of rates for motor vehicle insurance," provides, in relevant part: One rate shall be deemed unfairly discriminatory in relation to another in the same class if it clearly fails to reflect equitably the difference in expected losses and expenses. Rates are not unfairly discriminatory because different premiums result for policyholders with like loss exposures but different expense factors, or like expense factors but different loss exposures, so long as rates reflect the differences with reasonable accuracy. Rates are not unfairly discriminatory if averaged broadly among members of a group; nor are rates unfairly discriminatory even though they are lower than rates for nonmembers of the group. However, such rates are unfairly discriminatory if they are not actuarially measurable and credible and sufficiently related to actual or expected loss and expense experience of the group so as to assure that nonmembers of the group are not unfairly discriminated against. Use of a single United States Postal Service zip code as a rating territory shall be deemed unfairly discriminatory. Petitioners point out that each of these statutory examples describing "unfairly discriminatory" rates has an actuarial basis, i.e., rates must be related to the actual or expected loss and expense factors for a given group or class, rather than any extraneous factors. If two risks have the same expected losses and expenses, the insurer must charge them the same rate. If the risks have different expected losses and expenses, the insurer must charge them different rates. Michael Miller, Petitioners' expert actuary, testified that the term "unfairly discriminatory" has been used in the insurance industry for well over 100 years and has always had this cost-based definition. Mr. Miller is a fellow of the Casualty Actuarial Society ("CAS"), a professional organization whose purpose is the advancement of the body of knowledge of actuarial science, including the promulgation of industry standards and a code of professional conduct. Mr. 
Miller was chair of the CAS ratemaking committee when it developed the CAS "Statement of Principles Regarding Property and Casualty Insurance Ratemaking," a guide for actuaries to follow when establishing rates.5 Principle 4 of the Statement of Principles provides: "A rate is reasonable and not excessive, inadequate, or unfairly discriminatory if it is an actuarially sound estimate of the expected value of all future costs associated with an individual risk." In layman's terms, Mr. Miller explained that different types of risks are reflected in a rate calculation. To calculate the expected cost of a given risk, and thus the rate to be charged, the insurer must determine the expected losses for that risk during the policy period. The loss portion reflects both the likelihood of an occurrence and the severity of a claim. While the loss portion does not account for the entirety of the rate charged, it is the most important in terms of magnitude. Mr. Miller cautioned that the calculation of risk is a quantification of expected loss, but not an attempt to predict who is going to have an accident or make a claim. There is some likelihood that every insured will make a claim, though most never do, and this uncertainty is built into the incurred loss portion of the rate. No single risk factor is a complete measure of a person's likelihood of having an accident or of the severity of the ensuing claim. The prediction of losses is made through a risk classification plan that takes into consideration many risk factors (also called rating factors) to determine the likelihood of an accident and the extent of the claim. As to automobile insurance, Mr. Miller listed such risk factors as the age, gender, and marital status of the driver; the type, model, and age of the car; the liability limits of the coverage; and the geographical location where the car is garaged. As to homeowners insurance, Mr. Miller listed such risk factors as the location of the home, its value and type of construction, the age of the utilities and electrical wiring, and the amount of insurance to be carried. 2. Credit Scoring as a Rating Factor In the current market, the credit score of the applicant or insured is a rating factor common to automobile and homeowners insurance. Subsection 626.9741(2)(c), Florida Statutes, defines "credit score" as follows: a score, grade, or value that is derived by using any or all data from a credit report in any type of model, method, or program, whether electronically, in an algorithm, computer software or program, or any other process, for the purpose of grading or ranking credit report data. "Credit scores" (more accurately termed "credit-based insurance scores") are derived from credit data that have been found to be predictive of loss. Lamont Boyd, Fair Isaac's insurance market manager, explained the manner in which Fair Isaac produced its credit scoring model. The company obtained information from various insurance companies on millions of customers. This information included the customers' names and addresses, the premiums earned by the companies on those policies, and the losses incurred. Fair Isaac next asked the credit reporting agencies to review their archived files for the credit information on those insurance company customers. The credit agencies matched the credit files with the insurance customers, then "depersonalized" the files so that there was no way for Fair Isaac to know the identity of any particular customer.
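The match-and-depersonalize step described above can be pictured with a brief sketch. The sketch is purely illustrative: the column names, the hashing of name and address as a match key, and the use of the pandas library are assumptions made for the example, not a description of Fair Isaac's or the credit bureaus' actual procedures.

```python
# Purely illustrative: invented column names and a stand-in hashing step,
# not a description of Fair Isaac's or any credit bureau's actual process.
import hashlib

import pandas as pd

# Policy-level records as an insurer might supply them (values invented).
policies = pd.DataFrame({
    "customer_name": ["A. Smith", "B. Jones"],
    "customer_addr": ["1 Main St", "2 Oak Ave"],
    "earned_premium": [1200.0, 950.0],
    "incurred_loss": [0.0, 4300.0],
})

# Archived credit attributes as a bureau might hold them (also invented).
credit = pd.DataFrame({
    "customer_name": ["A. Smith", "B. Jones"],
    "customer_addr": ["1 Main St", "2 Oak Ave"],
    "num_collections": [0, 2],
    "oldest_account_years": [14, 3],
})

def match_key(row):
    # Stand-in for the bureau's matching logic: hash name plus address.
    raw = (row["customer_name"] + "|" + row["customer_addr"]).lower()
    return hashlib.sha256(raw.encode()).hexdigest()

policies["key"] = policies.apply(match_key, axis=1)
credit["key"] = credit.apply(match_key, axis=1)

# Join on the hashed key, then drop the identifying fields so the merged
# study file carries premium, loss, and credit attributes but no identities.
merged = policies.merge(
    credit.drop(columns=["customer_name", "customer_addr"]), on="key"
)
study_file = merged.drop(columns=["customer_name", "customer_addr", "key"])
print(study_file)
```

The point of the sketch is only that, once the join is complete, the identifying fields can be discarded, leaving a study file that links premiums, losses, and credit attributes without identities.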
According to Mr. Boyd, the data were "color blind" and "income blind." Fair Isaac's analysts took these files from the credit reporting agencies and studied the data in an effort to find the characteristics most predictive of future loss propensity. The model was developed to account for all the predictive characteristics identified by Fair Isaac's analysts, and to give weight to those characteristics in accordance with their relative accuracy as predictors of loss. Fair Isaac does not sell its credit scores directly to insurance companies. Rather, Fair Isaac's models are implemented by the credit reporting agencies. When an insurance company wants Fair Isaac's credit score, it purchases access to the model's results from the credit reporting agency. Other vendors offer similar credit scoring models to insurance companies, and in recent years some insurance companies have developed their own scoring models. Several academic studies of credit scoring were admitted and discussed at the final hearing in these cases. There appears to be no serious debate that credit scoring is a valid and important predictor of losses. The controversy over the use of credit scoring arises over its possible "unfairly discriminatory" impact "based upon race, color, religion, marital status, age, gender, income, national origin, or place of residence." § 626.9741(8)(c), Fla. Stat. Mr. Miller was one of two principal authors of a June 2003 study titled "The Relationship of Credit-Based Insurance Scores to Private Passenger Automobile Insurance Loss Propensity." This study was commissioned by several insurance industry trade organizations, including AIA and NAMIC. The study addressed three questions: whether credit-based insurance scores are related to the propensity for loss; whether credit-based insurance scores measure risk that is already measured by other risk factors; and what the relative importance of credit-based insurance scores is to accurate risk assessment. The study was based on a nationwide random sample of private passenger automobile policy and claim records. Records from all 50 states were included in roughly the same proportion as each state's registered motor vehicles bear to total registered vehicles in the United States. The data samples were provided by seven insurers and represented approximately 2.7 million automobiles, each insured for 12 months.6 The study examined all major automobile coverages: bodily injury liability, property damage liability, medical payments coverage, personal injury protection coverage, comprehensive coverage, and collision coverage. The study concluded that credit-based insurance scores were correlated with loss propensity. The study found that insurance scores overlap to some degree with other risk factors, but that after fully accounting for the overlaps, insurance scores significantly increase the accuracy of the risk assessment process. The study found that, for each of the six automobile coverages examined, insurance scores are among the three most important risk factors.7 Mr. Miller's study did not examine the question of causality, i.e., why credit-based insurance scores are predictive of loss propensity. Dr. Patrick Brockett testified for Petitioners as an expert in actuarial science, risk management and insurance, and statistics. Dr. Brockett is a professor in the departments of management science and information systems, finance, and mathematics at the University of Texas at Austin. He occupies the Gus S.
Wortham Memorial Chair in Risk Management and Insurance, and is the director of the university's risk management and insurance program. Dr. Brockett is the former director of the University of Texas' actuarial science program and continues to direct the study of students seeking their doctoral degrees in actuarial science. His areas of academic research are actuarial science, risk management and insurance, statistics, and general quantitative methods in business. Dr. Brockett has written more than 130 publications, most of which relate to actuarial science and insurance. He has spent his entire career in academia, and has never been employed by an insurance company. In 2002, Lieutenant Governor Bill Ratliff of Texas asked the Bureau of Business Research ("BBR") of the University of Texas' McCombs School of Business to provide an independent, nonpartisan study to examine the relationship between credit history and insurance losses in automobile insurance. Dr. Brockett was one of four named authors of this BBR study, issued in March 2003 and titled, "A Statistical Analysis of the Relationship between Credit History and Insurance Losses." The BBR research team solicited data from insurance companies representing the top 70 percent of the automobile insurers in Texas, and compiled a database of more than 173,000 automobile insurance policies from the first quarter of 1998 that included the following 12 months' premium and loss history. ChoicePoint was then retained to match the named insureds with their credit histories and to supply a credit score for each insured person. The BBR research team then examined the credit score and its relationship with prospective losses for the insurance policy. The results were summarized in the study as follows: Using logistic and multiple regression analyses, the research team tested whether the credit score for the named insured on a policy was significantly related to incurred losses for that policy. It was determined that there was a significant relationship. In general, lower credit scores were associated with larger incurred losses. Next, logistic and multiple regression analyses examined whether the revealed relationship between credit score and incurred losses was explainable by existing underwriting variables, or whether the credit score added new information about losses not contained in the existing underwriting variables. It was determined that credit score did yield new information not contained in the existing underwriting variables. What the study does not attempt to explain is why credit scoring adds significantly to the insurer's ability to predict insurance losses. In other words, causality was not investigated. In addition, the research team did not examine such variables as race, ethnicity, and income in the study, and therefore this report does not speculate about the possible effects that credit scoring may have in raising or lowering premiums for specific groups of people. Such an assessment would require a different study and different data. At the hearing, Dr. Brockett testified that the BBR study demonstrated a "strong and significant relationship between credit scoring and incurred losses," and that credit scoring retained its predictive power even after the other risk variables were accounted for. Dr. Brockett further testified that credit scoring has a disproportionate effect on the classifications of age and marital status, because the very young tend to have credit scores that are lower than those of older people. 
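The kind of analysis the BBR research team describes, a logistic regression of claim experience on credit score alongside existing underwriting variables, can be sketched in a few lines. The data below are synthetic, and the variable names and effect sizes are assumptions chosen for illustration; the sketch is not a reconstruction of the BBR or Miller studies, only an example of how one tests whether credit score adds information beyond other rating factors.

```python
# Purely illustrative: synthetic data standing in for the policy and credit
# files the studies describe; variable names and effect sizes are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 10_000

driver_age = rng.integers(18, 80, size=n)
prior_claims = rng.poisson(0.2, size=n)
credit_score = rng.normal(700, 60, size=n)

# Synthetic claim indicator: lower scores and more prior claims raise the
# probability of a claim, mimicking the correlation the studies report.
linpred = (-2.0 - 0.004 * (credit_score - 700)
           + 0.4 * prior_claims - 0.005 * (driver_age - 40))
had_claim = (rng.random(n) < 1.0 / (1.0 + np.exp(-linpred))).astype(int)

# Model 1: existing underwriting variables only.
X1 = sm.add_constant(np.column_stack([driver_age, prior_claims]))
m1 = sm.Logit(had_claim, X1).fit(disp=False)

# Model 2: add the credit score and ask whether it carries information the
# other variables do not (significant coefficient, higher log-likelihood).
X2 = sm.add_constant(np.column_stack([driver_age, prior_claims, credit_score]))
m2 = sm.Logit(had_claim, X2).fit(disp=False)

print("credit score coefficient:", m2.params[-1])
print("credit score p-value:", m2.pvalues[-1])
print("log-likelihood without vs. with credit score:", m1.llf, m2.llf)
```

In a sketch of this kind, a statistically significant coefficient on the credit score, together with an improved log-likelihood over the model that omits it, is the sort of result both the BBR and Miller studies report.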
If the question is simply whether the use of credit scores will have a greater impact on the young and the single, the answer would be in the affirmative. However, Dr. Brockett also noted that young, single people will also have higher losses than older, married people, and, thus, the use of credit scores is not "unfairly discriminatory" in the sense that term is employed in the insurance industry.8 Mr. Miller testified that nothing in the actuarial standards of practice requires that a risk factor be causally related to a loss. The Actuarial Standards Board's Standard of Practice 12,9 dealing with risk classification, states that a risk factor is appropriate for use if there is a demonstrated relationship between the risk factor and the insurance losses, and that this relationship may be established by statistical or other mathematical analysis of data. If the risk characteristic is shown to be related to an expected outcome, the actuary need not establish a cause-and-effect relationship between the risk characteristic and the expected outcome. As an example, Mr. Miller offered the fact that past automobile accidents do not cause future accidents, although past accidents are predictive of future risk. Past traffic violations, the age of the driver, the gender of the driver, and the geographical location are all risk factors in automobile insurance, though none of these factors can be said to cause future accidents. They help insurers predict the probability of a loss, but do not predict who will have an accident or why the accident will occur. Mr. Miller opined that credit scoring is a similar risk factor. It is demonstrably significant as a predictor of risk, though there is no causal relationship between credit scores and losses and only an incomplete understanding of why credit scoring works as a predictor of loss. At the hearing, Dr. Brockett discussed a study that he has co-authored with Linda Golden, a business professor at the University of Texas at Austin. Titled "Biological and Psychobehavioral Correlates of Risk Taking, Credit Scores, and Automobile Insurance Losses: Toward an Explication of Why Credit Scoring Works," the study has been peer-reviewed and at the time of the hearing had been accepted for publication in the Journal of Risk and Insurance. In this study, the authors conducted a detailed review of existing scientific literature concerning the biological, psychological, and behavioral attributes of risky automobile drivers and insured losses, and a similar review of literature concerning the biological, psychological, and behavioral attributes of financial risk takers. The study found that basic chemical and psychobehavioral characteristics, such as a sensation-seeking personality type, are common to individuals exhibiting both higher insured automobile losses and poorer credit scores. Dr. Brockett testified that this study provides a direction for future research into the reasons why credit scoring works as an insurance risk characteristic. 3. The Proposed Rule's Definition of "Unfairly Discriminatory" Petitioners contend that the Proposed Rule's definition of the term "unfairly discriminatory" expands upon and is contrary to the statutory definition of the term discussed in section C.1. supra, and that this expanded definition operates to impose a ban on the use of credit scoring by insurance companies. As noted above, Section 626.9741, Florida Statutes, does not define the term "unfairly discriminatory." 
The provisions of the Rating Law10 define the term as it is generally understood by the insurance industry: a rate is deemed "unfairly discriminatory" if the premium charged does not equitably reflect the differences in expected losses and expenses between policyholders. Two provisions of Section 626.9741, Florida Statutes, employ the term "unfairly discriminatory": (5) A rate filing that uses credit reports or credit scores must comply with the requirements of s. 627.062 or s. 627.0651 to ensure that rates are not excessive, inadequate, or unfairly discriminatory. * * * (8) The commission may adopt rules to administer this section. The rules may include, but need not be limited to: * * * (c) Standards that ensure that rates or premiums associated with the use of a credit report or score are not unfairly discriminatory, based upon race, color, religion, marital status, age, gender, income, national origin, or place of residence. Petitioners contend that the statute's use of the term "unfairly discriminatory" is unexceptionable, that the Legislature simply intended the term to be used and understood in the traditional sense of actuarial soundness alone. Respondents agree that Subsection 626.9741(5), Florida Statutes, calls for the agency to apply the traditional definition of "unfairly discriminatory" as that term is employed in the statutes directly referenced, Sections 627.062 and 627.0651, Florida Statutes, the relevant texts of which are set forth in Findings of Fact 18 and 19 above. However, Respondents contend that Subsection 626.9741(8)(c), Florida Statutes, calls for more than the application of the Rating Law's definition of the term. Respondents assert that in the context of this provision, "unfairly discriminatory" contemplates not only the predictive function, but also "discrimination" in its more common sense, as the term is employed in state and federal civil rights law regarding race, color, religion, marital status, age, gender, income, national origin, or place of residence. At the hearing, OIR General Counsel Steven Parton testified as to the reasons why the agency chose the federal body of law using the term "disparate impact" as the test for unfair discrimination in the Proposed Rule: Well, first of all, what we were looking for is a workable definition that people would have some understanding as to what it meant when we talked about unfair discrimination. We were also looking for a test that did not require any willfulness, because it was not our concern that, in fact, insurance companies were engaging willfully in unfair discrimination. What we believed is going on, and we think all of the studies that are out there suggest, is that credit scoring is having a disparate impact upon various people, whether it be income, whether it be race. . . . Respondents' position is that Subsection 626.9741(8)(c), Florida Statutes, requires that a proposed rate or premium be rejected if it has a "disproportionately" negative effect on one of the named classes of persons, even though the rate or premium equitably reflects the differences in expected losses and expenses between policyholders. In the words of Mr. Parton, "This is not an actuarial rule." Mr. Parton explained the agency's rationale for employing a definition of "unfairly discriminatory" that is different from the actuarial usage employed in the Rating Law. 
Subsection 626.9741(5), Florida Statutes, already provides that an insurer's rate filings may not be "excessive, inadequate, or unfairly discriminatory" in the actuarial sense. To read Subsection 626.9741(8)(c), Florida Statutes, as simply a reiteration of the actuarial "unfair discrimination" rule would render the provision, "a nullity. There would be no force and effect with regards to that." Thus, the Proposed Rule defines "unfairly discriminatory" to mean "that adverse decisions resulting from the use of a credit scoring methodology disproportionately affects persons belonging to any of the classes set forth in Section 626.9741(8)(c), F.S." Proposed Florida Administrative Code Rule 69O-125.005(1)(e). OIR's actuary, Howard Eagelfeld, explained that "disproportionate effect" means "having a different effect on one group . . . causing it to pay more or less premium than its proportionate share in the general population or than it would have to pay based upon all other known considerations." Mr. Eagelfeld's explanation is not incorporated into the language of the Proposed Rule. Consistent with the actuarial definition of "unfairly discriminatory," the Proposed Rule requires that any credit scoring methodology must be "demonstrated to be a valid predictor of the insurance risk to be assumed by an insurer for the applicable type of insurance," and sets forth detailed criteria through which the insurer can make the required demonstration. Proposed Florida Administrative Code Rule 69O-125.005(9)(a)-(f) and (h)-(l). Proposed Florida Administrative Code Rule 69O-125.005(9)(g) sets forth Respondents' "civil rights" usage of the term "unfairly discriminatory." The insurer's demonstration of the validity of its credit scoring methodology must include: [d]ocumentation consisting of statistical testing of the application of the credit scoring model to determine whether it results in a disproportionate impact on the classes set forth in Section 626.9741(8)(c), F.S. A model that disproportionately affects any such class of persons is presumed to have a disparate impact and is presumed to be unfairly discriminatory.11 Mr. Parton, who testified in defense of the Proposed Rule as one of its chief draftsmen, stated that the agency was concerned that the use of credit scoring may be having a disproportionate effect on minorities. Respondents believe that credit scoring may simply be a surrogate measure for income, and that using income as a basis for setting rates would have an obviously disparate impact on lower-income persons, including the young and the elderly. Mr. Parton testified that "neither the insurance industry nor anyone else" has researched the theory that credit scoring may be a surrogate for income. Mr. Miller referenced a 1998 analysis performed by AIA indicating that the average credit scores do not vary significantly according to the income group. In fact, the lowest income group (persons making less than $15,000 per year) had the highest average credit score, and the average credit scores actually dropped as income levels rose until the income range reached $50,000 to $74,000 per year, when the credit scores began to rise. Mr. Miller testified that a credit score is no more predictive of income level than a coin flip. However, Respondents introduced a January 2003 report to the Washington State Legislature prepared by the Social & Economic Sciences Research Center of Washington State University, titled "Effect of Credit Scoring on Auto Insurance Underwriting and Pricing." 
The purpose of the study was to determine whether credit scoring has unequal impacts on specific demographic groups. For this study, the researchers received data from three insurance companies on several thousand randomly chosen customers, including the customers' age, gender, residential zip code, and their credit scores and/or rate classifications. The researchers contacted about 1,000 of each insurance company's customers and obtained information about their ethnicity, marital status, and income levels. The study's findings were summarized as follows: The demographic patterns discerned by the study are: Age is the most significant factor. In almost every analysis, older drivers have, on average, higher credit scores, lower credit-based rate assignments, and less likelihood of lacking a valid credit score. Income is also a significant factor. Credit scores and premium costs improve as income rises. People in the lowest income categories-- less than $20,000 per year and between $20,000 and $35,000 per year-- often experienced higher premiums and lower credit scores. More people in lower income categories also lacked sufficient credit history to have a credit score. Ethnicity was found to be significant in some cases, but because of differences among the three firms studied and the small number of ethnic minorities in the samples, the data are not broadly conclusive. In general, Asian/Pacific Islanders had credit scores more similar to whites than to other minorities. When other minority groups had significant differences from whites, the differences were in the direction of higher premiums. In the sample of cases where insurance was cancelled based on credit score, minorities who were not Asian/Pacific Islanders had greater difficulty finding replacement insurance, and were more likely to experience a lapse in insurance while they searched for a new policy. The analysis also considered gender, marital status and location, but for these factors, significant unequal effects were far less frequent. (emphasis added) The evidence appears equivocal on the question of whether credit scoring is a surrogate for income. The Washington study seems to indicate that ethnicity may be a significant factor in credit scoring, but that significant unequal effects are infrequent regarding gender and marital status. The evidence demonstrates that the use of credit scores by insurers would tend to have a negative impact on young people. Mr. Miller testified that persons between ages 25 and 30 have lower credit scores than older people. Petitioners argue that by defining "unfairly discriminatory" to mean "disproportionate effect," the Proposed Rule effectively prohibits insurers from using credit scores, if only because all the parties recognize that credit scores have a "disproportionate effect" on young people. Petitioners contend that this prohibition is in contravention of Section 626.9741(1), Florida Statutes, which states that the purpose of the statute is to "regulate and limit" the use of credit scores, not to ban them outright. Respondents counter that if the use of credit scores is "unfairly discriminatory" toward one of the listed classes of persons in contravention of Subsection 626.9741(8)(c), Florida Statutes, then the "limitation" allowed by the statute must include prohibition. 
This point is obviously true but sidesteps the real issues: whether the statute's undefined prohibition on "unfair discrimination" authorizes the agency to employ a "disparate impact" or "disproportionate effect" definition in the Proposed Rule, and, if so, whether the Proposed Rule sufficiently defines any of those terms to permit an insurer to comply with the rule's requirements. Proposed Florida Administrative Code Rule 69O-125.005(2) provides that the insurer bears the burden of demonstrating that its credit scoring methodology does not disproportionately affect persons based upon their race, color, religion, marital status, age, gender, income, national origin, or place of residence. Petitioners state that no insurer can demonstrate, consistent with the Proposed Rule, that its credit scoring methodology does not have a disproportionate effect on persons based upon their age. Therefore, no insurer will ever be permitted to use credit scores under the terms of the Proposed Rule. As discussed more fully in Findings of Fact 73 through 76 below, Petitioners also contend that the Proposed Rule provides no guidance as to what "disproportionate effect" and "disparate impact" mean, and that this lack of definitional guidance will permit the agency to reject any rate filing that uses credit scoring, based upon an arbitrary determination that it has a "disproportionate effect" on one of the classes named in Subsection 626.9741(8)(c), Florida Statutes. Petitioners also presented evidence that no insurer collects data on race, color, religion, or national origin from applicants or insureds. Mr. Miller testified that there is no reliable independent source for race, color, religious affiliation, or national origin data. Mr. Eagelfeld agreed that there is no independent source from which insurers can obtain credible data on race or religious affiliation. Mr. Parton testified that this lack of data can be remedied by the insurance companies commencing to request race, color, religion, and national origin information from their customers, because there is no legal impediment to their doing so. Mr. Miller testified that he would question the reliability of the method suggested by Mr. Parton because many persons will refuse to answer such sensitive questions or may not answer them correctly. Mr. Miller stated that, as an actuary, he would not certify the results of a study based on demographic data obtained in this manner and would qualify any resulting actuarial opinion due to the unreliability of the database. Petitioners also object to the vagueness of the broad categories of "race, color, religion and national origin." Mr. Miller testified that the Proposed Rule lacks "operational definitions" for those terms that would enable insurers to perform the required calculations. The Proposed Rule places the burden on the insurer to demonstrate no disproportionate effect on persons based on these categories, but offers no guidance as to how these demographic classes should be categorized by an insurer seeking to make such a demonstration. Petitioners point out that even if the insurer is able to ascertain the categories sought by the regulators, the Proposed Rule gives no guidance as to whether the "disproportionate effect" criterion mandates perfect proportionality among all races, colors, religions, and national origins, or whether some degree of difference is tolerable. 
Petitioners contend that this lack of guidance provides unbridled discretion to the regulator to reject any disproportionate effect study submitted by an insurer. At his deposition, Mr. Parton was asked how an insurer should break down racial classifications in order to show that there is no disproportionate effect on race. His answer was as follows: There is African-American, Cuban-American, Spanish-American, African-American, Haitian-American. Are you-- you know, whatever the make-up of your book of business is-- you're the one in control of it. You can ask these folks what their ethnic background is. At his deposition, Mr. Parton frankly admitted that he had no idea what "color" classifications an insurer should use, yet he also stated that an insurer must demonstrate no disproportionate effect on each and every listed category, including "color." At the final hearing, when asked to list the categories of "color," Mr. Parton responded, "I suppose Indian, African-American, Chinese, Japanese, all of those."12 At the final hearing, Mr. Parton was asked whether the Proposed Rule contemplates requiring insurers to demonstrate distinctions between such groups as "Latvian-Americans" and "Czech-Americans." Mr. Parton's reply was as follows: No. And I don't think it was contemplated by the Legislature. . . . The question is race by any other name, whether it be national origin, ethnicity, color, is something that they're concerned about in terms of an impact. What we would anticipate, and what we have always anticipated, is the industry would demonstrate whether or not there is an adverse effect against those folks who have traditionally in Florida been discriminated against, and that would be African-Americans and certain Hispanic groups. In our opinion, at least, if you could demonstrate that the credit scoring was not adversely impacting it, it may very well answer the questions to any other subgroup that you may want to name. At the hearing, Mr. Parton was also questioned as to distinctions between religions and testified as follows: The impact of credit scoring on religion is going to be in the area of what we call thin files, or no files. That is to say people who do not have enough credit history from which credit scores can be done, or they're going to be treated somehow differently because of that lack of history. A simple question that needs to be asked by the insurance company is: "Do you, as a result of your religious belief or whatever [sect] you are in, are you forbidden as a precept of your religious belief from engaging in the use of credit?" When cross-examined on the subject, Mr. Parton could not confidently identify any religious group that forbids the use of credit. He thought that Muslims and Quakers may be such groups. Mr. Parton concluded by stating, "I don't think it is necessary to identify those groups. The question is whether or not you have a religious group that you prescribe to that forbids it." Petitioners contend that, in addition to failing to define the statutory terms of race, color, religion, and national origin in a manner that permits insurer compliance, the Proposed Rule fails to provide an operational definition of "disproportionate effect." The following is a hypothetical question put to Mr. Parton at his deposition, and Mr. Parton's answer: Q: Let's assume that African-Americans make up 10 percent of the population. Let's just use two groups for the sake of clarity. Caucasians make up 90 percent. 
If the application of credit scoring in underwriting results in African-Americans paying 11 percent of the premium and Caucasians paying 89 percent of the premium, is that, in your mind, a disproportionate affect [sic]? A: It may be. I think it would give rise under this rule that perhaps there is a presumption that it is, but that presumption is not [an irrebuttable] one.[13] For instance, if you then had testimony that a 1 percent difference between the two was statistically insignificant, then I would suggest that that presumption would be overridden. This answer led to a lengthy discussion regarding a second hypothetical in which African-Americans made up 29 percent of the population, and also made up 35 percent of the lowest, or most unfavorable, tier of an insurance company's risk classifications. Mr. Parton ultimately opined that if the difference in the two numbers was found to be "statistically significant" and attributable only to the credit score, then he would conclude that the use of credit scoring unfairly discriminated against African-Americans. As to whether his answer would be the same if the hypothetical were adjusted to state that African-Americans made up 33 percent of the lowest tier, Mr. Parton responded: "That would be up to expert testimony to be provided on it. That's what trials are all about."14 Aside from expert testimony to demonstrate that the difference was "statistically insignificant," Mr. Parton could think of no way that an insurer could rebut the presumption that the difference was unfairly discriminatory under the "disproportionate effect" definition set forth in the proposed rule. He stated that, "I can't anticipate, nor does the rule propose to anticipate, doing the job of the insurer of demonstrating that its rates are not unfairly discriminatory." Mr. Parton testified that an insurer's showing that the credit score was a valid and important predictor of risk would not be sufficient to rebut the presumption of disproportionate effect. Summary Findings Credit-based insurance scoring is a valid and important predictor of risk, significantly increasing the accuracy of the risk assessment process. The evidence is still inconclusive as to why credit scoring is an effective predictor of risk, though a study co-authored by Dr. Brockett has found that basic chemical and psychobehavioral characteristics, such as a sensation-seeking personality type, are common to individuals exhibiting both higher insured automobile losses and poorer credit scores. Though the evidence was equivocal on the question of whether credit scoring is simply a surrogate for income, the evidence clearly demonstrated that the use of credit scores by insurance companies has a greater negative overall effect on young people, who tend to have lower credit scores than older people. Petitioners and Fair Isaac emphasized their contention that compliance with the Proposed Rule would be impossible, and thus the Proposed Rule in fact would operate as a prohibition on the use of credit scoring by insurance companies. At best, Petitioners demonstrated that compliance with the Proposed Rule would be impracticable at first, given the current business practices in the industry regarding the collection of customer data regarding race and religion. The evidence indicated no legal barriers to the collection of such data by the insurance companies. Questions as to the reliability of the data are speculative until a methodology for the collection of the data is devised. 
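The exchange above turns on whether a gap between a protected class's share of an insurer's book and its share of an unfavorable rating tier is "statistically significant," a question neither the Proposed Rule nor the testimony answers. The following is a minimal sketch, not drawn from the rule or the record, assuming a one-sample, two-sided z-test for a proportion and invented tier sizes; it simply illustrates that the 29-percent-versus-35-percent gap in Mr. Parton's second hypothetical can be insignificant or highly significant depending on how many policyholders fall in the tier.

```python
# A minimal sketch, not the rule's method (which is unspecified): a one-sample,
# two-sided z-test asking whether a class's share of the lowest rating tier
# differs significantly from its share of the insurer's whole book.
from math import sqrt
from statistics import NormalDist

def proportion_z_test(tier_share, book_share, tier_size):
    """Return (z statistic, two-sided p-value) for H0: tier share == book share."""
    se = sqrt(book_share * (1 - book_share) / tier_size)  # standard error under H0
    z = (tier_share - book_share) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Mr. Parton's second hypothetical: the class is 29% of the book but 35% of the
# lowest tier.  Whether the gap is "statistically significant" depends heavily
# on how many policyholders are in that tier -- the very point left open here.
for tier_size in (100, 1_000, 10_000):   # hypothetical tier sizes
    z, p = proportion_z_test(0.35, 0.29, tier_size)
    print(f"tier of {tier_size:>6}: z = {z:5.2f}, two-sided p = {p:.4f}")
```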
Subsection 626.9741(8)(c), Florida Statutes, authorizes the FSC to adopt rules that may include: Standards that ensure that rates or premiums associated with the use of a credit report or score are not unfairly discriminatory, based upon race, color, religion, marital status, age, gender, income, national origin, or place of residence. Petitioners' contention that the statute's use of "unfairly discriminatory" contemplates nothing more than the actuarial definition of the term as employed by the Rating Law is rejected. As Respondents pointed out, Subsection 626.9741(5), Florida Statutes, provides that a rate filing using credit scores must comply with the Rating Law's requirements that the rates not be "unfairly discriminatory" in the actuarial sense. If Subsection 626.9741(8)(c), Florida Statutes, merely reiterates the actuarial requirement, then it is, in Mr. Parton's words, "a nullity."15 Thus, it is found that the Legislature contemplated some level of scrutiny beyond actuarial soundness to determine whether the use of credit scores "unfairly discriminates" in the case of the classes listed in Subsection 626.9741(8)(c), Florida Statutes. It is found that the Legislature empowered FSC to adopt rules establishing standards to ensure that an insurer's rates or premiums associated with the use of credit scores meet this added level of scrutiny. However, it must be found that the term "unfairly discriminatory" as employed in the Proposed Rule is essentially undefined. FSC has not adopted a "standard" by which insurers can measure their rates and premiums, and the statutory term "unfairly discriminatory" is thus subject to arbitrary enforcement by the regulating agency. Proposed Florida Administrative Code Rule 69O-125.005(1)(e) defines "unfairly discriminatory" in terms of adverse decisions that "disproportionately affect" persons in the classes set forth in Subsection 626.9741(8)(c), Florida Statutes, but does not define what is a "disproportionate effect." At Subsection (9)(g), the Proposed Rule requires "statistical testing" of the credit scoring model to determine whether it results in a "disproportionate impact" on the listed classes. This subsection attempts to define its terms as follows: A model that disproportionately affects any such class of persons is presumed to have a disparate impact and is presumed to be unfairly discriminatory. Thus, the Proposed Rule provides that a "disproportionate effect" equals a "disparate impact" equals "unfairly discriminatory," without defining any of these terms in such a way that an insurer could have any clear notion, prior to the regulator's pronouncement on its rate filing, whether its credit scoring methodology was in compliance with the rule. Indeed, Mr. Parton's testimony evinced a disinclination on the part of the agency to offer guidance to insurers who attempt to understand this circular definition. The tenor of his testimony indicated that the agency itself is unsure of exactly what an insurer could submit to satisfy the "disproportionate effect" test, aside from perfect proportionality, which all parties concede is not possible at least as to young people, or a showing that any lack of perfect proportionality is "statistically insignificant," whatever that means. Mr. Parton seemed to say that OIR will know a valid use of credit scoring when it sees one, though it cannot describe such a use beforehand. Mr. 
Eagelfeld offered what might be a workable definition of "disproportionate effect," but his definition is not incorporated into the Proposed Rule. Mr. Parton attempted to assure the Petitioners that OIR would take a reasonable view of the endless racial and ethnic categories that could be subsumed under the literal language of the Proposed Rule, but again, Mr. Parton's assurances are not part of the Proposed Rule. Mr. Parton's testimony referenced federal and state civil rights laws as the source for the term "disparate impact." Federal case law under Title VII of the Civil Rights Act of 1964, 42 U.S.C. § 2000e-2, has defined a "disparate impact" claim as "one that 'involves employment practices that are facially neutral in their treatment of different groups, but that in fact fall more harshly on one group than another and cannot be justified by business necessity.'" Adams v. Florida Power Corporation, 255 F.3d 1322, 1324 n.4 (11th Cir. 2001), quoting Hazen Paper Co. v. Biggins, 507 U.S. 604, 609, 113 S. Ct. 1701, 1705, 123 L. Ed. 2d 338 (1993). The Proposed Rule does not reference this definition, nor did Mr. Parton detail how OIR proposes to apply or modify this definition in enforcing the Proposed Rule. Without further definition, all three of the terms employed in this circular definition are conclusions, not "standards" that the insurer and the regulator can agree upon at the outset of the statistical and analytical process leading to approval or rejection of the insurer's rates. Absent some definitional guidance, a conclusory term such as "disparate impact" can mean anything the regulator wishes it to mean in a specific case. The confusion is compounded by the Proposed Rule's failure to refine the broad terms "race," "color," and "religion" in a manner that would allow an insurer to prepare a meaningful rate submission utilizing credit scoring. In his testimony, Mr. Parton attempted to limit the Proposed Rule's impact to those groups "who have traditionally in Florida been discriminated against," but the actual language of the Proposed Rule makes no such distinction. Mr. Parton also attempted to limit the reach of "religion" to groups whose beliefs forbid them from engaging in the use of credit, but the language of the Proposed Rule does not support Mr. Parton's distinction.

USC (1) 42 U.S.C. 2000e Florida Laws (18) 119.07, 120.52, 120.536, 120.54, 120.56, 120.57, 120.68, 624.307, 624.308, 626.9741, 627.011, 627.031, 627.062, 627.0629, 627.0651, 688.002, 760.10, 760.11 Florida Administrative Code (1) 69O-125.005
# 3
PAMELA Y. DENNIS vs. BOARD OF COSMETOLOGY, 88-004552 (1988)
Division of Administrative Hearings, Florida Number: 88-004552 Latest Update: Sep. 20, 1989

Findings Of Fact This case arose upon notification to the Petitioner, Pamela Y. Dennis, that she had received a failing grade on the Cosmetology Instructors Examination. Specifically, she received 70 percent on the examination and needed a 75 to pass. She chose to contest the agency's action in according her a failing score by challenging her grade on questions 2, 5, 7, 12, 16, and 17 on the lecture portion of that examination. She ultimately requested and received a formal proceeding before this Hearing Officer. The Petitioner testified on her own behalf. In essence, the testimony consisted of her contention that, as to the contested questions, some of the examiners gave her scores of "yes", meaning that she had answered correctly, while others gave her scores of "no" and still others gave her partial credit. Her complaint is, in essence, that if one examiner gave her a "yes" result on a question, that grade should have been accepted so as to give her full credit for the question. She also complained that the comments the examiners gave, when they accorded a "no" score or a partial score for a particular question, differed in nature and that some examiners, as to the same question, made no comment at all. She contends that the examiners' scorings were inconsistent with each other on each of the questions at issue and that therefore, if any examiner gave her a "yes" answer, she should have received full credit for the answer to that question. In fact, however, as established by Dr. Eunice Loewe, Ph.D., the examination developer, the Petitioner was accorded credit for all yes or "y" grades given her by the examiners on the questions at issue, as well as all "p", or partial credit, scores given her. She received no credit at all on a given question only if none of the examiners gave her a partial or a "yes" score on that question. In other words, the scoring was not done by grading according to what the majority of examiners gave her for a particular question. If that had been done, she would actually have scored lower than she did, according to Dr. Loewe, because under a "majority vote" method of scoring, a majority of examiners giving her a "no" grade on a given question, even when some examiners had graded her with a "yes", would prevent her from getting any credit at all for that question. That was not done in the grading, according to Dr. Loewe; rather, the Petitioner was accorded credit for all partial or "yes" answers. The examiners are currently teachers of the same subject matter. They are required to make comment on a candidate's response to a particular question or a portion of the examination when they give her a "partial" or a "no" grade on that question. It is not required that all comments be the same, because this is a demonstration-type examination where the grading, to a large extent, must by necessity be somewhat subjective. Because this is a physical demonstration type of examination, the agency, in an attempt to be totally fair with candidates, required the examiners to make these comments if they were not going to give her a "yes" grade on a particular question or area of classroom presentation. In summary, Dr. Loewe established that the Petitioner was a "borderline candidate" who had close to a passing score and upon whose grading the examiners were split as to the questions involved. 
The point is that if the questions involved had been graded by a majority vote of the examiners, with no credit given for any question unless a majority of the examiners voted "yes", the Petitioner would have received a lower score. The fact that the examiners were split on these questions did not in itself penalize the Petitioner, because she was given credit for partial or "yes" grades by the examiners. There was no showing that the individual examiners' grades of "no", "yes" or "p" (for partial) on any of the questions at issue were incorrect grades. In fact, a review of the score sheets of the various examiners, in evidence as Petitioner's Composite Exhibit 2, reveals that although some examiners made no comment on questions where others did, and some comments differed, the majority of the examiners grading "no" or "partial" made similar comments about the same question. There was simply no showing by the Petitioner that the method of administration of this examination was unfair, nor that the content of the questions was unfair or biased. It was not demonstrated that the examiners had unfairly scored the lecture demonstration portion of the examination. Indeed, the Petitioner was a borderline candidate and came close to passing the examination. She had a complete lesson plan as required and presented a fairly thorough lecture. She did not follow the lesson plan closely enough, however, according to a number of the examiners. Their comments indicated that she seemed to "jump around", and other comments indicated that she relied too heavily on her notes and read from them to a great extent in making her lecture presentation. These comments, and others reflected in the testimony of record and the score sheet exhibits, show that the examiners did indeed give careful consideration to her presentation and that they have not been shown to have graded unfairly or in a biased manner.
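To make the contrast concrete, here is a small, purely hypothetical sketch of the grading approaches discussed above. The examiner ratings, the half-credit value assigned to a "partial", and the per-question averaging are all assumptions invented for illustration; the findings do not specify how partial credit was weighted or aggregated.

```python
# Hypothetical illustration only: the ratings, the 0.5 value of a "partial,"
# and per-question averaging are assumptions, not facts from the record.
ratings = {  # three examiners' ratings on the six contested questions
    "Q2":  ["y", "n", "p"],
    "Q5":  ["n", "n", "y"],
    "Q7":  ["p", "p", "n"],
    "Q12": ["y", "n", "n"],
    "Q16": ["n", "y", "p"],
    "Q17": ["n", "n", "n"],
}
VALUE = {"y": 1.0, "p": 0.5, "n": 0.0}  # assumed credit per rating

def credit_any_yes_full(rates):
    """Petitioner's theory: a single "yes" from any examiner earns full credit."""
    return 1.0 if "y" in rates else 0.0

def credit_as_described(rates):
    """Approximates the method described: every "y" or "p" earns some credit (averaging assumed)."""
    return sum(VALUE[r] for r in rates) / len(rates)

def credit_majority_vote(rates):
    """The counterfactual Dr. Loewe described: credit only if a majority voted "yes"."""
    return 1.0 if sum(r == "y" for r in rates) > len(rates) / 2 else 0.0

for label, method in (("petitioner's theory", credit_any_yes_full),
                      ("method as described", credit_as_described),
                      ("majority vote", credit_majority_vote)):
    total = sum(method(r) for r in ratings.values())
    print(f"{label:>20}: {total:.2f} of 6 possible points")
```

On these invented numbers the majority-vote method yields the lowest total, which is the point the findings make: the split among examiners did not penalize a borderline candidate under the method actually used.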

Recommendation Having considered the foregoing Findings of Fact and Conclusions of Law, the evidence of record and the candor and demeanor of the witnesses, it is therefore RECOMMENDED that a Final Order be entered by the Respondent agency determining that the score accorded Pamela Y. Dennis on the Cosmetology Instructors examination was an accurate score and that her petition be dismissed. DONE and ENTERED this 20th day of September, 1989, at Tallahassee, Florida. P. MICHAEL RUFF Hearing Officer Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-1550 (904) 488-9675 Filed with the Clerk of the Division of Administrative Hearings this 20th day of September, 1989. COPIES FURNISHED: Pamela Y. Dennis, pro se 2221 Dumfries Circle Jacksonville, Florida 32216 H. Reynolds Sampson Deputy General Counsel Department of Professional Regulation Suite 60 1940 North Monroe Street Tallahassee, Florida 32399-0792 Kenneth E. Easley, General Counsel Department of Professional Regulation Suite 60 1940 North Monroe Street Tallahassee, Florida 32399-0792 Myrtle Aase Executive Director Board of Cosmetology Department of Professional Regulation Suite 60 1940 North Monroe Street Tallahassee, Florida 32399-0792

Florida Laws (1) 120.57
# 4
LUCKY GRAHAM vs DEPARTMENT OF HEALTH AND REHABILITATIVE SERVICES, 92-003892 (1992)
Division of Administrative Hearings, Florida Filed:Miami, Florida Jun. 25, 1992 Number: 92-003892 Latest Update: Nov. 04, 1993

The Issue At issue in these proceedings is whether petitioner suffers from "retardation," as that term is defined by Section 393.063(41), Florida Statutes, and therefore qualifies for services under Chapter 393, Florida Statutes, the "Developmental Disabilities Prevention and Community Services Act."

Findings Of Fact Petitioner, Lucky Graham (Lucky), was born September 18, 1973, and was, at the time of hearing, 19 years of age. Lucky has resided his entire life with his grandmother, Susie Griggs, in Miami, Dade County, Florida, and has been effectively abandoned by his mother and father. When not attending the Dorsey Skill Center, a program offered by the Dade County Public School system to develop minimal skills necessary to acquire a vocational skill, Lucky spends most of his free time alone in his room, and does not interact socially or play with other children beyond his immediate family. Notwithstanding, Lucky does interact with members of his immediate family; attend family outings; contribute to minor chores around the house such as hanging laundry, washing dishes and mopping floors; maintain himself and his room in a neat manner; and prepare food and drink for himself, at least to some unspecified extent. Lucky cannot, however, without supervision, shop or make change, but can utilize public transportation to and from Dorsey Skill Center without supervision. Lucky's limited social skills are, likewise, apparent at the Dorsey Skill Center, where his interaction with other students is limited. Lucky's functional performance, as opposed to his learning ability, is also apparent from his past performance at school, where it was rated at the first grade level. As such, he is unable to read or write to any significant extent and cannot perform mathematical calculations beyond the most basic addition and subtraction; i.e., he cannot add two-digit numbers that require carrying and cannot perform subtraction that requires borrowing from another number (regrouping). He did, however, complete a vocational training program for auto body repair and was, as of October 8, 1992, and apparently at the time of hearing, enrolled in an auto mechanics program at Dorsey Skill Center. (Tr. p 46, Petitioner's Exhibit 9). The quality of Lucky's performance was not, however, placed of record. Current and past testing administered through the Dade County School System, for functional ability (vocational ability), as opposed to learning ability, evidences that Lucky functions on a level comparable to mildly mentally retarded individuals. In this regard, he was found to be impulsive, disorganized and lacking concentration, and to be most appropriately placed in a sheltered workshop environment with direct supervision and below competitive employment capacity. During the course of his life, Lucky has been administered a number of intelligence assessment tests. In July 1977, at age 3 years 10 months, he was administered the Stanford-Binet by the University of Miami Child Development Center and achieved an IQ score of 55. Lucky was described as "hesitant in coming into the testing room but . . . fairly cooperative throughout." Thereafter, he was administered the following intellectual assessment instruments by the Dade County Public Schools prior to his eighteenth birthday: in March 1980, at age 6 years 6 months, he was administered the Wechsler Intelligence Scale for Children--Revised (WISC-R) and received a verbal score of 65, a performance score of 55, and a full scale IQ score of 56; and, in October 1984, at age 11 years 1 month, he was administered the WISC-R and received a verbal score of 58, a performance score of 58, and a full scale IQ score of 54. During these testing sessions, Lucky was observed to have been minimally cooperative, with a low frustration level, and highly distractible. 
If reliable, such tests would reflect a performance which was two or more standard deviations from the mean, and within the mild range of mental retardation. While not administered contemporaneously with the intellectual assessment instruments, the Vineland Adaptive Behavior Scales (Vineland) was administered to Lucky through the Dade County Public Schools in January 1988, when he was 14 years 4 months. The results of such test reflected an adaptive behavior score of 51, and an age equivalent of 5 years. Such result would indicate a deficit in Lucky's adaptive behavior skills compared with other children his age. On August 8, 1991, pursuant to an order of the Circuit Court, Dade County, Florida, Lucky was evaluated by Walter B. Reid, Ph.D., a clinical psychologist associated with the Metropolitan Dade County Department of Human Resources, Office of Rehabilitative Services, Juvenile Court Mental Health Clinic. Dr. Reid administered the Wechsler Adult Intelligence Scale (WAIS) to Lucky, whose cooperation during such testing was observed to be good, and he achieved a verbal score of 68, a performance score of 70, and a full scale IQ of __. Dr. Reid concluded that Lucky suffered mild mental retardation and opined: . . . his [Lucky's] abilities should be thoroughly assessed by the Division of Vocational Rehabilitation as it is my opinion . . . this young man can function in a sheltered workshop and live in a group adult facility . . . Plans should be undertaken immediately to get this youth into appropriate training as soon as he gets out of high school in order for him to learn skills that will make it possible for him to work and to learn skills in the area of socialization. This is a pleasant young man, who, in my opinion, has the capability of working and living semi-independently. Thereafter, on August 26, 1991, apparently at the request of the Circuit Court, Juvenile Division, Lucky was assessed by the Department pursuant to the "Developmental Disabilities Prevention and Community Services Act," Chapter 393, Florida Statutes, to determine whether he was eligible for services as a consequence of a disorder or syndrome which was attributable to retardation. The Wechsler Adult Intelligence Scale-Revised (WAIS-R) was administered to Lucky, who was described as cooperative and motivated during the session, and he achieved a verbal score of 71, a performance score of 78, and a full scale IQ of 73. This placed Lucky within the borderline range of intellectual functioning, but not two or more standard deviations from the mean score of the WAIS-R. A subtest analysis revealed strengths in "the putting together" of concrete forms and psychomotor speed. Difficulties were noticed in verbal conceptualization and language abilities. In addition to the WAIS-R, Lucky was also administered the Vineland Adaptive Behavior Scales. He obtained a communication domain standard score of 30, a daily living skills domain standard score of 90, and a socialization domain score of 63. His Adaptive Behavior Composite Score was 56. This score placed Lucky within the Moderate range of adaptive functioning. Based on the foregoing testing, the Department, following review by and the recommendation of its Diagnosis and Evaluation Team, advised the court that Lucky was not eligible for services of the Developmental Services Program Office under the category of mental retardation. 
The basic reason for such denial was Lucky's failure to test two or more standard deviations from the mean score of the WAIS-R which was administered on August 26, 1991, as well as the failure of the Vineland to reliably reflect a significant deficit in adaptive behavior. Also considered was the questionable reliability of prior testing.1/ Following the Department's denial, a timely request for formal hearing pursuant to Section 120.57(1), Florida Statutes, was filed on behalf of Lucky to review, de novo, the Department's decision. Here, resolution of the issue as to whether Lucky has been shown to suffer from "retardation" as that term is defined by law, discussed infra, resolves itself to a determination of the reliability of the various tests that have been administered to Lucky, as well as the proper interpretation to be accorded those tests. In such endeavor, the testimony of Bill E. Mosman, Ph.D., Psychology, which was lucid, cogent, and credible, has been accorded deference. In the opinion of Dr. Mosman, accepted protocol dictates that an IQ score alone, derived from an intelligence assessment instrument, is not a reliable indicator of mental retardation unless it is a valid reliable score. Such opinion likewise prevails with regard to adaptive behavior instruments. Here, Dr. Mosman opines that the IQ scores attributable to Lucky are not a reliable indication of mental retardation because Lucky's performance on most of the various parts of the tests reflects a performance level above that ascribed to those suffering retardation. In the opinion of Dr. Mosman, which is credited, the full scale scores ascribed to Lucky were artificially lowered because of his deficiencies in only a few parts of the tests. These deficiencies are reasonably attributable to a learning disability and, to a lesser extent, certain deficits in socialization, and not mental retardation. Consistent with such conclusion is the lack of cooperation and motivation exhibited by Lucky during earlier testing, and the otherwise inexplicable rise in his full scale IQ score over prior testing. Consequently, the test results do not reliably reflect a disorder attributable to retardation. The same opinion prevails regarding Lucky's performance on the adaptive behavior instruments which, when examined by their constituent parts, demonstrates that Lucky scores lower in the areas consistent with learning disabilities as opposed to retardation. In sum, although Lucky may be functioning at a low intelligence level, he is not mentally retarded. 2/
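The eligibility determination above turns on whether a full scale IQ falls "two or more standard deviations from the mean" of the instrument. The following is a minimal sketch assuming the conventional Wechsler scaling of a mean of 100 and a standard deviation of 15, figures not stated in the order itself; on that assumption the cutoff is a full scale score of 70, which the 1991 WAIS-R score of 73 does not reach while the earlier WISC-R scores of 56 and 54 do.

```python
# Minimal sketch of the "two or more standard deviations from the mean" test,
# assuming the conventional Wechsler scaling (mean 100, standard deviation 15);
# those parameters are not stated in the order itself.
WECHSLER_MEAN, WECHSLER_SD = 100, 15
CUTOFF = WECHSLER_MEAN - 2 * WECHSLER_SD  # 70

full_scale_scores = {"1980 WISC-R": 56, "1984 WISC-R": 54, "1991 WAIS-R": 73}
for test, iq in full_scale_scores.items():
    sd_below = (WECHSLER_MEAN - iq) / WECHSLER_SD
    verdict = "meets" if sd_below >= 2 else "does not meet"
    print(f"{test}: full scale {iq} is {sd_below:.2f} SD below the mean "
          f"(cutoff {CUTOFF}) and {verdict} the two-SD criterion")
```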

Recommendation Based on the foregoing findings of fact and conclusions of law, it is RECOMMENDED that a final order be rendered which denies petitioner's application for services for the developmentally disabled under the category of mental retardation. DONE AND ORDERED in Tallahassee, Leon County, Florida, this 10th day of August 1993. WILLIAM J. KENDRICK Hearing Officer Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-1550 (904) 488-9675 Filed with the Clerk of the Division of Administrative Hearings this 10th day of August, 1993.

Florida Laws (3) 120.57, 393.063, 393.065
# 5
DIVISION OF REAL ESTATE vs HOWARD SARVEN WILLIAMS, 98-003520 (1998)
Division of Administrative Hearings, Florida Filed:Shalimar, Florida Aug. 03, 1998 Number: 98-003520 Latest Update: Jul. 15, 2004

The Issue The issue is whether Respondent's license as a real estate salesperson should be disciplined for the reasons given in the Administrative Complaint filed on May 20, 1998.

Findings Of Fact Based upon all of the evidence, the following findings of fact are determined: In this disciplinary action, Petitioner, Department of Business and Professional Regulation, Division of Real Estate (Division), seeks to impose penal sanctions on the license of Respondent, Howard Sarven Williams, a licensed real estate salesperson, on the ground that he failed to disclose that he had pled guilty to a crime when he filed his application for licensure in September 1994. In his Election of Rights Form filed with the Division, Respondent disputed this allegation, contended that his incorrect response "was done with the mistaken belief that it could be answered that way," and requested a formal hearing. Respondent is subject to the regulatory jurisdiction of the Division, having been issued license no. SL 0617682 by the Division in late 1994. The license remained inactive from January 1, 1995, until February 8, 1995; on that date, Respondent became an active salesperson with J.A.S. Coastal Realty, Inc. in Destin, Florida, until June 20, 1998. Between then and December 1998, he had no employing broker. Whether he is currently employed as a realtor is not of record. It is undisputed that on November 9, 1994, Respondent pled no contest to 12 counts of keeping a gambling house, a felony of the third degree. The offenses related to the illicit placement by Respondent (and two other individuals now deceased) of video gambling machines in approximately 10 VFW clubs and American Legion posts in Northwest Florida. On November 10, 1994, the court withheld adjudication of guilt; it placed Respondent on 10 years' supervised probation; and it ordered him to pay a fine and investigative costs totaling in excess of $25,000.00. Respondent was arrested in late 1993. On September 23, 1994, or before he entered his plea of no contest, Respondent completed and filed with the Division an application for licensure as a real estate salesperson. Question 9 on the application asks in part the following: Have you ever been convicted of a crime, found guilty, or entered a plea of guilty or nolo contendere (no contest), even if adjudication was withheld? At the time the application was filled out, Respondent had not yet entered his plea of no contest. Therefore, he properly answered the foregoing question in the negative. Although Respondent was statutorily required to notify the Commission in writing of this matter within 30 days after entering his plea, he has not been charged with violating that statute. The record does not reveal how the Division learned that Respondent had pled no contest to the charges. In any event, in March 1998, or more than three years later, a Division investigator interviewed Respondent who readily admitted that he had pled no contest to the charges, that he was still on probation, and that he was making monthly payments on the substantial fine imposed in 1994. The issuance of the Administrative Complaint followed. Although the evidence does not support the charge, as narrowly drawn in the Administrative Complaint, it should be noted that Respondent says he mistakenly assumed (without the advice of counsel) that because he had pled no contest and adjudication of guilt was withheld, he had not been convicted of a crime. Thus, he believed that his record was clean. At the same time, the plea is a matter of public record, and Respondent did not intend to make a fraudulent statement in order to secure his license.

Recommendation Based on the foregoing Findings of Fact and Conclusions of Law, it is RECOMMENDED that the Florida Real Estate Commission enter a final order dismissing the Administrative Complaint, with prejudice. DONE AND ENTERED this 23rd day of November, 1999, in Tallahassee, Leon County, Florida. DONALD R. ALEXANDER Administrative Law Judge Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-3060 (850) 488-9675 SUNCOM 278-9675 Fax Filing (850) 921-6847 www.doah.state.fl.us Filed with the Clerk of the Division of Administrative Hearings this 23rd day of November, 1999. COPIES FURNISHED: Herbert S. Fecker, Director Division of Real Estate Department of Business and Professional Regulation Post Office Box 1900 Orlando, Florida 32802-1900 Laura McCarthy, Esquire Department of Business and Professional Regulation Post Office Box 1900 Orlando, Florida 32802-1900 Drew S. Pinkerton, Esquire Post Office Box 2379 Fort Walton Beach, Florida 32549-2379 Barbara D. Auger, General Counsel Department of Business and Professional Regulation 1940 North Monroe Street Tallahassee, Florida 32399-0792

Florida Laws (3) 120.569, 120.57, 475.25
# 6
NATURE'S WAY NURSERY OF MIAMI, INC. vs FLORIDA DEPARTMENT OF HEALTH, AN EXECUTIVE BRANCH AGENCY OF THE STATE OF FLORIDA, 18-000721 (2018)
Division of Administrative Hearings, Florida Filed:Tallahassee, Florida Feb. 12, 2018 Number: 18-000721 Latest Update: Jul. 16, 2018

The Issue The issue to be decided is whether Petitioner meets the "within-one-point" condition of eligibility for licensure as a medical marijuana treatment center under section 381.986(8)(a)2.a., Florida Statutes.

Findings Of Fact BACKGROUND AND PARTIES Respondent Florida Department of Health (the "Department" or "DOH") is the agency responsible for administering and enforcing laws that relate to the general health of the people of the state. The Department's jurisdiction includes the state's medical marijuana program, which the Department oversees. Art. X, § 29, Fla. Const.; § 381.986, Fla. Stat. Enacted in 2014, section 381.986, Florida Statutes (2015) (the "Noneuphoric Cannabis Law"), legalized the use of low-THC cannabis by qualified patients having specified illnesses, such as cancer and debilitating conditions that produce severe and persistent seizures and muscle spasms. The Noneuphoric Cannabis Law directed the Department to select one dispensing organization ("DO") for each of five geographic areas referred to as the northwest, northeast, central, southwest, and southeast regions of Florida. Once licensed, a regional DO would be authorized to cultivate, process, and sell medical marijuana, statewide, to qualified patients. Section 381.986(5)(b), Florida Statutes (2015), prescribed various conditions that an applicant would need to meet to be licensed as a DO, and it required the Department to "develop an application form and impose an initial application and biennial renewal fee." DOH was, further, granted authority to "adopt rules necessary to implement" the Noneuphoric Cannabis Law. § 381.986(5)(d), Fla. Stat. (2015). Accordingly, the Department's Office of Compassionate Use ("OCU"), which is now known as the Office of Medical Marijuana Use, adopted rules under which a nursery could apply for a DO license. Incorporated by reference in these rules is a form of an Application for Low-THC Cannabis Dispensing Organization Approval ("Application"). See Fla. Admin. Code R. 64-4.002 (incorporating Form DH9008-OCU-2/2015). To apply for one of the initial DO licenses, a nursery needed to submit a completed Application, including the $60,063.00 application fee, no later than July 8, 2015.1/ See Fla. Admin. Code R. 64-4.002(5). Petitioner Nature's Way of Miami, Inc. ("Nature's Way"), is a nursery located in Miami, Florida, which grows and sells tropical plants to big box retailers throughout the nation. Nature's Way timely applied to the Department in 2015 for licensure as a DO in the southeast region. THE 2015 DO APPLICATION CYCLE Although the current dispute arises from the Department's intended denial of Nature's Way's October 19, 2017, application for registration as a medical marijuana treatment center ("MMTC"), which is the name by which DOs are now known, the licensing criterion at the heart of this matter, the "One Point Condition," can be satisfied only by a nursery, such as Nature's Way, whose 2015 application for licensure as a DO was evaluated, scored, and not approved as of the enactment, in 2017, of legislation that substantially overhauled the Noneuphoric Cannabis Law. See Ch. 2017-232, Laws of Fla. The current iteration of section 381.986, in effect as of this writing, will be called the "Medical Marijuana Law." The One Point Condition operates retroactively in that it establishes a previously nonexistent basis for licensure that depends upon pre-enactment events. This is analogous to the legislative creation of a new cause of action, involving as it does the imposition of a new duty (to issue licenses) on the Department and the bestowal of a new right (to become licensed) on former applicants based on their past actions. 
The Department contends that all of the material facts surrounding these pre-enactment events have been conclusively established due to some combination of (i) Nature's Way's waiver of hearing rights, (ii) administrative finality, and (iii) the retroactive reach of the Medical Marijuana Law. Nature's Way, in contrast, maintains that there remain material facts subject to genuine dispute. The undersigned rejects the Department's argument that all of the facts material to Nature's Way's current application are beyond dispute. In brief, the undersigned holds that the One Point Condition places new legal significance on two categories of pre-enactment facts, namely (i) historical and ultimate facts that have never been determined with finality in a judicial or quasi-judicial proceeding and thus remain subject to dispute; and (ii) facts, both historical and ultimate, that were a critical and necessary part of the final agency action determining an applicant's substantial interests in obtaining a DO license under the Noneuphoric Cannabis Law. Because facts that have been established, quasi-judicially, with finality between parties (hereafter, "adjudicated facts") are binding on those parties in subsequent litigation under the doctrine of administrative finality, they would not be subject to genuine dispute in a proceeding to determine the substantial interests of an applicant seeking licensure under the One Point Condition who was a party to the prior proceeding. In sum, because facts surrounding the inaugural competition under the Noneuphoric Cannabis Law for regional DO licenses are material to the determination of whether an applicant for licensure as an MMTC under the Medical Marijuana Law meets the One Point Condition, these seemingly unrelated matters must be recounted, and found, herein. To understand the issues at hand, it is essential first to become familiar with the evaluation and scoring of, and the agency actions with respect to, the applications submitted during the 2015 DO application cycle. The Competitive, Comparative Evaluation As stated in the Application, OCU viewed its duty to select five regional DOs as requiring OCU to choose "the most dependable, most qualified" applicant in each region "that can consistently deliver high-quality" medical marijuana. For ease of reference, such an applicant will be referred to as the "Best" applicant for short. Conversely, an applicant not chosen by OCU as "the most dependable, most qualified" applicant in a given region will be called, simply, "Not Best." Given the limited number of available DO licenses under the Noneuphoric Cannabis Law, the 2015 application process necessarily entailed a competition. As the Application explained, applicants were not required to meet any "mandatory minimum criteria set by the OCU," but would be evaluated comparatively in relation to the "other Applicants" for the same regional license, using criteria "drawn directly from the Statute." Clearly, the comparative evaluation would require the item-by-item comparison of competing applicants, where the "items" being compared would be identifiable factors drawn from the statute and established in advance. 
Contrary to the Department's current litigating position, however, it is not an intrinsic characteristic of a comparative evaluation that observations made in the course thereof must be recorded using only comparative or superlative adjectives (e.g., least qualified, qualified, more qualified, most qualified).2/ Moreover, nothing in the Noneuphoric Cannabis Law, the Application, or Florida Administrative Code Rule 64-4.002 stated expressly, or necessarily implied, that in conducting the comparative evaluation, OCU would not quantify (express numerically an amount denoting) the perceived margins of difference between competing applications. Quite the opposite is true, in fact, because, as will be seen, rule 64-4.002 necessarily implied, if it did not explicitly require, that the applicants would receive scores which expressed their relative merit in interpretable intervals. Specifically, the Department was required to "substantively review, evaluate, and score" all timely submitted and complete applications. Fla. Admin. Code R. 64-4.002(5)(a). This evaluation was to be conducted by a three-person committee (the "Reviewers"), each member of which had the duty to independently review and score each application. See Fla. Admin. Code R. 64-4.002(5)(b). The applicant with the "highest aggregate score" in each region would be selected as the Department's intended licensee for that region. A "score" is commonly understood to be "a number that expresses accomplishment (as in a game or test) or excellence (as in quality) either absolutely in points gained or by comparison to a standard." See "Score," Merriam-Webster.com, http://www.merriam-webster.com (last visited May 30, 2018). Scores are expressed in cardinal numbers, which show quantity, e.g., how many or how much. When used as a verb in this context, the word "score" plainly means "to determine the merit of," or to "grade," id., so that the assigned score should be a cardinal number that tells how much quality the graded application has as compared to the competing applications. The language of the rule leaves little or no doubt that the Reviewers were supposed to score the applicants in a way that quantified the differences between them, rather than with superlatives such as "more qualified" and "most qualified" (or numbers that merely represented superlative adjectives). By rule, the Department had identified the specific items that the Reviewers would consider during the evaluation. These items were organized around five subjects, which the undersigned will refer to as Topics. The five Topics were Cultivation, Processing, Dispensing, Medical Director, and Financials. Under the Topics of Cultivation, Processing, and Dispensing were four Subtopics (the undersigned's term): Technical Ability; Infrastructure; Premises, Resources, Personnel; and Accountability. In the event, the 12 Topic-Subtopic combinations (e.g., Cultivation-Technical Ability, Cultivation- Infrastructure), together with the two undivided Topics (i.e., Medical Director and Financials), operated as 14 separate evaluation categories. The undersigned refers to these 14 categories as Domains. The Department assigned a weight (by rule) to each Topic, denoting the relative importance of each in assessing an applicant's overall merit. The Subtopics, in turn, were worth 25% of their respective Topics' scores, so that a Topic's raw or unadjusted score would be the average of its four Subtopics' scores, if it had them. 
The 14 Domains and their associated weights are shown in the following table:

CULTIVATION 30%
1. Cultivation – Technical Ability 25% out of 30%
2. Cultivation – Infrastructure 25% out of 30%
3. Cultivation – Premises, Resources, Personnel 25% out of 30%
4. Cultivation – Accountability 25% out of 30%
PROCESSING 30%
5. Processing – Technical Ability 25% out of 30%
6. Processing – Infrastructure 25% out of 30%
7. Processing – Premises, Resources, Personnel 25% out of 30%
8. Processing – Accountability 25% out of 30%
DISPENSING 15%
9. Dispensing – Technical Ability 25% out of 15%
10. Dispensing – Infrastructure 25% out of 15%
11. Dispensing – Premises, Resources, Personnel 25% out of 15%
12. Dispensing – Accountability 25% out of 15%
13. MEDICAL DIRECTOR 5%
14. FINANCIALS 20%

If there were any ambiguity in the meaning of the word "score" as used in rule 64-4.002(5)(b), the fact of the weighting scheme removes all uncertainty, because in order to take a meaningful percentage (or fraction) of a number, the number must signify a divisible quantity, or else the reduction of the number, x, to say, 20% of x, will not be interpretable. Some additional explanation here might be helpful. If the number 5 is used to express how much of something we have, e.g., 5 pounds of flour, we can comprehend the meaning of 20% of that value (1 pound of flour). On the other hand, if we have coded the rank of "first place" with the number 5 (rather than, e.g., the letter A, which would be equally functional as a symbol), the meaning of 20% of that value is incomprehensible (no different, in fact, than the meaning of 20% of A). To be sure, we could multiply the number 5 by 0.20 and get 1, but the product of this operation, despite being mathematically correct (i.e., true in the abstract, as a computational result), would have no contextual meaning. This is because 20% of first place makes no sense. Coding the rank of first place with the misleading symbol of "5 points" would not help, either, because the underlying referent——still a position, not a quantity——is indivisible no matter what symbol it is given.3/ We can take this analysis further. The weighting scheme clearly required that the points awarded to an applicant for each Topic must contribute a prescribed proportionate share both to the applicant's final score per Reviewer, as well as to its aggregate score. For example, an applicant's score for Financials had to be 20% of its final Reviewer scores and 20% of its aggregate score, fixing the ratio of unweighted Financials points to final points (both Reviewer and aggregate) at 5:1. For this to work, a point scale having fixed boundaries had to be used, and the maximum number of points available for the final scores needed to be equal to the maximum number of points available for the raw (unweighted) scores at the Topic level. In other words, to preserve proportionality, if the applicants were scored on a 100-point scale, the maximum final score had to be 100, and the maximum raw score for each of the five Topics needed to be 100, too. The reasons for this are as follows. If there were no limit to the number of points an applicant could earn at the Topic level (like a baseball game), the proportionality of the weighting scheme could not be maintained; an applicant might run up huge scores in lower-weighted Topics, for example, making them proportionately more important to its final score than higher-weighted Topics. 
Similarly, if the maximum number of points available at the Topic level differed from the maximum number of points available as a final score, the proportionality of the weighting scheme (the prescribed ratios) would be upset, obviously, because, needless to say, 30% of, e.g., 75 points is not equal to 30% of 100 points. If a point scale is required to preserve proportionality, and it is, then so, too, must the intervals between points be the same, for all scores, in all categories, or else the proportionality of the weighting scheme will fail. For a scale to be uniform and meaningful, which is necessary to maintain the required proportionality, the points in it must be equidistant from each other; that is, the interval between 4 and 5, for example, needs to be the same as the interval between 2 and 3, and the distance between 85 and 95 (if the scale goes that high) has to equal that between 25 and 35.4/ When the distances between values are known, the numbers are said to express interval data.5/ Unless the distances between points are certain and identical, the prescribed proportions of the weighting scheme established in rule 64-4.002 will be without meaning. Simply stated, there can be no sense of proportion without interpretable intervals. We cannot say that a 5:1 relationship exists between two point totals (scores) if we have no idea what the distance is between 5 points and 1 point.6/ The weighting system thus necessarily implied that the "scores" assigned by the Reviewers during the comparative evaluation would be numerical values (points) that (i) expressed quantity; (ii) bore some rational relationship to the amount of quality the Reviewer perceived in an applicant in relation to the other applicants; and (iii) constituted interval data. In other words, the rule unambiguously required that relative quality be counted (quantified), not merely coded. The Scoring Methodology: Interval Coding In performing the comparative evaluation of the initial applications filed in 2015, the Reviewers were required to use Form DH8007-OCU-2/2015, "Scorecard for Low-THC Cannabis Dispensing Organization Selection" (the "Scorecard"), which is incorporated by reference in rule 64-4.002(5)(a). There are no instructions on the Scorecard. The Department's rules are silent as to how the Reviewers were supposed to score applications using the Scorecard, and they provide no process for generating aggregate scores from Reviewer scores. To fill these gaps, the Department devised several policies that governed its free-form decision-making in the run-up to taking preliminary agency action on the applications. Regarding raw scores, the Department decided that the Reviewers would sort the applications by region and then rank the applications, from best to worst, on a per-Domain basis, so that each Reviewer would rank each applicant 14 times. An applicant's raw Domanial score would be its position in the ranking, from 1 to x, where x was both (i) equal to the number of applicants within the region under review and (ii) the number assigned to the rank of first place (or Best). In other words, the Reviewer's judgments as to the descending order of suitability of the competing applicants, per Domain, were symbolized or coded with numbers that the Department called "rank scores," and which were thereafter used as the applicants' raw Domanial scores. 
To be more specific, in a five-applicant field such as the southeast region, the evaluative judgments of the Reviewers were coded as follows:
Evaluative Judgment: Symbol ("Rank Score")
Best qualified applicant ("Best"): 5 points
Less qualified than the best qualified applicant, but better qualified than all other applicants ("Second Best"): 4 points
Less qualified than two better qualified applicants, but better qualified than all other applicants ("Third Best"): 3 points
Less qualified than three better qualified applicants, but better qualified than all other applicants ("Fourth Best"): 2 points
Less qualified than four better qualified applicants ("Fifth Best"): 1 point
The Department's unfortunate decision to code the Reviewers' qualitative judgments regarding positions in rank orders with symbols that look like quantitative judgments regarding amounts of quality led inexorably to extremely misleading results. The so-called "rank scores" give the false impression of interval data, tricking the consumer (and evidently the Department, too) into believing that the distance between scores is certain and the same; that, in other words, an applicant with a "rank score" of 4 is 2 points better than an applicant with a "rank score" of 2. If this deception had been intentional (and, to be clear, there is no evidence it was), we could fairly call it fraud. Even without bad intent, the decision to code positions in ranked series with "scores" expressed as "points" was a colossal blunder that turned the scoring process into a dumpster fire. Before proceeding, it must be made clear that an applicant's being ranked Best in a Domain meant only that, as the highest-ranked applicant, it was deemed more suitable, by some unknown margin, than all the others within the group. By the same token, to be named Second Best meant only that this applicant was less good, in some unknown degree, than the Best applicant, and better, in some unknown degree, than the Third Best and remaining, lower-ranked applicants. The degree of difference in suitability between any two applicants in any Domanial ranking might have been a tiny sliver or a wide gap, even if they occupied adjacent positions, e.g., Second Best and Third Best. The Reviewers made no findings with respect to degrees of difference. Moreover, it cannot truthfully be claimed that the interval between, say, Second Best and Third Best is the same as that between Third Best and Fourth Best, for there exists no basis in fact for such a claim. In sum, the Department's Domanial "rank scores" merely symbolized the applicants' positions in sets of ordered applications. Numbers which designate the respective places (ranks) occupied by items in an ordered list are called ordinal numbers. The type of non-metric data that the "rank scores" symbolize is known as ordinal data, meaning that although the information can be arranged in a meaningful order, there is no unit or meter by which the intervals between places in the ranking can be measured. Because it is grossly misleading to refer to positions in a ranking as "scores" counted in "points," the so-called "rank scores" will hereafter be referred to as "Ordinals"——a constant reminder that we are working with ordinal data. This is important to keep in mind because, as will be seen, there are limits on the kinds of mathematical manipulation that can appropriately be carried out with ordinal data.
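What the coding discards can be seen in a short Python sketch; the underlying suitability figures below are wholly invented (no such measurements exist in the Evaluation Data) and serve only to show that radically different fields of applicants collapse into identical rank scores:

    # Two invented fields of five applicants. The "suitability" numbers are purely
    # hypothetical; they exist only to show what rank coding throws away.
    def rank_scores(field):
        # Code each applicant with its rank: worst = 1 point, ..., best = 5 points.
        ordered = sorted(field, key=lambda item: item[1])  # worst to best
        return {name: position + 1 for position, (name, _) in enumerate(ordered)}

    tight_race = [("A", 4.90), ("B", 4.89), ("C", 4.88), ("D", 4.87), ("E", 4.86)]
    blowout    = [("A", 4.90), ("B", 3.10), ("C", 2.00), ("D", 1.40), ("E", 1.01)]

    print(rank_scores(tight_race))  # {'E': 1, 'D': 2, 'C': 3, 'B': 4, 'A': 5}
    print(rank_scores(blowout))     # {'E': 1, 'D': 2, 'C': 3, 'B': 4, 'A': 5}
    # The coded "points" are identical, yet in the first field the gap between Best
    # and Fifth Best is a sliver (0.04) and in the second it is a chasm (3.89).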
The Department's policy of coding positions in a rank order with "rank scores" expressed as "points" will be called the "Interval Coding Policy." In conducting the evaluation, the Reviewers followed the Interval Coding Policy. The Computational Methodology: Interval Statements and More Once the Reviewers finished evaluating and coding the applications, the evaluative phase of the Department's free-form process was concluded. The Reviewers had produced a dataset of Domanial Ordinals——42 Domanial Ordinals for each applicant to be exact——that collectively comprised a compilation of information, stored in the scorecards. This universe of Domanial Ordinals will be called herein the "Evaluation Data." The Department would use the Evaluation Data in the next phase of its free-form process as grounds for computing the applicants' aggregate scores. Rule 64-4.002(5)(b) provides that "scorecards from each reviewer will be combined to generate an aggregate score for each application. The Applicant with the highest aggregate score in each dispensing region shall be selected as the region's Dispensing Organization." Notice that the rule here switches to the passive voice. The tasks of (i) "combin[ing]" scorecards to "generate" aggregate scores and of (ii) "select[ing]" regional DOs were not assigned to the Reviewers, whose work was done upon submission of the scorecards. As mentioned previously, the rule does not specify how the Evaluation Data will be used to generate aggregate scores. The Department formulated extralegal policies7/ for this purpose, which can be stated as follows: (i) the Ordinals, which in actuality are numeric code for uncountable information content, shall be deemed real (counted) points, i.e., equidistant units of measurement on a 5-point interval scale (the "Deemed Points Policy"); (ii) in determining aggregate scores, the three Reviewer scores will be averaged instead of added together, so that "aggregate score" means "average Reviewer score" (the "Aggregate Definition"); and (iii) the results of mathematical computations used to determine weighted scores at the Reviewer level and, ultimately, the aggregate scores themselves will be carried out to the fourth decimal place (the "Four Decimal Policy"). The Department's computational process for generating aggregate scores operated like this. For each applicant, a Reviewer final score was derived from each Reviewer, using that Reviewer's 14 Domanial Ordinals for the applicant. For each of the subdivided Topics (Cultivation, Processing, and Dispensing), the mean of the Reviewer's four Domanial Ordinals for the applicant (one Domanial Ordinal for each Subtopic) was determined by adding the four numbers (which, remember, were whole numbers as discussed above) and dividing the sum by 4. The results of these mathematical operations were reported to the second decimal place. (The Reviewer raw score for each of the subdivided Topics was, in other words, the Reviewer's average Subtopic Domanial Ordinal.) For the undivided Topics of Medical Director and Financials, the Reviewer raw score was simply the Domanial Ordinal, as there was only one Domanial Ordinal per undivided Topic. The five Reviewer raw Topic scores (per Reviewer) were then adjusted to account for the applicable weight factor. 
So, the Reviewer raw scores for Cultivation and Processing were each multiplied by 0.30; raw scores for Dispensing were multiplied by 0.15; raw scores (Domanial Ordinals) for Medical Director were multiplied by 0.05; and raw scores (Domanial Ordinals) for Financials were multiplied by 0.20. These operations produced five Reviewer weighted-Topic scores (per Reviewer), carried out (eventually) to the fourth decimal place. The Reviewer final score was computed by adding the five Reviewer weighted-Topic scores. Thus, each applicant wound up with three Reviewer final scores, each reported to the fourth decimal place pursuant to the Four Decimal Policy. The computations by which the Department determined the three Reviewer final scores are reflected (but not shown) in a "Master Spreadsheet"8/ that the Department prepared. Comprising three pages (one for each Reviewer), the Master Spreadsheet shows all of the Evaluation Data, plus the 15 Reviewer raw Topic scores per applicant, and the three Reviewer final scores for each applicant. Therein, the Reviewer final scores of Reviewer 2 and Reviewer 3 were not reported as numbers having five significant digits, but were rounded to the nearest hundredth. To generate an applicant's aggregate score, the Department, following the Aggregate Definition, computed the average Reviewer final score by adding the three Reviewer final scores and dividing the sum by 3. The result, under the Four Decimal Policy, was carried out to the ten-thousandth decimal place. The Department referred to the aggregate score as the "final rank" in its internal worksheets. The Department further assigned a "regional rank" to each applicant, which ordered the applicants, from best to worst, based on their aggregate scores. Put another way, the regional rank was an applicant's Ultimate Ordinal. The Reviewer final scores and the "final ranks" (all carried out to the fourth decimal place), together with the "regional ranks," are set forth in a table the Department has labeled its November 2015 Aggregated Score Card (the "Score Card"). The Score Card does not contain the Evaluation Data. The Master Spreadsheet and Score Card are work papers from the Department's free-form comparative evaluation of DO applications in 2015. Essentially notes, these public records provide some insight into how and why the Department made the decisions it took that year, approving some and denying many of the applications it had reviewed. For reasons that will soon become clear, it is important to remember that although these work papers contain relevant information——information which, in fact, informed agency decisions——they are not themselves, separately or taken together, agency actions. Furthermore, not every fact or "evidence" an agency considers during free-form deliberations is necessary and critical to its preliminary agency action. Predecisional matters that the agency takes into account in arriving at its intended action that are merely "deliberative" facts (as opposed to adjudicative facts upon which a party's substantial interests depend) might be informative or explanatory, but they are not a critical and necessary part of the decision. Preliminary Agency Actions Once the aggregate scores had been computed, the Department was ready to take preliminary agency action on the applications. As to each application, the Department made a binary decision: Best or Not Best.
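For concreteness, the computational path described above can be restated as a minimal Python sketch; the Domanial Ordinals are invented placeholders (the actual figures reside in the Master Spreadsheet), and only the weights and the arithmetic are taken from the record:

    # Weights per Topic; subdivided Topics carry four Domanial Ordinals per Reviewer,
    # undivided Topics carry one.
    TOPIC_WEIGHTS = {"Cultivation": 0.30, "Processing": 0.30, "Dispensing": 0.15,
                     "Medical Director": 0.05, "Financials": 0.20}

    def reviewer_final_score(domanial_ordinals):
        # Raw Topic score = mean of that Topic's Domanial Ordinals; weight and sum.
        total = 0.0
        for topic, ordinals in domanial_ordinals.items():
            raw = sum(ordinals) / len(ordinals)
            total += TOPIC_WEIGHTS[topic] * raw
        return total

    def aggregate_score(reviewer_finals):
        # Aggregate Definition: average of the three Reviewer final scores,
        # carried to four decimal places per the Four Decimal Policy.
        return round(sum(reviewer_finals) / len(reviewer_finals), 4)

    # One hypothetical applicant as scored by three hypothetical Reviewers.
    reviewer_1 = {"Cultivation": [5, 4, 5, 4], "Processing": [4, 4, 3, 4],
                  "Dispensing": [5, 5, 4, 4], "Medical Director": [4], "Financials": [5]}
    reviewer_2 = {"Cultivation": [4, 4, 4, 3], "Processing": [5, 4, 4, 4],
                  "Dispensing": [3, 4, 4, 4], "Medical Director": [5], "Financials": [4]}
    reviewer_3 = {"Cultivation": [5, 5, 4, 4], "Processing": [3, 3, 4, 4],
                  "Dispensing": [4, 4, 5, 5], "Medical Director": [4], "Financials": [4]}

    finals = [reviewer_final_score(r) for r in (reviewer_1, reviewer_2, reviewer_3)]
    print([round(f, 4) for f in finals])  # [4.35, 4.0125, 4.075]
    print(aggregate_score(finals))        # 4.1458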
The intended action on the applications of the five Best applicants (one per region), which were identified by their aggregate scores (highest per region), would be to grant them. Each of the Not Best applicants, so deemed due to their not having been among the highest scored applicants, would be notified that the Department intended to deny its application. To explain in greater detail, the ultimate factual determination that the Department made for each application was whether the applicant was, or was not, the most dependable, most qualified nursery as compared to the alternatives available in a particular region. The evidence of facts behind these determinations consisted of the applications themselves, whose representations were taken as true. That is, when the Reviewers formed opinions about the relative suitability of the applicants in connection with the multiple categories of criteria, they accepted the facts as stated in the applications; their judgments were based, in effect, on a record of undisputed facts. The Reviewers' Ordinally-coded judgments regarding which applicants were Best, Second Best, Third Best, and so forth in each Domain amounted to a kind of evidence, loosely analogous to opinion testimony, which was in conflict inasmuch as the Reviewers did not agree on all rankings. (The aggregate scores are apparently supposed to synthesize the disparate opinions, to produce a simulacrum of a consensus; because the Reviewers did not collaborate as a collegial body, the aggregate scores do not represent a real consensus.) Crucially, however, despite appearances, the Evaluation Data comprising the Reviewers' opinions was not quantitative but qualitative, for the Reviewers, as mentioned, made no attempt to quantify the relative suitability of the applicants in numeric terms and thus produced no interval data whatsoever. Using the Deemed Points Policy and the Four Decimal Policy, the Department purported to turn the water of Evaluation Data into the wine of finely tuned aggregate scores, which latter provided the direct grounds for the Department's ultimate decisions as to which applicants were the most dependable, most qualified nurseries. The aggregate scores, however, were (and are) devoid of quantitative content and therefore cannot be compared mathematically to find interval differences; it is impossible, after all, to extract information that was never present to begin with. As will be explained, the aggregate scores, if properly construed and corrected for flagrant overprecision, provide at most a very rough idea of the Reviewers' "consensus opinion" (constructive, not actual) as to the relative order of the applicants, sorted by suitability (most – least). In the end, the Department's preliminary decisions on the DO applications were qualitative, not quantitative, and were formulated at a level of generality, i.e., Best-grant/Not Best- deny, far above such particular details as whether an applicant's aggregate score constituted a true interval statement. It was neither critical nor necessary to the preliminary agency actions actually taken that findings be made measuring the precise space between applicants; or that the seemingly granular aggregate scores be adjudged true or credible to the ten-thousandths point. Clear Points of Entry The Department decided preliminarily that Costa was Best and that four other southeast region applicants, including Nature's Way, were Not Best. 
Accordingly, the Department's intended agency action was to grant Costa's application and deny the rest. Letters dated November 23, 2015, were sent to the applicants informing them either that "your application received the highest score" and thus is granted, or that because "[you were] not the highest scored applicant in [your] region, your application . . . is denied," whichever was the case. The letters contained a clear point of entry whose legal sufficiency as to the stated and recognizable agency action Nature's Way does not dispute, which concluded with the usual warning that the "[f]ailure to file a petition within 21 days shall constitute a waiver of the right to a hearing on this agency action."9/ (Emphasis added). Nature's Way decided not to request a hearing in 2015, and therefore it is undisputed that the Department's proposed action, i.e., the denial of Nature's Way's application because the applicant was not deemed to be the most dependable, most qualified nursery for purposes of selecting a DO for the southeast region, became final agency action without a formal hearing, the right to which Nature's Way elected to waive. The Department argues that Nature's Way thereby waived, forever and for all purposes, the right to a hearing on the question of whether its and Costa's Department-computed aggregate scores of 2.8833 and 4.4000, respectively, are, in fact, true as interval statements of quantity. (Note that if these scores are false as interval data, as Nature's Way contends, then the statement that Costa's score exceeds Nature's Way's score by 1.5167 points is false, also, because it is impossible to calculate a true, interpretable difference (interval) between two values unless those values are expressions of quantified data. Simply put, you cannot subtract Fourth Best from Best.) The Department's waiver argument overreaches. To be sure, Nature's Way waived the right to a hearing on the proposed denial of its application, which was the only recognizable agency action in clear view in 2015. Nature's Way is not attempting in this proceeding, however, to contest the denial of its 2015 DO application. What the Department is really trying to say is that, in contesting the proposed denial of its 2017 MMTC application, Nature's Way is barred by administrative finality from "relitigating" matters, such as the truth of the aggregate scores as quantifiable facts, which were supposedly decided conclusively in the final agency action on its DO application in 2015. The finality issue boils down to whether the truth of the aggregate scores, as measurable quantities, was actually adjudicated (or even judicable) in 2015, so that the numbers 2.8833 and 4.4000 are now incontestably true interval data, such that one figure can meaningfully be subtracted from the other for purposes of applying the One Point Condition. The Department did not explicitly adjudicate the question of the aggregate scores' validity as interval data in taking final agency action on Nature's Way's application and probably never gave the matter serious thought. Thus, we must consider whether the aggregate scores, as quantities, were critical and necessary to the relevant agency action. In this regard, Nature's Way contends that the absence, in the notices of intended decision, of any mention of the numbers comprising the scores is compelling, even dispositive, evidence that the particular scores were not important. 
Not surprisingly, the Department asserts that its omission of the "final ranks" from the notices to the applicants is "legally irrelevant" to the question of whether the scores were necessarily determined to be true quantities in the agency action. The Department is wrong about the supposed irrelevance of the noninclusion of the scores in the 2015 notices. As regards Nature's Way, the notice of intended action is the only written "order" that was entered determining the applicant's substantial interests; thus, the notice/"order" is the most persuasive proof of what the Department actually decided. That the Department failed, at the time when it would have mattered to the sufficiency of the clear point of entry, to include a finding to the effect that "your aggregate score was determined to be 2.8833 points on a 5-point scale as compared to the highest measured score of 4.4000 points" is strong evidence that the truth of an applicant's aggregate score as a statement of fact expressing quantified interval data was not part and parcel of the decision then being taken; if the Department had thought, then, that truthful interval statements of fact were critical and necessary to its proposed action, it presumably would have (and certainly should have) included such information in the notice, over which it had absolute control. Ultimately, the question of whether the aggregate scores were indispensable to, and thus necessarily decided in, the Department's notice of intent/"order" depends on the meaning of the scores. There is a strong tendency to look at a number, such as 2.8833, and assume that it is unambiguous——and, indeed, the Department is unquestionably attempting to capitalize on that tendency. But numbers can be ambiguous.10/ The aggregate scores are, clearly, open to interpretation. To begin, however, it must be stated up front that there is no dispute about the existence of the aggregate scores. It is an undisputed historical fact, for example, that Nature's Way had a final ranking (aggregate score) of 2.8833 as computed by the Department in November 2015. There is likewise no dispute that Costa's Department-computed aggregate score was 4.4000. In this sense, the scores are historical facts——relevant ones, too, since an applicant needed to have had an aggregate score in 2015 to take advantage of the One Point Condition enacted in 2017. The existence of the scores, however, is a separate property from their meaning. Clearly, the aggregate scores that exist from history purport to convey information about the applicants; in effect, they are statements. The ambiguity arises from the fact that each score could be interpreted as having either of two different meanings. On the one hand, an aggregate score could be understood as a numerically coded non-quantity, namely a rank. In other words, the aggregate scores could be interpreted reasonably as ordinal data. On the other hand, an aggregate score could be understood as a quantified measurement taken in units of equal value, i.e., interval data. In 2015, the Department insisted (when it suited its purposes) that the aggregate scores were numeric shorthand for its discretionary value judgments about which applicants were best suited, by region, to be DOs, reflecting where the applicants, by region, stood in relation to the best suited applicants and to each other. The Department took this position because it wanted to limit the scope of the formal hearings requested by disappointed applicants to reviewing its decisions for abuse of discretion.
Yet, even then, the Department wanted the aggregate scores to be seen as something more rigorously determined than a discretionary ranking. Scores such as 2.8833 and 3.2125 plainly connote a much greater degree of precision than "these applicants are less qualified than others." Indeed, in one formal hearing, the Department strongly implied that the aggregate scores expressed interval data, arguing that they showed "the [Department's position regarding the] order of magnitude" of the differences in "qualitative value" between the applicants, so that a Fourth Best applicant having a score of 2.6458 was asserted to be "far behind" the highest-scored applicant whose final ranking was 4.1042.11/ A ranking, of course, expresses order but not magnitude; interval data, in contrast, expresses both order and magnitude, and it is factual in nature, capable of being true or false. In short, as far as the meaning of the aggregate scores is concerned, the Department has wanted to have it both ways. Currently, the Department is all-in on the notion that the aggregate scores constitute precise interval data, i.e., quantified facts. In its Proposed Recommended Order, on page 11, the Department argues that "Nature's Way does not meet the within-one-point requirement" because "Nature's Way's Final Rank [aggregate score of 2.8833] is 1.5167 points less than the highest Final Rank [Costa's aggregate score, 4.4000] in its region." This is a straight-up statement of fact, not a value judgment or policy preference. Moreover, it is a statement of fact which is true only if the two aggregate scores being compared (2.8833 and 4.4000), themselves, are true statements of quantifiable fact about the respective applicants. The Department now even goes so far as to claim that the aggregate score is the precise and true number (quantity) of points that an applicant earned as a matter of fact. On page 4 of its Proposed Recommended Order, the Department states that Costa "earned a Final Rank of 4.4000" and that Nature's Way had an "earned Final Rank of 2.8833." In this view, the scores tell us not that, in the Department's discretionary assignment of value, Costa was better suited to be the DO for the southeast region, but rather that (in a contest, it is insinuated, the Department merely refereed) Costa outscored Nature's Way by exactly 1.5167 points——and that the points have meaning as equidistant units of measurement. If the scores were understood and used only as ordinal data, i.e., solely as numerical expressions of the Department's discretionary value judgment that Costa was Best and Nature's Way, Not Best, then the scores were part of the Department's action on Nature's Way's application. But that is not the meaning being ascribed to the scores in this case. Rather, as just mentioned, the Department is using the aggregate scores as interval statements of quantifiable fact, claiming that Nature's Way "earned" exactly 2.8833 points on a 5-point scale where each point represents a standard unit of measurement, while Costa "earned" 4.4000 points; this, again, is the only way it would be correct to say that Costa was 1.5167 points better than Nature's Way. The aggregate scores assuredly did not need to have this meaning to support the Department's final action on Nature's Way's application.
This is because the Department reasonably could have grounded——and, in fact, had to base——its denial of Nature's Way's application on an understanding that the scores expressed numerically (i) the Department's discretionary choice of Costa as the most dependable, most qualified nursery among the southeast region applicants and (ii) the direction of the also- rans (next best to least qualified) in a particular order behind Costa without quantifying any particular distances from Costa or between them. That is, it was not necessary and critical, in 2015, for the Department to find that Costa was 1.5167 points better than Nature's Way in order to deny Nature's Way's application on the more abstract, but sufficient, ground that Nature's Way was Not Best. (Nor could the Department have made such a finding, given that genuine measured quantities were not included in the Evaluation Data.) The point should not get lost that Nature's Way, and the other nurseries, applied for a DO license, not an aggregate score. The agency action in 2015 was not, therefore, to grant a particular score to an application, nor, certainly, was it to grant the applications of those whose score was a particular number, or within one point of a particular number. It was, rather, to choose the most dependable, most qualified nurseries and grant them licenses, while simultaneously denying the other applications. The aggregate scores guided these decisions, to be sure, but they were not, themselves, the matters being decided. Unlike now, where the aggregate scores are facts that must be proven true as quantities so that the "within-one-point" issue can be decided through formal proceedings, they were, then, "proof," sort of, of the ultimate fact that Costa (or another applicant) was the most qualified nursery for a region—— "proof" upon which, moreover, the Department was required to rely in deciding through free-form proceedings whether it intended to grant or deny a particular application. The undersigned finds that while the aggregate scores, as unquantified value judgments (i.e., nonnumeric opinions coded with numbers), were integral to the Department's free-form decision-making process, as interval data they were not essential to the agency action of denying Nature's Way's application——and could not have been, in any event, since the aggregate scores were never infused with quantifiable information content. In short, the truth of the aggregate scores as statements of fact expressing interval data has never been previously adjudicated as between the Department and Nature's Way. Substantiating the foregoing finding is the irrefutable observation that an applicant such as Nature's Way would have gotten nowhere challenging the 2015 proposed agency action based on a dispute about the truth of its aggregate score. Suppose that, after receiving the notice of intended denial, Nature's Way had pored over the Master Spreadsheet and Score Card and determined that the Department had made what it believed was a computational error, which, if corrected, would result in the upward revision of Nature's Way's aggregate score to 3.8833. Imagine, then, what would have happened if Nature's Way had requested a disputed-fact hearing to contest its score based on the alleged mathematical mistake, demanding a correction. 
Even if the Department disagreed that it had made a mistake, it probably would have denied the hearing request on the grounds that the disputed fact (whether the score should have been 3.8833 instead of 2.8833) was not material, and it would have been within its rights to do so. To change the proposed agency action, Nature's Way would have needed to prove that it was the most dependable, most qualified nursery in the southeast region——not that its aggregate score should have been 3.8833.12/ Now suppose Nature's Way had discovered that an alleged math error had dropped its score to 2.8833 from 4.0033—— an error, in other words, which, if corrected, would have put Nature's Way in first place, above Costa. Even in that seemingly more favorable situation for Nature's Way, to change the proposed denial of its application to a final order granting the same, Nature's Way still would have needed to prove at hearing, where a de novo comparative review of the applications would be undertaken, that it was, in fact, the most dependable, most qualified nursery——an ultimate determination that Costa or another nursery, at least, if not the Department, would almost certainly have disputed. The aggregate scores, together with proof of the alleged math error, might (or might not) have been received in the de novo hearing13/; but, if admitted, evidence establishing that, based on the Evaluation Data, Nature's Way's score actually should have been 4.0033 would not have sufficed, or even been necessary, to prove that Nature's Way was, in fact, the most qualified candidate, since the ALJ would not be sitting in review of the Department's scoring decisions, but instead deciding for himself or herself, anew, the question of relative suitability.14/ The undersigned must acknowledge that the preceding two paragraphs rest on a presupposition of fidelity to the Administrative Procedure Act ("APA"). In fact, in actual proceedings arising from the 2015 preliminary agency actions, as previously mentioned, the Department took the clearly erroneous position that the ALJ was limited to merely reviewing the Department's licensing decisions, under the highly deferential abuse of discretion standard, as opposed to formulating final agency actions, which is standard practice in section 120.57 hearings, where the agency's preliminary decisions are given no deference.15/ The Department actually went farther than that, writing that "the ALJ cannot take the place of the three specially qualified [Reviewers because the] ALJ is not a certified public accountant, the director of the [OCU], and a member of the Drug Policy Advisory Council all in one."16/ It is plain that the Department, left to its own devices, would have afforded a very limited, and probably inadequate, administrative remedy to the disappointed applicants of 2015, because even if the ALJ found that the Department had abused its discretion,17/ the ALJ could not (as the Department would have it) do anything to remedy the situation except, perhaps, remand the case to the Department for a brand new evaluation, as an appellate court would remand a case for a new trial. Of course, the Department had, and has, no basis in law for radically amending the APA in such fashion. As it happened, events, in particular the enactment in 2017 of the Medical Marijuana Law, relieved the Department of the burden of defending its untenable arguments before a court of appeal. 
It cannot go unmentioned, therefore, that the Department, which believes that none of the applicants was ever entitled to a full and fair opportunity to litigate, de novo, the validity of the scores even as ordinal data for purposes of challenging the preliminary licensing decisions, is currently arguing (in true "heads I win, tails you lose" fashion) that those same scores were conclusively adjudicated via final agency action in 2015 to be true as statements of quantified fact, i.e., as interval data. This position cannot prevail. The Master Spreadsheet and Score Card are not modern-day Tablets of Stone upon which the inerrant Law was inscribed by the hand of the Almighty Bureaucrat. To repeat for emphasis, the truth of the scores, as statements of quantified fact, has never been adjudicated. ENACTMENT OF THE MEDICAL MARIJUANA LAW Effective January 3, 2017, Article X of the Florida Constitution was amended to include a new section 29, which addresses medical marijuana production, possession, dispensing, and use. Generally speaking, section 29 expands access to medical marijuana beyond the framework created by the Florida Legislature in 2014. To implement the newly adopted constitutional provisions and "create a unified regulatory structure," the legislature enacted the Medical Marijuana Law, which substantially revised section 381.986 during the 2017 Special Session. Ch. 2017-232, § 1, Laws of Fla. Among other things, the Medical Marijuana Law establishes a licensing protocol for ten new MMTCs. The relevant language of the new statute states: (8) MEDICAL MARIJUANA TREATMENT CENTERS.— (a) The department shall license medical marijuana treatment centers to ensure reasonable statewide accessibility and availability as necessary for qualified patients registered in the medical marijuana use registry and who are issued a physician certification under this section. * * * The department shall license as medical marijuana treatment centers 10 applicants that meet the requirements of this section, under the following parameters: As soon as practicable, but no later than August 1, 2017, the department shall license any applicant whose application was reviewed, evaluated, and scored by the department and which was denied a dispensing organization license by the department under former s. 381.986, Florida Statutes 2014; which had one or more administrative or judicial challenges pending as of January 1, 2017, or had a final ranking within one point of the highest final ranking in its region under former s. 381.986, Florida Statutes 2014; which meets the requirements of this section; and which provides documentation to the department that it has the existing infrastructure and technical and technological ability to begin cultivating marijuana within 30 days after registration as a medical marijuana treatment center. § 381.986, Fla. Stat. (Emphasis added: The underscored provision is the One Point Condition). The legislature granted the Department rulemaking authority, as needed, to implement the provisions of section 381.986(8). § 381.986(8)(k), Fla. Stat. In addition, the legislature authorized the Department to adopt emergency rules pursuant to section 120.54(4), as necessary to implement section 381.986, without having to find an actual emergency, as otherwise required by section 120.54(4)(a). Ch. 2017-232, § 14, Laws of Fla. IMPLEMENTATION OF THE ONE POINT CONDITION AND ADOPTION OF THE EMERGENCY RULE The One Point Condition went into effect on June 23, 2017. Ch. 2017-232, § 20, Laws of Fla. 
Thereafter, the Department issued a license to Sun Bulb Nursery (a 2015 DO applicant in the southwest region), because the Department concluded that Sun Bulb's final ranking was within one point of the highest final ranking in the southwest region.18/ Keith St. Germain Nursery Farms ("KSG"), like Nature's Way a 2015 DO applicant for the southeast region, requested MMTC registration pursuant to the One Point Condition in June 2017. In its request for registration, KSG asserted that the One Point Condition is ambiguous and proposed that the Department either calculate the one point difference based on the regional ranks set forth in the Score Card (KSG was the regional Second Best, coded as Ultimate Ordinal 4) or round off the spurious decimal points in the aggregate scores when determining the one point difference. The Department preliminarily denied KSG's request for MMTC registration in August 2017. In its notice of intent, the Department stated in part: The highest-scoring entity in the Southeast Region, Costa Nursery Farms, LLC, received a final aggregate score of 4.4000. KSG received a final aggregate score of 3.2125. Therefore, KSG was not within one point of Costa Farms. KSG requested a disputed-fact hearing on this proposed agency action and also filed with the Division of Administrative Hearings a Petition for Formal Administrative Hearing and Administrative Determination Concerning Unadopted Rules, initiating Keith St. Germain Nursery Farms v. Florida Department of Health, DOAH Case No. 17-5011RU ("KSG's Section 120.56(4) Proceeding"). KSG's Section 120.56(4) Proceeding, which Nature's Way joined as a party by intervention, challenged the legality of the Department's alleged unadopted rules for determining which of the 2015 DO applicants were qualified for licensure pursuant to the One Point Condition. Faced with the KSG litigation, the Department adopted Emergency Rule 64ER17-3, which stated in relevant part: For the purposes of implementing s. 381.986(8)(a)2.a., F.S., the following words and phrases shall have the meanings indicated: Application – an application to be a dispensing organization under former s. 381.986, F.S. (2014), that was timely submitted in accordance with Rule 64- 4.002(5) of the Florida Administrative Code (2015). Final Ranking – an applicant's aggregate score for a given region as provided in the column titled "Final Rank" within the November 2015 Aggregated Score Card, incorporated by reference and available at [hyperlink omitted], as the final rank existed on November 23, 2015. Highest Final Ranking – the final rank with the highest point value for a given region, consisting of an applicant's aggregate score as provided in the column titled "Final Rank" within the November 2015 Aggregated Score Card, as the final rank existed on November 23, 2015. Within One Point – one integer (i.e., whole, non-rounded number) carried out to four decimal points (i.e., 1.0000) by subtracting an applicant's final ranking from the highest final ranking in the region for which the applicant applied. Qualified 2015 Applicant – an individual or entity whose application was reviewed, evaluated, and scored by the department and that was denied a dispensing organization license under former s. 381.986, F.S. 
(2014) and either: (1) had one or more administrative or judicial challenges pending as of January 1, 2017; or had a final ranking within one point of the highest final ranking in the region for which it applied, in accordance with Rule 64-4.002(5) of the Florida Administrative Code (2015). The Department admits that not much analysis or thought was given to the development of this rule, which reflected the Department's knee-jerk conclusion that the One Point Condition's use of the term "final ranking" clearly and unambiguously incorporated the applicants' "aggregate scores" (i.e., "final rank" positions), as stated in the Score Card, into the statute. In any event, the rule's transparent purpose was to adjudicate the pending licensing dispute with KSG and shore up the Department's ongoing refusal (in Department of Health Case No. 2017-0232) to grant KSG a formal hearing on the proposed denial of its application. On October 26, 2017, the Department entered into a settlement agreement with KSG pursuant to which the Department agreed to register KSG as an MMTC. The Department issued a Final Order Adopting Settlement Agreement with KSG on October 30, 2017. That same day (and in order to effectuate the settlement with KSG), the Department issued rule 64ER17-7 (the "Emergency Rule"), the validity of which is at issue in related DOAH Case No. 17-5801RE. The Emergency Rule amends former rule 64ER17-3 to expand the pool of Qualified 2015 Applicants by exactly one, adding KSG——not by name, of course, but by deeming all the regional Second Best applicants to be Within One Point. Because KSG was the only 2015 applicant ranked Second Best in its region that did not have an aggregate score within one point of its region's Best applicant in accordance with rule 64ER17-3, KSG was the only nursery that could take advantage of the newly adopted provisions. As relevant, the Emergency Rule provides as follows: This emergency rule supersedes the emergency rule 64ER17-3 which was filed and effective on September 28, 2017. (1) For the purposes of implementing s. 381.986(8)(a)2.a., F.S., the following words and phrases shall have the meanings indicated: Application – an application to be a dispensing organization under former s. 381.986, F.S. (2014), that was timely submitted in accordance with Rule 64- 4.002(5) of the Florida Administrative Code (2015). Final Ranking – an applicant's aggregate score for a given region as provided in the column titled "Final Rank" or the applicant's regional rank as provided in the column titled "Regional Rank" within the November 2015 Aggregated Score Card, incorporated by reference and available at [hyperlink omitted], as the final rank existed on November 23, 2015. Highest Final Ranking – the final rank with the highest point value for a given region, consisting of an applicant's aggregate score as provided in the column titled "Final Rank" or the applicant's regional rank as provided in the column titled "Regional Rank" within the November 2015 Aggregated Score Card, as the final rank existed on November 23, 2015. Within One Point – for the aggregate score under the column "Final Rank" one integer (i.e., whole, non-rounded number) carried out to four decimal points (i.e., 1.0000) or for the regional rank under the column "Regional Rank" one whole number difference, by subtracting an applicant's final ranking from the highest final ranking in the region for which the applicant applied. 
Qualified 2015 Applicant – an individual or entity whose application was reviewed, evaluated, and scored by the department and that was denied a dispensing organization license under former s. 381.986, F.S. (2014) and either: (1) had one or more administrative or judicial challenges pending as of January 1, 2017; or (2) had a final ranking within one point of the highest final ranking in the region for which it applied, in accordance with Rule 64-4.002(5) of the Florida Administrative Code (2015). (Emphasis added). In a nutshell, the Emergency Rule provides that an applicant meets the One Point Condition if either (i) the difference between its aggregate score and the highest regional aggregate score, as those scores were determined by the Department effective November 23, 2015, is less than or equal to 1.0000; or (ii) its regional rank, as determined by the Department effective November 23, 2015, is Second Best. A number of applicants satisfy both criteria, e.g., 3 Boys, McCrory's, Chestnut Hill, and Alpha (northwest region). Some, in contrast, meet only one or the other. Sun Bulb, Treadwell, and Loop's, for example, meet (i) but not (ii). KSG, alone, meets (ii) but not (i). The Department has been unable to come up with a credible, legally cohesive explanation for the amendments that distinguish the Emergency Rule from its predecessor. On the one hand, Christian Bax testified that KSG had persuaded the Department that "within one point" meant, for purposes of the One Point Condition, Second Best (or "second place"), and that this reading represented a reasonable interpretation of a "poorly crafted sentence" using an "unartfully crafted term," i.e., "final ranking." On the other hand, the Department argues in its Proposed Recommended Order (on page 11) that the One Point Condition's "plain language reflects the legislature's intent that the 'second-best' applicant in each region (if otherwise qualified) be licensed as an MMTC." (Emphasis added). Logically, of course, the One Point Condition cannot be both "poorly crafted" (i.e., ambiguous) and written in "plain language" (i.e., unambiguous); legally, it must be one or the other. Put another way, the One Point Condition either must be construed, which entails a legal analysis known as statutory interpretation that is governed by well-known canons of construction and results in a legal ruling declaring the meaning of the ambiguous terms, or it must be applied according to its plain language, if (as a matter of law) it is found to be unambiguous. Obviously, as well, the One Point Condition, whether straightforward or ambiguous, cannot mean both within one point and within one place, since these are completely different statuses.19/ If the statute is clear and unambiguous, only one of the alternatives can be correct; if ambiguous, either might be permissible, but not both simultaneously. By adopting the Emergency Rule, the Department took a position in direct conflict with the notion that the One Point Condition is clear and unambiguous; its reinterpretation of the statute is consistent only with the notion that the statute is ambiguous, and its present attempt to disown that necessarily implicit conclusion is rejected. The irony is that the Department surrendered the high ground of statutory unambiguity, which it initially occupied and stoutly defended, to take up an indefensible position, where, instead of choosing between two arguably permissible, but mutually exclusive, interpretations, as required, it would adopt both interpretations. 
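Expressed in code, the Emergency Rule's alternative tests look something like the following Python sketch, applied to the southeast region figures recited in this record (Costa: aggregate score 4.4000, regional rank 5; KSG: 3.2125, regional rank 4; Nature's Way: 2.8833, regional rank 2); the function names are illustrative only:

    # Prong (i): "Final Rank" test -- an aggregate-score difference of no more than 1.0000.
    def within_one_point_by_score(applicant_score, highest_score):
        return (highest_score - applicant_score) <= 1.0000

    # Prong (ii): "Regional Rank" test -- one whole number difference, i.e., Second Best.
    def within_one_point_by_rank(applicant_rank, highest_rank):
        return (highest_rank - applicant_rank) == 1

    highest_score, highest_rank = 4.4000, 5  # Costa, the region's Best applicant
    for name, score, rank in [("KSG", 3.2125, 4), ("Nature's Way", 2.8833, 2)]:
        print(name,
              within_one_point_by_score(score, highest_score),  # prong (i)
              within_one_point_by_rank(rank, highest_rank))     # prong (ii)
    # KSG False True           (a 1.1875-point gap, but Second Best by regional rank)
    # Nature's Way False False (a 1.5167-point gap and three places back)

The output mirrors the observation above: KSG alone passes the regional-rank prong while failing the final-rank prong, and Nature's Way passes neither.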
The only reasonable inference the undersigned can draw from the Department's bizarre maneuver is that the Emergency Rule is not the product of high-minded policy making but rather a litigation tactic, which the Department employed as a necessary step to resolve the multiple disputes then pending between it and KSG. The Emergency Rule was adopted to adjudicate the KSG disputes in KSG's favor, supplanting the original rule that was adopted to adjudicate the same disputes in the Department's favor. THE DENIAL OF NATURE'S WAY'S APPLICATION FOR LICENSURE AS AN MMTC On January 17, 2018——90 days after Nature's Way submitted its request for MMTC registration——the Department issued a letter denying Nature's Way's application ("Denial Letter"). In the Denial Letter, the Department determined that Nature's Way did not have a pending challenge to the denial of its DO licensure application as of January 1, 2017, and that it "did not have a final score within one point of the highest scoring applicant in its region." As a result, the Department determined it was unnecessary to make any findings as to Nature's Way's demonstration and documentation of its ability to cultivate within 30 days of registration, as required by law. With respect to the "within-one-point" determination, the Department's Denial Letter stated: The highest-scoring entity in the Southeast region, Costa Nursery Farms, LLC, received a final aggregate score of 4.4000 and a regional rank of 5. Nature's Way received a final aggregate score of 2.8833 and a regional rank of 2. To implement section 381.986(8)(a)2.a., Florida Statutes, the Department adopted Emergency Rule 64ER17-7. This emergency rule states that "within one point" means for the aggregate score under . . . 'Final Rank' one integer (i.e., whole, non-rounded number) carried out to four decimal points (i.e., 1.0000) or for the regional rank under . . . 'Regional Rank' one whole number difference, by subtracting an applicant's final ranking from the highest final ranking in the region for which the applicant applied. Nature's Way was not within one point of Costa Nursery Farms, LLC, either under the "Final Rank" or the "Regional Rank." The Department also asserted that, because Nature's Way had not challenged the Department's November 2015 denial of its DO application, Nature's Way had "thereby waiv[ed] any right to challenge the Department's prior actions or decisions, including the final scoring." THE INVALIDITY OF THE EMERGENCY RULE AND THE VIOLATIONS OF SECTION 120.54 Emergency Rule 64ER17-7(1)(b), (c), and (d) has been declared to be an invalid exercise of delegated legislative authority. See Nature's Way Nursery of Miami, Inc. v. Dep't of Health, DOAH Case Nos. 17-5801RE & 18-0720RU (Fla. DOAH June 15, 2018)(the "Rule Challenge"). It has been determined, as well, in the Rule Challenge, that the Deemed Points Policy and the Four Decimal Policy, which the Department would use as authoritative rules of decision in determining Nature's Way's substantial interests in obtaining an MMTC license, are unadopted rules whose enforcement violates section 120.54(1)(a). A PREVIEW OF THE STATUTORY INTERPRETATION Deciding whether a statute is ambiguous or not, and, when necessary, interpreting an ambiguous statute, are questions of law. As such, these matters will be addressed in greater detail further down, in the Conclusions of Law. These legal conclusions, however, shape the universe of material facts. 
So that the reader will know why the upcoming findings of fact are necessary and relevant, the undersigned will give a quick peek, here, at his conclusions regarding the One Point Condition. The One Point Condition is ambiguous as a matter of law. It is subject to two reasonable, but mutually exclusive, interpretations, both of which, as mentioned, the Department has embraced——simultaneously——in the Emergency Rule. One of these interpretations, however, is clearly superior, namely that the legislature used the term "final ranking" idiosyncratically as a synonym for "aggregate score." This, in fact, is how the Department initially read the statute, pre-litigation, and how the Department implemented the statute, in the absence of controversy, when it licensed Sun Bulb. The other construction, which requires that "final ranking" be understood as "regional rank," is (just barely) within the range of permissible interpretations; being at best plausible, however, this inferior interpretation is rejected in favor of the other, much better and more natural reading of the statute. The One Point Condition does not implicitly "incorporate" the Score Card, which is not even mentioned therein, or otherwise "validate" the aggregate scores. Nor does the statute purport to adjudicate disputes over aggregate scores. While it is possible that some, many, or all of the legislators who supported the Medical Marijuana Law might have believed that the aggregate scores were adjudicated facts (and thus incontestable), such beliefs, however sincerely held, were incorrect and are irrelevant in any event. The aggregate scores, as previously found, were not, in fact, ever adjudicated with finality, and the legislature is not in the business of adjudicating disputes at the party-vs.-party level. The legislature, as it must, left the work of authoritatively resolving disputes of fact between parties about particular aggregate scores to the branches of government having the power to adjudicate, namely the judiciary and, when authorized, the executive. Finally, the phrase "within one point" was clearly intended to reference one interval data point. That is, the legislature plainly intended that a one-point difference between any two applicants would be the same as a one-point difference between any other two applicants. The obvious goal was to deem licensable any applicant who was, in terms of comparative quality, not more than one-point inferior to (i.e., whose proximity on the quality scale was not farther than one point from) the Best applicant in its region——and that is an interval statement. A quantitative, one-point difference in quality (or whatever the relevant value happens to be) between two items cannot be determined unless the quality (or other relevant value) of the two items is expressed in interval data, using numbers that hold quantitative content. DETERMINING THE INTERVAL DATA POINT DIFFERENCE As discussed above, the Department committed a gross conceptual error when it decided to treat ordinal data as interval data under its Interval Coding and Deemed Points Policies. Sadly, there is no way to fix this problem retroactively; no formula exists for converting or translating non-metric data, such as rankings (which, for the most part, cannot meaningfully be manipulated mathematically), into quantitative data. Further, the defect in the Department's "scoring" process has deprived us of essential information, namely, actual measurements. 
The upshot is that the question of whether Nature's Way's aggregate score is within one point of Costa's score must be answered without having a quantifiable score for either applicant that can be subtracted from the other's. The unattractive options are either to accept the Department's impossibly defective aggregate scores at face value and render a fiat that cannot be defended as a matter of logic and reason, or instead to examine the mere shadows of scores that are the Ordinals, squinting to see anything that might permit at least a shape of the nonexistent quantitative variables to be reasonably imagined. As the first option is foreign to legal reasoning, not to mention a deformation of the administrative remedy that is the formal hearing under sections 120.569 and 120.57, the undersigned has no choice but to deduce a reasonable approximation of the unknowable interval data by adjusting the ordinal data as best anyone can, keeping in mind that the fault for the insufficiency of the available evidence belongs exclusively to the Department. A Second Look at the Department's Scoring Methodology The Department's scoring methodology was described above. Nevertheless, for purposes of analyzing the available ordinal data to tease out a reasonable approximation of usable interval data, so that we can meaningfully subtract Nature's Way's quantified score from Costa's quantified score, the undersigned proposes that the way the Department arrived at its aggregate scores be reexamined. It will be recalled that each applicant received 14 Ordinals from each reviewer, i.e., one Ordinal per Domain. These will be referred to as Domanial Ordinals. Thus, each applicant received, collectively, 12 Domanial Ordinals apiece for the Main Topics of Cultivation, Processing, and Dispensing; and three Domanial Ordinals apiece for the Main Topics of Medical Director and Financials, for a total of 42 Domanial Ordinals. These five sets of Domanial Ordinals will be referred to generally as Arrays, and specifically as the Cultivation Array, the Processing Array, the Dispensing Array, the MD Array, and the Financials Array. Domanial Ordinals that have been sorted by Array will be referred to, hereafter, as Topical Ordinals. So, for example, the Cultivation Array comprises 12 Topical Ordinals per applicant. A table showing the Arrays of the southeast region applicants is attached as Appendix A. Keeping our attention on the Cultivation Array, observe that if we divide the sum of the 12 Topical Ordinals therein by 12, we will have calculated the mean (or average) of these Topical Ordinals. This value will be referred to as the Mean Topical Ordinal or "MTO." For each applicant, we can find five MTOs, one apiece for the five Main Topics. So, each applicant has a Cultivation MTO, a Processing MTO, and so forth. As discussed, each Main Topic was assigned a weight, e.g., 30% for Cultivation, 20% for Financials. These five weights will be referred to generally as Topical Weights, and specifically as the Cultivation Topical Weight, the Processing Topical Weight, etc. If we reduce, say, the Cultivation MTO to its associated Cultivation Topical Weight (in other words, take 30% of the Cultivation MTO), we will have produced the weighted MTO for the Main Topic of Cultivation. For each applicant, we can find five weighted MTOs ("WMTO"), which will be called specifically the Cultivation WMTO, the Processing WMTO, etc. The sum of each applicant's five WMTOs equals what the Department calls the applicant's aggregate score or final rank.
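The equivalence between the weight-times-MTO definition and the single-divisor shortcut tabulated in the next paragraph can be verified with a short Python sketch; the 12 Topical Ordinals below are invented for illustration (the real Arrays appear in Appendix A):

    # An invented Cultivation Array of 12 Topical Ordinals (the real Arrays are in Appendix A).
    cultivation_array = [5, 4, 4, 5, 3, 4, 5, 4, 4, 3, 5, 4]

    def wmto_by_definition(array, weight):
        # Topical Weight times the Mean Topical Ordinal: weight x (sum / count).
        return weight * (sum(array) / len(array))

    def wmto_by_shortcut(array, weight):
        # The same figure reached by dividing the Array's sum by a single divisor,
        # where the divisor is count / weight (12 / 0.30 = 40 for Cultivation).
        return sum(array) / (len(array) / weight)

    print(round(wmto_by_definition(cultivation_array, 0.30), 4))  # 1.25
    print(round(wmto_by_shortcut(cultivation_array, 0.30), 4))    # 1.25 (i.e., 50 ÷ 40)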
In other words, in the Department's scoring methodology, an MTO is functionally a "Topical raw score" and a WMTO is an "adjusted Topical score" or, more simply, a "Topical subtotal." Thus, we can say, alternatively, that the sum of an applicant's five Topical subtotals equals its DOH-assigned aggregate score. For those in a hurry, an applicant's WMTOs (or Topical subtotals) can be computed quickly by dividing the sum of the Topical Ordinals in each Array by the respective divisors shown in the following table:
Dividend ÷ Divisor = Quotient
Sum of the Topical Ordinals in the CULTIVATION Array ÷ 40 = Cultivation WMTO
Sum of the Topical Ordinals in the PROCESSING Array ÷ 40 = Processing WMTO
Sum of the Topical Ordinals in the DISPENSING Array ÷ 80 = Dispensing WMTO
Sum of the Topical Ordinals in the MD Array ÷ 60 = MD WMTO
Sum of the Topical Ordinals in the FINANCIALS Array ÷ 15 = Financials WMTO
To advance the discussion, it is necessary to introduce some additional concepts. We have become familiar with the Ordinal, i.e., a number that the Department assigned to code a particular rank (5, 4, 3, 2, or 1).20/ From now on, the symbol ω will be used to represent the value of an Ordinal as a variable. There is another value, which we can imagine as a concept, namely the actual measurement or observation, which, as a variable, we will call x. For our purposes, x is the value that a Reviewer would have reported if he or she had been asked to quantify (to the fourth decimal place) the amount of an applicant's suitability vis-à-vis the attribute in view on a scale of 1.0000 to 5.0000, with 5.0000 being "ideal" and 1.0000 meaning, roughly, "serviceable." This value, x, is a theoretical construct only because no Reviewer actually made any such measurements; such measurements, however, could have been made, had the Reviewers been required to do so. Indeed, some vague idea, at least, of x must have been in each Reviewer's mind every time he or she ranked the applicants, or else there would have been no grounds for the rankings. Simply put, a particular value x can be supposed to stand behind every Topical Ordinal because every Topical Ordinal is a function of x. Unfortunately, we do not know x for any Topical Ordinal. Next, there is the true value of x, for which we will give the symbol µ. This is a purely theoretical notion because it represents the value that would be obtained by a perfect measurement, and there is no perfect measurement of anything, certainly not of relative suitability to serve as an MMTC.21/ Finally, measurements are subject to uncertainty, which can be expressed in absolute or relative terms. The absolute uncertainty expresses the size of the range of values in which the true value is highly likely to lie. A measurement given as 150 ± 0.5 pounds tells us that the absolute uncertainty is 0.5 pounds, and that the true value is probably between 149.5 and 150.5 pounds (150 – 0.5 and 150 + 0.5). This uncertainty can be expressed as a percentage of the measured value, i.e., 150 pounds ± .33%, because 0.5 is .33% of 150. With that background out of the way, let's return to the concept of the mean. The arithmetic mean is probably the most commonly used operation for determining the central tendency (i.e., the average or typical value) of a dataset. No doubt everyone reading this Order, on many occasions, has found the average of, say, four numbers by adding them together and dividing by 4. When dealing with interval data, the mean is interpretable because the interval is interpretable.
Where the distance between 4 and 5, for example, is the same as that between 5 and 6, everyone understands that 4.5 is halfway between 4 and 5. As long as we know that 4.5 is exactly halfway between 4 and 5, the arithmetic mean of 4 and 5 (i.e., 4.5) is interpretable. The mean of a set of measurement results gives an estimate of the true value of the measurement, assuming there is no systematic error in the data. The greater the number of measurements, the better the estimate. Therefore, if, for example, we had in this case an Array of xs, then the mean of that dataset (x̄) would approximate µ, especially for the Cultivation, Processing, and Dispensing Arrays, which have 12 observations apiece. If the Department had used x̄ as the Topical raw score instead of the MTO, then its scoring methodology would have been free of systematic error. But the Department did not use x̄ as the Topical raw score. In the event, it had only Arrays of Ωs to work with, so when the Department calculated the mean of an Array, it got the average of a set of Ordinals (Ω̄), not x̄. Using the mean as a measure of the central tendency of ordinal data is highly problematic, if not impermissible, because the information is not quantifiable. In this case, the Department coded the rankings with numbers, but the numbers (i.e., the Ordinals), not being units of measurement, were just shorthand for content that must be expressed verbally, not quantifiably. The Ordinals, that is, translate meaningfully only as words, not as numbers, as can be seen in the table at paragraph 29, supra. Because these numbers merely signify order, the distances between them have no meaning; the interval, it follows, is not interpretable. In such a situation, 4.5 does not signify a halfway point between 4 and 5. Put another way, the average of Best and Second Best is not "Second-Best-and-a-half," for the obvious reason that the notion is nonsensical. To give a real-life example, the three Topical Ordinals in Nature's Way's MD Array are 5, 3, and 2. The average of Best, Third Best, and Fourth Best is plainly not "Third-Best-and-a-third," any more than the average of Friday, Wednesday, and Tuesday is Wednesday-and-a-third. For these reasons, statisticians and scientists ordinarily use the median or the mode to measure the central tendency of ordinal data, generally regarding the mean of such data to be invalid or uninterpretable. The median is the middle number, which is determined by arranging the data points from lowest to highest, and identifying the one having the same number of data points on either side (if the dataset contains an odd number of data points) or taking the average of the two data points in the middle (if the dataset contains an even number of data points). The mode is the most frequently occurring number. (If no number repeats, then there is no mode, and if two or more numbers recur with the same frequency, then there are multiple modes.) We can easily compute the medians, modes, and means of the Topical Ordinals in each of the applicants' Arrays. They are set forth in the following table.

                          Median    Mode       Mean
Bill's
  Cultivation (30%)       1         1          1.8333
  Processing (30%)        2         2          1.7500
  Dispensing (15%)        1         1          1.1667
  Medical Director (5%)   2         NA         2.0000
  Financials (20%)        1         1          1.0000
Costa
  Cultivation (30%)       5         5          4.6667
  Processing (30%)        4.5       5          4.1667
  Dispensing (15%)        4         4          4.0000
  Medical Director (5%)   4         4          4.3333
  Financials (20%)        5         5          4.6667
Keith St. Germain
  Cultivation (30%)       4         4          3.4167
  Processing (30%)        4         4          3.2500
  Dispensing (15%)        2         2          2.4167
  Medical Director (5%)   4         NA         3.6667
  Financials (20%)        3         3          3.3333
Nature's Way
  Cultivation (30%)       3         4          3.0833
  Processing (30%)        3         3          2.5833
  Dispensing (15%)        3.5       3          3.6667
  Medical Director (5%)   3         NA         3.3333
  Financials (20%)        2         2          2.3333
Redland
  Cultivation (30%)       2         2          2.2500
  Processing (30%)        3.5       3, 4, 5    3.4167
  Dispensing (15%)        5         5          4.1667
  Medical Director (5%)   2         NA         2.3333
  Financials (20%)        4         NA         3.6667
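One row of the foregoing table can be checked directly, because the text recites the individual Ordinals in Nature's Way's MD Array (5, 3, and 2). A minimal Python sketch using the standard statistics module (multimode requires Python 3.8 or later):

```python
# Check of one row of the table above: Nature's Way's MD Array, whose three
# Topical Ordinals (5, 3, 2) are recited in the text.
from statistics import mean, median, multimode   # multimode: Python 3.8+

md_array = [5, 3, 2]

print(median(md_array))              # 3
print(multimode(md_array))           # [5, 3, 2] -> no single mode ("NA")
print(round(mean(md_array), 4))      # 3.3333 (matches the table, but remains
                                     # a mean of ranks, not of measurements)
```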
It so happens that the associated medians, modes, and means here are remarkably similar——and sometimes the same. The point that must be understood, however, is that the respective means, despite their appearance of exactitude when drawn out to four decimal places, tell us nothing more (if, indeed, they tell us anything) than the medians and the modes, namely whether an applicant was typically ranked Best, Second Best, etc. The median and mode of Costa's Cultivation Ordinals, for example, are both 5, the number which signifies "Best." This supports the conclusion that "Best" was Costa's average ranking under Cultivation. The mean of these same Ordinals, 4.6667, appears to say something more exact about Costa, but, in fact, it does not. At most, the mean of 4.6667 tells us only that Costa was typically rated "Best" in Cultivation. (Because there is no cognizable position of rank associated with the fraction 0.6667, the number 4.6667 must be rounded if it is to be interpreted.) To say that 4.6667 means that Costa outscored KSG by 1.2500 "points" in Cultivation, therefore, or that Costa was 37% more suitable than KSG, would be a serious and indefensible error, for these are, respectively, interval and ratio statements, which are never permissible to make when discussing ordinal data. As should by now be clear, Ω̄ is a value having limited usefulness, if any, which cannot ever be understood, properly, as an estimate of µ. The Department, regrettably, treated Ω̄ as if it were the same as x̄ and, thus, a reasonable approximation of µ, making the grievous conceptual mistakes of using ordinal data to make interval-driven decisions, e.g., whom to select for licensure when the "difference" between applicants was as infinitesimal as 0.0041 "points," as well as interval representations about the differences between applicants, such as, "Costa's aggregate score is 1.5167 points greater than Nature's Way's aggregate score." Due to this flagrant defect in the Department's analytical process, the aggregate scores which the Department generated are hopelessly infected with systematic error, even though the mathematical calculations behind the flawed scores are computationally correct. Dr. Cornew's Solution Any attempt to translate the Ordinals into a reasonable approximation of interval data is bound to involve a tremendous amount of inherent uncertainty. The Department, however, cannot be permitted to benefit from, or take advantage of, this uncertainty, because the uncertainty flows directly and solely from the Department's fundamental conceptual error, not from any lack or failure of proof attributable to Nature's Way. If we want to ascertain the x behind a particular Ω, all we can say for sure is that: [(Ω – n) + 0.000n] ≤ x ≤ [(Ω + a) – 0.000a], where n represents the number of places in rank below Ω, and a symbolizes the number of places in rank above Ω. The Ordinals of 1 and 5 are partial exceptions, because 1 ≤ x ≤ 5. Thus, when Ω = 5, we can say [(Ω – n) + 0.000n] ≤ x ≤ 5, and when Ω = 1, we can say 1 ≤ x ≤ [(Ω + a) – 0.000a]. 
The table below should make this easier to see.

Lowest Possible Value of x     Ordinal (Ω)     Highest Possible Value of x
1.0004                         5               5.0000
1.0003                         4               4.9999
1.0002                         3               4.9998
1.0001                         2               4.9997
1.0000                         1               4.9996

As will be immediately apparent, all this tells us is that x could be, effectively, any score from 1 to 5——which ultimately tells us nothing. Accordingly, to make fruitful use of the Ordinals, we must make some assumptions, to narrow the uncertainty. Nature's Way's expert witness, Dr. Ronald W. Cornew,22/ offers a solution that the undersigned finds to be credible and adopts. Dr. Cornew proposes (and the undersigned agrees) that, for purposes of extrapolating the scores (values of x) for a given applicant, we can assume that the Ordinals for every other applicant are true values (µ) of x, in other words, perfectly measured scores expressing interval data——a heroic assumption in the Department's favor. Under this assumption, if the subject applicant's Ordinal is the ranking of, say, 3, we shall assume that the adjacent Ordinals of the other applicants, 2 and 4, are true quantitative values. This, in turn, implies that the true value of the subject applicant's Ordinal, as a quantified score, is anywhere between 2 and 4, since all we know about the subject applicant is that the Reviewer considered it to be, in terms of relative suitability, somewhere between the applicants ranked Fourth Best (2) and Second Best (4). If we make the foregoing Department-friendly assumption that the other applicants' Ordinals are µ, then the following is true for the unseen x behind each of the subject applicant's Ωs: [(Ω – 1) + 0.0001] ≤ x ≤ [(Ω + 1) – 0.0001]. The Ordinals of 1 and 5 are, again, partial exceptions. Thus, when Ω = 5, we can say 4.0001 ≤ x ≤ 5, and when Ω = 1, we can say 1 ≤ x ≤ 1.9999. Dr. Cornew sensibly rounds off the insignificant ten-thousandths of points, simplifying what would otherwise be tedious mathematical calculations, so that:

Lowest Possible Value of x     Ordinal (Ω)     Highest Possible Value of x
4                              5               5
3                              4               5
2                              3               4
1                              2               3
1                              1               2

We have now substantially, albeit artificially, reduced the uncertainty involved in translating Ωs to xs. Our assumption allows us to say that x = Ω ± 1 except where only negative uncertainty exists (because x cannot exceed 5) and where only positive uncertainty exists (because x cannot be less than 1). It is important to keep in mind, however, that (even with the very generous, pro-Department assumption about other applicants' "scores") the best we can do is identify the range of values within which x likely falls, meaning that the highest values and lowest values are not alternatives; rather, the extrapolated score comprises those two values and all values in between, at once. In other words, if the narrowest statement we can reasonably make is that an applicant's score could be any value between l and h inclusive, where l and h represent the low and high endpoints of the range, then what we are actually saying is that the score is all values between l and h inclusive, because none of those values can be excluded. Thus, in consequence of the large uncertainty about the true values of x that arises from the low-information content of the data available for review, Ordinal 3, for example, translates, from ordinal data to interval data, not to a single point or value, but to a score-set, ranging from 2 to 4 inclusive. To calculate Nature's Way's aggregate score-set using Dr. 
Cornew's method, it is necessary to determine both the applicant's highest possible aggregate score and its lowest possible aggregate score, for these are the endpoints of the range that constitutes the score-set. Finding the high endpoint is accomplished by adding 1 to each Topical Ordinal other than 5, and then computing the aggregate score-set using the mathematical operations described in paragraphs 104-105. The following WMTOs (Topical subtotals) are obtained thereby: Cultivation, 1.2250; Processing, 1.0500; Dispensing, 0.6625; MD, 0.2000; and Financials, 0.6667. The high endpoint of Nature's Way's aggregate score-set is the sum of these numbers, or 3.8042.23/ Finding the low endpoint is accomplished roughly in reverse, by subtracting 1 from each Topical Ordinal other than 1, and then computing the aggregate score-set using the mathematical operations described in paragraphs 104 and 105. The low endpoint for Nature's Way works out to 1.9834. Nature's Way's aggregate score-set, thus, is 1.9834-3.8042.24/ This could be written, alternatively, as 2.8938 ± 0.9104 points, or as 2.8938 ± 31.46%. The low and high endpoints of Costa's aggregate score-set are found the same way, and they are, respectively, 3.4000 and 4.8375.25/ Costa's aggregate score-set is 3.4000-4.8375, which could also be written as 4.1188 ± 0.7187 points or 4.1188 ± 17.45%. We can now observe that a score of 2.4000 or more is necessary to satisfy the One Point Condition, and that any score between 2.4000 and 3.8375, inclusive, is both necessary and sufficient to satisfy the One Point Condition. We will call this range (2.4000-3.8375) the Proximity Box. A score outside the Proximity Box on the high end, i.e., a score greater than 3.8375, meets the One Point Condition, of course; however, a score that high, being more than sufficient, is not necessary. Nature's Way meets the One Point Condition, therefore, if any value within the range of its score-set falls within the Proximity Box. In fact, 89% of Nature's Way's score-set is inside the Proximity Box. This is easier to see if the aggregate scores of Nature's Way and Costa are overlaid, as follows:

[Figure omitted: overlay of Nature's Way's and Costa's aggregate score-sets against the shaded Proximity Box.]

As is readily apparent, Nature's Way's aggregate score-set (the green bar) extends far into the Proximity Box (shaded yellow), almost to the hilt, leaving only a handle comprising 10.95% of the range exposed. Notice, further, how the opposite end of Nature's Way's score-set gets to the right of Costa's score-set, from 3.4000 to the tip of the range——coincidentally, a segment of practically the same length (10.63%) as the handle——which means that, based on the available data, we cannot exclude the possibility that Nature's Way actually outscored Costa and would have emerged in 2015 as the highest scored applicant had the Reviewers been required to quantify the differences between applicants. For reasons discussed below, the undersigned suspects that the Reviewers likely would not have scored Nature's Way the winner, but no matter, for that is not the issue. On the dispositive issue, the undersigned determines as a matter of ultimate fact that Nature's Way was likely (indeed, was almost certainly) within one point of Costa. In short, a preponderance of the evidence, and more, supports the finding that Nature's Way satisfies the One Point Condition. 
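Taking the endpoint figures found above as given (Nature's Way, 1.9834 to 3.8042; Costa, 3.4000 to 4.8375), the Proximity Box test can be sketched in a few lines of Python; the helper name overlaps is supplied only for exposition.

```python
# Sketch of the Proximity Box test, taking the endpoints found above as given.
natures_way = (1.9834, 3.8042)        # low, high endpoints of the score-set
costa       = (3.4000, 4.8375)

box_low, box_high = costa[0] - 1.0, costa[1] - 1.0    # the 2.4000-3.8375 Proximity Box

def overlaps(score_set, low, high):
    """True if any value in the score-set falls within [low, high]."""
    return score_set[1] >= low and score_set[0] <= high

print(overlaps(natures_way, box_low, box_high))       # True

# The midpoint +/- half-range form used in the text.
mid  = (natures_way[0] + natures_way[1]) / 2
half = (natures_way[1] - natures_way[0]) / 2
print(round(mid, 4), round(half, 4))                  # 2.8938 0.9104
```

Any overlap at all suffices under the reasoning above, because the score-set is all values in the range at once rather than a menu of alternatives.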
An Alternative That Uses the Average Domanial Ordinals to Extrapolate Scores If the Reviewers had actually scored the applicants with numbers reflecting interval data, we could have determined an average Domanial score for each applicant by dividing the sum of all of its scores by 42. This would represent the typical Domanial score for the applicant. Comparing the applicants' typical Domanial scores would reveal not only the typical Domanial ranking at the Reviewer level, but also the typical distribution of suitability——i.e., the degrees of difference, actually quantified——at the Domanial level, as measured at the Reviewer level. We, of course, cannot find the applicants' mean Domanial scores because we lack any Domanial scores. But it might be possible to conjure Domanial scores by making some reasonable assumptions about the relative proximity of the applicants, in terms of suitability, based on the Domanial rankings. We can, for example, calculate the typical Domanial Ordinal for each applicant, by dividing the sum of all of its Domanial Ordinals by 42. Comparing the applicants' respective mean Domanial Ordinals should give us at least a rough idea of where each applicant was typically ranked in a typical Domain, at the Reviewer level. This latter information might also give us an impression of how close (or separated) the applicants actually were to (or from) each other as a function of suitability. For this purpose, averaging the Domanial Ordinals is preferable to simply averaging the MTOs because the small number of Ordinals in the MD and Financials Arrays makes their MTOs subject to skew. In the Southeast region, the mean Domanial Ordinals are as follows:

Rank     Applicant               Sum of Domanial Ordinals     Average Domanial Ordinal
5        Costa                   181                          4.3095
4        Redland                 136                          3.2381
3        Keith St. Germain       130                          3.0952
2        Nature's Way            129                          3.0714
1        Bill's                  66                           1.5714

If we assume, as the Department assumes, that the mean of ordinal data is at all meaningful, we can say, based on these figures, that Costa was the consensus favorite, with an average rank of Second Best; Bill's was the least favorite by general agreement; and the others were effectively in a three-way tie for second place, each having a typical ranking of Third Best. If we assume further, as the Department assumes, that the mean of ordinal data tells us something useful about the quantitative differences between the applicants, then we can say, based on the figures above, that the unknown scores (x) behind the Ordinals should reflect a distribution of suitability in which three of the applicants are bunched up in the center (at the peak of the bell curve, so to speak), while the favorite and least favorite stand noticeably apart.26/ When the available data are viewed in this light, it becomes reasonable to expect that if any one of the applicants in the middle (Redland, KSG, or Nature's Way) were found to be "within one point" of Costa, then so too should the others in that group be found, since the putative consensus view of the Reviewers was that these three applicants were effectively indistinguishable on the merits. Indeed, we should be surprised if that were not the case. It also becomes reasonable to imagine what the interval scores might have looked like, if only impressionistically. While it is impossible to bring forth such scores except by reasonable guesswork, we could do worse than using the average Domanial Ordinals set forth above as plugs for the unknown scores. 
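The substitution just described can be sketched mechanically: each Ordinal of rank is swapped for the corresponding average Domanial Ordinal (rounded to two decimals), and the same weighted-average arithmetic is rerun. The example Arrays below are hypothetical, because the per-Reviewer rankings are not reproduced in this discussion.

```python
# Mechanical sketch of the substitution: each Ordinal of rank is replaced by
# its average Domanial Ordinal (rounded to two decimals), and the weighted-
# average arithmetic is rerun. The example Arrays are hypothetical.

PLUG = {5: 4.31, 4: 3.24, 3: 3.10, 2: 3.07, 1: 1.57}   # rank -> imputed score
TOPICAL_WEIGHTS = {"Cultivation": 0.30, "Processing": 0.30, "Dispensing": 0.15,
                   "Medical Director": 0.05, "Financials": 0.20}

def recalculated_aggregate(arrays):
    """Weighted average of the plug scores standing in for the Ordinals."""
    total = 0.0
    for topic, ordinals in arrays.items():
        scores = [PLUG[o] for o in ordinals]            # substitute the plugs
        total += (sum(scores) / len(scores)) * TOPICAL_WEIGHTS[topic]
    return round(total, 4)

# Hypothetical applicant ranked Third Best in nearly every Domain.
example = {"Cultivation":      [3] * 12,
           "Processing":       [3] * 12,
           "Dispensing":       [3] * 12,
           "Medical Director": [3, 3, 3],
           "Financials":       [2, 3, 3]}
print(recalculated_aggregate(example))      # about 3.10

# Gap between the top two applicants suggested by the recalculation below.
print(round(3.8192 - 3.0613, 4))            # 0.7579, roughly three-quarters of a point
```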
If, in other words, we assume that the following scores correspond to the respective Ordinals of rank, we can recalculate the applicants' aggregate scores, using the Reviewers' rankings.

Rank     Corresponding Score
5        4.31
4        3.24
3        3.10
2        3.07
1        1.57

The aggregate scores which emerge from this recalculation, while admittedly not scientifically reliable, will at least paint a more accurate picture of the perceived distances between the applicants than do the Department's hopelessly flawed aggregate scores. This exercise produces the following outcome:

Applicant               Aggregate Score     Rank
Costa                   3.8192              5
Redland                 3.2346              4
Keith St. Germain       3.1774              3
Nature's Way            3.0613              2
Bill's                  2.0698              1

While the undersigned cannot find that these are likely the applicants' actual aggregate scores (for it is beyond human capacity, given the paucity of available data, to pinpoint the scores with such confidence), he can find that this table likely shows with greater clarity the quantitative differences between the applicants than anything the Department has produced. Using the mean Domanial Ordinals to construct Domanial scores, it is determined that the quantitative difference between Costa and Nature's Way, as best this value can be ascertained, is most likely less than one point (which is sufficient to satisfy the One Point Condition), with three-quarters of a point being an informed, if unscientific, approximation of the genuine difference. ROUNDING OFF THE SPURIOUS DIGITS Remember that the Ordinal 5 does not mean 5 of something that has been counted but the position of 5 in a list of five applicants that have been put in order——nothing more. Recall, too, that there is no interpretable interval between places in a ranking because the difference between 5 and 4 is not the same as that between 4 and 3, etc., and that there is no "second best-and-a-half," which means that taking the average of such numbers is a questionable operation that could easily be misleading if not properly explained. Therefore, as discussed earlier, if the mean of ordinal data is taken, the result must be reported using only as many significant figures as are consistent with the least accurate number, which in this case is one significant figure (whose meaning is only Best, Second Best, Third Best, and so forth). The Department egregiously violated the rule against reliance upon spurious digits, i.e., numbers that lack credible meaning and impart a false sense of accuracy. The Department took advantage of meaningless fractions obtained not by measurement but by mathematical operations, thereby compounding its original error of treating ordinal data as interval data. When the Department says that Nature's Way's aggregate score is 2.8833, it is reporting a number with five significant figures. This number implies that all five figures make sense as increments of a measurement; it implies that the Department's uncertainty about the value is around 0.0001 points——an astonishing degree of accuracy. The trouble is that the aggregate scores, as reported without explanation, are false and deceptive. There is no other way to put it. The Department's reported aggregate scores cannot be rationalized or defended, either, as matters of policy or opinion. This point would be obvious if the Department were saying something more transparent, e.g., that 1 + 1 + 1 + 0 + 0 = 2.8833, for everyone would see the mistake and understand immediately that no policy can change the reality that the sum of three 1s is 3. 
The falsity at issue is hidden, however, because, to generate each applicant's "aggregate score," the Department started with 42 whole numbers (of ordinal data), each of which is a value from 1 to 5. It then ran the applicant's 42 single-digit, whole number "scores" through a labyrinth of mathematical operations (addition, division, multiplication), none of which improved the accuracy or information content of the original 42 numbers, to produce "aggregate scores" such as 2.8833. This process lent itself nicely to the creation of spreadsheets and tables chock-full of seemingly precise numbers guaranteed to impress.27/ Lacking detailed knowledge (which few people have) about how the numbers were generated, a reasonable person seeing "scores" like 2.8833 points naturally regards them as having substantive value at the microscopic level of ten-thousandths of a point——that's what numbers like that naturally say. He likely believes that these seemingly carefully calibrated measurements are very accurate; after all, results as finely-tuned as 2.8833 are powerful and persuasive when reported with authority. But he has been fooled. The only "measurement" the Department ever took of any applicant was to rank it Best, Second Best, etc.——a "measurement" that was not, and could not have been, fractional. The reported aggregate scores are nothing but weighted averages of ordinal data, dressed up to appear to be something they are not. Remember, the smallest division on the Reviewers' "scale" (using that word loosely here) was 1 rank. No Reviewer used decimal places to evaluate any portion of any application. The aggregate scores implying precision to the ten-thousandth place were all derived from calculations using whole numbers that were code for a value judgment (Best, Second Best, etc.), not quantifiable information. Therefore, in the reported "aggregate scores," none of the digits to the right of the first (tenths place) decimal has any meaning whatsoever; they are nothing but spurious digits introduced by calculations carried out to greater precision than the original data. The first decimal place, moreover, being immediately to the right of the one (and only) significant figure in the aggregate score, is meaningful (assuming that the arithmetic mean of ordinal data even has interpretable meaning, which is controversial) only as an approximation of 1 (whole) rank. Because there is no meaningful fractional rank, the first decimal must be rounded off to avoid a misrepresentation of the data. Ultimately, the only meaning that can be gleaned from the "aggregate score" of 2.8833 is that Nature's Way's typical (or mean) weighted ranking is 2.8833. Because there is no ranking equivalent to 2.8833, this number, if sense is to be made of it, must be rounded to the nearest ranking, which is 3 (because 2.8 ≈ 3), or Third Best. To report this number as if it means something more than that is to mislead. To make decisions based on the premise that 0.8833 means something other than "approximately one whole place in the ranking" is, literally, irrational——indeed, the Department's insistence that its aggregate scores represent true and meaningful quantities of interval data is equivalent, as a statement of logic, to proclaiming that 1 + 1 = 3, the only difference being that the latter statement is immediately recognizable as a delusion. An applicant could only be ranked 1, 2, 3, 4, or 5——not 2.8833 or 4.4000. 
Likewise, the only meaning that can be taken from the "aggregate score" of 4.4000 is that Costa's average weighted ranking is 4.4000, a number which, for reasons discussed, to be properly understood, must be rounded to the nearest ranking, i.e., 4. The fraction, four-tenths, representing less than half of a position in the ranking, cannot be counted as approximately one whole (additional) place (because 4.4 ≉ 5). And to treat 0.4000 as meaning four-tenths of a place better than Second Best is absurd. There is no mathematical operation in existence that can turn a number which signifies where in order something is, into one that counts how much of that thing we have. To eliminate the false precision, the spurious digits must be rounded off, which is the established mathematical approach to dealing with numbers that contain uncertainty, as Dr. Cornew credibly confirmed. Rounding to the nearest integer value removes the meaningless figures and eliminates the overprecision manifested by those digits. When the aggregate scores are rounded to remove the deceitful decimals, the results are: Costa, 4; Redland, KSG, and Nature's Way, 3; and Bill's, 2. These corrected "final ranks" require a corresponding adjustment of the "regional ranks" because there is a three-way tie for second place. Thus, using unspurious aggregate scores to regionally rank the applications, the positions are as follows: Costa, 5; Redland, KSG, and Nature's Way, 4; and Bill's, 1. In sum, as yet another alternative to determining whether Nature's Way is "within one point" of Costa, the elimination-of-spurious-digits approach shows that Nature's Way satisfies the One Point Condition.
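Using the two Department-reported aggregate scores recited above (Costa, 4.4000; Nature's Way, 2.8833), the rounding exercise reduces to a few lines of Python:

```python
# The rounding exercise, using the Department-reported aggregate scores as given.
reported = {"Costa": 4.4000, "Nature's Way": 2.8833}

rounded = {name: round(score) for name, score in reported.items()}
print(rounded)       # {'Costa': 4, "Nature's Way": 3}

# With the spurious digits removed, the two applicants sit one rank apart,
# which is what the One Point Condition asks about.
print(abs(rounded["Costa"] - rounded["Nature's Way"]) <= 1)    # True
```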

Recommendation Based on the foregoing Findings of Fact and Conclusions of Law, it is RECOMMENDED that the Florida Department of Health enter a final order approving Nature's Way's application for registration as an MMTC unless Nature's Way fails (i) to meet any pertinent requirement of section 381.986 not set forth in section 381.986(8)(a)2.a., or (ii) to provide documentation to the Department that it has the existing infrastructure and technological ability to begin cultivating marijuana within 30 days after registration. DONE AND ENTERED this 15th day of June, 2018, in Tallahassee, Leon County, Florida. S JOHN G. VAN LANINGHAM Administrative Law Judge Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-3060 (850) 488-9675 Fax Filing (850) 921-6847 www.doah.state.fl.us Filed with the Clerk of the Division of Administrative Hearings this 15th day of June, 2018.

Florida Laws (7) 120.52, 120.54, 120.56, 120.569, 120.57, 120.60, 381.986
JULIE MCCUE vs PAM STEWART, AS COMMISSIONER OF EDUCATION, 17-000423 (2017)
Division of Administrative Hearings, Florida Filed:Orlovista, Florida Jan. 18, 2017 Number: 17-000423 Latest Update: Jan. 22, 2018

The Issue The issue for determination is whether Petitioner’s challenge to the failing score she received on the essay section of the Florida Educational Leadership Examination (FELE) should be sustained.

Findings Of Fact Petitioner is a teacher. She received her undergraduate degree in education with a major in social studies from Bowling Green State University in 1996. Since earning her bachelor’s degree, she has taught history, psychology, and sociology over a 20-year span, at high schools in North Carolina, Ohio, and for the past three years, Florida. Petitioner holds a Florida teacher certificate. She did not have to take an exam for that certificate. She likely was issued her Florida teacher certificate on the basis of the Ohio teacher certificate she held when she moved to Florida. Petitioner aspires to add to her teacher certificate by attaining certification in educational leadership, which would require that she take and pass all subparts of the FELE. Petitioner testified that in the district where she is employed as a teacher, she would qualify for a raise in her teacher’s pay upon receiving a master’s degree in educational leadership followed by DOE certification in educational leadership. Petitioner accomplished the first step by receiving a master’s degree in educational leadership from Concordia University in Chicago, Illinois, in 2015.3/ She then initiated the process to take the FELE. Educational leadership certification would also make Petitioner eligible for a leadership position, such as principal, vice principal, or a school district administrative leadership position, if she chooses to go that route. However, Petitioner’s primary motivation in seeking this certification is for the additional compensation, and not because she wants an educational leadership position.4/ Respondent, Pam Stewart, as Commissioner of Education, is the state’s chief educational officer and executive director of DOE. §§ 20.15(2) and 1001.10(1), Fla. Stat. One of DOE’s responsibilities is to review applications for educator certification, and determine the qualifications of applicants according to eligibility standards and prerequisites for the specific type of certification sought. See § 1012.56, Fla. Stat. One common prerequisite is taking and passing an examination relevant to the particular certification. Respondent is authorized to contract for development, administration, and scoring of educator certification exams. § 1012.56(9)(a), Fla. Stat. Pursuant to this authority, following a competitive procurement in 2011, Pearson was awarded a contract to administer and score Florida’s educator certification exams, including the FELE. The State Board of Education (SBE) is the collegial agency head of DOE. § 20.15(1), Fla. Stat. As agency head, the SBE was required to approve the contract with Pearson. The SBE is also charged with promulgating certain rules that set forth policies related to educator certification, such as requirements to achieve a passing score on certification exams. DOE develops recommendations for the SBE regarding promulgating and amending these rules. In developing its recommendations, DOE obtains input and information from a diverse group of Florida experts and stakeholders, including active teachers and principals, district administrators, and academicians from colleges and universities. FELE Essay Development and Scoring DOE develops the FELE, as well as the other educator certification exams, in-house. The FELE is developed and periodically revised to align with SBE-promulgated standards for educational leadership, as well as SBE-promulgated generic subject area competencies. 
In addition, as required by statute, certification exams, including the FELE, must be aligned to SBE-approved student standards. Details about the FELE, such as the applicable generic competencies, the exam organization, and passing score requirements, are set forth in Florida Administrative Code Rule 6A-4.00821 (the FELE rule). The FELE rule has been amended periodically, but the current version includes a running history, setting forth FELE details that applied during past time periods, as well as those currently in effect. The FELE consists of three subtests. Subtest one is a multiple choice test covering the area described as "Leadership for Student Learning." Subtest two, also a multiple choice test, covers "Organizational Development." Subtest three covers "Systems Leadership," and has two sections: a multiple choice section; and a written performance assessment, or essay, section. The FELE has contained an essay component for many years (as far back as any witness could remember). Before January 2015, the essay score was included in a single composite score given for subtest three. The multiple choice part accounted for most of the weight of the composite score (70 percent); the essay portion accounted for 30 percent of the composite score. Based on input from educators, academicians, and other subject matter experts, DOE recommended that the FELE subtest three be changed by establishing separate passing score requirements for each section, thereby requiring examinees to pass each section. The SBE adopted the recommendation, which is codified in the FELE rule, and has applied to FELE scoring since January 1, 2015. The effect of the change is that an examinee not as proficient in effective written communications can no longer compensate for a weak essay with a strong performance on the multiple choice section. To a lesser extent (given the prior 70:30 weight allocation), the reverse is also true. The policy underlying this scoring change is to give more emphasis to testing writing skills, in recognition of the critical importance of those skills. By giving heightened scrutiny to writing skills, the FELE better aligns with increasingly rigorous SBE-approved student standards for written performance. This policy change is reasonable and within the purview of the SBE; in any event, it is not subject to debate in this case, because Petitioner did not challenge the FELE rule. The generic competencies to be demonstrated by means of the FELE are set forth in the publication "Competencies and Skills Required for Certification in Education Leadership in Florida, Fourth Edition 2012," adopted by reference in the FELE rule and effective as of January 1, 2014. The competency and skills generally tested by the FELE written performance assessment are:

Knowledge of effective communication practices that accomplish school and system-wide goals by building and maintaining collaborative relationships with stakeholders
Analyze data and communicate, in writing, appropriate information to stakeholders.
Analyze data and communicate, in writing, strategies for creating opportunities within a school that engage stakeholders.
Analyze data and communicate, in writing, strategies that increase motivation and improve morale while promoting collegial efforts.

This generic description provides a high-level view (aptly described as from the 30,000-foot level) of the competency and skills that an educational leader should possess, which are tested by the written performance assessment. 
DOE’s job is to distill those qualities down to a test. As reasonably summarized by DOE’s witnesses, the purpose of the FELE written performance assessment, as established by the SBE, is to test for effective written communication skills, and data analysis that drives appropriate strategies for improvement. These overall concepts are built into the general FELE rubric which serves as a guide to scoring, the individual essay prompts, and the supplemental rating criteria (essentially prompt-specific rubrics, making the general rubric specific to each essay prompt). The FELE rule sets forth requirements for how the “test scoring agency” (Pearson) must conduct the scoring of the written performance assessment: Raters Judges. The test scoring agency shall appoint persons to score the written performance assessment who have prior experience as educational leaders, instructional leaders, or school building administrators. Chief Raters. The chief raters shall be raters who have prior experience as educational leaders, instructional leaders, or school building administrators and have demonstrated success as raters. Pursuant to Pearson’s agreement with DOE, DOE retains the right to approve raters who will be scoring the written performance assessments. Therefore, Pearson proposes raters who meet the specified qualifications, and then DOE approves or disapproves the proposed raters. Approved raters must undergo training before they are appointed by Pearson to conduct scoring. There is currently one chief rater for the FELE written performance assessment. The chief rater was a rater before being trained for, and assuming, the chief rater position. The chief rater was trained by Florida DOE chief raters when Pearson became the contractor and the scoring was transitioned to Pearson’s offices in Hadley, Massachusetts, during 2012 to 2013. Pearson employs holistic scoring as the exclusive method for scoring essays, including FELE written performance assessments (as specified in Pearson’s contract with DOE). The holistic scoring method is used to score essay examinations by professionals across the testing service industry. Pearson has extensive experience in the testing service industry, currently providing test scoring services to more than 20 states. Dr. Michael Grogan, Pearson’s director of performance assessment scoring services and a former chief rater, has been leading sessions in holistic scoring or training others since 2003. He described the holistic scoring method as a process of evaluating the overall effect of a response, weighing its strengths and weaknesses, and assigning the response one score. Through training and use of tools, such as rubrics and exemplars, the evaluation process becomes less subjective and more standardized, with professional bias of individual raters minimized, and leading to consistent scoring among trained raters. Training is therefore an integral part of Pearson’s testing services for which DOE contracted. In an intensive two-day training program conducted by the chief rater in Hadley, prospective raters are trained in the holistic scoring method used to score FELE essays. Pearson’s rater training program begins with a review of background about the holistic scoring method generally, including discussions about rater bias. From there, trainees are oriented to the FELE-specific training material. 
They thoroughly review and discuss the rubric, the score scale, the operational prompt raters will be scoring, and exemplars (other responses to the prompt that have been pre-scored). The rater candidates then employ these tools to begin independently scoring exemplars. Raters-in-training conduct many rounds of independent scoring sessions, interspersed with group discussions regarding how the essays should have been scored. The trainees then move into the calibration test phase, in which they independently score essay exemplars, paired with an experienced rater who independently scores the same exemplars. The trainees score essay after essay, then compare scores with the experienced rater, with the goal to achieve consistency in scores, by equaling or coming within one point of the other rater’s score. Ultimately, the raters must pass the calibration test by achieving scoring consistency to qualify for appointment as raters to score actual FELE essays. Each FELE essay is scored independently by two DOE- approved raters who meet the qualifications in the FELE rule and who have successfully completed training. Pairs of raters receive scoring assignments, one prompt at a time. The assignments are received anonymously; one rater does not know who the other assigned rater is. And neither rater knows anything about the examinee, as the essay is identified solely by a blind number. FELE essay raters work in one room, at individual computer terminals, in Hadley. Security of all testing information is vigilantly maintained, through confidentiality agreements and secure, limited, and protected computer access. For each scoring assignment, raters adhere to a step- by-step process that reinforces their initial training. Raters must first score sample responses to a historic prompt that is different from the assigned prompt, as a training refresher to invoke the holistic scoring mindset. From there, raters review the assigned prompt and the scoring guides (general rubric and supplemental rating criteria). Raters then must score an anchor set of six sample responses, one exemplifying each score category; the historic scores are not revealed until the raters complete their scoring. Raters compare their scores with the anchor scores, and work through any discrepancies. Raters then go through a calibration process of scoring 10 more sample responses to the same prompt. After scoring all 10 essays, the raters learn the scores deemed appropriate for those responses, and must work through any discrepancies until consistency is achieved. Only after scoring many sample essays and achieving success in scoring consistency are the raters permitted to turn to the assigned FELE essay for review and scoring. The chief rater supervises and monitors the raters while they are engaged in their scoring work. The chief rater is physically present in the same room with the raters, monitoring their work online in real time. As raters enter scores, those scores are immediately known by the chief rater, so that any “red flag” issues in scoring results and trends can be addressed immediately. As another tool, “ghost papers,” which are pre- scored essays, are randomly assigned to raters as if they are actual FELE essays. The chief rater monitors ghost paper scoring as another check on consistency with a predetermined measure. The scores of the two raters assigned to score a FELE essay are added together for the total holistic score. 
Thus, the total score range for a FELE essay is between two points and 12 points: the lowest possible score of two points would be achieved if each rater assigns a score of one point; and the highest score of 12 points would be achieved if each rater assigns six points. The sum of the two raters’ scores will be the score that the FELE essay receives unless the raters’ scores disagree by more than one point. If the two raters’ scores differ by more than one point, then the chief rater steps in to resolve the discrepancy. After FELE essays are scored, the examinee is informed of the final score of between two and 12 points, and the examinee is told whether the score is a passing or failing score. Seven points is a passing score, according to the FELE rule. Raters do not develop written comments as part of their evaluation of FELE essays. Their holistic evaluation is expressed by the point value they assign to the essay. Through the intensive training and the subsequent calibration and recalibration before each FELE essay scoring assignment, Pearson has achieved excellent consistency in rater scoring of the FELE written performance assessment. From September 12, 2016, through October 8, 2016, the four Pearson raters who were scoring FELE essays (including Petitioner’s essay) achieved a coefficient alpha index of 98 percent, meaning that 98 percent of the time, the scores assigned to an essay by a pair of raters were either identical or adjacent (within one point), and when adjacent, were balanced (i.e., each rater was as often the higher scorer as he or she was the lower scorer). This exceeds industry standards. A comparable, high coefficient alpha index was achieved by FELE essay raters for each month in 2015 and 2016. The lowest coefficient alpha index, still exceeding industry standards, was 93 percent in a single month (February 2015). In two months (December 2015 and July 2016), the coefficient alpha index was 94 percent, with the remaining 21 months at between 95 percent and 98 percent. Examinee Perspective: Preparation for the FELE Essay DOE provides detailed information and aids on its website regarding the FELE, including the essay section, for potential examinees. This includes a 40-page test information guide for the FELE. The test information guide contains all of the SBE-adopted competencies and skills, including the competency and skills tested by the written performance assessment. The guide also contains the general FELE essay scoring rubric, and a sample prompt that is representative of the essay prompts actually used. DOE also posts on its website three additional sample FELE essay prompts along with the supplemental rating criteria that correspond to those prompts. Petitioner does not challenge the appropriateness of these materials generally, which she accessed and used to prepare for the FELE written performance assessment. However, Petitioner complained that DOE does not provide more study guide materials or endorse specific vendors of study guide materials so as to more thoroughly prepare potential examinees for their essay tests. Petitioner also complained that when an examinee fails an essay test, DOE does not provide substantive explanations to help the examinee understand the reasons for the failing score and how the examinee can perform better. 
DOE appropriately responded to this criticism by reference to standards for testing agencies adopted by three authoritative bodies: the American Educational Research Association, the American Psychological Association, and the National Council of Measurement Education. These standards dictate that as testing agency, DOE’s responsibility is to develop tests that evaluate whether individuals are prepared with the necessary skills. It is not DOE’s responsibility, and it would not be appropriate for DOE, as the testing agency, to prepare individuals to pass its tests, or coach individuals on how to perform better on tests they do not pass. The information DOE makes publicly available is appropriate and sufficient to explain the FELE essay exam and scoring process, and to allow an examinee to know what to expect in a prompt and what is expected of the examinee in a response. The DOE test information guide explains the FELE essay and scoring process, as follows: Your response will be scored holistically by two raters. The personal views you express will not be an issue; however, the skill with which you express those views, the logic of your arguments, the quality of your data analysis and interpretation, and the appropriateness of your implementation plans will be very important in the scoring. Your response will be scored on two constructs: communication skills, including ideas, focus, organization, and mechanics (capitalization, punctuation, spelling, and usage) and data analysis, interpretation, and evaluation, including data explanation, application, relevant implications, and analysis of trends. The raters will use the criteria on the following page when evaluating your response. The score you receive for your written performance assessment will be the combined total of the two raters’ scores. (R. Exh. 2 at 13 of 40). On “the following page” of the test information guide, the general FELE essay rubric is set forth in its entirety. The rubric is also available on the DOE website as a separate, stand- alone document. The rubric is simply a comparative description of the extent to which an essay demonstrates the generic competency and skills to be tested--effective written communication skills, with data analysis that drives appropriate strategies for improvement. For example, recognizing that part of effective written communication is use of proper grammar and syntax, the rubric describes that quality comparatively, differentiating between best, better, good, not-so-good, worse, and worst. Similarly, the rubric addresses whether proposed strategies are appropriate by comparing the extent to which the strategies are aligned with the data findings, relevant implications, and trends. But these are just parts--and not discrete parts--of the evaluation. As explained in the test information guide, holistic evaluation judges the overall effect of a response, considering all aspects of effective communication and data analysis, in a process of weighing and balancing strengths and weaknesses. Of course, DOE does not make publicly available those essay prompts being used in FELE tests, or the supplemental rating criteria for those prompts; these are protected, confidential testing material. It would be unreasonable for examinees to expect more from a testing agency than what DOE makes available. Score Verification An examinee who fails the written performance assessment (or any other FELE subtest or section) may request score verification, to verify that the failed exam was scored correctly. 
The score verification procedures are set forth in the FELE rule. The score verification rule provides that DOE makes the determination as to whether an examinee’s test was scored correctly. DOE is authorized to consult with field-specific subject matter experts in making this determination. In practice, though not required by the FELE rule, when a score verification request is directed to the score assigned to a FELE written performance assessment, DOE always consults with a field-specific subject matter expert known as a “chief reviewer.” Chief reviewers are another category of experts (in addition to raters and chief raters) proposed by Pearson pursuant to qualifications identified by DOE, subject to DOE approval. Once approved by DOE, prospective chief reviewers undergo the same rater training in the holistic scoring process as do all other raters, to gain experience in scoring essays and undergo calibration to achieve scoring consistency. In addition, chief reviewers are given training for the chief reviewer role of conducting review and scoring of essays when scores have been contested.5/ Unlike raters and chief raters, chief reviewers do not work at Pearson in Hadley, Massachusetts; they are Florida experts, actively working as principals of Florida schools. Chief reviewers only become involved when an examinee who failed the FELE written performance assessment invokes the score verification process. A chief reviewer is assigned to evaluate whether that essay was scored correctly. The chief reviewer conducts that evaluation by first going through the same step-by-step process as raters, following the same retraining and calibration steps that involve scoring many sample essays. Upon achieving success in the calibration test, the chief reviewer moves on to evaluate the assigned essay response independently, before reviewing the scores the raters gave to that essay. Upon reviewing the raters’ scores, the chief reviewer offers his or her view as to whether the essay score should stand or be changed, and provides a summary rationale for that opinion. This information is conveyed to DOE, which determines the action to take--verify or change the score--and notifies the examinee of the action taken. Petitioner’s FELE Attempts Petitioner took all parts of the FELE for the first time in the summer of 2015, in June and July. She passed subtest one, but failed subtest two and both sections (multiple choice and written performance assessment) of subtest three. FELE examinees can retake failed subtests/sections, and need only retake the parts failed. There are no limits on the number of retakes. The requirements for retakes are that at least 30 days must have elapsed since the last exam attempt, and that examinees pay the registration fees specified in the FELE rule for each retake of a failed subtest and/or section. On April 23, 2016, roughly nine months after her first attempt, Petitioner retook subtest two and both sections of subtest three. To prepare, Petitioner used the “very limited” resources on the DOE website, and purchased some “supplementals,” which she described as materials “on the market that supposed FELE experts sell.” (Tr. 33). She used the material to study and practice writing essays. Petitioner passed subpart two and the multiple choice portion of subpart three. However, she did not pass the written assessment section of subpart three. Petitioner retook the written performance assessment 33 days later (May 26, 2016), but again, did not pass. 
Petitioner did not invoke the score verification process to question the failing scores she received on her first three FELE essays. Those three failing scores stand as final, as she did not challenge them. Petitioner explained that she did not challenge them because she was embarrassed, because as a teacher, she believed that she would pass the test. However, while Petitioner has had many years of success as a teacher, the skills for teaching do not necessarily correlate to the skills required for educational leadership positions, as several DOE witnesses credibly attested. Nonetheless, Petitioner tried again, in an effort to qualify for the pay raise her district would provide. She retook the FELE essay section for the fourth time on September 28, 2016. Petitioner testified that, as she had done before, she reviewed the material on DOE’s website, such as the test information guide with its general rubric, and she practiced writing essays using the sample essay prompts and supplemental rating criteria. In what was described as a “eureka moment,” she also found what she described as “the rubric” on the website, which she proceeded to memorize. Rather than the rubric, however, what Petitioner memorized was the generic competency and skills tested by the written performance assessment. Petitioner made a point of incorporating words from the competency and skills document in her essay. Petitioner did not pass. Each of the four times Petitioner took the FELE written performance assessment, including the most recent attempt at issue in this case, both raters assigned to score her essay gave the essay three points, for a total score of six points. Since in each of her four attempts, Petitioner’s essay was scored the same by both raters, Petitioner’s essays were never reviewed by a chief rater, because there was never a discrepancy in the raters’ scores for the chief rater to resolve. Petitioner’s Challenge to Her Fourth Six-Point Essay Score When Petitioner was notified that her fourth essay attempt resulted in the same score--six, on a scale ranging from two points to 12 points--this time Petitioner took the next step, by requesting a score verification session. Following the procedures in the FELE rule for score verification, Petitioner registered, paid the required fee, and went to the designated Pearson site. There, she was able to review the essay prompt, as well as her written response. Petitioner testified that she prepared a “statement of specific scoring errors” (so named in the FELE rule--more aptly, in her case, a statement explaining why she thinks her essay score was erroneous), which she submitted to Pearson at the end of her session. By rule, the statement is then filed with DOE. The statement Petitioner prepared was not offered into evidence, apparently by choice, as Petitioner was looking for it at one point, stating that it was “part of the confidential stuff” (Tr. 78) that had been produced by DOE. Petitioner attempted to describe the statement of scoring errors that she recalls completing. She described it as primarily demonstrating where in her essay she addressed what she characterized as the “rubric” that she had found on DOE’s website and memorized. As noted previously, this was not the rubric, but rather, was the high-level description of the competency and skills tested by the FELE written performance assessment. 
As described, Petitioner’s statement explaining that she “memorized” the competency/skills ingredients, and showing where she included competency/skills buzz-words in her essay (e.g., “morale”; she also said “celebration,” but that word does not appear in the competency/skills), would not seem to be the sort of statement that would be persuasive as to a claim of an erroneous score. It would be a mistake to memorize and repeat words from the generic competency/skills without regard to whether they are used in a way that makes sense in the responding to the specific instructions of the essay prompt. DOE conducted its review, and the score was verified through a process consistent with DOE’s practice of consulting a chief reviewer retained by Pearson with DOE approval, who was qualified as a subject matter expert in the field of Florida educational leadership. The assigned chief reviewer was also qualified by Pearson training in the holistic scoring method and in conducting score verification reviews. The chief reviewer who undertook to verify Petitioner’s essay score did not review Petitioner’s statement explaining why she believed her essay score was erroneous. Instead, he independently evaluated Petitioner’s essay, following the same holistic method, including the step-by-step retraining and calibration process, used by all raters to score a FELE essay. Then the chief reviewer reviewed the scores separately assigned by the two raters who scored Petitioner’s essay. He concluded that the assigned scores of three were appropriate for Petitioner’s essay, and that no change should be made. The chief reviewer provided a summary rationale for his determination.6/ Petitioner complains that the chief reviewer should have been given her statement explaining why her score was erroneous, because that might have affected the chief reviewer’s decision. However, pursuant to the FELE rule, the chief reviewer’s role is consultative only; DOE makes the determination of whether Petitioner’s essay was scored correctly, which is why the rule provides that the statement of asserted scoring errors is filed with DOE. Petitioner presented no evidence proving that DOE did not consider Petitioner’s statement explaining why she believed her essay score was erroneous. No testimony was offered by a witness with personal knowledge of any review given to Petitioner’s statement; that review would have been done by a member of DOE’s “scoring and reporting team” (Tr. 260-261), none of whom testified. If Petitioner had proven that the statement was not considered by DOE, the failure to offer that statement into evidence would make it impossible to determine the import, if any, of such failure. Petitioner was notified by DOE that the “essay score that you questioned has been reviewed by a Chief Reviewer. As a result of this review, the Department has determined that the written performance section that you questioned is indeed scored correctly.” Petitioner was informed that if she was not satisfied with the outcome, she was entitled to dispute the decision pursuant to sections 120.569 and 120.57. Petitioner availed herself of that opportunity,7/ and was given the chance in a de novo evidentiary hearing to present evidence to support her challenge to her exam score. At the hearing, Petitioner offered only her own testimony as support for her challenge to the scoring of her essay. 
She isolated portions of the supplemental rating criteria and attempted to identify where her essay addressed the isolated portions, for which, in her view, she ought to have been awarded “a point” here or “a half-point” there. She also referred to isolated parts of the summary comments from the raters and chief reviewers, and attempted to identify the parts of her essay that did or did not do what the comment portions stated. Petitioner was not shown to be, tendered as, or qualified as an expert in either educational leadership or holistic scoring of essays. Her attempt to tally points by comparing isolated parts of the prompt-specific rubric to isolated parts of her essay is contrary to the holistic scoring approach used to score the FELE written performance assessment. Petitioner offered no comprehensive, holistic evaluation of her essay as a whole, nor was she shown to be qualified to do so. Besides being contrary to the holistic scoring method, Petitioner’s critique of the scoring of her essay was wholly unpersuasive. Without undermining the confidentiality of the ingredients of Petitioner’s testimony (the essay prompt, her essay, the supplemental rating criteria, and the historic anchors), overall, the undersigned did not find Petitioner’s critique credible or accurate. Although awkward to try to explain in code, some examples follow to illustrate the basis for this overall finding. As one example, Petitioner referred to data points that the prompt-specific rubric indicated should be identified in response to the prompt. If a “data point” that should have been identified was that A was consistently lower than B, Petitioner called attention to a part of her essay identifying A as low. She acknowledged that her essay did not expressly compare A to B at all, much less over time, but Petitioner argued that those comparisons were implicit. She said that she should have gotten at least a half-point for partially identifying the data point. That argument is rejected. The point that needed to be made was a comparative assessment over a time span. Where another data point called for identifying that two things were “substantially lower” than other things, Petitioner said that she sufficiently identified this point by saying that one of those two things was “lowest” (or “worst”). However, the point that needed to be made was not just that something was lowest or worst, but also, that another thing was also lower, and that the degree of separation between those two things and other things was substantial. Overall as to the data points, Petitioner failed to identify several significant trends, and failed to offer sufficient comparative analysis as to the trends she did identify. She reported data or averages of data without identifying the relevant implications of the data, as would have come from making the appropriate comparisons and identifying the appropriate trends. In terms of the competency/skills language, she did not analyze the data and communicate, in writing, appropriate information to the stakeholders identified in the prompt as the target audience. The data point failures were particularly problematic when taken to the next step of proposing specific strategies that would lead to improvement in the areas shown to be needed from the data points. 
For example, Petitioner’s failure to identify the second data point in the supplemental rating criteria resulted in Petitioner proposing action that was at odds with what the second data point showed.8/ Petitioner’s attempted critique of her essay score was riddled with other inconsistencies. For example, Petitioner acknowledged that she often failed to summarize specific data for each of the three years, choosing instead to provide three-year averages. Petitioner’s explanation was that she did not want to repeat data in the prompt because that would be condescending to her target audience. This is a weak rationale, one which is at odds with the instructions given with the prompt. Petitioner also said it should have been a positive that instead of just citing yearly numbers, she went to the trouble of calculating three-year averages. Instead, it appeared more negative than positive, by masking information needed to respond to the prompt. While Petitioner defended her omission of specific data because of the target audience she was instructed to address, Petitioner inconsistently sought to explain an odd statement using the word “celebrated” (Jt. Exh. 3 at 1, first sentence of second paragraph) as being directed more to certain other stakeholders than to the target audience. She did this because the “rubric” (i.e., the competency/skills), said to communicate to stakeholders, and also “talks about morale and celebration.” (Tr. 59). This is an example of Petitioner’s ineffective strategy of throwing out words from the competency/skills in ways that were contrary to specific instructions in the prompt. The target audience identified in an essay prompt may be certain stakeholders, instead of all stakeholders. For example, the sample prompt in the test information guide (R. Exh. 2 at 34), instructs the writer to prepare a memorandum for school advisory council members. The use of the word “stakeholders” in the competency/skills would not justify ignoring the essay prompt instructions by writing with a communication style more suited to a different audience of other stakeholders. Petitioner disagreed with the suggestion in both chief reviewers’ written comments that the essay’s responses to the third and fourth bullet points in the prompt (Jt. Exh. 1) were generalized, lacking specifics and examples. Petitioner failed to persuasively establish that her essay provided sufficient detail in this regard to avoid being fairly characterized as responding to these bullet points with “generalizations.” By failing to adequately analyze the data, relevant implications, and trends, Petitioner’s responses to these bullet points were either too general (e.g., research to find strategies), or in the one instance where specific action was described, the action was at odds with data points she missed. Her responses lacked appropriate specific action driven by data analysis. Petitioner admitted that her essay had a number of misspellings, grammatical errors, and punctuation errors. She acknowledged that this is an area that the raters are supposed to consider. It is a necessary part of effective written communication. In this regard, by the undersigned’s count, 29 of the 37 sentences in Petitioner’s essay suffer from one or more errors of grammar, syntax, punctuation, or misspellings. More than half of those sentences (at least 15 of 29) suffer from errors of grammar and syntax, such as pairing “neither” with “or” instead of “neither . . . 
nor,” using non-parallel structure, using plural subjects with singular verbs or singular subjects with plural verbs, and using conditional language (such as “would do” and “would be”) without a corresponding condition (e.g., that action would be appropriate, if the trend continues). In addition, the last sentence of the second paragraph on page one is not a complete sentence, ending in mid-word. Petitioner admitted that she ran out of time to complete the thought. As to this consideration, Petitioner’s essay appears to the undersigned to fall somewhere between the general rubric’s description for a “three” (“The writer demonstrates some errors in the use of proper grammar and syntax that do not detract from the overall effect.”), and the general rubric’s description for a “two” (“The writer demonstrates serious and frequent errors in proper grammar and syntax.”). Petitioner’s essay admittedly did not meet the general rubric’s description for a score of “four” (“The writer demonstrates satisfactory use of proper grammar and syntax.”). This does not automatically doom Petitioner’s essay to a score of three or less than three. However, it demonstrates the fallacy of Petitioner’s approach of seizing on isolated parts of the prompt-specific rubric (supplemental rating criteria) to compare to her essay, without approaching the scoring process holistically. Even if Petitioner had persuasively critiqued parts of the essay scoring, as Respondent aptly notes, it is not simply a matter of checking off boxes and adding up points. Petitioner failed to prove that the holistic scoring of her essay was incorrect, arbitrary, capricious, or devoid of logic and reason. She offered no evidence that a proper holistic evaluation of her essay would result in a higher total score than six; indeed, she offered no holistic evaluation of her essay at all. Petitioner’s critique of various parts in isolation did not credibly or effectively prove that her score of six was too low; if anything, a non-expert’s review of various parts in isolation could suggest that a score of six would be generous. But that is not the scoring approach called for here. Petitioner failed to prove that there was anything unfair, discriminatory, or fraudulent about the process by which the written performance assessment exam was developed, administered, and scored.9/ Petitioner pointed to the passage rate on the FELE written performance exam following the adoption of a separate passing score requirement. In 2015 and 2016, the passage rates for first-time test takers were 54 percent and 50 percent, respectively. The data is collected and reported for first-time test takers only, because that is considered the most reliable. Historically, performance on essay examinations goes down, not up, with multiple retakes. The passage rates reflect a mix of both examinees prepared in an academic educational leadership program geared to Florida standards, and those whose educational background does not include a Florida-focused program. Historically, examinees from academic programs aligned to Florida standards have greater success passing the FELE essay than those from out-of-state academic programs that are not aligned to Florida standards. Petitioner may have been at a disadvantage in this regard, as it does not appear that her master’s program at Concordia University was aligned to Florida’s educational leadership standards. The passage rates, standing alone, do not prove that the written performance assessment is unfair, arbitrary, or capricious. 
It may be that the SBE’s decision to increase scrutiny of the writing skills of FELE examinees results in fewer examinees achieving a passing score. Perhaps that is a good thing. Perhaps too many examinees achieved passing scores on the FELE in the past, despite weak written communication skills. In any event, the overall written performance assessment passage rates, standing alone, provide no support for Petitioner’s challenge to the score given to her essay. Petitioner failed to prove that the scoring verification process was unfair, arbitrary, capricious, or contrary to the procedures codified in the FELE rule. Petitioner pointed to evidence that essay scores are changed only on occasion, and that no scores were changed in 2016. Those facts, standing alone, do not support an inference that the score verification process is unfair, arbitrary, or capricious. An equally reasonable or more reasonable inference is that the scores to be verified were appropriate.

Recommendation Based on the foregoing Findings of Fact and Conclusions of Law, it is RECOMMENDED that a final order be entered rejecting Petitioner’s challenge to the failing score she received on the written performance assessment section of the Florida Educational Leadership Exam taken in September 2016, and dismissing the petition in this proceeding. DONE AND ENTERED this 13th day of October, 2017, in Tallahassee, Leon County, Florida. S ELIZABETH W. MCARTHUR Administrative Law Judge Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-3060 (850) 488-9675 Fax Filing (850) 921-6847 www.doah.state.fl.us Filed with the Clerk of the Division of Administrative Hearings this 13th day of October, 2017.

Florida Laws (5) 1001.10, 1012.56, 120.569, 120.57, 20.15
# 8
PATRICIA WILSON vs BARBER'S BOARD, 93-002524 (1993)
Division of Administrative Hearings, Florida Filed:Jacksonville, Florida May 05, 1993 Number: 93-002524 Latest Update: Jun. 11, 1996

The Issue Whether items 63, 74, 92, 119 and 124 of the January 1993 Barber Licensure Examination were valid and correctly graded as to Petitioner. Whether Petitioner's grade report correctly reflected the score achieved by Petitioner on the January 1993 Barber Licensure Examination.

Findings Of Fact Upon consideration of the evidence adduced at the hearing, the following relevant findings of fact are made: Petitioner, Patricia Wilson, was a candidate (Number 0100037) for the written portion of the January 1993 Barber Licensure Examination given on January 25, 1993. Petitioner questioned the validity and the answers supplied by Respondent's answer key for items number 63, 74, 92, 119 and 124 covered in the January 1993 Barber Licensure Examination. When Petitioner's witness, Yvette Stewart, a licensed barber in the state of Florida, was read each item in question, it was apparent that the witness clearly understood each item and that the items were neither misleading nor confusing to the witness. Likewise, when the witness was asked to choose an answer for each item from several possible answers, the witness chose the answer given in the Respondent's answer key as the correct answer. Because more than 50 per cent of the candidates taking the examination failed to correctly answer item 119, the Respondent reviewed item 119 to determine its validity. After reviewing item 119 and the study material from which the item was derived, the Department determined that item 119 was valid and that the answer to item 119 in the Respondent's answer key was correct. The Petitioner failed to present sufficient evidence to show that items 63, 74, 92, 119 and 124 were invalid or that the Respondent's answers for those items on the Respondent's answer key were incorrect. There were 125 items to be answered by the examinee on the written portion of the examination. Petitioner answered 93 items correctly. The maximum score that could be achieved on the written portion of the examination was 100 per cent. The weight to be given each item was determined by dividing 100 (maximum score) by 125 (total number of items), which equals 0.8. The grade report on the written portion of the examination received by the Petitioner indicated that Petitioner's score was 74. This score was determined by multiplying 93 (total correct answers) by 0.8 (weight given each correct answer). This equals 74.4 per cent but when rounded off in accordance with the Respondent's rules would be 74.00, which was the score shown on the grade report as achieved by the Petitioner. The grade report listed the different areas of study that the examinees were required to be tested on and the score achieved by the examinee on each area of study as follows:

Hygiene and Ethics             7.00
Florida Law                    5.00
Skin Care and Function         9.00
Hair Services and Struct       9.00
Cosmetic Chemistry            10.00
Scalp and Facial Treat         8.00
Coloring and Bleaching        10.00
Permanent Waving              10.00
Hair Straightening             4.00
Implements                     3.00
Total Of Individual Scores    75.00

This total score would meet the minimum score of 75.00 required for passing the examination. The individual scores shown above in Finding of Fact 9 and on the Grade Report were determined by multiplying the number of correct answers achieved by the Petitioner in each area of study by 0.8 (weight given each correct answer) and rounding off in accordance with the Respondent's rules. 
The individual scores as set out in the Grade Report are compared with the actual score derived as set out in Finding of Fact 8 as follows:

Individual Score    Actual Score    Correct Answers
       7.00             7.2                9
       5.00             4.8                6
       9.00             8.8               11
       9.00             8.8               11
      10.00             9.6               12
       8.00             8.0               10
      10.00            10.4               13
      10.00             9.6               12
       4.00             4.0                5
       3.00             3.2                4
Total 75.00            74.4               93

The Grade Report does not explain how the Respondent arrived at the score of 74.00 or that the total of the rounded off individual scores is not to be considered as the score achieved.
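For illustration only (not part of the record or the order): the apparent tension between the reported score of 74.00 and the 75.00 total of the rounded area scores is a rounding artifact. The sketch below assumes that "rounded off in accordance with the Respondent's rules" means ordinary rounding to the nearest whole point, an assumption chosen because it reproduces every figure in the findings above.

```python
# Illustrative sketch of the scoring arithmetic in the findings above.
# Assumption: rounding is to the nearest whole point; that assumption
# reproduces the reported figures (74.00 overall, 75.00 as the sum of
# the rounded area scores).

WEIGHT = 100 / 125  # 0.8 points per correct item

# Correct answers per area of study, taken from the comparison table above.
correct_by_area = [9, 6, 11, 11, 12, 10, 13, 12, 5, 4]

actual_total = sum(correct_by_area) * WEIGHT              # 93 * 0.8 = 74.4
reported_total = round(actual_total)                      # 74 -- the grade-report score

rounded_area_scores = [round(n * WEIGHT) for n in correct_by_area]
sum_of_rounded = sum(rounded_area_scores)                 # 75 -- rounding each area first

print(round(actual_total, 1), reported_total, sum_of_rounded)  # 74.4 74 75
```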

Recommendation Based upon the foregoing Findings of Fact and Conclusions of Law, it is recommended that the Respondent enter a final order denying the Petitioner's request for reconsideration of her grade on the written portion of the January 1993 Barbers' Examination. RECOMMENDED this 29th day of September, 1993, at Tallahassee, Florida. WILLIAM R. CAVE Hearing Officer Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-1550 (904) 488-9675 Filed with the Clerk of the Division of Administrative Hearings this 29th day of September, 1993. APPENDIX TO RECOMMENDED ORDER, CASE NO. 92-2524 The following constitutes my specific rulings, pursuant to Section 120.59(2), Florida Statutes, on all of the proposed findings of fact submitted by the parties in this case. Petitioner's Proposed Findings of Fact. The first sentence of proposed finding of fact 1 is adopted in substance as modified in Finding of Fact 4. The second sentence is not supported by competent substantial evidence in the record. Proposed finding of fact 2 is not supported by competent substantial evidence in the record. Proposed finding of fact 3 is more of a statement than a finding of fact. Proposed finding of fact 4 is adopted in substance as modified in Finding of Fact 8. Proposed finding of fact 5 is more of a statement than a finding of fact. There was no showing that Petitioner should be given credit for her answer to item 119. Respondent's Proposed Findings of Fact. 1-2. Proposed findings of fact 1 & 2 adopted in substance as modified in Findings of Fact 6 & 8, respectively. 3. Proposed finding of fact 3 adopted in substance as modified in Findings of Fact 3-5. COPIES FURNISHED: Patricia Wilson, Pro se. 1023 Huron Street Jacksonville, Florida 32205 Robert A. Jackson, Esquire Office of the General Counsel Department of Business and Professional Regulation 1940 North Monroe Street Tallahassee, Florida 32399-0792 Darlene F. Keller, Director Division Real Estate 400 West Robinson Street Post Office Box 1900 Orlando, Florida 32802-1900 Jack McRay, Esquire Acting General Counsel Department of Business and Professional Regulation 1940 North Monroe Street Tallahassee, Florida 32399-0792

Florida Laws (3) 120.57, 476.114, 476.144
# 9
SHARON PERRI vs DEPARTMENT OF CHILDREN AND FAMILY SERVICES, 02-000876 (2002)
Division of Administrative Hearings, Florida Filed:Cocoa, Florida Mar. 01, 2002 Number: 02-000876 Latest Update: Sep. 12, 2002

The Issue Whether Petitioner has a developmental disability that makes her eligible to receive services from the Department of Children and Family Services pursuant to Section 393.061, Florida Statutes, et seq.

Findings Of Fact Based upon the testimony and evidence received at the hearing, the following findings are made: Petitioner is almost 59 years old. She has lived a very sheltered life, and she has always been considered to be "slow" by her family. Petitioner moved to Florida in the early 1990's, and she currently resides in Merritt Island. Petitioner lived at home with her parents until two and one-half years ago when her mother had a debilitating stroke and was moved into a nursing home. Since then, Petitioner has lived by herself. Petitioner never learned to ride a bike or drive a car. She did not date. Petitioner's work experience, as detailed in the 1974 report prepared by psychologist William McManus (discussed below), was limited to 11 years as a stock clerk in a family business. She has not worked since 1973. Petitioner has the social skills of a 12 to 13-year-old child. She reads at the fifth grade level. Petitioner is incapable of managing her own finances. Petitioner's social security check is sent to Ms. Michalsky, who pays Petitioner's rent for her. Petitioner is incapable of managing her own diet. Her meals consist primarily of sweets, microwave foods, and sodas. Ms. Michalsky, Petitioner's second cousin and the only relative who lives near her, has been Petitioner's de facto guardian since Petitioner's mother suffered the stroke. Ms. Michalsky has children of her own, and she is unable to adequately care for Petitioner. It was apparent from Ms. Michalsky's testimony at hearing that she is genuinely concerned for Petitioner's safety and well-being. Petitioner attended and graduated from St. Mary of Perpetual Help High School (St. Mary) in June 1962. Out of a class of 99 students, Petitioner was ranked 99th. Petitioner's transcript from St. Mary shows that she received grades at or near the lowest passing grade in all of her classes. This suggests that Petitioner was being "socially promoted." Petitioner's transcript also shows that she scored very poorly on all of the standardized tests that she took. Petitioner took the Otis S-A Test Form A (Otis Test) in January 1958. She was 14 years old at the time. The purpose of the Otis Test is to determine a cognitive IQ. A score of 100 is considered average. The standard deviation for the test is 15. A person whose score is more than two standard deviations below the average, i.e., a score below 70, is considered to be retarded. Petitioner's IQ, as determined by the Otis Test, was 73. The margin of error for the Otis Test is +/- five points. Thus, Petitioner's "actual" IQ was between 68 and 78. Petitioner scored in the third percentile of the Differential Aptitude Test (DAT), meaning that she scored higher than only three percent of the people who took the test. Petitioner took this test in April 1959. She was 15 years old at the time. Her score on the DAT roughly translates into an IQ level of 75. Petitioner was in the first percentile on the SRA National Education Development Test, meaning that she scored higher than only one percent of the people who took the test. Petitioner took this test in the spring of 1960. She was 17 years old at the time. In July 1974, Petitioner was examined by William McManus, a licensed psychologist. Mr. McManus examined Petitioner based upon the Wechsler Adult Intelligence Scale (Wechsler Scale). Petitioner was 31 years old at the time. The Wechsler Scale includes 11 subtests, each of which is separately scored. 
The scores of the subtests are used to formulate a verbal IQ, a performance IQ, and an overall IQ. The separate scoring of the subtests allows a more detailed analysis of the subject's IQ, which in turn results in a more accurate reflection of the subject's learning abilities. The average score on each subtest is ten. Scores between seven and ten are considered average; scores between five and seven are considered borderline; and scores less than five are considered very low. There is typically no "scatter" in the scores of a person who is retarded. In other words, the person's scores on all or almost all of the 11 subtests are in the very low range, i.e., below five. There was considerable "scatter" in the Petitioner's scores on the subtests. She scored in the average range on five of the 11 subtests; she scored in the borderline range on four of the subtests; and she scored in the very low range on only two of the subtests. Petitioner's overall IQ, as determined by the Wechsler Scale, was 75. Her verbal IQ was 79 and her performance IQ was 73. The information originally submitted to the Department with Petitioner's application for developmental services included only medical records. Those records did not include any of the IQ test scores described above. Neither the medical records originally submitted to the Department (which were not introduced at the hearing), nor any of the evidence introduced at the hearing suggests that Petitioner suffers from cerebral palsy, autism, spina bifida, or Prader-Willi syndrome. The denial letter issued by the Department on July 24, 2001, was based only upon the medical records submitted with the application. After receiving the denial letter, Ms. Michalsky spoke with Department employee Pat Rosbury regarding the type of information needed by the Department. Based upon those conversations, Ms. Michalsky provided additional records to the Department, including records showing the IQ test results described above. Ms. Michalsky was unable to obtain any additional records from Petitioner's childhood because such records are over 50 years old. The Department forwarded the supplemental records to Dr. Yerushalmi on October 16, 2001, because the scores showed borderline retardation. Dr. Yerushalmi did not personally evaluate Petitioner, but based upon her review of the IQ test scores described above, she concluded that Petitioner is not retarded and, hence, not eligible for developmental services from the Department. Dr. Yerushalmi "suspects" that Petitioner had a learning disability as a child and that disability, coupled with her sheltered upbringing, led to her current state. The Department did not issue a new denial letter after Dr. Yerushalmi's review of the supplemental records confirmed the Department's original decision that Petitioner is ineligible for developmental services. Petitioner's request for a formal administrative hearing was dated October 17, 2001, and was received by the Department on October 19, 2001.
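For illustration only (not part of the record): the cutoff and margin-of-error arithmetic recited in the findings can be restated in a short sketch. The mean of 100, standard deviation of 15, two-standard-deviation cutoff, Otis score of 73, and +/- five-point margin all come from the findings above; nothing else is assumed.

```python
# Illustrative sketch only; it restates numbers from the findings above and
# adds nothing from outside the record.

MEAN_IQ = 100
STD_DEV = 15
CUTOFF = MEAN_IQ - 2 * STD_DEV        # 70: "more than two standard deviations below"

otis_iq = 73
margin = 5
low, high = otis_iq - margin, otis_iq + margin   # 68 to 78

print(f"Scores below {CUTOFF} are treated as indicating retardation.")
print(f"Otis result with margin of error: {low} to {high} "
      f"(straddles the cutoff: {low < CUTOFF <= high})")
```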

Recommendation Based upon the foregoing Findings of Fact and Conclusions of Law, it is RECOMMENDED that the Department of Children and Family Services issue a final order that determines Petitioner to be ineligible for developmental services. DONE AND ENTERED this 6th day of June, 2002, in Tallahassee, Leon County, Florida. T. KENT WETHERELL, II Administrative Law Judge Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-3060 (850) 488-9675 SUNCOM 278-9675 Fax Filing (850) 921-6847 www.doah.state.fl.us Filed with the Clerk of the Division of Administrative Hearings this 6th day of June, 2002.

Florida Laws (3) 120.57, 393.063, 393.065