THE FLORIDA INSURANCE COUNCIL, INC.; THE AMERICAN INSURANCE ASSOCIATION; PROPERTY CASUALTY INSURERS ASSOCIATION OF AMERICA; AND NATIONAL ASSOCIATION OF MUTUAL INSURANCE COMPANIES vs DEPARTMENT OF FINANCIAL SERVICES, OFFICE OF INSURANCE REGULATION, AND THE FINANCIAL SERVICES COMMISSION, 05-002803RP (2005)
Division of Administrative Hearings, Florida. Filed: Tallahassee, Florida, Aug. 03, 2005. Number: 05-002803RP. Latest Update: May 17, 2007

The Issue At issue in this proceeding is whether proposed Florida Administrative Code Rule 69O-125.005 is an invalid exercise of delegated legislative authority.

Findings Of Fact Petitioners AIA is a trade association made up of 40 groups of insurance companies. AIA member companies annually write $6 billion in property, casualty, and automobile insurance in Florida. AIA's primary purpose is to represent the interests of its member insurance groups in regulatory and legislative matters throughout the United States, including Florida. NAMIC is a trade association consisting of 1,430 members, mostly mutual insurance companies. NAMIC member companies annually write $10 billion in property, casualty, and automobile insurance in Florida. NAMIC represents the interests of its member insurance companies in regulatory and legislative matters throughout the United States, including Florida. PCI is a national trade association of property and casualty insurance companies consisting of 1,055 members. PCI members include mutual insurance companies, stock insurance companies, and reciprocal insurers that write property and casualty insurance in Florida. PCI members annually write approximately $15 billion in premiums in Florida. PCI participated in the OIR's workshops on the Proposed Rule. PCI's assistant vice president and regional manager, William Stander, testified that if the Proposed Rule is adopted, PCI's member companies would be required either to withdraw from the Florida market or drastically reorganize their business model. FIC is an insurance trade association made up of 39 insurance groups that represent approximately 250 insurance companies writing all lines of insurance. All of FIC's members are licensed in Florida and write approximately $27 billion in premiums in Florida. FIC has participated in rule challenges in the past, and participated in the workshop and public hearing process conducted by OIR for this Proposed Rule. FIC President Guy Marvin testified that FIC's property and casualty members use credit scoring and would be affected by the Proposed Rule. A substantial number of Petitioners' members are insurers writing property and casualty insurance and/or motor vehicle insurance coverage in Florida. These members use credit-based insurance scoring in their underwriting and rating processes. They would be directly regulated by the Proposed Rule in their underwriting and rating methods and in the rate filing processes set forth in Sections 627.062 and 627.0651, Florida Statutes. Fair Isaac originated credit-based insurance scoring and is a leading provider of credit-based insurance scoring information in the United States and Canada. Fair Isaac has invested millions of dollars in the development and maintenance of its credit-based insurance models. Fair Isaac concedes that it is not an insurer and, thus, would not be directly regulated by the Proposed Rule. However, Fair Isaac would be directly affected by any negative impact that the Proposed Rule would have in setting limits on the use of credit-based insurance score models in Florida. Lamont Boyd, a manager in Fair Isaac's global scoring division, testified that if the Proposed Rule goes into effect Fair Isaac would, at a minimum, lose all of the revenue it currently generates from insurance companies that use its scores in the State of Florida, because Fair Isaac's credit-based insurance scoring model cannot meet the requirements of the Proposed Rule regarding racial, ethnic, and religious categorization. Mr. Boyd also testified that enactment of the Proposed Rule could cause a "ripple effect" of similar regulations in other states, further impairing Fair Isaac's business. 
The Statute and Proposed Rule During the 1990s, insurance companies' use of consumer credit information for underwriting and rating automobile and residential property insurance policies greatly increased. Insurance regulators expressed concern that the use of consumer credit reports, credit histories and credit-based insurance scoring models could have a negative effect on consumers' ability to obtain and keep insurance at appropriate rates. Of particular concern was the possibility that the use of credit scoring would particularly hurt minorities, people with low incomes, and young people, because those persons would be more likely to have poor credit scores. On September 19, 2001, Insurance Commissioner Tom Gallagher appointed a task force to examine the use of credit reports and develop recommendations for the Legislature or for the promulgation of rules regarding the use of credit scoring by the insurance industry. The task force met on four separate occasions throughout the state in 2001, and issued its report on January 23, 2002. The task force report conceded that the evidence supporting the negative impact of the use of credit reports on specific groups is "primarily anecdotal," and that the insurance industry had submitted anecdotal evidence to the contrary. Among its nine recommendations, the task force recommended the following: A comprehensive and independent investigation of the relationship between insurers' use of consumer credit information and risk of loss including the impact by race, income, geographic location and age. A prohibition against the use of credit reports as the sole basis for making underwriting or rating decisions. That insurers using credit as an underwriting or rating factor be required to provide regulators with sufficient information to independently verify that use. That insurers be required to send a copy of the credit report to those consumers whose adverse insurance decision is a result of their consumer credit information and a simple explanation of the specific credit characteristics that caused the adverse decision. That insurers not be permitted to draw a negative inference from a bad credit score that is due to medical bills, little or no credit information, or other special circumstances that are clearly not related to an applicant's or policyholder's insurability. That the impact of credit reports be mitigated by imposing limits on the weight that insurers can give to them in the decision to write a policy and limits on the amount the premium can be increased due to credit information. No evidence was presented that the "comprehensive and independent investigation" of insurers' use of credit information was undertaken by the Legislature. However, the other recommendations of the task force were addressed in Senate Bills 40A and 42A, enacted by the Legislature and signed by the governor on June 26, 2003. These companion bills, each with an effective date of January 1, 2004, were codified as Sections 626.9741 and 626.97411, Florida Statutes, respectively. Chapters 2003-407 and 2003-408, Laws of Florida. Section 626.9741, Florida Statutes, provides: The purpose of this section is to regulate and limit the use of credit reports and credit scores by insurers for underwriting and rating purposes. 
This section applies only to personal lines motor vehicle insurance and personal lines residential insurance, which includes homeowners, mobile home owners' dwelling, tenants, condominium unit owners, cooperative unit owners, and similar types of insurance. As used in this section, the term: "Adverse decision" means a decision to refuse to issue or renew a policy of insurance; to issue a policy with exclusions or restrictions; to increase the rates or premium charged for a policy of insurance; to place an insured or applicant in a rating tier that does not have the lowest available rates for which that insured or applicant is otherwise eligible; or to place an applicant or insured with a company operating under common management, control, or ownership which does not offer the lowest rates available, within the affiliate group of insurance companies, for which that insured or applicant is otherwise eligible. "Credit report" means any written, oral, or other communication of any information by a consumer reporting agency, as defined in the federal Fair Credit Reporting Act, 15 U.S.C. ss. 1681 et seq., bearing on a consumer's credit worthiness, credit standing, or credit capacity, which is used or expected to be used or collected as a factor to establish a person's eligibility for credit or insurance, or any other purpose authorized pursuant to the applicable provision of such federal act. A credit score alone, as calculated by a credit reporting agency or by or for the insurer, may not be considered a credit report. "Credit score" means a score, grade, or value that is derived by using any or all data from a credit report in any type of model, method, or program, whether electronically, in an algorithm, computer software or program, or any other process, for the purpose of grading or ranking credit report data. "Tier" means a category within a single insurer into which insureds with substantially similar risk, exposure, or expense factors are placed for purposes of determining rate or premium. An insurer must inform an applicant or insured, in the same medium as the application is taken, that a credit report or score is being requested for underwriting or rating purposes. An insurer that makes an adverse decision based, in whole or in part, upon a credit report must provide at no charge, a copy of the credit report to the applicant or insured or provide the applicant or insured with the name, address, and telephone number of the consumer reporting agency from which the insured or applicant may obtain the credit report. The insurer must provide notification to the consumer explaining the reasons for the adverse decision. The reasons must be provided in sufficiently clear and specific language so that a person can identify the basis for the insurer's adverse decision. Such notification shall include a description of the four primary reasons, or such fewer number as existed, which were the primary influences of the adverse decision. The use of generalized terms such as "poor credit history," "poor credit rating," or "poor insurance score" does not meet the explanation requirements of this subsection. A credit score may not be used in underwriting or rating insurance unless the scoring process produces information in sufficient detail to permit compliance with the requirements of this subsection. 
It shall not be deemed an adverse decision if, due to the insured's credit report or credit score, the insured continues to receive a less favorable rate or placement in a less favorable tier or company at the time of renewal except for renewals or reunderwriting required by this section. (4)(a) An insurer may not request a credit report or score based upon the race, color, religion, marital status, age, gender, income, national origin, or place of residence of the applicant or insured. An insurer may not make an adverse decision solely because of information contained in a credit report or score without consideration of any other underwriting or rating factor. An insurer may not make an adverse decision or use a credit score that could lead to such a decision if based, in whole or in part, on: The absence of, or an insufficient, credit history, in which instance the insurer shall: Treat the consumer as otherwise approved by the Office of Insurance Regulation if the insurer presents information that such an absence or inability is related to the risk for the insurer; Treat the consumer as if the applicant or insured had neutral credit information, as defined by the insurer; Exclude the use of credit information as a factor and use only other underwriting criteria; Collection accounts with a medical industry code, if so identified on the consumer's credit report; Place of residence; or Any other circumstance that the Financial Services Commission determines, by rule, lacks sufficient statistical correlation and actuarial justification as a predictor of insurance risk. An insurer may use the number of credit inquiries requested or made regarding the applicant or insured except for: Credit inquiries not initiated by the consumer or inquiries requested by the consumer for his or her own credit information. Inquiries relating to insurance coverage, if so identified on a consumer's credit report. Collection accounts with a medical industry code, if so identified on the consumer's credit report Multiple lender inquiries, if coded by the consumer reporting agency on the consumer's credit report as being from the home mortgage industry and made within 30 days of one another, unless only one inquiry is considered. Multiple lender inquiries, if coded by the consumer reporting agency on the consumer's credit report as being from the automobile lending industry and made within 30 days of one another, unless only one inquiry is considered. An insurer must, upon the request of an applicant or insured, provide a means of appeal for an applicant or insured whose credit report or credit score is unduly influenced by a dissolution of marriage, the death of a spouse, or temporary loss of employment. The insurer must complete its review within 10 business days after the request by the applicant or insured and receipt of reasonable documentation requested by the insurer, and, if the insurer determines that the credit report or credit score was unduly influenced by any of such factors, the insurer shall treat the applicant or insured as if the applicant or insured had neutral credit information or shall exclude the credit information, as defined by the insurer, whichever is more favorable to the applicant or insured. An insurer shall not be considered out of compliance with its underwriting rules or rates or forms filed with the Office of Insurance Regulation or out of compliance with any other state law or rule as a result of granting any exceptions pursuant to this subsection. 
A rate filing that uses credit reports or credit scores must comply with the requirements of s. 627.062 or s. 627.0651 to ensure that rates are not excessive, inadequate, or unfairly discriminatory. An insurer that requests or uses credit reports and credit scoring in its underwriting and rating methods shall maintain and adhere to established written procedures that reflect the restrictions set forth in the federal Fair Credit Reporting Act, this section, and all rules related thereto. (7)(a) An insurer shall establish procedures to review the credit history of an insured who was adversely affected by the use of the insured's credit history at the initial rating of the policy, or at a subsequent renewal thereof. This review must be performed at a minimum of once every 2 years or at the request of the insured, whichever is sooner, and the insurer shall adjust the premium of the insured to reflect any improvement in the credit history. The procedures must provide that, with respect to existing policyholders, the review of a credit report will not be used by the insurer to cancel, refuse to renew, or require a change in the method of payment or payment plan. (b) However, as an alternative to the requirements of paragraph (a), an insurer that used a credit report or credit score for an insured upon inception of a policy, who will not use a credit report or score for reunderwriting, shall reevaluate the insured within the first 3 years after inception, based on other allowable underwriting or rating factors, excluding credit information if the insurer does not increase the rates or premium charged to the insured based on the exclusion of credit reports or credit scores. The commission may adopt rules to administer this section. The rules may include, but need not be limited to: Information that must be included in filings to demonstrate compliance with subsection (3). Statistical detail that insurers using credit reports or scores under subsection (5) must retain and report annually to the Office of Insurance Regulation. Standards that ensure that rates or premiums associated with the use of a credit report or score are not unfairly discriminatory, based upon race, color, religion, marital status, age, gender, income, national origin, or place of residence. Standards for review of models, methods, programs, or any other process by which to grade or rank credit report data and which may produce credit scores in order to ensure that the insurer demonstrates that such grading, ranking, or scoring is valid in predicting insurance risk of an applicant or insured. Section 626.97411, Florida Statutes, provides: Credit scoring methodologies and related data and information that are trade secrets as defined in s. 688.002 and that are filed with the Office of Insurance Regulation pursuant to a rate filing or other filing required by law are confidential and exempt from the provisions of s. 119.07(1) and s. 24(a), Art. I of the State Constitution.3 Following extensive rule development workshops and industry comment, proposed Florida Administrative Code Rule 69O-125.005 was initially published in the Florida Administrative Weekly, on February 11, 2005.4 The Proposed Rule states, as follows: 69O-125.005 Use of Credit Reports and Credit Scores by Insurers. 
For the purpose of this rule, the following definitions apply: "Applicant", for purposes of Section 626.9741, F.S., means an individual whose credit report or score is requested for underwriting or rating purposes relating to personal lines motor vehicle or personal lines residential insurance and shall not include individuals who have merely requested a quote. "Credit scoring methodology" means any methodology that uses credit reports or credit scores, in whole or in part, for underwriting or rating purposes. "Data cleansing" means the correction or enhancement of presumed incomplete, incorrect, missing, or improperly formatted information. "Personal lines motor vehicle" insurance means insurance against loss or damage to any motorized land vehicle or any loss, liability, or expense resulting from or incidental to ownership, maintenance or use of such vehicle if the contract of insurance shows one or more natural persons as named insureds. The following are not included in this definition: Vehicles used as public livery or conveyance; Vehicles rented to others; Vehicles with more than four wheels; Vehicles used primarily for commercial purposes; and Vehicles with a net vehicle weight of more than 5,000 pounds designed or used for the carriage of goods (other than the personal effects of passengers) or drawing a trailer designed or used for the carriage of such goods. The following are specifically included, inter alia, in this definition: Motorcycles; Motor homes; Antique or classic automobiles; and Recreational vehicles. "Unfairly discriminatory" means that adverse decisions resulting from the use of a credit scoring methodology disproportionately affects persons belonging to any of the classes set forth in Section 626.9741(8)(c), F.S. Insurers may not use any credit scoring methodology that is unfairly discriminatory. The burden of demonstrating that the credit scoring methodology is not unfairly discriminatory is upon the insurer. An insurer may not request or use a credit report or credit score in its underwriting or rating method unless it maintains and adheres to established written procedures that reflect the restrictions set forth in the federal Fair Credit Reporting Act, Section 626.9741, F.S., and these rules. Upon initial use or any change in that use, insurers using credit reports or credit scores for underwriting or rating personal lines residential or personal lines motor vehicle insurance shall include the following information in filings submitted pursuant to Section 627.062 or 627.0651, F.S. A listing of the types of individuals whose credit reports or scores the company will use or attempt to use to underwrite or rate a given policy. For example: Person signing application; Named insured or spouse; and All listed operators. How those individual reports or scores will be combined if more than one is used. For example: Average score used; Highest score used. The name(s) of the consumer reporting agencies or any other third party vendors from which the company will obtain or attempt to obtain credit reports or scores. Precise identifying information specifying or describing the credit scoring methodology, if any, the company will use including: Common or trade name; Version, subtype, or intended segment of business the system was designed for; and Any other information needed to distinguish a particular credit scoring methodology from other similar ones, whether developed by the company or by a third party vendor. 
The effect of particular scores or ranges of scores (or, for companies not using scores, the effect of particular items appearing on a credit report) on any of the following as applicable: Rate or premium charged for a policy of insurance; Placement of an insured or applicant in a rating tier; Placement of an applicant or insured in a company within an affiliated group of insurance companies; Decision to refuse to issue or renew a policy of insurance or to issue a policy with exclusions or restrictions or limitations in payment plans. The effect of the absence or insufficiency of credit history (as referenced in Section 626.9741(4)(c)1., F.S.) on any items listed in paragraph (e) above. The manner in which collection accounts identified with a medical industry code (as referenced in Section 626.9741(4)(c)2., F.S.) on a consumer's credit report will be treated in the underwriting or rating process or within any credit scoring methodology used. The manner in which collection accounts that are not identified with a medical industry code, but which an applicant or insured demonstrates are the direct result of significant and extraordinary medical expenses, will be treated in the underwriting or rating process or within any credit scoring methodology used. The manner in which the following will be treated in the underwriting or rating process, or within any credit scoring methodology used: Credit inquiries not initiated by the consumer; Requests by the consumer for the consumer's own credit information; Multiple lender inquiries, if coded by the consumer reporting agency on the consumer's credit report as being from the automobile lending industry or the home mortgage industry and made within 30 days of one another; Multiple lender inquiries that are not coded by the consumer reporting agency on the consumer's credit report as being from the automobile lending industry or the home mortgage industry and made within 30 days of one another, but that an applicant or insured demonstrates are the direct result of such inquiries; Inquiries relating to insurance coverage, if so identified on a consumer's credit report; and Inquiries relating to insurance coverage that are not so identified on a consumer's credit report, but which an applicant or insured demonstrates are the direct result of such inquiries. The list of all clear and specific primary reasons that may be cited to the consumer as the basis or explanation for an adverse decision under Section 626.9741(3), F.S. and the criteria determining when each of those reasons will be so cited. A description of the process that the insurer will use to correct any error in premium charged the insured, or in underwriting decision made concerning the insured, if the basis of the premium charged or the decision made is a disputed item that is later removed from the credit report or corrected, provided that the insured first notifies the insurer that the item has been removed or corrected. A certification that no use of credit reports or scores in rating insurance will apply to any component of a rate or premium attributed to hurricane coverage for residential properties as separately identified in accordance with Section 627.0629, F.S. 
Insurers desiring to make adverse decisions for personal lines motor vehicle policies or personal lines residential policies based on the absence or insufficiency of credit history shall either: Treat such consumers or applicants as otherwise approved by the Office of Insurance Regulation if the insurer presents information that such an absence or inability is related to the risk for the insurer and does not result in a disparate impact on persons belonging to any of the classes set forth in Section 626.9741(8)(c), This information will be held as confidential if properly so identified by the insurer and eligible under Section 626.9711, F.S. The information shall include: Data comparing experience for each category of those with absent or insufficient credit history to each category of insureds separately treated with respect to credit and having sufficient credit history; A statistically credible method of analysis that concludes that the relationship between absence or insufficiency and the risk assumed is not due to chance; A statistically credible method of analysis that concludes that absence or insufficiency of credit history does not disparately impact persons belonging to any of the classes set forth in Section 626.9741(8)(c), F.S.; A statistically credible method of analysis that confirms that the treatment proposed by the insurer is quantitatively appropriate; and Statistical tests establishing that the treatment proposed by the insurer is warranted for the total of all consumers with absence or insufficiency of credit history and for at least two subsets of such consumers. Treat such consumers as if the applicant or insured had neutral credit information, as defined by the insurer. Should an insurer fail to specify a definition, neutral is defined as the average score that a stratified random sample of consumers or applicants having sufficient credit history would attain using the insurer's credit scoring methodology; or Exclude credit as a factor and use other criteria. These other criteria must be specified by the insurer and must not result in average treatment for the totality of consumers with an absence of or insufficiency of credit history any less favorable than the treatment of average consumers or applicants having sufficient credit history. Insurers desiring to make adverse decisions for personal lines motor vehicle or personal lines residential insurance based on information contained in a credit report or score shall file with the Office information establishing that the results of such decisions do not correlate so closely with the zip code of residence of the insured as to constitute a decision based on place of residence of the insured in violation of Section 626.9741(4)(c)(3), F.S. (7)(a) Insurers using credit reports or credit scores for underwriting or rating personal lines residential or personal lines motor vehicle insurance shall develop, maintain, and adhere to written procedures consistent with Section 626.9741(4)(e), F.S. providing appeals for applicants or insureds whose credit reports or scores are unduly influenced by dissolution of marriage, death of a spouse, or temporary loss of employment. (b) These procedures shall be subject to examination by the Office at any time. (8)(a)1. 
Insurers using credit reports or credit scoring in rating personal lines motor vehicle or personal lines residential insurance shall develop, maintain, and adhere to written procedures to review the credit history of an insured who was adversely affected by such use at initial rating of the policy or subsequent renewal thereof. These procedures shall be subject to examination by the Office at any time. The procedures shall comply with the following: A review shall be conducted: No later than 2 years following the date of any adverse decision, or Any time, at the request of the insured, but no more than once per policy period without insurer assent. The insurer shall notify the named insureds annually of their right to request the review in (II) above. Renewal notices issued 120 days or less after the effective date of this rule are not included in this requirement. The insurer shall adjust the premium to reflect any improvement in credit history no later than the first renewal date that follows a review of credit history. The renewal premium shall be subject to other rating factors lawfully used by the insurer. The review shall not be used by the insurer to cancel, refuse to renew, or require a change in the method of payment or payment plan based on credit history. (b)1. As an alternative to the requirements in paragraph (8)(a), insurers using credit reports or scores at the inception of a policy but not for re-underwriting shall develop, maintain, and adhere to written procedures. These procedures shall be subject to examination by the Office at any time. The procedures shall comply with the following: Insureds shall be reevaluated no later than 3 years following policy inception based on allowable underwriting or rating factors, excluding credit information. The rate or premium charged to an insured shall not be greater, solely as a result of the reevaluation, than the rate or premium charged for the immediately preceding policy term. This shall not be construed to prohibit an insurer from applying regular underwriting criteria (which may result in a greater premium) or general rate increases to the premium charged. For insureds that received an adverse decision notification at policy inception, no residual effects of that adverse decision shall survive the reevaluation. This means that the reevaluation must be complete enough to make it possible for insureds adversely impacted at inception to attain the lowest available rate for which comparable insureds are eligible, considering only allowable underwriting or rating factors (excluding credit information) at the time of the reevaluation. No credit scoring methodology shall be used for personal lines motor vehicle or personal lines residential property insurance unless that methodology has been demonstrated to be a valid predictor of the insurance risk to be assumed by an insurer for the applicable type of insurance. The demonstration of validity detailed below need only be provided with the first rate, rule, or underwriting guidelines filing following the effective date of this rule and at any time a change is made in the credit scoring methodology. Other such filings may instead refer to the most recent prior filing containing a demonstration. Information supplied in the context of a demonstration of validity will be held as confidential if properly so identified by the insurer and eligible under Section 626.9711, F.S. 
A demonstration of validity shall include: A listing of the persons that contributed substantially to the development of the most current version of the method, including resumes of the persons, if obtainable, indicating their qualifications and experience in similar endeavors. An enumeration of all data cleansing techniques that have been used in the development of the method, which shall include: The nature of each technique; Any biases the technique might introduce; and The prevalence of each type of invalid information prior to correction or enhancement. All data that was used by the model developers in the derivation and calibration of the model parameters. Data shall be in sufficient detail to permit the Office to conduct multiple regression testing for validation of the credit scoring methodology. Data, including field definitions, shall be supplied in electronic format compatible with the software used by the Office. Statistical results showing that the model and parameters are predictive and not overlapping or duplicative of any other variables used to rate an applicant to such a degree as to render their combined use actuarially unsound. Such results shall include the period of time for which each element from a credit report is used. A precise listing of all elements from a credit report that are used in scoring, and the formula used to compute the score, including the time period during which each element is used. Such listing is confidential if properly so identified by the insurer. An assessment by a qualified actuary, economist, or statistician (whether or not employed by the insurer) other than persons who contributed substantially to the development of the credit scoring methodology, concluding that there is a significant statistical correlation between the scores and frequency or severity of claims. The assessment shall: Identify the person performing the assessment and show his or her educational and professional experience qualifications; and Include a test of robustness of the model, showing that it performs well on a credible validation data set. The validation data set may not be the one from which the model was developed. Documentation consisting of statistical testing of the application of the credit scoring model to determine whether it results in a disproportionate impact on the classes set forth in Section 626.9741(8)(c), A model that disproportionately affects any such class of persons is presumed to have a disparate impact and is presumed to be unfairly discriminatory. Statistical analysis shall be performed on the current insureds of the insurer using the proposed credit scoring model, and shall include the raw data and detailed results on each classification set forth in Section 626.9741(8)(c), F.S. In lieu of such analysis insurers may use the alternative in 2. below. Alternatively, insurers may submit statistical studies and analyses that have been performed by educational institutions, independent professional associations, or other reputable entities recognized in the field, that indicate that there is no disproportionate impact on any of the classes set forth in Section 626.9741(8)(c), F.S. attributable to the use of credit reports or scores. Any such studies or analyses shall have been done concerning the specific credit scoring model proposed by the insurer. 
The Office will utilize generally accepted statistical analysis principles in reviewing studies submitted which support the insurer's analysis that the credit scoring model does not disproportionately impact any class based upon race, color, religion, marital status, age, gender, income, national origin, or place of residence. The Office will permit reliance on such studies only to the extent that they permit independent verification of the results. The testing or validation results obtained in the course of the assessment in paragraphs (d) and (f) above. Internal Insurer data that validates the premium differentials proposed based on the scores or ranges of scores. Industry or countrywide data may be used to the extent that the Florida insurer data lacks credibility based upon generally accepted actuarial standards. Insurers using industry or countrywide data for validation shall supply Florida insurer data and demonstrate that generally accepted actuarial standards would allow reliance on each set of data to the extent the insurer has done so. Validation data including claims on personal lines residential insurance policies that are the result of acts of God shall not be used unless such acts occurred prior to January 1, 2004. The mere copying of another company's system will not fulfill the requirement to validate proposed premium differentials unless the filer has used a method or system for less than 3 years and demonstrates that it is not cost effective to retrospectively analyze its own data. Companies under common ownership, management, and control may copy to fulfill the requirement to validate proposed premium differentials if they demonstrate that the characteristics of the business to be written by the affiliate doing the copying are sufficiently similar to the affiliate being copied to presume common differentials will be accurate. The credibility standards and any judgmental adjustments, including limitations on effects, that have been used in the process of deriving premium differentials proposed and validated in paragraph (i) above. An explanation of how the credit scoring methodology treats discrepancies in the information that could have been obtained from different consumer reporting agencies: Equifax, Experian, or TransUnion. This shall not be construed to require insurers to obtain multiple reports for each insured or applicant. 1. The date that each of the analyses, tests, and validations required in paragraphs (d) through (j) above was most recently performed, and a certification that the results continue to be applicable. 2. Any item not reviewed in the previous 5 years is unacceptable. Specific Authority 624.308(1), 626.9741(8) FS. Law Implemented 624.307(1), 626.9741 FS. History-- New . The Petition 1. Statutory Definitions of "Unfairly Discriminatory" The main issue raised by Petitioners is that the Proposed Rule's definition of "unfairly discriminatory," and those portions of the Proposed Rule that rely on this definition, are invalid because they are vague, and enlarge, modify, and contravene the provisions of the law implemented and other provisions of the insurance code. Section 626.9741, Florida Statutes, does not define "unfairly discriminatory." Subsection 626.9741(5), Florida Statutes, provides that a rate filing using credit reports or scores "must comply with the requirements of s. 627.062 or s. 627.0651 to ensure that rates are not excessive, inadequate, or unfairly discriminatory." 
Subsection 626.9741(8)(c), Florida Statutes, provides that the FSC may adopt rules, including standards to ensure that rates or premiums "associated with the use of a credit report or score are not unfairly discriminatory, based upon race, color, religion, marital status, age, gender, income, national origin, or place of residence." Chapter 627, Part I, Florida Statutes, is referred to as the "Rating Law." § 627.011, Fla. Stat. The purpose of the Rating Law is to "promote the public welfare by regulating insurance rates . . . to the end that they shall not be excessive, inadequate, or unfairly discriminatory." § 627.031(1)(a), Fla. Stat. The Rating Law provisions referenced by Subsection 626.9741(5), Florida Statutes, in relation to ensuring that rates are not "unfairly discriminatory" are Sections 627.062 and 627.0651, Florida Statutes. Section 627.062, Florida Statutes, titled "Rate standards," provides that "[t]he rates for all classes of insurance to which the provisions of this part are applicable shall not be excessive, inadequate, or unfairly discriminatory." § 627.062(1), Fla. Stat. Subsection 627.062(2)(e)6., Florida Statutes, provides: A rate shall be deemed unfairly discriminatory as to a risk or group of risks if the application of premium discounts, credits, or surcharges among such risks does not bear a reasonable relationship to the expected loss and expense experience among the various risks. Section 627.0651, Florida Statutes, titled "Making and use of rates for motor vehicle insurance," provides, in relevant part: One rate shall be deemed unfairly discriminatory in relation to another in the same class if it clearly fails to reflect equitably the difference in expected losses and expenses. Rates are not unfairly discriminatory because different premiums result for policyholders with like loss exposures but different expense factors, or like expense factors but different loss exposures, so long as rates reflect the differences with reasonable accuracy. Rates are not unfairly discriminatory if averaged broadly among members of a group; nor are rates unfairly discriminatory even though they are lower than rates for nonmembers of the group. However, such rates are unfairly discriminatory if they are not actuarially measurable and credible and sufficiently related to actual or expected loss and expense experience of the group so as to assure that nonmembers of the group are not unfairly discriminated against. Use of a single United States Postal Service zip code as a rating territory shall be deemed unfairly discriminatory. Petitioners point out that each of these statutory examples describing "unfairly discriminatory" rates has an actuarial basis, i.e., rates must be related to the actual or expected loss and expense factors for a given group or class, rather than any extraneous factors. If two risks have the same expected losses and expenses, the insurer must charge them the same rate. If the risks have different expected losses and expenses, the insurer must charge them different rates. Michael Miller, Petitioners' expert actuary, testified that the term "unfairly discriminatory" has been used in the insurance industry for well over 100 years and has always had this cost-based definition. Mr. Miller is a fellow of the Casualty Actuarial Society ("CAS"), a professional organization whose purpose is the advancement of the body of knowledge of actuarial science, including the promulgation of industry standards and a code of professional conduct. Mr. 
Miller was chair of the CAS ratemaking committee when it developed the CAS "Statement of Principles Regarding Property and Casualty Insurance Ratemaking," a guide for actuaries to follow when establishing rates.5 Principle 4 of the Statement of Principles provides: "A rate is reasonable and not excessive, inadequate, or unfairly discriminatory if it is an actuarially sound estimate of the expected value of all future costs associated with an individual risk." In layman's terms, Mr. Miller explained that different types of risks are reflected in a rate calculation. To calculate the expected cost of a given risk, and thus the rate to be charged, the insurer must determine the expected losses for that risk during the policy period. The loss portion reflects the risk associated with an occurrence and the severity of a claim. While the loss portion does not account for the entirety of the rate charged, it is the most important in terms of magnitude. Mr. Miller cautioned that the calculation of risk is a quantification of expected loss, but not an attempt to predict who is going to have an accident or make a claim. There is some likelihood that every insured will make a claim, though most never do, and this uncertainty is built into the incurred loss portion of the rate. No single risk factor is a complete measure of a person's likelihood of having an accident or of the severity of the ensuing claim. The prediction of losses is determined through a risk classification plan that take into consideration many risk factors (also called rating factors) to determine the likelihood of an accident and the extent of the claim. As to automobile insurance, Mr. Miller listed such risk factors as the age, gender, and marital status of the driver, the type, model and age of the car, the liability limits of the coverage, and the geographical location where the car is garaged. As to homeowners insurance, Mr. Miller listed such risk factors as the location of the home, its value and type of construction, the age of the utilities and electrical wiring, and the amount of insurance to be carried. 2. Credit Scoring as a Rating Factor In the current market, the credit score of the applicant or insured is a rating factor common to automobile and homeowners insurance. Subsection 626.9741(2)(c), Florida Statutes, defines "credit score" as follows: a score, grade, or value that is derived by using any or all data from a credit report in any type of model, method, or program, whether electronically, in an algorithm, computer software or program, or any other process, for the purpose of grading or ranking credit report data. "Credit scores" (more accurately termed "credit-based insurance scores") are derived from credit data that have been found to be predictive of a loss. Lamont Boyd, Fair Isaac's insurance market manager, explained the manner in which Fair Isaac produced its credit scoring model. The company obtained information from various insurance companies on millions of customers. This information included the customers' names, addresses, and the premiums earned by the companies on those policies as well as the losses incurred. Fair Isaac next requested the credit reporting agencies to review their archived files for the credit information on those insurance company customers. The credit agencies matched the credit files with the insurance customers, then "depersonalized" the files so that there was no way for Fair Isaac to know the identity of any particular customer. According to Mr. 
Lamont, the data were "color blind" and "income blind." Fair Isaac's analysts took these files from the credit reporting agencies and studied the data in an effort to find the most predictive characteristics of future loss propensity. The model was developed to account for all the predictive characteristics identified by Fair Isaac's analysts, and to give weight to those characteristics in accordance to their relative accuracy as predictors of loss. Fair Isaac does not directly sell its credit scores to insurance companies. Rather, Fair Isaac's models are implemented by the credit reporting agencies. When an insurance company wants Fair Isaac's credit score, it purchases access to the model's results from the credit reporting agency. Other vendors offer similar credit scoring models to insurance companies, and in recent years, some insurance companies have developed their own scoring models. Several academic studies of credit scoring were admitted and discussed at the final hearing in these cases. There appears to be no serious debate that credit scoring is a valid and important predictor of losses. The controversy over the use of credit scoring arises over its possible "unfairly discriminatory" impact "based upon race, color, religion, marital status, age, gender, income, national origin, or place of residence." § 626.9741(8)(c), Fla. Stat. Mr. Miller was one of two principal authors of a June 2003 study titled, "The Relationship of Credit-Based Insurance Scores to Private Passenger Automobile Insurance Loss Propensity." This study was commissioned by several insurance industry trade organizations, including AIA and NAMIC. The study addressed three questions: whether credit-based insurance scores are related to the propensity for loss; whether credit- based insurance scores measure risk that is already measured by other risk factors; and what is the relative importance to accurate risk assessment of the use of credit-based insurance scores. The study was based on a nationwide random sample of private passenger automobile policy and claim records. Records from all 50 states were included in roughly the same proportion as each state's registered motor vehicles bear to total registered vehicles in the United States. The data samples were provided by seven insurers, and represented approximately 2.7 million automobiles, each insured for 12 months.6 The study examined all major automobile coverages: bodily injury liability, property damage liability, medical payments coverage, personal injury protection coverage, comprehensive coverage, and collision coverage. The study concluded that credit-based insurance scores were correlated with loss propensity. The study found that insurance scores overlap to some degree with other risk factors, but that after fully accounting for the overlaps, insurance scores significantly increase the accuracy of the risk assessment process. The study found that, for each of the six automobile coverages examined, insurance scores are among the three most important risk factors.7 Mr. Miller's study did not examine the question of causality, i.e., why credit-based insurance scores are predictive of loss propensity. Dr. Patrick Brockett testified for Petitioners as an expert in actuarial science, risk management and insurance, and statistics. Dr. Brockett is a professor in the departments of management science and information systems, finance, and mathematics at the University of Texas at Austin. He occupies the Gus S. 
Wortham Memorial Chair in Risk Management and Insurance, and is the director of the university's risk management and insurance program. Dr. Brockett is the former director of the University of Texas' actuarial science program and continues to direct the study of students seeking their doctoral degrees in actuarial science. His areas of academic research are actuarial science, risk management and insurance, statistics, and general quantitative methods in business. Dr. Brockett has written more than 130 publications, most of which relate to actuarial science and insurance. He has spent his entire career in academia, and has never been employed by an insurance company. In 2002, Lieutenant Governor Bill Ratliff of Texas asked the Bureau of Business Research ("BBR") of the University of Texas' McCombs School of Business to provide an independent, nonpartisan study to examine the relationship between credit history and insurance losses in automobile insurance. Dr. Brockett was one of four named authors of this BBR study, issued in March 2003 and titled, "A Statistical Analysis of the Relationship between Credit History and Insurance Losses." The BBR research team solicited data from insurance companies representing the top 70 percent of the automobile insurers in Texas, and compiled a database of more than 173,000 automobile insurance policies from the first quarter of 1998 that included the following 12 months' premium and loss history. ChoicePoint was then retained to match the named insureds with their credit histories and to supply a credit score for each insured person. The BBR research team then examined the credit score and its relationship with prospective losses for the insurance policy. The results were summarized in the study as follows: Using logistic and multiple regression analyses, the research team tested whether the credit score for the named insured on a policy was significantly related to incurred losses for that policy. It was determined that there was a significant relationship. In general, lower credit scores were associated with larger incurred losses. Next, logistic and multiple regression analyses examined whether the revealed relationship between credit score and incurred losses was explainable by existing underwriting variables, or whether the credit score added new information about losses not contained in the existing underwriting variables. It was determined that credit score did yield new information not contained in the existing underwriting variables. What the study does not attempt to explain is why credit scoring adds significantly to the insurer's ability to predict insurance losses. In other words, causality was not investigated. In addition, the research team did not examine such variables as race, ethnicity, and income in the study, and therefore this report does not speculate about the possible effects that credit scoring may have in raising or lowering premiums for specific groups of people. Such an assessment would require a different study and different data. At the hearing, Dr. Brockett testified that the BBR study demonstrated a "strong and significant relationship between credit scoring and incurred losses," and that credit scoring retained its predictive power even after the other risk variables were accounted for. Dr. Brockett further testified that credit scoring has a disproportionate effect on the classifications of age and marital status, because the very young tend to have credit scores that are lower than those of older people. 
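For illustration of the kind of statistical test summarized above, the following is a minimal sketch of a logistic and a multiple regression of losses on a credit score plus conventional underwriting variables. It is not the BBR study's code or data; the file name, column names, and choice of control variables are assumptions made solely for the example.

```python
# Hypothetical sketch: does a credit score add information about losses beyond
# conventional underwriting variables? (Illustrative only; the file and column
# names below are assumed, not taken from the BBR study.)
import pandas as pd
import statsmodels.formula.api as smf

policies = pd.read_csv("policy_year_data.csv")  # one row per 12-month policy record

# Logistic regression: whether the policy had any incurred claim.
claim_model = smf.logit(
    "had_claim ~ credit_score + driver_age + territory_factor + prior_claims",
    data=policies,
).fit()

# Multiple regression: dollar amount of incurred losses on the policy.
loss_model = smf.ols(
    "incurred_loss ~ credit_score + driver_age + territory_factor + prior_claims",
    data=policies,
).fit()

# If the credit_score coefficient stays statistically significant with the
# other underwriting variables in the model, the score is contributing
# information those variables do not already capture.
print(claim_model.summary())
print(loss_model.summary())
```

The BBR study reported the analogous result: the relationship between credit score and incurred losses remained significant after the existing underwriting variables were accounted for.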
If the question is simply whether the use of credit scores will have a greater impact on the young and the single, the answer would be in the affirmative. However, Dr. Brockett also noted that young, single people will also have higher losses than older, married people, and, thus, the use of credit scores is not "unfairly discriminatory" in the sense that term is employed in the insurance industry.8 Mr. Miller testified that nothing in the actuarial standards of practice requires that a risk factor be causally related to a loss. The Actuarial Standards Board's Standard of Practice 12,9 dealing with risk classification, states that a risk factor is appropriate for use if there is a demonstrated relationship between the risk factor and the insurance losses, and that this relationship may be established by statistical or other mathematical analysis of data. If the risk characteristic is shown to be related to an expected outcome, the actuary need not establish a cause-and-effect relationship between the risk characteristic and the expected outcome. As an example, Mr. Miller offered the fact that past automobile accidents do not cause future accidents, although past accidents are predictive of future risk. Past traffic violations, the age of the driver, the gender of the driver, and the geographical location are all risk factors in automobile insurance, though none of these factors can be said to cause future accidents. They help insurers predict the probability of a loss, but do not predict who will have an accident or why the accident will occur. Mr. Miller opined that credit scoring is a similar risk factor. It is demonstrably significant as a predictor of risk, though there is no causal relationship between credit scores and losses and only an incomplete understanding of why credit scoring works as a predictor of loss. At the hearing, Dr. Brockett discussed a study that he has co-authored with Linda Golden, a business professor at the University of Texas at Austin. Titled "Biological and Psychobehavioral Correlates of Risk Taking, Credit Scores, and Automobile Insurance Losses: Toward an Explication of Why Credit Scoring Works," the study has been peer-reviewed and at the time of the hearing had been accepted for publication in the Journal of Risk and Insurance. In this study, the authors conducted a detailed review of existing scientific literature concerning the biological, psychological, and behavioral attributes of risky automobile drivers and insured losses, and a similar review of literature concerning the biological, psychological, and behavioral attributes of financial risk takers. The study found that basic chemical and psychobehavioral characteristics, such as a sensation-seeking personality type, are common to individuals exhibiting both higher insured automobile losses and poorer credit scores. Dr. Brockett testified that this study provides a direction for future research into the reasons why credit scoring works as an insurance risk characteristic. 3. The Proposed Rule's Definition of "Unfairly Discriminatory" Petitioners contend that the Proposed Rule's definition of the term "unfairly discriminatory" expands upon and is contrary to the statutory definition of the term discussed in section C.1. supra, and that this expanded definition operates to impose a ban on the use of credit scoring by insurance companies. As noted above, Section 626.9741, Florida Statutes, does not define the term "unfairly discriminatory." 
The provisions of the Rating Law10 define the term as it is generally understood by the insurance industry: a rate is deemed "unfairly discriminatory" if the premium charged does not equitably reflect the differences in expected losses and expenses between policyholders. Two provisions of Section 626.9741, Florida Statutes, employ the term "unfairly discriminatory": (5) A rate filing that uses credit reports or credit scores must comply with the requirements of s. 627.062 or s. 627.0651 to ensure that rates are not excessive, inadequate, or unfairly discriminatory. * * * (8) The commission may adopt rules to administer this section. The rules may include, but need not be limited to: * * * (c) Standards that ensure that rates or premiums associated with the use of a credit report or score are not unfairly discriminatory, based upon race, color, religion, marital status, age, gender, income, national origin, or place of residence. Petitioners contend that the statute's use of the term "unfairly discriminatory" is unexceptionable, that the Legislature simply intended the term to be used and understood in the traditional sense of actuarial soundness alone. Respondents agree that Subsection 626.9741(5), Florida Statutes, calls for the agency to apply the traditional definition of "unfairly discriminatory" as that term is employed in the statutes directly referenced, Sections 627.062 and 627.0651, Florida Statutes, the relevant texts of which are set forth in Findings of Fact 18 and 19 above. However, Respondents contend that Subsection 626.9741(8)(c), Florida Statutes, calls for more than the application of the Rating Law's definition of the term. Respondents assert that in the context of this provision, "unfairly discriminatory" contemplates not only the predictive function, but also "discrimination" in its more common sense, as the term is employed in state and federal civil rights law regarding race, color, religion, marital status, age, gender, income, national origin, or place of residence. At the hearing, OIR General Counsel Steven Parton testified as to the reasons why the agency chose the federal body of law using the term "disparate impact" as the test for unfair discrimination in the Proposed Rule: Well, first of all, what we were looking for is a workable definition that people would have some understanding as to what it meant when we talked about unfair discrimination. We were also looking for a test that did not require any willfulness, because it was not our concern that, in fact, insurance companies were engaging willfully in unfair discrimination. What we believed is going on, and we think all of the studies that are out there suggest, is that credit scoring is having a disparate impact upon various people, whether it be income, whether it be race. . . . Respondents' position is that Subsection 626.9741(8)(c), Florida Statutes, requires that a proposed rate or premium be rejected if it has a "disproportionately" negative effect on one of the named classes of persons, even though the rate or premium equitably reflects the differences in expected losses and expenses between policyholders. In the words of Mr. Parton, "This is not an actuarial rule." Mr. Parton explained the agency's rationale for employing a definition of "unfairly discriminatory" that is different from the actuarial usage employed in the Rating Law. 
Subsection 626.9741(5), Florida Statutes, already provides that an insurer's rate filings may not be "excessive, inadequate, or unfairly discriminatory" in the actuarial sense. To read Subsection 626.9741(8)(c), Florida Statutes, as simply a reiteration of the actuarial "unfair discrimination" rule would render the provision, "a nullity. There would be no force and effect with regards to that." Thus, the Proposed Rule defines "unfairly discriminatory" to mean "that adverse decisions resulting from the use of a credit scoring methodology disproportionately affects persons belonging to any of the classes set forth in Section 626.9741(8)(c), F.S." Proposed Florida Administrative Code Rule 69O-125.005(1)(e). OIR's actuary, Howard Eagelfeld, explained that "disproportionate effect" means "having a different effect on one group . . . causing it to pay more or less premium than its proportionate share in the general population or than it would have to pay based upon all other known considerations." Mr. Eagelfeld's explanation is not incorporated into the language of the Proposed Rule. Consistent with the actuarial definition of "unfairly discriminatory," the Proposed Rule requires that any credit scoring methodology must be "demonstrated to be a valid predictor of the insurance risk to be assumed by an insurer for the applicable type of insurance," and sets forth detailed criteria through which the insurer can make the required demonstration. Proposed Florida Administrative Code Rule 69O-125.005(9)(a)-(f) and (h)-(l). Proposed Florida Administrative Code Rule 69O-125.005(9)(g) sets forth Respondents' "civil rights" usage of the term "unfairly discriminatory." The insurer's demonstration of the validity of its credit scoring methodology must include: [d]ocumentation consisting of statistical testing of the application of the credit scoring model to determine whether it results in a disproportionate impact on the classes set forth in Section 626.9741(8)(c), F.S. A model that disproportionately affects any such class of persons is presumed to have a disparate impact and is presumed to be unfairly discriminatory.11 Mr. Parton, who testified in defense of the Proposed Rule as one of its chief draftsmen, stated that the agency was concerned that the use of credit scoring may be having a disproportionate effect on minorities. Respondents believe that credit scoring may simply be a surrogate measure for income, and that using income as a basis for setting rates would have an obviously disparate impact on lower-income persons, including the young and the elderly. Mr. Parton testified that "neither the insurance industry nor anyone else" has researched the theory that credit scoring may be a surrogate for income. Mr. Miller referenced a 1998 analysis performed by AIA indicating that the average credit scores do not vary significantly according to the income group. In fact, the lowest income group (persons making less than $15,000 per year) had the highest average credit score, and the average credit scores actually dropped as income levels rose until the income range reached $50,000 to $74,000 per year, when the credit scores began to rise. Mr. Miller testified that a credit score is no more predictive of income level than a coin flip. However, Respondents introduced a January 2003 report to the Washington State Legislature prepared by the Social & Economic Sciences Research Center of Washington State University, titled "Effect of Credit Scoring on Auto Insurance Underwriting and Pricing." 
The purpose of the study was to determine whether credit scoring has unequal impacts on specific demographic groups. For this study, the researchers received data from three insurance companies on several thousand randomly chosen customers, including the customers' age, gender, residential zip code, and their credit scores and/or rate classifications. The researchers contacted about 1,000 of each insurance company's customers and obtained information about their ethnicity, marital status, and income levels. The study's findings were summarized as follows: The demographic patterns discerned by the study are: Age is the most significant factor. In almost every analysis, older drivers have, on average, higher credit scores, lower credit-based rate assignments, and less likelihood of lacking a valid credit score. Income is also a significant factor. Credit scores and premium costs improve as income rises. People in the lowest income categories-- less than $20,000 per year and between $20,000 and $35,000 per year-- often experienced higher premiums and lower credit scores. More people in lower income categories also lacked sufficient credit history to have a credit score. Ethnicity was found to be significant in some cases, but because of differences among the three firms studied and the small number of ethnic minorities in the samples, the data are not broadly conclusive. In general, Asian/Pacific Islanders had credit scores more similar to whites than to other minorities. When other minority groups had significant differences from whites, the differences were in the direction of higher premiums. In the sample of cases where insurance was cancelled based on credit score, minorities who were not Asian/Pacific Islanders had greater difficulty finding replacement insurance, and were more likely to experience a lapse in insurance while they searched for a new policy. The analysis also considered gender, marital status and location, but for these factors, significant unequal effects were far less frequent. (emphasis added) The evidence appears equivocal on the question of whether credit scoring is a surrogate for income. The Washington study seems to indicate that ethnicity may be a significant factor in credit scoring, but that significant unequal effects are infrequent regarding gender and marital status. The evidence demonstrates that the use of credit scores by insurers would tend to have a negative impact on young people. Mr. Miller testified that persons between ages 25 and 30 have lower credit scores than older people. Petitioners argue that by defining "unfairly discriminatory" to mean "disproportionate effect," the Proposed Rule effectively prohibits insurers from using credit scores, if only because all the parties recognize that credit scores have a "disproportionate effect" on young people. Petitioners contend that this prohibition is in contravention of Section 626.9741(1), Florida Statutes, which states that the purpose of the statute is to "regulate and limit" the use of credit scores, not to ban them outright. Respondents counter that if the use of credit scores is "unfairly discriminatory" toward one of the listed classes of persons in contravention of Subsection 626.9741(8)(c), Florida Statutes, then the "limitation" allowed by the statute must include prohibition. 
This point is obviously true but sidesteps the real issues: whether the statute's undefined prohibition on "unfair discrimination" authorizes the agency to employ a "disparate impact" or "disproportionate effect" definition in the Proposed Rule, and, if so, whether the Proposed Rule sufficiently defines any of those terms to permit an insurer to comply with the rule's requirements. Proposed Florida Administrative Code Rule 69O-125.005(2) provides that the insurer bears the burden of demonstrating that its credit scoring methodology does not disproportionately affect persons based upon their race, color, religion, marital status, age, gender, income, national origin, or place of residence. Petitioners state that no insurer can demonstrate, consistent with the Proposed Rule, that its credit scoring methodology does not have a disproportionate effect on persons based upon their age. Therefore, no insurer will ever be permitted to use credit scores under the terms of the Proposed Rule. As discussed more fully in Findings of Fact 73 through 76 below, Petitioners also contend that the Proposed Rule provides no guidance as to what "disproportionate effect" and "disparate impact" mean, and that this lack of definitional guidance will permit the agency to reject any rate filing that uses credit scoring, based upon an arbitrary determination that it has a "disproportionate effect" on one of the classes named in Subsection 626.9741(8)(c), Florida Statutes. Petitioners also presented evidence that no insurer collects data on race, color, religion, or national origin from applicants or insureds. Mr. Miller testified that there is no reliable independent source for race, color, religious affiliation, or national origin data. Mr. Eagelfeld agreed that there is no independent source from which insurers can obtain credible data on race or religious affiliation. Mr. Parton testified that this lack of data can be remedied by the insurance companies commencing to request race, color, religion, and national origin information from their customers, because there is no legal impediment to their doing so. Mr. Miller testified that he would question the reliability of the method suggested by Mr. Parton because many persons will refuse to answer such sensitive questions or may not answer them correctly. Mr. Miller stated that, as an actuary, he would not certify the results of a study based on demographic data obtained in this manner and would qualify any resulting actuarial opinion due to the unreliability of the database. Petitioners also object to the vagueness of the broad categories of "race, color, religion and national origin." Mr. Miller testified that the Proposed Rule lacks "operational definitions" for those terms that would enable insurers to perform the required calculations. The Proposed Rule places the burden on the insurer to demonstrate no disproportionate effect on persons based on these categories, but offers no guidance as to how these demographic classes should be categorized by an insurer seeking to make such a demonstration. Petitioners point out that even if the insurer is able to ascertain the categories sought by the regulators, the Proposed Rule gives no guidance as to whether the "disproportionate effect" criterion mandates perfect proportionality among all races, colors, religions, and national origins, or whether some degree of difference is tolerable. 
Petitioners contend that this lack of guidance provides unbridled discretion to the regulator to reject any disproportionate effect study submitted by an insurer. At his deposition, Mr. Parton was asked how an insurer should break down racial classifications in order to show that there is no disproportionate effect on race. His answer was as follows: There is African-American, Cuban-American, Spanish-American, African-American, Haitian- American. Are you-- you know, whatever the make-up of your book of business is-- you're the one in control of it. You can ask these folks what their ethnic background is. At his deposition, Mr. Parton frankly admitted that he had no idea what "color" classifications an insurer should use, yet he also stated that an insurer must demonstrate no disproportionate effect on each and every listed category, including "color." At the final hearing, when asked to list the categories of "color," Mr. Parton responded, "I suppose Indian, African-American, Chinese, Japanese, all of those."12 At the final hearing, Mr. Parton was asked whether the Proposed Rule contemplates requiring insurers to demonstrate distinctions between such groups as "Latvian-Americans" and "Czech-Americans." Mr. Parton's reply was as follows: No. And I don't think it was contemplated by the Legislature. . . . The question is race by any other name, whether it be national origin, ethnicity, color, is something that they're concerned about in terms of an impact. What we would anticipate, and what we have always anticipated, is the industry would demonstrate whether or not there is an adverse effect against those folks who have traditionally in Florida been discriminated against, and that would be African-Americans and certain Hispanic groups. In our opinion, at least, if you could demonstrate that the credit scoring was not adversely impacting it, it may very well answer the questions to any other subgroup that you may want to name. At the hearing, Mr. Parton was also questioned as to distinctions between religions and testified as follows: The impact of credit scoring on religion is going to be in the area of what we call thin files, or no files. That is to say people who do not have enough credit history from which credit scores can be done, or they're going to be treated somehow differently because of that lack of history. A simple question that needs to be asked by the insurance company is: "Do you, as a result of your religious belief or whatever [sect] you are in, are you forbidden as a precept of your religious belief from engaging in the use of credit?" When cross-examined on the subject, Mr. Parton could not confidently identify any religious group that forbids the use of credit. He thought that Muslims and Quakers may be such groups. Mr. Parton concluded by stating, "I don't think it is necessary to identify those groups. The question is whether or not you have a religious group that you prescribe to that forbids it." Petitioners contend that, in addition to failing to define the statutory terms of race, color, religion, and national origin in a manner that permits insurer compliance, the Proposed Rule fails to provide an operational definition of "disproportionate effect." The following is a hypothetical question put to Mr. Parton at his deposition, and Mr. Parton's answer: Q: Let's assume that African-Americans make up 10 percent of the population. Let's just use two groups for the sake of clarity. Caucasians make up 90 percent. 
If the application of credit scoring in underwriting results in African-Americans paying 11 percent of the premium and Caucasians paying 89 percent of the premium, is that, in your mind, a disproportionate affect [sic]? A: It may be. I think it would give rise under this rule that perhaps there is a presumption that it is, but that presumption is not [an irrebuttable] one.[13] For instance, if you then had testimony that a 1 percent difference between the two was statistically insignificant, then I would suggest that that presumption would be overridden. This answer led to a lengthy discussion regarding a second hypothetical in which African-Americans made up 29 percent of the population, and also made up 35 percent of the lowest, or most unfavorable, tier of an insurance company's risk classifications. Mr. Parton ultimately opined that if the difference in the two numbers was found to be "statistically significant" and attributable only to the credit score, then he would conclude that the use of credit scoring unfairly discriminated against African-Americans. As to whether his answer would be the same if the hypothetical were adjusted to state that African-Americans made up 33 percent of the lowest tier, Mr. Parton responded: "That would be up to expert testimony to be provided on it. That's what trials are all about."14 Aside from expert testimony to demonstrate that the difference was "statistically insignificant," Mr. Parton could think of no way that an insurer could rebut the presumption that the difference was unfairly discriminatory under the "disproportionate effect" definition set forth in the proposed rule. He stated that, "I can't anticipate, nor does the rule propose to anticipate, doing the job of the insurer of demonstrating that its rates are not unfairly discriminatory." Mr. Parton testified that an insurer's showing that the credit score was a valid and important predictor of risk would not be sufficient to rebut the presumption of disproportionate effect. Summary Findings Credit-based insurance scoring is a valid and important predictor of risk, significantly increasing the accuracy of the risk assessment process. The evidence is still inconclusive as to why credit scoring is an effective predictor of risk, though a study co-authored by Dr. Brockett has found that basic chemical and psychobehavioral characteristics, such as a sensation-seeking personality type, are common to individuals exhibiting both higher insured automobile losses and poorer credit scores. Though the evidence was equivocal on the question of whether credit scoring is simply a surrogate for income, the evidence clearly demonstrated that the use of credit scores by insurance companies has a greater negative overall effect on young people, who tend to have lower credit scores than older people. Petitioners and Fair Isaac emphasized their contention that compliance with the Proposed Rule would be impossible, and thus the Proposed Rule in fact would operate as a prohibition on the use of credit scoring by insurance companies. At best, Petitioners demonstrated that compliance with the Proposed Rule would be impracticable at first, given the current business practices in the industry regarding the collection of customer data regarding race and religion. The evidence indicated no legal barriers to the collection of such data by the insurance companies. Questions as to the reliability of the data are speculative until a methodology for the collection of the data is devised. 
Subsection 626.9741(8)(c), Florida Statutes, authorizes the FSC to adopt rules that may include: Standards that ensure that rates or premiums associated with the use of a credit report or score are not unfairly discriminatory, based upon race, color, religion, marital status, age, gender, income, national origin, or place of residence. Petitioners' contention that the statute's use of "unfairly discriminatory" contemplates nothing more than the actuarial definition of the term as employed by the Rating Law is rejected. As Respondents pointed out, Subsection 626.9741(5), Florida Statutes, provides that a rate filing using credit scores must comply with the Rating Law's requirements that the rates not be "unfairly discriminatory" in the actuarial sense. If Subsection 626.9741(8)(c), Florida Statutes, merely reiterates the actuarial requirement, then it is, in Mr. Parton's words, "a nullity."15 Thus, it is found that the Legislature contemplated some level of scrutiny beyond actuarial soundness to determine whether the use of credit scores "unfairly discriminates" in the case of the classes listed in Subsection 626.9741(8)(c), Florida Statutes. It is found that the Legislature empowered FSC to adopt rules establishing standards to ensure that an insurer's rates or premiums associated with the use of credit scores meet this added level of scrutiny. However, it must be found that the term "unfairly discriminatory" as employed in the Proposed Rule is essentially undefined. FSC has not adopted a "standard" by which insurers can measure their rates and premiums, and the statutory term "unfairly discriminatory" is thus subject to arbitrary enforcement by the regulating agency. Proposed Florida Administrative Code Rule 69O-125.005(1)(e) defines "unfairly discriminatory" in terms of adverse decisions that "disproportionately affect" persons in the classes set forth in Subsection 626.9741(8)(c), Florida Statutes, but does not define what is a "disproportionate effect." At Subsection (9)(g), the Proposed Rule requires "statistical testing" of the credit scoring model to determine whether it results in a "disproportionate impact" on the listed classes. This subsection attempts to define its terms as follows: A model that disproportionately affects any such class of persons is presumed to have a disparate impact and is presumed to be unfairly discriminatory. Thus, the Proposed Rule provides that a "disproportionate effect" equals a "disparate impact" equals "unfairly discriminatory," without defining any of these terms in such a way that an insurer could have any clear notion, prior to the regulator's pronouncement on its rate filing, whether its credit scoring methodology was in compliance with the rule. Indeed, Mr. Parton's testimony evinced a disinclination on the part of the agency to offer guidance to insurers who attempt to understand this circular definition. The tenor of his testimony indicated that the agency itself is unsure of exactly what an insurer could submit to satisfy the "disproportionate effect" test, aside from perfect proportionality, which all parties concede is not possible at least as to young people, or a showing that any lack of perfect proportionality is "statistically insignificant," whatever that means. Mr. Parton seemed to say that OIR will know a valid use of credit scoring when it sees one, though it cannot describe such a use beforehand. Mr. 
Eagelfeld offered what might be a workable definition of "disproportionate effect," but his definition is not incorporated into the Proposed Rule. Mr. Parton attempted to assure the Petitioners that OIR would take a reasonable view of the endless racial and ethnic categories that could be subsumed under the literal language of the Proposed Rule, but again, Mr. Parton's assurances are not part of the Proposed Rule. Mr. Parton's testimony referenced federal and state civil rights laws as the source for the term "disparate impact." Federal case law under Title VII of the Civil Rights Act of 1964, 42 U.S.C. § 2000e-2, has defined a "disparate impact" claim as "one that 'involves employment practices that are facially neutral in their treatment of different groups, but that in fact fall more harshly on one group than another and cannot be justified by business necessity.'" Adams v. Florida Power Corporation, 255 F.3d 1322, 1324 n.4 (11th Cir. 2001), quoting Hazen Paper Co. v. Biggins, 507 U.S. 604, 609, 113 S. Ct. 1701, 1705, 123 L. Ed. 2d 338 (1993). The Proposed Rule does not reference this definition, nor did Mr. Parton detail how OIR proposes to apply or modify this definition in enforcing the Proposed Rule. Without further definition, all three of the terms employed in this circular definition are conclusions, not "standards" that the insurer and the regulator can agree upon at the outset of the statistical and analytical process leading to approval or rejection of the insurer's rates. Absent some definitional guidance, a conclusory term such as "disparate impact" can mean anything the regulator wishes it to mean in a specific case. The confusion is compounded by the Proposed Rule's failure to refine the broad terms "race," "color," and "religion" in a manner that would allow an insurer to prepare a meaningful rate submission utilizing credit scoring. In his testimony, Mr. Parton attempted to limit the Proposed Rule's impact to those groups "who have traditionally in Florida been discriminated against," but the actual language of the Proposed Rule makes no such distinction. Mr. Parton also attempted to limit the reach of "religion" to groups whose beliefs forbid them from engaging in the use of credit, but the language of the Proposed Rule does not support Mr. Parton's distinction.

USC (1): 42 U.S.C. 2000e. Florida Laws (18): 119.07, 120.52, 120.536, 120.54, 120.56, 120.57, 120.68, 624.307, 624.308, 626.9741, 627.011, 627.031, 627.062, 627.0629, 627.0651, 688.002, 760.10, 760.11. Florida Administrative Code (1): 69O-125.005
# 1
RHC AND ASSOCIATES, INC. vs HILLSBOROUGH COUNTY SCHOOL BOARD, 09-006060BID (2009)
Division of Administrative Hearings, Florida Filed:Tampa, Florida Nov. 05, 2009 Number: 09-006060BID Latest Update: Mar. 16, 2010

Findings Of Fact The findings below are based on the undisputed facts set forth in Petitioner's Protest and supplements thereto, Respondent's Motion to Dismiss, Petitioner's Response in Opposition to Motion to Dismiss, and representations by the parties during the motion hearing. On October 7, 2009, Respondent electronically posted its final ranking of firms which had submitted proposals to provide mechanical engineering services for six HVAC projects for Respondent in 2010. Respondent's electronic posting of the final ranking of firms included the following language: "Failure to file a protest within the time prescribed in Section 120.57(3), shall constitute a waiver of proceeding under Chapter 120, Florida Statutes." On October 12, 2009, Petitioner filed a Notice of Intent to Protest the final rankings. On October 22, 2009, Petitioner filed its Protest. Although Petitioner's Protest was timely filed, Petitioner initially did not file a bond or other security. The Protest alleges that Petitioner was not required to file a bond, because Respondent did not include in its final ranking notice that a failure to post a bond would constitute a waiver of proceedings under Subsection 120.57(3)(a), Florida Statutes. Additionally, the Protest alleges that Respondent: (1) failed to provide Petitioner with notice of the estimated contract amounts within 72 hours, exclusive of Saturdays and Sundays and state holidays, of the filing of a notice of protest as required by Subsection 287.042(2)(c), Florida Statutes; and (2) because Respondent had not provided that notice, Petitioner was unable to calculate the amount of the bond required and was, therefore, relieved of the obligation to file a bond. On October 30, 2009, Respondent, through counsel, wrote to Petitioner. In this correspondence, Respondent informed Petitioner that Section 287.042, Florida Statutes, did not apply to Respondent because it was not an "agency" for purposes of that law. Respondent further informed Petitioner that Section 255.0516, Florida Statutes, allowed Respondent to require a bond in the amount of two percent of the lowest accepted bid or $25,000. Respondent also notified Petitioner that because it was protesting all six project awards, all awards must be included in the calculation of the bond amount required. Finally, Petitioner was allowed ten days within which to post a bond. On November 3, 2009, Petitioner submitted to Respondent a cashier's check in the amount of $3,143.70 and noted that the check was intended to serve as security for the Protest "as required by F.S. 287.042(2)(c)." In the letter which accompanied the check, Petitioner also noted that: (1) the amount of the check was determined by calculating one percent of the largest proposed contract award amount of $314,370.00; and (2) Petitioner was providing that amount "under duress," because Respondent had "just published the contract award amounts." The relief requested by Petitioner in the Protest is that: (1) it be awarded one of the six HVAC projects comprising the final ranking; and/or (2) alternatively, all six awards be rescinded and "start the entire process over." The final ranking which Petitioner protests included six separate projects, each of which had a separate construction budget. Those projects and their respective construction budgets are as follows: Northwest--$1,144,000; Tampa Palms--$2,649,081; Yates--$2,770,828; Ferrell--$2,550,758; Stewart--$2,805,437; and Erwin--$4,191,603. 
The proposed fees for each project were as follows: $97,240 (Northwest); $211,926 (Tampa Palms); $221,666 (Yates); $204,061 (Ferrell); $224,435 (Stewart); and $314,370 (Erwin).
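For orientation only, the arithmetic behind the competing security figures can be sketched as follows. The fee amounts and the one-percent figure come from the findings above; applying the two-percent figure to the combined proposed fees is simply an assumption used to illustrate the scale of the numbers, not a statement of what the statute or the School Board actually required.

```python
# Illustrative sketch of the protest-bond arithmetic described in the
# findings above. Fee amounts are the proposed fees from the final ranking;
# treating the two-percent figure as applying to the combined proposed fees
# is an assumption for illustration only, not a legal conclusion.

proposed_fees = {
    "Northwest": 97_240,
    "Tampa Palms": 211_926,
    "Yates": 221_666,
    "Ferrell": 204_061,
    "Stewart": 224_435,
    "Erwin": 314_370,
}

# Petitioner's calculation: one percent of the largest proposed fee.
petitioner_security = 0.01 * max(proposed_fees.values())
print(f"1% of largest proposed fee: ${petitioner_security:,.2f}")  # $3,143.70

# One possible reading of Respondent's position that all six awards must be
# included: two percent of the combined proposed fees.
combined_fees = sum(proposed_fees.values())
print(f"2% of combined proposed fees: ${0.02 * combined_fees:,.2f}")  # $25,473.96
```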

Recommendation Based on the foregoing Findings of Fact and Conclusions of Law, it is RECOMMENDED that Respondent, Hillsborough County School Board, issue a final order dismissing the Protest filed by Petitioner, RHC and Associates, Inc. DONE AND ENTERED this 20th day of January, 2010, in Tallahassee, Leon County, Florida. S CAROLYN S. HOLIFIELD Administrative Law Judge Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-3060 (850) 488-9675 Fax Filing (850) 921-6847 www.doah.state.fl.us Filed with the Clerk of the Division of Administrative Hearings this 20th day of January, 2010.

Florida Laws (5): 120.57, 255.0516, 287.012, 287.042, 287.055. Florida Administrative Code (1): 28-110.005
# 2
JOHN D. WATSON vs FLORIDA ENGINEERS MANAGEMENT CORPORATION, 98-004756 (1998)
Division of Administrative Hearings, Florida Filed:Tallahassee, Florida Oct. 26, 1998 Number: 98-004756 Latest Update: Apr. 20, 1999

The Issue The issue in this case is whether the Petitioner is entitled to additional credit for his response to question number 123 of the Principles & Practice Civil/Sanitary Engineer Examination administered on April 24, 1998.

Findings Of Fact Petitioner took the April 24, 1998, Principles & Practice Civil/Sanitary Engineer examination. A score of 70 is required to pass the exam. Petitioner obtained a score of 69, which corresponds to a raw score of 47. In order to achieve a score of 70, Petitioner needs a raw score of 48. Therefore, Petitioner is in need of one (1) additional raw score point. On question number 123, Petitioner received a score of six points out of a possible ten. Question number 123 is scored in increments of two raw points. Two additional raw score points awarded to the Petitioner would equal a raw score of 49, equating to a conversion score of seventy-one, a passing score. The National Council of Examiners for Engineering and Surveying (NCEES), the organization that produces the examination, provides a Solution and Scoring Plan which outlines the scoring process used in question number 123. The Petitioner is not allowed a copy of the examination question or the Solution and Scoring Plan for preparation of the Proposed Recommended Order. Question number 123 has three parts: part A, part B, and part C. For a score of ten on question number 123, the Solution and Scoring Plan states that the solution to part A must be correct within allowable tolerances; the solution to part B must state two variables that affect the answer in part A; and the solution to part C must state that anti-lock brakes do not leave skid marks, thus making it very hard to determine braking distance. For a score of eight points on question number 123, the Solution and Scoring Plan states that part A could contain one error and lists specific allowable errors, and that part B and part C must be answered correctly showing mastery of the concepts involved. Petitioner made an error in part A which falls into the allowable errors listed in the Solution and Scoring Plan under the eight-point scoring plan. Petitioner answered part B correctly. Petitioner contends that he also answered part C correctly and should be awarded eight points. NCEES marked part C incorrect. Question number 123 is a problem involving a vehicle (vehicle number one) that skids on asphalt and hits another vehicle (vehicle number two). Part C asks "Explain how your investigation of this accident would have changed if vehicle one had been equipped with anti-lock brakes." The Petitioner's answer was as follows: If vehicle one does not "lock" its brakes, its deceleration will be dependent upon its brakes. (Not f). [Judge's note: f is used as the symbol for the co-efficient of friction between the tires and road surface in the problem.] The rate of deceleration (a) must be determined (from testing, mfg, [manufacturer,] etc.) As stated above, the Board accepts a solution that recognizes that the vehicle equipped with anti-lock brakes will not leave skid marks which can be used for computing initial speed using the skid distance equation. The Petitioner's answer pre-supposes that there are no skid marks because the vehicle's wheels do not lock because of the anti-lock brakes; therefore, the co-efficient of friction of the tires, which generates the skid marks, has no effect. The Petitioner introduced a portion of a commonly used manual for preparation for examination (Petitioner's Exhibit 1), which states, regarding a vehicle that does not lock its brakes, "its decelerations will be dependent upon its brakes." 
The Board's expert recognized the statement by the Petitioner in response to part C as true, but indicated it was not responsive to the question in that it did not state specifically that the vehicle would not produce skid marks that could be measured for use in the skid distance equation. The solution sheet states regarding part C, "Part C is answered correctly by explaining that anti-lock brakes would not leave skid marks thus making it very hard to determine the braking distance."
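For readers unfamiliar with the physics behind the exam question, the following is a minimal sketch of the standard skid-to-stop relation that this kind of accident reconstruction relies on. The relation itself is standard physics; the numeric inputs are hypothetical and are not taken from the examination or the record.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def speed_from_skid(skid_distance_m: float, friction_coefficient: float) -> float:
    """Estimate a vehicle's initial speed (m/s) from measured skid marks.

    A vehicle that locks its wheels decelerates at roughly f * g, so the
    initial speed satisfies v^2 = 2 * f * g * d (the skid distance equation).
    """
    return math.sqrt(2 * friction_coefficient * G * skid_distance_m)

# Hypothetical inputs, purely for illustration:
print(round(speed_from_skid(30.0, 0.7), 1))  # ~20.3 m/s, about 73 km/h

# With anti-lock brakes the wheels do not lock and leave no skid marks, so
# there is no skid distance to measure and this estimate cannot be made;
# deceleration then depends on the braking system itself (e.g., manufacturer
# test data), which is the point made in the findings above.
```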

Recommendation Based upon the foregoing Findings of Fact and Conclusions of Law set forth herein, it is, RECOMMENDED: That the Board of Professional Engineers enter a Final Order giving Petitioner credit for part C on the examination and passing the test. DONE AND ENTERED this 25th day of March, 1999, in Tallahassee, Leon County, Florida. STEPHEN F. DEAN Administrative Law Judge Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-3060 (850) 488-9675 SUNCOM 278-9675 Fax Filing (850) 921-6847 www.doah.state.fl.us Filed with the Clerk of the Division of Administrative Hearings this 25th day of March, 1999. COPIES FURNISHED: Natalie A. Lowe Vice President of Legal Affairs Florida Engineers Management Corporation 1208 Hays Street Tallahassee, Florida 32301 John D. Watson 88 Marine Street St. Augustine, Florida 32084 Dennis Barton, Executive Director Board of Professional Engineers 1208 Hays Street Tallahassee, Florida 32301

Florida Laws (1): 120.57
# 3
AMERICAN CONTRACT BRIDGE LEAGUE vs. OFFICE OF THE COMPTROLLER AND DEPARTMENT OF REVENUE, 76-001237 (1976)
Division of Administrative Hearings, Florida Number: 76-001237 Latest Update: Mar. 21, 1977

The Issue The issue for determination in this cause is whether petitioner is entitled to a refund in the amount of $6,306.32 paid into the state treasury as sales tax. More specifically, the issue is whether the registration or participation fee charged by petitioner to its members at the 1975 summer national bridge tournament is taxable as an "admission" under Florida Statutes 212.02(16) and 212.04.

Findings Of Fact Upon consideration of the oral and documentary evidence adduced at the hearing, the following relevant facts are found: The petitioner, the American Contract Bridge League, Inc., is a nonprofit corporation incorporated under the laws of New York in 1938. Its membership is approximately 200,000, representing areas all over the North American continent. Its purposes include educational, cultural and charitable pursuits. Among other things, petitioner annually sponsors three national tournaments in various areas of the United States. In August of 1975, petitioner held its summer national tournament at the Americana Hotel in Bal Harbour, Dade County, Florida. Over 1,000 tables for approximately 5,500 members were in operation for the nine-day event. Many of these 5,500 members played in two or more events. In order to participate in each event, the member was required to pay a registration fee ranging from $3.00 to $4.50. No sales tax was included by petitioner in its registration fee. While spectators at the tournament were permitted, it was not intended as a spectator event. No special provision was made for the seating of spectators, whose number rarely exceeded one hundred and who were composed primarily of relatives or friends of the actual players or participants. No admission charges were made to spectators. On previous occasions, petitioner has held bridge events in Florida. On no such occasion has the State of Florida attempted to assess the sales tax on petitioner's registration or participation fees. No other state in which petitioner has held its tournaments has assessed petitioner for sales or other taxes on this fee. The respondent Department of Revenue informed petitioner that the registration fees collected at the 1975 summer national tournament constituted a taxable event, subject to the Florida sales tax, and petitioner, under protest, forwarded a check in the amount of $6,306.32. Thereafter, petitioner applied for a refund pursuant to the provisions of F.S. 215.26. The Comptroller denied the refund application.

Recommendation Based upon the findings of fact and conclusions of law recited above, it is recommended that petitioner's request for a refund in the amount of $6,306.32 be denied. Respectfully submitted and entered this 21st day of March, 1977, in Tallahassee, Florida. DIANE D. TREMOR Hearing Officer Division of Administrative Hearings Room 530 Carlton Building Tallahassee, Florida 32304 (904) 488-9675 Filed with the Clerk of the Division of Administrative Hearings this 21st day of March, 1977. COPIES FURNISHED: Comptroller Gerald Lewis The Capitol Tallahassee, Florida 32304 Patricia Turner, Esquire Assistant Attorney General Department of Legal Affairs The Bloxham Building Tallahassee, Florida 32304 Paul J. Levine, Esquire 2100 First Federal Building One Southeast 3rd Avenue Miami, Florida 33131

Florida Laws (3): 212.02, 212.04, 215.26
# 4
KPMG CONSULTING, INC. vs DEPARTMENT OF REVENUE, 02-001719BID (2002)
Division of Administrative Hearings, Florida Filed:Tallahassee, Florida May 01, 2002 Number: 02-001719BID Latest Update: Oct. 15, 2002

The Issue The issue to be resolved in this proceeding concerns whether the Department of Revenue (Department, DOR) acted clearly erroneously, contrary to competition, arbitrarily or capriciously when it evaluated the Petitioner's submittal in response to an Invitation to Negotiate (ITN) for a child support enforcement automated management system-compliance enforcement (CAMS CE) in which it awarded the Petitioner a score of 140 points out of a possible 230 points and disqualified the Petitioner from further consideration in the invitation to negotiate process.

Findings Of Fact Procurement Background: The Respondent, the Department of Revenue (DOR), is a state agency charged with the responsibility of administering the Child Support Enforcement Program (CSE) for the State of Florida, in accordance with Section 20.21(h), Florida Statutes. The DOR issued an ITN for the CAMS Compliance Enforcement implementation on February 1, 2002. This procurement is designed to give the Department a "state of the art system" that will meet all Federal and State Regulations and Policies for Child Support Enforcement, improve the effectiveness of collections of child support, and automate enforcement to the greatest extent possible. It will automate data processing and other decision-support functions and allow rapid implementation of changes in regulatory requirements resulting from revised Federal and State Regulation Policies and Florida initiatives, including statutory initiatives. CSE services suffer from dependence on an inadequate computer system known as the "FLORIDA System," which was not originally designed for CSE and is housed and administered in another agency. The current FLORIDA System cannot meet the Respondent's needs for automation and does not meet the Respondent's management and reporting requirements or its need for a more flexible system. The DOR needs a system that will ensure the integrity of its data, will allow the Respondent to consolidate some of the "stand-alone" systems it currently has in place to remedy certain deficiencies of the FLORIDA System, and which will help the Child Support Enforcement system and program secure needed improvements. The CSE is also governed by Federal Policy, Rules and Reporting requirements concerning performance. In order to improve its effectiveness in responding to its business partners in the court system, the Department of Children and Family Services, the Sheriff's Departments, employers, financial institutions and workforce development boards, as well as to the Federal requirements, it has become apparent that CSE needs a new computer system with the flexibility to respond to the complete requirements of the CSE program. In order to accomplish its goal of acquiring a new computer system, the CSE began the procurement process. The Department hired a team from the Northrup Grumman Corporation, headed by Dr. Edward Addy, to lead the procurement development process. Dr. Addy began a process of defining CSE needs and then developing an ITN which reflected those needs. The process included many individuals in CSE who would be the daily users of the new system. These individuals included Andrew Michael Ellis, Revenue Program Administrator III for Child Support Enforcement Compliance Enforcement; Frank Doolittle, Process Manager for Child Support Enforcement Compliance Enforcement; and Harold Bankirer, Deputy Program Director for the Child Support Enforcement Program. There are two alternative strategies for implementing a large computer system such as CAMS CE: a customized system developed especially for CSE, or a Commercial Off The Shelf/Enterprise Resource Plan (COTS/ERP) system. A COTS/ERP system is a pre-packaged software program which is implemented as a system-wide solution. Because there is no existing COTS/ERP product for child support programs, the team recognized that customization would be required to make the product fit its intended use. 
The team recognized that other system attributes were also important, such as the ability to convert "legacy data" and to address such factors as data base complexity and data base size. The Evaluation Process: The CAMS CE ITN put forth a tiered process for selecting vendors for negotiation. The first tier involved an evaluation of key proposal topics. The key topics were the vendors' past corporate experience (past projects) and their key staff. A vendor was required to score 150 out of a possible 230 points to enable it to continue to the next stage or tier of consideration in the procurement process. The evaluation team wanted to remove, at an early stage, vendors who did not have a serious chance of becoming the selected vendor. This would prevent an unnecessary expenditure of time and resources by both the CSE and the vendor. The ITN required that the vendors provide three corporate references showing their past corporate experience for evaluation. In other words, the references involved past jobs they had done for other entities which showed relevant experience in relation to the ITN specifications. The Department provided forms to the vendors, who in turn provided them to the corporate references that they themselves selected. The vendors also included in their proposals a summary of their corporate experience drafted by the vendors themselves. Table 8.2 of the ITN provided positive and negative criteria by which the corporate references would be evaluated. The list in Table 8.2 is not meant to be exhaustive and is in the nature of an "included but not limited to" standard. The vendors had the freedom to select references whose projects the vendors believed best fit the criteria upon which each proposal was to be evaluated. For the key staff evaluation standard, the vendors provided summary sheets as well as résumés for each person filling a lead role as key staff members on their proposed project team. Having a competent project team was deemed by the Department to be critical to the success of the procurement and implementation of a large project such as the CAMS CE. Table 8.2 of the ITN provided the criteria by which the key staff would be evaluated. The Evaluation Team: The CSE selected an evaluation team which included Dr. Addy, Mr. Ellis, Mr. Bankirer, Mr. Doolittle and Mr. Esser. Although Dr. Addy had not previously performed the role of an evaluator, he has responded to several procurements for Florida government agencies. He is familiar with Florida's procurement process and has a doctorate in Computer Science as well as seventeen years of experience in information technology. Dr. Addy was the leader of the Northrup Grumman team which primarily developed the ITN with the assistance of personnel from the CSE program itself. Mr. Ellis, Mr. Bankirer and Mr. Doolittle participated in the development of the ITN as well. Mr. Bankirer and Mr. Doolittle had been evaluators in other procurements for Federal and State agencies prior to joining the CSE program. Mr. Esser is the Chief of the Bureau of Information Technology at the Department of Highway Safety and Motor Vehicles and has experience in similar, large computer system procurements at that agency. The evaluation team selected by the Department thus has extensive experience in computer technology, as well as knowledge of the requirements of the subject system. 
The Department provided training regarding the evaluation process to the evaluators as well as a copy of the ITN, the Source Selection Plan and the Source Selection Team Reference Guide. Section 6 of the Source Selection Team Reference Guide entitled "Scoring Concepts" provided guidance to the evaluators for scoring proposals. Section 6.1 entitled "Proposal Evaluation Specification in ITN Section 8" states: Section 8 of the ITN describes the method by which proposals will be evaluated and scored. SST evaluators should be consistent with the method described in the ITN, and the source selection process documented in the Reference Guide and the SST tools are designed to implement this method. All topics that are assigned to an SST evaluator should receive at the proper time an integer score between 0 and 10 (inclusive). Each topic is also assigned a weight factor that is multiplied by the given score in order to place a greater or lesser emphasis on specific topics. (The PES workbook is already set to perform this multiplication upon entry of the score.) Tables 8-2 through 8-6 in the ITN Section 8 list the topics by which the proposals will be scored along with the ITN reference and evaluation and scoring criteria for each topic. The ITN reference points to the primary ITN section that describes the topic. The evaluation and scoring criteria list characteristics that should be used to affect the score negatively or positively. While these characteristics should be used by each SST evaluator, each evaluator is free to emphasize each characteristic more or less than any other characteristic. In addition, the characteristics are not meant to be inclusive, and evaluators may consider other characteristics that are not listed . . . (Emphasis supplied). The preponderant evidence demonstrates that all the evaluators followed these instructions in conducting their evaluations and none used a criterion that was not contained in the ITN, either expressly or implicitly. Scoring Method: The ITN used a 0 to 10 scoring system. The Source Selection Team Guide required that the evaluators use whole integer scores. They were not required to start at "7," which was the average score necessary to achieve a passing 150 points, and then to score up or down from 7. The Department also did not provide guidance to the evaluators regarding a relative value of any score, i.e., what is a "5" as opposed to a "6" or a "7." There is no provision in the ITN which establishes a baseline score or starting point from which the evaluators were required to adjust their scores. The procurement development team had decided to give very little structure to the evaluators as they wanted to have each evaluator score based upon his or her understanding of what was in the proposal. Within the ITN the development team could not sufficiently characterize every potential requirement, in the form that it might be submitted, and provide the consistency of scoring that one would want in a competitive environment. This open-ended approach is a customary method of scoring, particularly in more complex procurements in which generally less guidance is given to evaluators. 
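As a purely illustrative sketch of the tier-one scoring mechanics described above (an integer score of 0 to 10 per topic, multiplied by a weight factor, with 150 of a possible 230 weighted points needed to advance), consider the following; the topic names and weight factors are hypothetical, since the actual weights are not reproduced in this record.

```python
# Minimal sketch of the tier-one scoring mechanics described above. Each topic
# receives an integer score of 0-10, which is multiplied by its weight factor;
# a vendor needed 150 of a possible 230 weighted points to advance. The topic
# names and weight factors below are hypothetical.

THRESHOLD = 150

def weighted_total(scores: dict, weights: dict) -> int:
    """Multiply each 0-10 topic score by its weight and sum the results."""
    return sum(scores[topic] * weights[topic] for topic in weights)

weights = {"corporate_reference_1": 3, "corporate_reference_2": 3,
           "corporate_reference_3": 3, "key_staff": 14}   # total weight 23 -> 230 max points
scores = {"corporate_reference_1": 6, "corporate_reference_2": 5,
          "corporate_reference_3": 7, "key_staff": 6}     # hypothetical scores

total = weighted_total(scores, weights)
print(total, "advances" if total >= THRESHOLD else "eliminated at tier one")
# prints: 138 eliminated at tier one
```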
Providing precise guidance regarding the relative value of any score, regarding the imposition of a baseline score or starting point, from which evaluators were required to adjust their scores, instruction as to weighing of scores and other indicia of precise structure to the evaluators would be more appropriate where the evaluators themselves were not sophisticated, trained and experienced in the type of computer system desired and in the field of information technology and data retrieval generally. The evaluation team, however, was shown to be experienced and trained in information technology and data retrieval and experienced in complex computer system procurement. Mr. Barker is the former Bureau Chief of Procurement for the Department of Management Services. He has 34 years of procurement experience and has participated in many procurements for technology systems similar to CAMS CE. He established that the scoring system used by the Department at this initial stage of the procurement process is a common method. It is customary to leave the numerical value of scores to the discretion of the evaluators based upon each evaluator's experience and review of the relevant documents. According wider discretion to evaluators in such a complex procurement process tends to produce more objective scores. The evaluators scored past corporate experience (references) and key staff according to the criteria in Table 8.2 of the ITN. The evaluators then used different scoring strategies within the discretion accorded to them by the 0 to 10 point scale. Mr. Bankirer established a midrange of 4 to 6 and added or subtracted points based upon how well the proposal addressed the CAMS CE requirements. Evaluator Ellis used 6 as his baseline and added or subtracted points from there. Dr. Addy evaluated the proposals as a composite without a starting point. Mr. Doolittle started with 5 as an average score and then added or subtracted points. Mr. Esser gave points for each attribute in Table 8.2, for key staff, and added the points for the score. For the corporate reference criterion, he subtracted a point for each attribute the reference lacked. As each of the evaluators used the same methodology for the evaluation of each separate vendor's proposal, each vendor was treated the same and thus no specific prejudice to KPMG was demonstrated. Corporate Reference Evaluation: KPMG submitted three corporate references: Duke University Health System (Duke), SSM Health Care (SSM), and Armstrong World Industries (Armstrong). Mr. Bankirer gave the Duke reference a score of 6, the SSM reference a score of 5 and the Armstrong reference a score of 7. Michael Strange, the KPMG Business Development Manager, believed that 6 was a low score. He contended that an average score of 7 was required to make the 150-point threshold for passage to the next level of the ITN consideration. Therefore, a score of 7 would represent minimum compliance, according to Mr. Strange. However, neither the ITN nor the Source Selection Team Guide identified 7 as a minimally compliant score. Mr. Strange's designation of 7 as a minimally compliant score is not provided for in the specifications or the scoring instructions. Mr. James Focht, Senior Manager for KPMG testified that 6 was a low score, based upon the quality of the reference that KPMG had provided. However, Mr. 
Bankirer found that the Duke reference was actually a small-sized project, with little system development attributes, and that it did not include information regarding a number of records, the data base size involved, the estimated and actual costs and attributes of data base conversion. Mr. Bankirer determined that the Duke reference had little similarity to the CAMS CE procurement requirements and did not provide training or data conversion as attributes for the Duke procurement which are attributes necessary to the CAMS CE procurement. Mr. Strange and Mr. Focht admitted that the Duke reference did not specifically contain the element of data conversion and that under the Table 8.2, omission of this information would negatively affect the score. Mr. Focht admitted that there was no information in the Duke Health reference regarding the number of records and the data base size, all of which factors diminish the quality of Duke as a reference and thus the score accorded to it. Mr. Strange opined that Mr. Bankirer had erred in determining that the Duke project was a significantly small sized project since it only had 1,500 users. Mr. Focht believed that the only size criterion in Table 8.2 was the five million dollar cost threshold, and, because KPMG indicated that the project cost was greater than five million dollars, that KPMG had met the size criterion. Mr. Focht believed that evaluators had difficulty in evaluating the size of the projects in the references due to a lack of training. Mr. Focht was of the view that the evaluator should have been instructed to make "binary choices" on issues such as size. He conceded, however, that evaluators may have looked at other criteria in Table 8.2 to determine the size of the project, such as database size and number of users. However, the corporate references were composite scores by the evaluators, as the ITN did not require separate scores for each factor in Table 8.2. Therefore, Mr. Focht's focus on binary scoring for size, to the exclusion of other criteria, mis-stated the objective of the scoring process. The score given to the corporate references was a composite of all of the factors in Table 8.2, and not merely monetary value size. Although KPMG apparently contends that size, in terms of dollar value, is the critical factor in determining the score for a corporate reference, the vendor questions and answers provided at the pre-proposal conference addressed the issue of relevant criteria. Question 40 of the vendor questions and answers, Volume II, did not single out "project greater than five million dollars" as the only size factor or criterion. QUESTION: Does the state require that each reference provided by the bidder have a contract value greater than $5 million; and serve a large number of users; and include data conversion from a legacy system; and include training development? ANSWER: To get a maximum score for past corporate experience, each reference must meet these criteria. If the criteria are not fully met, the reference will be evaluated, but will be assigned a lower score depending upon the degree to which the referenced project falls short of these required characteristics. Therefore, the cost of the project is shown to be only one component of a composite score. Mr. Strange opined that Mr. Bankirer's comment regarding the Duke reference, "little development, mostly SAP implementation" was irrelevant. Mr. 
Strange's view was that the CAMS CE was not a development project and Table 8.2 did not specifically list development as a factor on which proposals would be evaluated. Mr. Focht stated that in his belief Mr. Bankirer's comment suggested that Mr. Bankirer did not understand the link between the qualifications in the reference and the nature of KPMG's proposal. Both Strange and Focht believe that the ITN called for a COTS/ERP solution. Mr. Focht stated that the ITN references a COTS/ERP approach numerous times. Although many of the references to COTS/ERP in the ITN also refer to development, Mr. Strange also admitted that the ITN was open to a number of approaches. Furthermore, both the ITN and the Source Selection Team Guide stated that the items in Table 8.2 are not all inclusive and that the evaluators may look to other factors in the ITN. Mr. Bankirer noted that there is no current CSE COTS/ERP product on the market. Therefore, some development will be required to adapt an off-the-shelf product to its intended use as a child support case management system. Mr. Bankirer testified that the Duke project was a small-size project with little development. Duke has three sites while CSE has over 150 sites. Therefore, the Duke project is smaller than CAMS. There was no information provided in the KPMG submittal regarding data base size and number of records with regard to the Duke project. Mr. Bankirer did not receive the information he needed to infer a larger sized-project from the Duke reference. Mr. Esser also gave the Duke reference a score of 6. The reference did not provide the data base information required, which was the number of records in the data base and the number of "gigabytes" of disc storage to store the data, and there was no element of legacy conversion. Dr. Addy gave the Duke reference a score of 5. He accepted the dollar value as greater than five million dollars. He thought that the Duke Project may have included some data conversion, but it was not explicitly stated. The Duke customer evaluated training so he presumed training was provided with the Duke project. The customer ratings for Duke were high as he expected they would be, but similarity to the CAMS CE system was not well explained. He looked at size in terms of numbers of users, number of records and database size. The numbers that were listed were for a relatively small-sized project. There was not much description of the methodology used and so he gave it an overall score of 5. Mr. Doolittle gave the Duke reference a score of 6. He felt that it was an average response. He listed the number of users, the number of locations, that it was on time and on budget, but found that there was no mention of data conversion, database size or number of records. (Consistent with the other evaluators). A review of the evaluators comments makes it apparent that KPMG scores are more a product of a paucity of information provided by KPMG corporate references instead of a lack of evaluator knowledge of the material being evaluated. Mr. Ellis gave a score of 6 for the Duke reference. He used 6 as his baseline. He found the required elements but nothing more justifying in his mind raising the score above 6. Mr. Focht and Mr. Strange expressed the same concerns regarding Bankirer's comment, regarding little development, for the SSM Healthcare reference as they had for the Duke Health reference. However, both Mr. Strange and Mr. Focht admitted that the reference provided no information regarding training. Mr. 
Strange admitted that the reference had no information regarding data conversion. Training and data conversion are criteria contained in Table 8.2. Mr. Strange also admitted that KPMG had access to Table 8.2 before the proposal was submitted and could have included the information in the proposal. Mr. Bankirer gave the SSM reference a score of 5. He commented that the SAP implementation was not relevant to what the Department was attempting to do with the CAMS CE system. CAMS CE does not have any materials management or procurement components, which was the function of the SAP components and the SSM reference procurement or project. Additionally, there was no training indicated in the SSM reference. Mr. Esser gave the SSM reference a score of 3. His comments were "no training provided, no legacy data conversion, project evaluation was primarily for SAP not KPMG". However, it was KPMG's responsibility in responding to the ITN to provide project information concerning a corporate reference in a clear manner rather than requiring that an evaluator infer compliance with the specifications. Mr. Focht believed that legacy data conversion could be inferred from the reference's description of the project. Mr. Strange opined that Mr. Esser's comment was inaccurate as KPMG installed SAP and made the software work. Mr. Esser gave the SSM reference a score of 3 because the reference described SAP's role, but not KPMG's role in the installation of the software. When providing information in the reference SSM gave answers relating to SAP to the questions regarding system capability, system usability, system reliability but did not state KPMG's role in the installation. SAP is a large enterprise software package. This answer created an impression of little KPMG involvement in the project. Dr. Addy gave the SSM reference a score of 6. Dr. Addy found that the size was over five million dollars and customer ratings were high except for a 7 for usability with reference to a "long learning curve" for users. Data conversion was implied. There was no strong explanation of similarity to CAMS CE. It was generally a small-sized project. He could reason some similarity into it, even though it was not well described in the submittal. Mr. Doolittle gave the SSM reference a score of 6. Mr. Doolittle noted, as positive factors, that the total cost of the project was greater than five million dollars, that it supported 24 sites and 1,500 users as well "migration from a mainframe." However, there were negative factors such as training not being mentioned and a long learning curve for its users. Mr. Ellis gave a score of 6 for SSM, feeling that KPMG met all of the requirements but did not offer more than the basic requirements. Mr. Strange opined that Mr. Bankirer, Dr. Addy and Mr. Ellis (evaluators 1, 5 and 4) were inconsistent with each other in their evaluation of the SSM reference. He stated that this inconsistency showed a flaw in the evaluation process in that the evaluators did not have enough training to uniformly evaluate past corporate experience, thereby, in his view, creating an arbitrary evaluation process. Mr. Bankirer gave the SSM reference a score of 5, Ellis a score of 6, and Addy a score of 6. Even though the scores were similar, Mr. Strange contended that they gave conflicting comments regarding the size of the project. Mr. 
Ellis stated that the size of the project was hard to determine as the cost was listed as greater than five million dollars and the database size given, but the number of records was not given. Mr. Bankirer found that the project was low in cost and Dr. Addy stated that over five million dollars was a positive factor in his consideration. However, the evaluators looked at all of the factors in Table 8.2 in scoring each reference. Other factors that detracted from KPMG's score for the SSM reference were: similarity to the CAMS system not being explained, according to Dr. Addy; no indication of training (all of the evaluators); the number of records not being provided (evaluator Ellis); little development shown (Bankirer) and usability problems (Dr. Addy). Mr. Strange admitted that the evaluators may have been looking at other factors besides the dollar value size in order to score the SSM reference. Mr. Esser gave the Armstrong reference a score of 6. He felt that the reference did not contain any database information or cost data and that there was no legacy conversion shown. Dr. Addy also gave Armstrong a score of 6. He inferred that this reference had data conversion as well as training and the high dollar volume which were all positive factors. He could not tell, however, from the project description, what role KPMG actually had in the project. Mr. Ellis gave a score of 7 for the Armstrong reference stating that the Armstrong reference offered more information regarding the nature of the project than had the SSM and Duke references. Mr. Bankirer gave KPMG a score of 7 for the Armstrong reference. He found that the positive factors were that the reference had more site locations and offered training but, on the negative side, was not specific regarding KPMG's role in the project. Mr. Focht opined that the evaluators did not understand the nature of the product and services the Department was seeking to obtain as the Department's training did not cover the nature of the procurement and the products and services DOR was seeking. However, when he made this statement he admitted he did not know the evaluators' backgrounds. In fact, Bankirer, Ellis, Addy and Doolittle were part of a group that developed the ITN and clearly knew what CSE was seeking to procure. Further, Mr. Esser stated that he was familiar with COTS and described it as a commercial off-the-shelf software package. Mr. Esser explained that an ERP solution or Enterprise Resource Plan is a package that is designed to do a series of tasks, such as produce standard reports and perform standard operations. He did not believe that he needed further training in COTS/ERP to evaluate the proposals. Mr. Doolittle was also familiar with COTS/ERP and believed, based on the amount of funding, that it was a likely response to the ITN. Dr. Addy's doctoral dissertation research was in the area of software re-use. COTS is one of the components that comprise a development activity and re-use. He became aware during his research of how COTS packages are used in software engineering. He has also been exposed to ERP packages. ERP is only one form of a COTS package. In regard to the development of the ITN and the expectations of the development team, Dr. Addy stated that they were amenable to any solution that met the requirements of the ITN. They fully expected the compliance solutions were going to be comprised of mostly COTS and ERP packages. Furthermore, the ITN in Section 1.1, on page 1-2 states, ". . . 
FDOR will consider an applicable Enterprise Resource Planning (ERP) or Commercial Off the Shelf (COTS) based solution in addition to custom development." Clearly, this ITN was an open procurement, and to train evaluators on only one of the alternative solutions would have biased the evaluation process. Mr. Doolittle gave each of the KPMG corporate references a score of 6. Mr. Strange and Mr. Focht questioned the appropriateness of these scores as the corporate references themselves gave KPMG average ratings of 8.3, 8.2 and 8.0. However, Mr. Focht admitted that Mr. Doolittle's comments regarding the corporate references were a mixture of positive and negative comments. Mr. Focht believed, however, that because the reference corporations considered the same factors in providing ratings on the reference forms, it was inconsistent for Mr. Doolittle to separately evaluate the same factors that the corporations had already rated. However, there is no evidence in the record that KPMG provided Table 8.2 to the companies completing the reference forms and that the companies consulted the table when completing their reference forms. Therefore, KPMG did not prove that it had taken all measures available to it to improve its scores. Moreover, Mr. Focht's criticism would impose a requirement on Mr. Doolittle's evaluation which was not supported by the ITN. Mr. Focht admitted that there were no criteria in the ITN which limited the evaluator's discretion in scoring to the ratings given to the corporate references by those corporate reference customers. All of the evaluators used Table 8.2 as their guide for scoring the corporate references. As part of his evaluation, Dr. Addy looked at the methodology used by the proposers in each of the corporate references to implement the solution for that reference company. He was looking at methodology to determine its degree of similarity to CAMS CE. While methodology is not specifically listed in Table 8.2 as a measure of similarity to CAMS, Table 8.2 states that the list is not all inclusive. Clearly, methodology is a measure of similarity and therefore is not an arbitrary criterion. Moreover, as Dr. Addy used the same process and criteria in evaluating all of the proposals, there was no prejudice to KPMG by use of this criterion since all vendors were subjected to it. Mr. Strange stated that KPMG appeared to receive lower scores for SAP applications than other vendors. For example, evaluator 1 gave a score of 7 to Deloitte's reference for Suntax. Suntax is an SAP implementation. It is difficult to draw comparisons across vendors, yet the evaluators consistently found that KPMG references lacked key elements such as data conversion, information on starting and ending costs, and information on database size. All of these missing elements contributed to a reduction in KPMG's scores. Nevertheless, KPMG received average scores of 5.5 for Duke, 5.7 for SSM and 6.3 for Armstrong, compared with the score of 7 received by Deloitte for Suntax. There is a gap of only 1.5 to 0.7 points between Deloitte's and KPMG's scores for SAP implementations, despite the deficient information within KPMG's corporate references. Key Staff Criterion: The proposals contain a summary of the experience of key staff and attached résumés. KPMG's proposed key staff person for Testing Lead was Frank Traglia. Mr. Traglia's summary showed that he had 25 years' experience, respectively, in the areas of child support enforcement, information technology, project management, and testing.
Strange and Focht admitted that Traglia's résumé did not specifically list any testing experience. Mr. Focht further admitted that it was not unreasonable for evaluators to give the Testing Lead a lower score due to the lack of specific testing information in Traglia's résumé. Mr. Strange explained that the résumé was from a database of résumés. The summary sheet, however, was prepared by those KPMG employees who prepared the proposal. All of the evaluators resolved the conflicting information between the summary sheet and the résumé by crediting the résumé as more accurate. Each evaluator thought that the résumé was more specific and expected to see specific information regarding testing experience on the résumé for someone proposed as the Testing Lead person. Evaluators Addy and Ellis gave scores to the Testing Lead criterion of 4 and 5. Mr. Ron Vandenberg (evaluator 8) gave the Testing Lead a score of 9. Mr. Vandenberg was the only evaluator to give the Testing Lead a high score. The other evaluators gave the Testing Lead an average score of 4.2. The Vandenberg score thus appears anomalous. All of the evaluators gave the Testing Lead a lower score as it did not specifically list testing experience. Dr. Addy found that the summary sheet listed 25-years of experience in child support enforcement, information technology, and project management and system testing. As he did not believe this person had 100 years of experience, he assumed those experience categories ran concurrently. A strong candidate for Testing Lead should demonstrate a combination of testing experience, education and certification, according to Dr. Addy. Mr. Doolittle also expected to see testing experience mentioned in the résumé. When evaluating the Testing Lead, Mr. Bankirer first looked at the team skills matrix and found it interesting that testing was not one of the categories of skills listed for the Testing Lead. He then looked at the summary sheet and résumé from Mr. Traglia. He gave a lower score to Traglia as he thought that KPMG should have put forward someone with demonstrable testing experience. The evaluators gave a composite score to key staff based on the criteria in Table 8.2. In order to derive the composite score that he gave each staff person, Mr. Esser created a scoring system wherein he awarded points for each attribute in Table 8.2 and then added the points together to arrive at a composite score. Among the criteria he rated, Mr. Esser awarded points for CSE experience. Mr. Focht and Mr. Strange contended that since the term CSE experience is not actually listed in Table 8.2 that Mr. Esser was incorrect in awarding points for CSE experience in his evaluation. Table 8.2 does refer to relevant experience. There is no specific definition provided in Table 8.2 for relevant experience. Mr. Focht stated that relevant experience is limited to COTS/ERP experience, system development, life cycle and project management methodologies. However, these factors are also not listed in Table 8.2. Mr. Strange limited relevance to experience in the specific role for which the key staff person was proposed. This is a limitation that also is not imposed by Table 8.2. CSE experience is no more or less relevant than the factors posited by KPMG as relevant experience. Moreover, KPMG included a column in its own descriptive table of key staffs for CSE experience. KPMG must have seen this information as relevant if it included it in its proposal as well. 
Inclusion of this information in its proposal demonstrated that KPMG must have believed CSE experience was relevant at the time it submitted its proposal. Mr. Strange held the view that, at the bidders' conference, in a reply to a vendor question, the Department representative stated that CSE experience was not required. Therefore, Mr. Esser could not use such experience to evaluate key staff. Question 47 of the Vendor Questions and Answers, Volume 2 stated: QUESTION: In scoring the Past Corporate Experience section, Child Support experience is not mentioned as a criterion. Would the State be willing to modify the criteria to include at least three Child Support implementations as a requirement? ANSWER: No. However, a child support implementation that also meets the other characteristics (contract value greater than $5 million, serves a large number of users, includes data conversion from a legacy system and includes training development) would be considered "similar to CAMS CE." The Department's statement involved the scoring of corporate experience, not key staff. It was inapplicable to Mr. Esser's scoring system. Mr. Esser gave the Training Lead a score of 1. According to Mr. Esser, the Training Lead did not have a ten-year résumé, for which he deducted one point. The Training Lead had no specialty certification or extensive experience, had no child support experience, and received no points for those attributes. Mr. Esser added one point for the minimum of four years of specific experience and one point for the relevance of his education. Mr. Esser gave the Project Manager a score of 5. The Project Manager had a ten-year résumé and the required references and received a point for each. He gave two points for exceeding the minimum required information technology experience. The Project Manager had twelve years of project management experience, for a score of one point, but lacked certification, a relevant education, and child support enforcement experience, for which he was accorded no points. Mr. Esser gave the Project Liaison person a score of According to Mr. Focht, the Project Liaison should have received a higher score since she has a professional history of having worked for the state technology office. Mr. Esser, however, stated that she did not have four years of specific experience and did not have extensive experience in the field, although she had a relevant education. Mr. Esser gave the Software Lead person a score of 4. The Software Lead, according to Mr. Focht, had a long history of implementing SAP solutions for a wide variety of clients and should have received a higher score. Mr. Esser gave a point each for having a ten-year résumé, four years of specific experience in software, extensive experience in this area, and relevant education. According to Mr. Focht, the Database Lead had experience with database pools, including the Florida Retirement System, and should have received more points. Mr. Strange concurred with Mr. Focht, stating that Mr. Esser had given low scores to key staff and that the staff had good experience, which should have generated more points. Mr. Strange believed that Mr. Esser's scoring was inconsistent but provided no basis for that conclusion. Other evaluators also gave key staff positions scores of less than 7. Dr. Addy gave the Software Lead person a score of 5. The Software Lead had 16 years of experience and SAP development experience as positive factors but had no development lead experience.
He had a Bachelor of Science and a Master of Science in Mechanical Engineering and a Master's in Business Administration, which were not good matches in education for the role of a Software Lead person. Dr. Addy gave the Training Lead person a score of 5. The Training Lead had six years of consulting experience, a background in SAP consulting, and some training experience, but did not have certification or education in training. His educational background also was electrical engineering, which is not a strong background for a training person. Dr. Addy gave the subcontractor managers a score of 5. Two of the subcontractors did not list managers at all, which detracted from the score. Mr. Doolittle gave the Training Lead person a He believed that based on his experience and training it was an average response. Table 8.2 contained an item under which a proposer could have points deducted from a score if the key staff person's references were not excellent. The Department did not check references at this stage in the evaluation process. As a result, the evaluators simply did not consider that item when scoring. No proposer's score was adversely affected thereby. KPMG contends that checking references would have given the evaluators greater insight into the work done by those individuals and their relevance and capabilities on the project team. Mr. Focht admitted, however, that any claimed effect on KPMG's score is conjectural. Mr. Strange stated that without reference checks, information in the proposals could not be validated, but he provided no basis for his opinion that reference checking was necessary at this preliminary stage of the evaluation process. Dr. Addy stated that the process called for checking references during the timeframe of oral presentations. They did not expect the references to change any scores at this point in the process. KPMG asserted that references should be checked to ascertain the veracity of the information in the proposals. However, even if the information in some other proposal was inaccurate, it would not change the outcome for KPMG. KPMG would still not have the required number of points to advance to the next evaluation tier. Divergency in Scores The Source Selection Plan established a process for resolving divergent scores. Any item receiving scores with a range of 5 or more was determined to be divergent. The plan provided that the Coordinator identify divergent scores and then report to the evaluators that there were divergent scores for that item. The Coordinator was precluded from telling an evaluator whether his score was the divergent score, i.e., the highest or lowest score. Evaluators would then review that item, but were not required to change their scores. The purpose of the divergent score process was to have evaluators review their scores to see if there were any misperceptions or errors that skewed the scores. The team wished to avoid having any influence on the evaluators' scores. Mr. Strange testified that the Department did not follow the divergent score process in the Source Selection Plan because the Coordinator did not tell the evaluators why the scores were divergent. Mr. Strange stated that the evaluator should have been informed which scores were divergent. The Source Selection Plan merely instructed the Coordinator to inform the evaluators of the reason why the scores were divergent. Scores were inherently divergent if there was a five-point score spread, so the reason for the divergence was self-explanatory.
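For illustration only, the five-point-spread rule can be expressed as a short sketch. The threshold is the one stated in the Source Selection Plan as described above; the item names, score sets, and data layout are hypothetical and are not taken from the evaluation record.

    # Hypothetical sketch of the Source Selection Plan's divergence rule:
    # an item is "divergent" when the spread between the highest and lowest
    # evaluator scores for that item is 5 or more.

    DIVERGENCE_THRESHOLD = 5

    def find_divergent_items(scores_by_item):
        """Return items whose score range (max - min) is 5 or more."""
        divergent = {}
        for item, scores in scores_by_item.items():
            spread = max(scores) - min(scores)
            if spread >= DIVERGENCE_THRESHOLD:
                divergent[item] = spread
        return divergent

    # Illustrative scores only; not figures from the record.
    example = {
        "Corporate Reference 1": [6, 5, 6, 6, 7],  # spread 2, not divergent
        "Testing Lead":          [4, 5, 4, 9, 4],  # spread 5, divergent
    }

    print(find_divergent_items(example))   # {'Testing Lead': 5}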
The evaluators stated that they scored the proposals, submitted the scores, and each received an e-mail from Debbie Stephens informing him that there were divergent scores and that they should consider re-scoring. None of the evaluators ultimately changed their scores. Mr. Esser's scores were the lowest of the divergent scores, but he did not re-score his proposals because he had spent a great deal of time on the initial scoring and felt his scores to be valid. Neither witness Focht nor witness Strange for KPMG provided more than speculation regarding the effect of the divergent scores on KPMG's ultimate score or any role the divergent scoring process may have had in KPMG not attaining the 150-point passage score. Deloitte - Suntax Reference: Susan Wilson, a Child Support Enforcement employee connected with the CAMS project, signed a reference for Deloitte Consulting regarding the Suntax System. Mr. Focht was concerned that the evaluators were influenced by her signature on the reference form. Mr. Strange further stated that having someone who is heavily involved in the project sign a reference did not appear to be fair. He was not able to state any positive or negative effect on KPMG of Wilson's reference for Deloitte, however. Evaluator Esser has met Susan Wilson but has had no significant professional interaction with her. He could not recall anything that he knew about Ms. Wilson that would favorably influence him in scoring the Deloitte reference. Dr. Addy also was not influenced by Wilson. Mr. Doolittle has worked with Wilson for only a very short time and did not know her well. He has also evaluated other proposals where department employees were a reference and was not influenced by that either. Mr. Ellis has known Wilson for only two to four months. Her signature on the reference form did not influence him either positively or negatively. Mr. Bankirer had not known Wilson for a long time when he evaluated the Suntax reference. He took the reference at face value and was not influenced by Wilson's signature. It is not unusual for someone within an organization to create a reference for a company that is competing for work to be done for the organization.

Recommendation Having considered the foregoing Findings of Fact, Conclusions of Law, the evidence of record and the pleadings and arguments of the parties, it is, therefore, RECOMMENDED that a final order be entered by the State of Florida Department of Revenue upholding the proposed agency action which disqualified KPMG from further participation in the evaluation process regarding the subject CAMS CE Invitation to Negotiate. DONE AND ENTERED this 26th day of September, 2002, in Tallahassee, Leon County, Florida. P. MICHAEL RUFF Administrative Law Judge Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-3060 (850) 488-9675 SUNCOM 278-9675 Fax Filing (850) 921-6847 www.doah.state.fl.us Filed with Clerk of the Division of Administrative Hearings this 26th day of September, 2002. COPIES FURNISHED: Cindy Horne, Esquire Earl Black, Esquire Department of Revenue Post Office Box 6668 Tallahassee, Florida 32399-0100 Robert S. Cohen, Esquire D. Andrew Byrne, Esquire Cooper, Byrne, Blue & Schwartz, LLC 1358 Thomaswood Drive Tallahassee, Florida 32308 Seann M. Frazier, Esquire Greenburg, Traurig, P.A. 101 East College Avenue Tallahassee, Florida 32302 Bruce Hoffmann, General Counsel Department of Revenue 204 Carlton Building Tallahassee, Florida 32399-0100 James Zingale, Executive Director Department of Revenue 104 Carlton Building Tallahassee, Florida 32399-0100

Florida Laws (3) 120.569, 120.57, 20.21
# 5
GEORGIOS GAITANTZIS vs FLORIDA ENGINEERS MANAGEMENT CORPORATION, 98-004757 (1998)
Division of Administrative Hearings, Florida Filed:Jacksonville, Florida Oct. 26, 1998 Number: 98-004757 Latest Update: Apr. 20, 1999

The Issue Did Petitioner pass the Mechanical Engineers Examination he took on April 24, 1998?

Findings Of Fact On April 24, 1998, Petitioner took the Mechanical Engineers Examination. He received a score of 69 for his effort. A passing score was 70. The Mechanical Engineers Examination was administered under Respondent's auspices. As alluded to in the preliminary statement, Petitioner challenged the score received on problem 146. The maximum score available for that problem was ten points. Petitioner received eight points. In accordance with the National Council of Examiners for Engineering and Surveying Principles in Practice of Engineering Examinations for spring 1998, score conversion table - discipline specific, Petitioner had a raw score of 47 which equated to a conversion of 69, to include the eight raw points received for problem 146. In addition, the examination provided a scoring plan for problem 146, which assigns scores in increments of two points from zero to ten. To pass, it would be necessary for Petitioner to receive an incremental increase of two points, raising his score from eight points to ten points. This would give him a raw score of 49 points. According to the score conversion table - discipline specific, that would give Petitioner 71 points. According to the scoring plan for problem 146 to receive the ten points, Petitioner would have to demonstrate: Exceptional competence (it is not necessary that the solution to the problem be perfect) generally complete, one math error. Shows in-depth understanding of cooling load calculation psychrometrics. Problem 146 required Petitioner to: Determine the required cooling coil supply air quantity (cfm) and the conditions (°F db and °F wb) of the air entering and leaving the coil." Petitioner was provided a psychrometric chart to assist in solving problem 146. The examination candidates were also allowed to bring reference sources to the examination to assist in solving the examination problems. Petitioner brought to the examination, the Air-Conditioning Systems Design Manual prepared by the ASHRAE 581-RP Project Team, Harold G. Lorsch, Principal Investigator. Petitioner used that manual to determine the wet-bulb temperature of the air entering the coil. In particular, he used an equation from the manual involving air mixtures. For that part of the solution he arrived at a temperature of 65.6°F wb. According to the problem solution by Respondent's affiliate testing agency, reference ASHRAE Fundamentals Chapter 26, the coil entering wet-bulb temperature taken from the psychrometric chart was 66.12°F wb. The scorer in grading Petitioner's solution for problem 146 placed an "x" by the answer provided 65.6°F wb and wrote the words "psychrometric chart." No other entry or comment was made by that scorer in initially reviewing the solution Petitioner provided for that problem. This led to the score of eight. The scoring plan for problem 146 for the April 1998 examination taken by Respondent equates the score of eight as: MORE THAN MINIMUM BUT LESS THAN EXCEPTIONAL COMPETENCE Either a) Provides correct solution to problem with two math errors or incorrect dry-bulb or wet-bulb for coil entering or leaving conditions or minor total cooling load error, or b) Provides correct solution to items c and d correctly and minor math errors in items a and b of Score 6 below. Petitioner was entitled to review the results of his examination. He exercised that opportunity on September 21, 1998, through a post-examination review session. Petitioner requested and was provided re-scoring of his solution to problem 146. 
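For illustration only, the pass/fail arithmetic described above can be restated in a short sketch. The only values used are those recited in these findings (a raw score of 47 converting to 69 with eight points on problem 146, and a raw score of 49 converting to 71 with ten points); the assumption that the remaining problems contributed 39 raw points is made solely so the totals match those findings.

    # Sketch of the scoring arithmetic described in the findings.  Only the
    # raw-to-converted pairs recited above (47 -> 69, 49 -> 71) are used.
    PASSING_CONVERTED_SCORE = 70
    conversion = {47: 69, 49: 71}   # partial, from the findings above

    def outcome(problem_146_points, other_raw_points=39):
        # 39 raw points from the other problems is an assumption made so
        # that 8/10 on problem 146 totals 47, as the findings describe.
        raw = other_raw_points + problem_146_points
        converted = conversion[raw]
        return raw, converted, converted >= PASSING_CONVERTED_SCORE

    print(outcome(8))    # (47, 69, False) - the score as originally graded
    print(outcome(10))   # (49, 71, True)  - the score with full credit on 146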
According to correspondence from the National Council of Examiners for Engineering and Surveying to the Florida Member Board from Patricia M. Simpson, Assistant Supervisor of scoring services, the score did not change through re-scoring. In this instance, the October 14, 1998, correspondence on re-scoring states, in relation to problem 146: Incorrect methodology used in calculating coil entering wet-bulb temperature. Incorrect coil entering wet-bulb temperature provided. No calculation provided for coil leaving temperature conditions. The coil leaving wet-bulb temperature in Respondent's proposed solution was 53.22°F wb, taken from the psychrometric chart. Petitioner's solution for the coil leaving wet-bulb temperature, taken from the psychrometric chart, was 53.3°F wb. At hearing, Respondent did not provide an expert to establish the basis for the point deduction in the original score and the re-scoring of Petitioner's solution for problem 146. Moreover, Respondent did not present expert witnesses to defend the commentary, that is, the preferred written solution in its examination materials. Consequently, Respondent's preferred solution constitutes hearsay about which no facts may be found accepting the validity of Respondent's proposed solution, as opposed to merely reporting that information.1 By contrast, Petitioner provided direct evidence concerning the solution provided for problem 146 in response to the criticisms of his solution, which were unsupported by competent evidence at hearing. More importantly, the criticisms were responded to at hearing by Geoffrey Spencer, P.E., a mechanical engineer licensed to practice in Florida, who was accepted as an expert in that field for purposes of the hearing. As Petitioner explained at hearing, he used the Air-Conditioning Systems Design Manual equation to arrive at the coil entering wet-bulb temperature, which he believed would provide the answer as readily as the use of the psychrometric chart. (Although the psychrometric chart had been provided to Petitioner for solving problem 146, the instructions for that problem did not prohibit the use of the equation or formula.) Petitioner in his testimony pointed out the equivalency of the use of the psychrometric chart and the use of the equation. Petitioner deemed the equation to be more accurate than the psychrometric chart. Petitioner had a concern that if the answer on the coil entering wet-bulb temperature was inaccurate, this would present difficulty in solving the rest of problem 146 because the error would be carried forward. Petitioner pointed out in his testimony that the solution for determining the coil entering wet-bulb temperature was set out in his answer. Deriving the answer by use of the formula was more time-consuming but less prone to error, according to Petitioner's testimony. Petitioner points out in his testimony that the answer he derived, 65.6°F wb, is not significantly different from Respondent's proposed solution of 66.12°F wb. (The instructions concerning problem 146 did not explain to what decimal point of a degree the candidate had to respond in order to get full credit for that portion of the solution to the problem.) Petitioner in his testimony concerning his solution for the coil leaving wet-bulb temperature indicated that the calculation for arriving at that temperature was taken from the psychrometric chart and is sufficiently detailed to be understood.
Further, Petitioner testified that the degree of accuracy in which the answer was given as 53.3°F wb, as opposed to Respondent's proposed solution of 53.22°F wb, is in recognition of the use of the psychrometric chart. Petitioner questions whether the proposed solution by Respondent, two decimal points, could be arrived at by the use of the psychrometric chart. In relation to the calculation of the coil entering wet-bulb temperature, Mr. Spencer testified that the formula from the Air-Conditioning Systems Design Manual or the psychrometric chart could have been used. Moreover, Mr. Spencer stated his opinion that the solution for coil entering wet-bulb temperature of 65.6°F wb by Petitioner is sufficiently close to Respondent's proposed solution of 66.12°F wb to be acceptable. Mr. Spencer expressed the opinion that Petitioner had correctly used the formula from the manual in solving the coil entering wet-bulb temperature. Mr. Spencer expressed the opinion that the psychrometric chart is an easier source for obtaining the solution than the use of the formula from the manual. In Mr. Spencer's opinion, the formula shows a more basic knowledge of the physics involved than the use of the psychrometric chart would demonstrate. In relation to the coil leaving wet-bulb temperature, Mr. Spencer expressed the opinion that Petitioner had adequately explained the manner of deriving the answer. Further, Mr. Spencer expressed the opinion that the answer derived was sufficiently accurate. The testimony of Petitioner and opinion of Mr. Spencer is unrefuted and accepted.
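For readers unfamiliar with the calculation at issue, the general mixed-air technique Petitioner described, in which the condition of the air entering the coil is approximated by flow-weighting the conditions of the airstreams being mixed and the wet-bulb temperature is then obtained from a psychrometric chart or routine, can be sketched as follows. This is a minimal, hypothetical illustration; it does not reproduce the equation from the Air-Conditioning Systems Design Manual, the examination's given conditions, or either party's figures, and the simple weighted averaging of dry-bulb temperature and humidity ratio is an approximation.

    # Hypothetical sketch of a mixed-air calculation of the kind described in
    # the findings: two airstreams (e.g., return air and outdoor air) are
    # combined, and the mixed condition is approximated as a flow-weighted
    # average of the dry-bulb temperatures and humidity ratios.  The wet-bulb
    # temperature of the mixture would then be read from a psychrometric
    # chart (or computed with a psychrometric routine), which this sketch
    # does not attempt.

    def mixed_air_state(cfm_1, db_1, w_1, cfm_2, db_2, w_2):
        """Return (dry-bulb degF, humidity ratio) of a two-stream mixture."""
        total_cfm = cfm_1 + cfm_2
        mixed_db = (cfm_1 * db_1 + cfm_2 * db_2) / total_cfm
        mixed_w = (cfm_1 * w_1 + cfm_2 * w_2) / total_cfm
        return mixed_db, mixed_w

    # Illustrative values only; not the examination's data.
    return_air = (8000, 75.0, 0.0093)    # cfm, degF db, humidity ratio
    outside_air = (2000, 95.0, 0.0142)
    db, w = mixed_air_state(*return_air, *outside_air)
    print(round(db, 1), round(w, 4))     # 79.0 0.0103 -> then read degF wb from the chart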

Recommendation Upon consideration of the facts found and conclusions of law reached, it is RECOMMENDED: That a final order be entered which finds that Petitioner passed the Florida Board of Professional Engineers April 24, 1998, Mechanical Engineers Examination with a score of 71. DONE AND ENTERED this 22nd day of February, 1999, in Tallahassee, Leon County, Florida. CHARLES C. ADAMS Administrative Law Judge Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-3060 (850) 488-9675 SUNCOM 278-9675 Fax Filing (850) 921-6847 www.doah.state.fl.us Filed with the Clerk of the Division of Administrative Hearings this 22nd day of February, 1999.

Florida Laws (2) 120.569, 120.57
# 6
KNAUS SYSTEMS, INC. OF FLORIDA vs DEPARTMENT OF CHILDREN AND FAMILY SERVICES, 99-001230BID (1999)
Division of Administrative Hearings, Florida Filed:Tallahassee, Florida Mar. 19, 1999 Number: 99-001230BID Latest Update: Sep. 23, 1999

The Issue The issue is whether Respondent's proposed decision to award a computer-maintenance contract to Intervenor is clearly erroneous, contrary to competition, arbitrary, or capricious.

Findings Of Fact On November 20, 1998, Respondent issued a Request for Proposals titled "The Maintenance of Network Terminal Equipment" (RFP). The purpose of the RFP is to obtain a three-year maintenance service contract for video display terminals, printers, microcomputers, and related components located throughout the State of Florida. The RFP seeks a three-year, labor-intensive contract projected at the hearing to be worth between $3 million and $3.5 million. RFP Section 6.1 promises a "comprehensive, fair, and impartial evaluation" of all timely submitted offers by an "Evaluation Committee," which is an undefined term. Nothing in the RFP describes the Evaluation Committee, in terms of number or qualifications, except that repeated references to "each evaluator" imply the existence of more than one member. Section 6.1.A identifies four evaluation categories: Corporate Experience (100 points), Project Staff (200 points), Minimum Maintenance Service Requirements (200 points), and Cost (500 points). The category at issue in this case is Corporate Experience. Section 6.1.B states that the Procurement Officer will evaluate whether each offer meets the "fatal criteria." The only relevant fatal criterion is 10, which states: "Are there three (3) years of financial statements for the proposer and any proposed subcontractors, TAB 6?" RFP, Section 6.3.A.10. The RFP does not define "financial statements," nor does it require audited financial statements. The Procurement Officer bore the responsibility for determining whether offers complied with the fatal criteria, and he testified that he applied this fatal criterion by checking for a balance sheet, income statement, and statement of changes in financial position. Tr., p. 84. However, the Procurement Officer, acknowledging the absence of any definition of "financial statements," testified that he would accept "even a balance sheet and income statement," which is exactly what he received from Intervenor. Tr., p. 99. The Procurement Officer added: "I didn't throw out anyone for lack of submitting any other financial statements that are commonly included in audited financial statements." Id. Section 6.1.B also provides that offers meeting the "fatal criteria" will be scored by the Evaluation Committee, which will score each responsive offer "based on the evaluation criteria provided in Section 6.3 " Regarding Corporate Experience, Section 6.1.C.3 states: "The criteria, which will be used in evaluating Corporate Experience, are listed in the Rating Sheet, see Section 6.3.B." Section 6.3 states that the non-fatal criteria for each of the four categories are listed on the Rating Sheet, which is part of the RFP. Each evaluator must assign a score from 0-4 for each of these criteria. The meaning of each point value is as follows: 0 = no value; proposer has no capability or has ignored this area 1 = poor; proposer has little or no direct capability or has not covered this area, but there is some indication of marginal capability 2 = acceptable; proposer has adequate capability 3 = good; proposer has a good approach with above average capability 4 = superior; proposer has excellent capability and an outstanding approach Section 6.3.B lists 40 evaluation criteria divided among three categories. (The fourth category is Cost; its scoring methodology is irrelevant to this case.) Project Staff and Minimum Maintenance Service Requirements contain a total of 37 criteria. Corporate Experience contains only three criteria. 
The three criteria of Corporate Experience are: Does the proposal present financial information that supports the proposer's ability to perform this work required by this Request for Proposal? (RFP section 5.6.B) Is the ratio of current assets to current liabilities at least 2:1? Is the debt to net worth ratio (total liabilities/net worth) equal to or less than 1? Has the cash/operating capital exceeded projected monthly operating expenses over the past three years? Does the proposer have sufficient financial resources to complete the project? Does the proposal document the proposer's experience, organization, technical qualifications, skills, and facilities? (RFP section 5.6.B) Is the experience supplied (including subcontractor experience) relevant? Has the proposer (including any subcontractors) previously provided the maintenance services required by the department? Have the proposer and any subcontractors previously worked together? Does the proposer[-]supplied organization chart demonstrate the capability to perform well on this project? Have the projects supplied by the proposer or for any subcontractors been performed recently enough to be relevant? What percentage of the work is to be done by the proposer and each subcontractor? Does the proposal present maintenance projects similar to the requirements of this RFP as references? (RFP section 5.6.B) Is each project described in sufficient detail so that the department is able to judge its complexity and relevance? Are projects similar or greater in scope? How broad is the range of equipment that was serviced? How current is the project? The challenge focuses exclusively on the first criterion under Corporate Experience. On this criterion, the evaluators gave Intervenor an average of 3.0 and Petitioner an average of 2.0. The Procurement Officer prepared an Evaluation Manual for the evaluators. The Evaluation Manual states: Scoring should reflect the evaluator's independent evaluation of the proposal's response to each evaluation criterion. Following each evaluation criterion are considerations each evaluator may use in determining an evaluation score. These considerations are only suggestions. The considerations provided are not intended to be an all-inclusive list and will not be scored independently for the criterion that they address. Joint Exhibit 8, page 4. Nothing among the documents given prospective offerors informed them explicitly that the evaluators were not required to consider any of the bulleted items listed under each of the criteria. However, the Procurement Officer conducted a Proposers' Conference, at which he stated that the bullets under all of the criteria were strictly suggestions that the evaluators were free to ignore. Tr., p. 115. The Procurement Officer provided this information in response to a question asked by a representative of Intervenor. Joint Exhibit 23, pp. 63-64. The RFP did not require attendance at the Proposers' Conference, nor did Respondent publish the response following the conference. The three bullets under the first criterion under Corporate Experience appear in Respondent's manual titled "Developing a Request for Proposal (RFP)." The exhibit in evidence is a copy of the manual issued on April 1, 1998, but this manual has been in existence well prior to that. The manual suggests that the RFP include a criterion for evaluating the adequacy of the offeror's financial resources. 
Under the category of reviewing financial statements, the manual lists the first three bullets, as well as other considerations. However, nothing in the manual requires the inclusion of these bulleted items as scoring criteria or the consideration of these bulleted items within one or more scoring criteria. The rating sheets contain a space for comments. The following are the scores and comments from each of the five evaluators for the challenged criterion regarding the financial resources of Petitioner and Intervenor. Evaluator 1 assigned Intervenor a 2, noting "high debt, loss in income 1998." Evaluator 1 assigned Petitioner a 1, noting "financial information limited. Total assets less than value of contract." Evaluators 2 and 4 each assigned Intervenor a 3 and Petitioner a 2 without any comments. Evaluator 3 assigned Intervenor a 3, noting "Exceeds all requirements." Evaluator 3 assigned Petitioner a 3, noting "financials appear to meet this requirement. However, the replacement parts-inventory [sic] dollars seem very low in relations [sic] to the mentioned state contracts that are currently existing [sic]-[.]" Evaluator 5 assigned Intervenor a 4 without any comments, but cited the presence of a 10-K report in response to where he found the financial information. Evaluator 5 assigned Petitioner a 1 originally, noting "asset/liabilities 1:1." However, he changed his score to a 2 and lined out his comment. In general, the five evaluators have technical backgrounds in telecommunications or information management. They do not have significant backgrounds in business or financial matters. Evaluator 1 has a limited financial background, having taken a couple of accounting courses in college. His testimony during his deposition was evasive. Unwilling or unable at the deposition to discuss the financial statements substantively, Evaluator 1 claimed not to recall nearly all material aspects of the evaluation that had taken place about four months earlier. Evaluators 2 and 3 testified at the hearing. Evaluator 2 owns a company, although he has never read the financial statements of any company besides his own. However, he believes that he can read financial statements to determine if a corporation is profitable. On the other hand, Evaluator 2 admits that he does not know how to calculate the ratio of current assets to liabilities from the financial statements or know the difference between a balance sheet and an income statement. Evaluator 2 also admits that he does not know the value of determining whether the ratio of debt to net worth is less than 1. Evaluator 2 concedes that he does not know how to determine if an offeror had sufficient cash to complete the contract. However, during his deposition, Evaluator 2 testified that he checked the financial statements for cash on hand and monthly income, although he admitted that he did not know how much cash a company would need to perform the contract. Evaluator 2 also admitted in his deposition that, in giving Intervenor a 3 and Petitioner a 2, he did not compare the net worth or the ratio of cash to operating expenses of the two offerors. Evaluator 3 testified that he has some relevant education in college, but he has not previously examined financial statements for Respondent. Like Evaluator 2, Evaluator 3 testified that he did not compute any of the bulleted ratios and was incapable of calculating the current ratio described in the first bullet or the other ratios described in the second and third bullets.
Evaluator 3 conceded that he did not determine whether the offerors had sufficient resources to complete the project. In his deposition, Evaluator 3 admitted that his review of the financial criterion was largely confined to checking to see if an offeror's assets exceeded its liabilities. Evaluator 3 conceded that he did not compare debt loads. In two respects, Evaluator 3 approached the evaluation differently from his counterparts. First, he assumed that someone had already determined that the offerors were financially able to service the contract. Second, evidently relying on information not contained in the offers or RFP, Evaluator 3 determined that Petitioner's parts inventory was too low. In his deposition, Evaluator 4 stated that he felt that it was optional whether he had to consider whether the financial information supported an offeror's ability to perform the contract. In rating Intervenor, Evaluator 4 admitted that he was unaware of its debt load. Evaluator 4 testified in his deposition that he did not feel qualified to decide whether an offeror could perform financially under the RFP. In his deposition, Evaluator 5 testified that he did not know what financial resources an offeror must possess to be able to complete the contract. He also admitted that he never determined if Intervenor had operated at a loss for the past two years. In addressing the qualifications of the evaluators to score the financial criterion, it is useful to compare their evaluations to what was being evaluated. The Administrative Law Judge rejects Petitioner's implicit invitation to assess the qualifications of the evaluators without regard to the extent to which their evaluations corresponded with, or failed to correspond with, that which they were evaluating. It is impossible to perform much of a comparative analysis of the financial resources of Petitioner and Intervenor because of the paucity of financial information supplied by Petitioner. Petitioner did not submit audited, reviewed, or even compiled financial statements, so that a credibility issue attaches to its owner-generated statements. Also, Petitioner did not submit a statement of changes in financial position, which is the first financial document that the Procurement Officer testified that he would consult in assessing a corporation's financial resources. Tr., p. 88. Absent this data concerning cash flow, it is not possible to identify reliably the information necessary to consider the third bullet, which asks the evaluator to compare historic cash flow from operations (which is derived from the statement of changes in financial position) with the "projected monthly operating expenses" (which is derived from the income statement). Subject to these important qualifications concerning Petitioner's financial statements, Petitioner's balance sheet reveals a current ratio of 5:1 and a ratio of total liabilities to net worth of well under 1. By contrast, Intervenor's audited financial statements (for DecisionOne Corporation and Subsidiaries) reveal a current ratio of barely 1:1, total liabilities in excess of total assets, and a negative shareholder's equity of $204,468,000. Intervenor's income statement discloses a net loss of $171,641,000 in fiscal year ending 1998 with a note suggesting that $69,000,000 of this loss is attributable to nonrecurring merger expenses. 
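For illustration only, the two balance-sheet screens in the first Corporate Experience criterion, a current ratio of at least 2:1 and a ratio of total liabilities to net worth of no more than 1, reduce to the arithmetic sketched below. The figures in the example are hypothetical round numbers chosen to resemble the contrast described above; they are not amounts taken from either offeror's financial statements.

    # Sketch of the two balance-sheet ratios listed under the first Corporate
    # Experience criterion.  Illustrative figures only; not the offerors' data.

    def current_ratio(current_assets, current_liabilities):
        return current_assets / current_liabilities     # target: at least 2.0

    def debt_to_net_worth(total_liabilities, net_worth):
        return total_liabilities / net_worth            # target: 1.0 or less

    def passes_screens(current_assets, current_liabilities,
                       total_liabilities, net_worth):
        # A negative net worth (liabilities exceeding assets) fails the
        # second screen outright, since no meaningful ratio can be formed.
        if net_worth <= 0:
            return False
        return (current_ratio(current_assets, current_liabilities) >= 2.0
                and debt_to_net_worth(total_liabilities, net_worth) <= 1.0)

    print(passes_screens(500_000, 100_000, 300_000, 700_000))   # True  (5:1 and about 0.43)
    print(passes_screens(110_000, 100_000, 900_000, -200_000))  # False (about 1:1, negative net worth)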
If interest is included, as it should be (given its impact on real-world cash flow), Intervenor's statement of changes in financial position reports negative cash flows for the past three years. Counting interest and taxes, the negative cash flow in 1998 is $37,298,000. This negative cash flow is attributable to the payment of a $244,000,000 to Intervenor's parent, but negative cash flows of $13,144,000 and $11,961,000 in 1997 and 1996, respectively, do not include any dividend payments. Perhaps partly due to the already-discussed problems in ascertaining the role, at hearing, of the accuracy of the scoring, Intervenor did not elicit explanatory testimony concerning its relatively complicated financial statements, although Intervenor's forbearance seems directed more to not developing the evidentiary record concerning the formal and substantive deficiencies of Petitioner's financial statements. However, it is clear that, except for Evaluator 1, Respondent's evaluators could not and did not understand much more of Intervenor's financial statements than that they were professionally prepared and contain large numbers. Turning to the extent to which the scores correspond to what the evaluators were scoring, Petitioner's financial statements are incomplete and owner-generated. Given these facts, the evaluators could legitimately give Petitioner a 2, which is an "acceptable" score, reflective of "adequate capability." The evaluators could also have legitimately given Petitioner a 1, indicative of a "poor" score with "some indication of marginal capability." The evaluators could not have given Petitioner a 0 because its financial statements are at least partly present in the offer and reflect some financial capability. By contrast, Intervenor's financial statements are completed and audited. However, they portray a company that is in financial distress with substantial losses, a negative shareholder's equity, and ongoing negative cash flows. Although much better in form than the financial statements of Petitioner, Intervenor's financial statements raise at least one question as to form because, although disclosing interest and tax payments, they attempt to stress a modified cash flow without regard to these substantial cost items. Given the sizeable losses suffered recently by Intervenor, the evaluators could not rationally assign Intervenor a 3, which is "good" and reflective of "above average capability." Without dealing with Intervenor's losses and specifically identifying cash flow that would be available, after debt service and other expenditures, to service the contract, the evaluators could not rationally assign Intervenor even a 2. Except for Evaluator 1, the evaluators never identified the financial condition of Intervenor and thus never considered it in their scoring. Undermined from the start by a lack of knowledge of roughly how much financial capacity would be necessary to service the three-year contract, the scoring process, as applied to Intervenor, is further undermined by the near-total absence in the record of any informed reason for the scoring of Intervenor's offer. Evaluator 3 erroneously believed that someone not on the evaluation team had already determined that the offerors were financially capable of performing the contract. Evaluator 4 erroneously believed that evaluating the financial condition of the offerors was optional, and admitted that he was unqualified to perform this task in any event. 
Evaluator 2 claimed to be able to identify losses on a financial statement, but, if he did so as to Intervenor's statements, there is no evidence in the record that he gave the matter any thought. Evaluator 5 expressly admitted that he never made this determination. The only informed bases in the record, either contemporaneous with the scoring process or at any later time through the hearing, for the scoring of the subject criterion in the offers of Petitioner and Intervenor are the evaluation forms of Evaluator 1. In these forms, Evaluator 1 correctly noted the loss suffered by Intervenor in 1998 and the already- mentioned formal deficiencies of Petitioner's financial statements. However, the sole contribution of Evaluator 1 to this case is in the comments on his forms. He was unwilling and unable to discuss any aspect of his scoring when questioned at his deposition. The case of the financial qualifications of the evaluators thus comes down to four evaluators who had no idea what they were doing and one evaluator who offers only two spare, handwritten notes suggestive of a rational basis for distinguishing between the financial capabilities of the two offerors. This is insufficient. The RFP promised an informed evaluation by more than one evaluator. Even if the RFP did not so promise, the promising comments of Evaluator 1 are not indicative of his qualifications when, for no good reason, he could not recall the recently completed evaluation process or could not or would not respond meaningfully to questions concerning the financial materials that he was evaluating. For the purpose of assessing the qualifications of Evaluation 1, the hint of rationality present in his two comments is overwhelmingly offset by the actual financial condition of Intervenor. Rejecting a chance to discuss his evaluation, Evaluator 1 has chosen to let his evaluation be judged on the strength of its correspondence to the subject matter of the evaluation, Intervenor's financial statements. Under all of the circumstances, Evaluator 1's evaluation of the subject criterion in Intervenor's offer was clearly erroneous and contrary to competition. The remaining evaluators' evaluations of this criterion were clearly erroneous, contrary to competition, arbitrary, and capricious. However, Petitioner has elected not to make a direct issue of the accuracy of the scores. Addressing the qualifications of the evaluators, then, their evident lack of qualifications, coupled with the already-described grave deficiencies in the results of their scoring the first criterion of Intervenor's offer and the material impact on the outcome of the relative scoring of the offers of Intervenor and Petitioner, has rendered the evaluation process clearly erroneous, contrary to competition, arbitrary, and capricious.

Recommendation It is RECOMMENDED that the Department of Children and Family Services enter a final order rejecting all offers. DONE AND ENTERED this 3rd day of September, 1999, in Tallahassee, Leon County, Florida. ___________________________________ ROBERT E. MEALE Administrative Law Judge Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-3060 (850) 488-9675 SUNCOM 278-9675 Fax Filing (850) 921-6847 www.doah.state.fl.us Filed with the Clerk of the Division of Administrative Hearings this 3rd day of September, 1999. COPIES FURNISHED: Gregory D. Venz, Agency Clerk Department of Children and Family Services Building 2, Room 204B 1317 Winewood Boulevard Tallahassee, Florida 32399-0700 John S. Slye, General Counsel Department of Children and Family Services Building 2, Room 204 1317 Winewood Boulevard Tallahassee, Florida 32399-0700 William E. Williams Andrew Berton, Jr. Huey Guilday Post Office Box 1794 Tallahassee, Florida 32302-1794 R. Beth Atchison Assistant General Counsel Department of Children and Family Services Building 2, Room 204 1317 Winewood Boulevard Tallahassee, Florida 32399-0700 Gregory P. Borgognoni Kluger Peretz 17th Floor, Miami Center 201 South Biscayne Boulevard Miami, Florida 33131

Florida Laws (3) 120.57, 287.001, 287.057
# 7
DON HALL vs DEPARTMENT OF CHILDREN AND FAMILY SERVICES, 99-004530 (1999)
Division of Administrative Hearings, Florida Filed:Jacksonville, Florida Oct. 26, 1999 Number: 99-004530 Latest Update: Sep. 28, 2000

The Issue The issue is whether Petitioner's son is eligible for assistance from the Developmental Services Program.

Findings Of Fact Based upon all of the evidence, the following findings of fact are determined: Background In this proceeding, Petitioner, Donald Hall, Sr., has appealed an eligibility decision of Respondent, Department of Children and Family Services (Department), which denied an application for mental retardation assistance for his son, Donald Hall, Jr. (Don), now almost 21 years of age, under the Developmental Services Program (Program). As a ground, the Department simply stated that the son was "not eligible for assistance." As clarified at hearing, Respondent takes the position that Don does not meet the statutory definition of a retarded person and therefore he does not qualify for assistance. The test for assistance The Program provides services to persons with specific developmental disabilities, such as mental retardation, cerebral palsy, spina bifida, and autism. In order to be eligible for mental retardation assistance, an individual must meet the definition of "retardation," as that term is defined in Section 393.063(44), Florida Statutes (1999). That provision defines the term as meaning "significantly subaverage general intellectual functioning existing concurrently with deficits in adaptive behavior and manifested during the period from conception to age 18." As further defined by the same statute, the term "significantly subaverage general intellectual functioning" means "performance which is two or more standard deviations from the mean score on a standardized intelligence test specified in the rules of the department." In this case, the mean score is 100, and the standard deviation is 15; thus, an individual must have general intellectual functioning of at least two deviations below 100, or a score of less than 70, in order to qualify under this part of the definition. To determine intellectual functioning, standardized testing is performed; one such test is the Wechsler Intelligence Scale for Children (Wechsler), as revised from time to time, which was administered to Don. "Adaptive behavior" is defined as "the effectiveness or degree with which an individual meets the standards of personal independence and social responsibility expected of his or her age, cultural group, and community." In plainer terms, adaptive behavior means the individual's ability to function in everyday tasks in the world. This includes such things as providing personal care to oneself, expressing oneself, and finding one's way around. This behavior is measured by instruments such as the Vineland Adaptive Behavior Scale (Vineland). Finally, both the subaverage general intellectual functioning and deficits in adaptive behavior must have manifested and been present before the individual reached the age of 18. In this case, the Department asserts that it is "eighty percent" sure that Don is not mentally retarded. It acknowledges, however, that he does have "significant difficulties in all areas of functioning." More specifically, the Department bases its denial on the fact that Don's 1995 tests indicated that his adaptive behavior was equivalent to other children of the same age, and that his intellectual functioning tests, principally the 1990 test and one score in 1995, revealed that he is in the borderline range between low average and mentally retarded. Don's background Don was born on November 5, 1979. 
Even while attending an educable mentally handicapped class at Parkwood Heights Elementary School, a public school in Duval County, Florida, Don experienced difficulty in coping with the curriculum. Indeed, after he had already repeated the first and third grades, and he was in danger of failing the fourth grade as well, public school officials transferred Don from the public school to Morning Star School (Morning Star), a private school for students with learning disabilities, including those who are mildly mentally handicapped. Later, when teachers at Morning Star expressed concern that Don had "gone as far as they could help him," and he was too old to retain eligibility, Don was referred by a child study team to Alden Road Exceptional Student Center (Alden Road), a public school (grades 6-12) for mentally handicapped students. Due to his present age (almost 21), he has only one year of eligibility left at Alden Road. At the school, Don receives limited academic instruction and has a supervised job. Don became eligible for Social Security death benefits when his natural mother died. Recently, his parents (father and stepmother) made application for those benefits to be converted to greater, more permanent Social Security benefits because of his condition. Their request was quickly approved, and Donald now receives lifetime monthly Social Security benefits. Don's test results for general intellectual functioning On April 24, 1990, when Don was 10 years old, he was given a psychological evaluation, which included the Wechsler test, to produce verbal, performance, and full scale intelligence quotients (IQs). The verbal IQ is a composite score of several subtests that make up the intelligence scale, including verbal reasoning, verbal memory, and verbal expressive skills. The performance score is based on a group of nonverbal tests, such as putting blocks and puzzles together, sequencing pictures, and marking coded symbols in a timed environment. Those results indicated a verbal IQ of 78, a performance IQ of 77, and a full scale IQ of 76. These scores placed him in the "borderline range" of intellectual functioning, somewhere between low average and mentally retarded. The Wechsler test was revised in 1991 to provide a more valid estimate of intellectual functioning compared to the current-day population. This resulted in students who retook the test scoring at least 5 points lower, and sometimes more, than they did on the earlier version of the test. Therefore, it is not surprising that Don attained lower scores on subsequent tests. The evidence establishes that a child will typically attain higher IQ scores at an earlier age, and that as he grows older, his scores will "tail off." This is because a child's intellectual skills reach a plateau, and the child is not learning new skills at a higher level as his age increases. Therefore, later test scores are more indicative of Don's intellectual functioning. In 1993, when he was 13 years old, Don was again evaluated by the Duval County School Board and received a verbal IQ of 65, a performance IQ of 54, and a full scale IQ of 56 on the Wechsler test. More than likely for the two reasons given above, these scores were substantially lower than the scores achieved in 1990, and they indicated that Don was "in the range of mild mental retardation" and therefore eligible for services.
In 1995, when Don was 16 years old, he was again given the Wechsler test by a psychologist and was found to have a verbal IQ of 71, a performance IQ of 54, and a full scale IQ below 70. Except for the verbal score, Don's IQ scores placed him in the range of mild mental retardation. On the 1995 verbal IQ score, which is made up of ten subtests, Don had one subtest with a score of 91, which raised his overall verbal IQ score to 71. Without that score, the verbal IQ would have been in the 60s, or in the mildly mentally retarded range. The evidence shows that it is quite common for children with mild to moderate deficiencies to score within the average range on some types of achievement measures. For example, some mildly retarded children will achieve a high level on academic tests, such as in the 80s or 90s, but they have little comprehension as to what those words mean. More than likely, Don fits within this category, and an overall verbal score of less than 70 is more reflective of his intellectual functioning. Based on the 1993 and 1995 tests, Don has general intellectual functioning of at least two deviations below 100, and therefore he qualifies for assistance under this part of the test. Adaptive behavior skills As noted above, this category measures Don's ability to deal with everyday tasks. To be eligible for services, an applicant must have deficits in his adaptive behavior which manifested before the age of 18. Presently, and for eight months out of the year, Don works from noon until 8:00 p.m. Monday through Friday at Jacksonville University "in the skullery room and [doing] tables." He relies on community transportation (from door to door) to get to and from work. When not working, he attends Alden Road, where he receives limited academic instruction. According to a Vineland instrument prepared by an Alden Road teacher in December 1995, Don then had an overall adaptive behavior composite age of 16 years, roughly equivalent to other children of the same age. More specifically, in terms of communication, he was functioning at the age of 16; in terms of daily living skills, he was reported at a level above the 18-year-old level; and in terms of socialization, he was slightly lower than a 16-year-old. The teacher who prepared the raw data from which the test score was derived was surprised to learn that her data produced a result indicating that Don had adaptive skills equivalent to someone his own age. Based on her actual experience with him in the classroom, she found Don to be "functioning way below" her own son, who was the same age as Don. She further established that he can follow only the most "simple" instructions, and he will always need someone "looking out for him." This was corroborated by Don's parents and family friends. The Vineland test result also differs markedly from Don's real-life experience. Don lives at home with his father and stepmother; he requires "constant supervision all day," even while working; and he is unable to live by himself. He is a "very trusting person," is easily subject to unscrupulous persons who could take advantage of him, and cannot manage his own money. Indeed, his psychologist described him as being "an easy target to be taken advantage of [by others]." Although Don is able to attend to some of his basic personal hygiene needs, he still requires constant reminders to do such things as wash his hair or brush his teeth.
Finally, Don has minimal problem solving skills, and he is easily confused by instructions unless they are "very simple." In short, these are real deficits in adaptive behavior and are sufficient to make Don eligible for Program services.
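As an illustrative aside on the statutory test applied in these findings, the eligibility determination reduces to three concurrent prongs: general intellectual functioning at least two standard deviations below a mean of 100 (with a standard deviation of 15, a score of less than 70), deficits in adaptive behavior, and manifestation before age 18. The minimal sketch below restates that arithmetic; the function and parameter names are hypothetical illustrations and are not drawn from the order or the statute.

```python
# Minimal sketch of the eligibility arithmetic summarized above
# (Section 393.063(44), Florida Statutes (1999)). Names are hypothetical.

MEAN_SCORE = 100
STANDARD_DEVIATION = 15
THRESHOLD = MEAN_SCORE - 2 * STANDARD_DEVIATION  # 100 - 2 * 15 = 70


def meets_statutory_definition(iq_score: int,
                               adaptive_behavior_deficits: bool,
                               manifested_before_age_18: bool) -> bool:
    """All three statutory prongs must be satisfied concurrently."""
    subaverage_functioning = iq_score < THRESHOLD  # "a score of less than 70" per this order
    return subaverage_functioning and adaptive_behavior_deficits and manifested_before_age_18


# Don's 1993 and 1995 full scale results discussed in the findings fall below 70.
print(meets_statutory_definition(56, True, True))   # True
print(meets_statutory_definition(76, True, True))   # False (the 1990 borderline score)
```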

Recommendation Based on the foregoing Findings of Fact and Conclusions of Law, it is RECOMMENDED that the Department of Children and Family Services enter a final order granting Petitioner's application for Program benefits for Donald Hall, Jr. DONE AND ENTERED this 14th day of July, 2000, in Tallahassee, Leon County, Florida. DONALD R. ALEXANDER Administrative Law Judge Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-3060 (850) 488-9675 SUNCOM 278-9675 Fax Filing (850) 921-6847 www.doah.state.fl.us Filed with the Clerk of the Division of Administrative Hearings this 14th day of July, 2000. COPIES FURNISHED: Virginia A. Daire, Agency Clerk Department of Children and Family Services Building 2, Room 204B 1317 Winewood Boulevard Tallahassee, Florida 32399-0700 Josefina M. Tomayo, General Counsel Department of Children and Family Services Building 2, Room 204 1317 Winewood Boulevard Tallahassee, Florida 32399-0700 Kathryn L. Sands, Esquire 1830 Atlantic Boulevard Jacksonville, Florida 32207-3404 Roger L. D. Williams, Esquire Department of Children and Family Services Post Office Box 2417 Jacksonville, Florida 32231-0083

Florida Laws (3) 120.569, 120.57, 393.063
# 8
DAVID E. ALLEY vs FLORIDA ENGINEERS MANAGEMENT CORPORATION, 99-002815 (1999)
Division of Administrative Hearings, Florida Filed:Tallahassee, Florida Jun. 24, 1999 Number: 99-002815 Latest Update: Oct. 05, 1999

The Issue The issue for disposition in this proceeding is whether Petitioner is entitled to a passing grade on the Principles and Practice of Engineering examination administered on October 30, 1998.

Findings Of Fact Petitioner is an applicant for licensure as a professional engineer in the State of Florida. Respondent is a nonprofit corporation created by the Florida Legislature to provide administrative, investigative, and prosecutorial services to the Board of Professional Engineers pursuant to Section 471.038, Florida Statutes. On October 30, 1998, Petitioner sat for the Principles and Practice of Engineering Examination in electrical engineering. This is a nationwide examination developed, controlled, and administered by the National Council of Examiners for Engineering and Surveying (NCEES). Petitioner received a raw score of 47 on this examination. For the electrical engineering discipline, a raw score of 47 results in a converted score of 69. A minimum converted score of 70 is required to pass this examination. A raw score of 48 results in a converted score of 70. Petitioner therefore needs 1 additional raw score point to achieve a passing score on this examination. Petitioner initially challenged the scoring of multiple-choice questions nos. 527 and 530. Petitioner had received a raw score of 0 on these two questions. Petitioner requested that NCEES rescore questions nos. 527 and 530, but after the rescoring NCEES determined that he was not entitled to any additional raw score points. Questions nos. 527 and 530 are each worth 1 raw score point. Petitioner's answer to question no. 527 represents the most practical, "real world" answer to this question, as conceded by Respondent's expert, and Petitioner is entitled to 1 raw score point for his answer. Although he offered an articulate, reasonable explanation for his answer to question no. 530 and for his challenge to the text of the question, Petitioner has agreed to accept the 1 additional point he needed for a passing score and to abandon the challenge to question no. 530 as moot.

Recommendation Based upon the foregoing Findings of Fact and Conclusions of Law, it is RECOMMENDED that a final order be entered granting Petitioner credit for his response to examination question no. 527 and adjusting his examination grade to reflect a passing score. DONE AND ENTERED this 3rd day of August, 1999, in Tallahassee, Leon County, Florida. MARY CLARK Administrative Law Judge Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-3060 (850) 488-9675 SUNCOM 278-9675 Fax Filing (850) 921-6847 www.doah.state.fl.us Filed with the Clerk of the Division of Administrative Hearings this 3rd day of August, 1999. COPIES FURNISHED: David E. Alley 4827 Springwater Circle Melbourne, Florida 32940 William H. Hollimon, Esquire Ausley & McMullen, P.A. 227 South Calhoun Street Tallahassee, Florida 32302 Dennis Barton, Executive Director Board of Professional Engineers Department of Business and Professional Regulation 1208 Hays Street Tallahassee, Florida 32301 Natalie A. Lowe, Esquire Vice President for Legal Affairs Florida Engineers Management Corporation Department of Business and Professional Regulation 1208 Hays Street Tallahassee, Florida 32301 William Woodyard, General Counsel Department of Business and Professional Regulation Northwood Centre 1940 North Monroe Street Tallahassee, Florida 32399-0792

Florida Laws (3) 120.569, 120.57, 471.038
# 9
THOMAS J. BARNETT, JR. vs DEPARTMENT OF HEALTH AND REHABILITATIVE SERVICES, 94-003904 (1994)
Division of Administrative Hearings, Florida Filed:Tallahassee, Florida Jul. 15, 1994 Number: 94-003904 Latest Update: Mar. 23, 1995

The Issue Is Petitioner entitled to receive supported living services from Respondent? See Section 393.066, Florida Statutes.

Findings Of Fact Petitioner is 18 years old. He lives with his paternal grandmother and step-grandfather at 3109 Brandywine Drive, Tallahassee, Florida. On March 23, 1994, the Petitioner applied for developmental services. Petitioner's natural mother was institutionalized for retardation for an indeterminate length of time at a Sunland Center. Shortly after the Petitioner's birth, his mother left the Petitioner and his father. Petitioner has lived with his paternal grandmother since he was 13 weeks old. Petitioner's grandmother raised her own three children and has experience in child rearing and the development of children. She noticed that Petitioner's development was slow when Petitioner did not begin walking at age 17 months and did not begin to speak intelligible words until 30 months of age. Petitioner was taken to the Florida State University Psychology Clinic at age 4 years 3 months (4.3 years) in an effort to determine why his development was slow. This was the first time the Petitioner's Intelligence Quotient (IQ) was tested. He obtained a 77 on the Stanford-Binet L-M test, and an 87 on the Vineland Adaptive Behavior Scales. FSU advised the Petitioner's grandmother that Petitioner might have developmental problems and to observe him closely and retest him if he had problems in school. As a result, Petitioner's IQ was tested several times between ages 5 and 17. Testing dates and scores of these tests are as follows:
October 80 (age 4.3): Stanford-Binet, FSU Psy. Clinic, IQ 77; Vineland Adaptive, FSU Psy. Clinic, 87
July 81 (age 5.0): Stanford-Binet, FSU Psy. Clinic, IQ 84
May 84 (age 7.10): WISC-R, FSIQ 84-87
85 (age 9.0): WISC-R, FSIQ 80
April 86 (age 9.9): WISC-R, Psych. Assoc., Dr. Cook, FSIQ 69
June 86 (age 9.11): WISC-R, Leon Cty. School, Barnes, FSIQ 72
March 91 (age 14.8): WISC-R, Leon Cty. School, Popp, FSIQ 69
April 92 (age 15.9): Vineland Adapt., Psych. Assoc., Dr. Clark, 62
July 93 (age 17.0): WAIS-R, Psych. Assoc., Dr. Deitchman, 70
Dr. Thomas Clark, who holds a doctorate in clinical psychology and is a board certified clinical psychologist, testified regarding intelligence testing and his examination of the Petitioner and the records of Petitioner's intelligence testing. The scores at the end of each line in the listing above (Paragraph 5) all reflect the IQ of the Petitioner. IQ scores of 70 or lower place a person two or more standard deviations below the mean on standardized intelligence tests. Individuals with mental retardation, who may exhibit higher IQ test scores when they are younger, may have their scores decrease as they get older. This is a recognized phenomenon in the mildly retarded. Scores on IQ tests may be inflated by a practice factor, which occurs when the test is administered more than once within a six-month period. The record reflects that the Petitioner was tested two times in 1986, and his second score of 72 was higher because of the practice factor. The increase in Petitioner's score was within 2 to 3 points above his general performance on the first test in 1986 and his subsequent tests in 1991, 1992, and 1993, which is the predicted increase due to the practice factor. Since age 9.9, with the exception of the 72 due to the practice factor, the Petitioner has not scored above 70 on an IQ test. Based upon his examination and testing of the Petitioner and his review of the Petitioner's records, Dr. Clark's professional opinion was that the Petitioner was more than two standard deviations below the average in intellectual performance. Although the Petitioner suffers from Attention Deficit Disorder and has some emotional problems, Dr.
Clark stated this did not alter his opinion regarding the Petitioner's IQ or his intellectual performance. Dr. Clark found that Petitioner's adaptive behavior was low for Petitioner's IQ. The parties stipulated that the measurement of Petitioner's general intellectual functioning existed concurrently with deficits in his adaptive behavior as manifested during the period from conception to age 18. Based upon its assessment, the Leon County Schools recommended that the Petitioner be placed in the community-based educational program which is designed for students who are mentally retarded within the educable range. The Petitioner has been awarded Supplemental Security Income under Title XVI of the Social Security Act upon a determination that he is mentally retarded. Since his completion of school, the Petitioner has been attending workshops conducted by Goodwill Industries to develop job skills and job coping skills. He has been unable to maintain employment, and has been discharged from all of the positions to which he has been referred. Petitioner was referred to the Department of Health and Rehabilitative Services Developmental Services by officials of Vocational Rehabilitation (Composite Exhibit 1-C). Petitioner's grandparents take him shopping, assist the Petitioner in maintaining his daily life, live with Petitioner on a daily basis, and give him support and try to assist him in controlling his "excessive loud talking". Without the care of his grandparents, the Petitioner would not be able to maintain the activities of daily living. Petitioner's friends include neighborhood children whose ages range from 3 years to 12 years. Their parents have requested Petitioner no longer play with them due to his size, age and conduct. Petitioner's testimony and demeanor while testifying reveal a young adult who is mentally retarded and whose adaptive skills are consistent with his IQ. Petitioner's grandmother testified that even though he is 18 1/2 years old, the Petitioner acts like a boy between 9 and 10 years old. The Respondent's position was that Petitioner's earlier test scores indicated that he was not two deviations below average intellectual performance, and the Petitioner's later test scores were adversely impacted by his emotional and attention deficit problems; therefore, Petitioner was ineligible for developmental services. The testimony of Dr. Clark clearly refuted the assertion that the Petitioner's earlier high test scores indicated a higher IQ, and refuted the alleged negative impact upon IQ testing of Petitioner's attention deficit and emotional disorder. Petitioner presented competent evidence and expert testimony concerning Petitioner's intellectual function to establish that Petitioner's performance was two or more standard deviations from the mean score on a standardized intelligence test. Petitioner's showing was unrebutted by the Respondent.
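As a purely illustrative companion to the findings above, the two-standard-deviation line can be applied to the later IQ scores recited in this case while setting aside the 1986 retest inflated by the practice factor. The sketch below only restates those recited figures; the data labels and structure are assumptions made for illustration, not part of the record.

```python
# Illustrative sketch only: the later IQ scores recited in the findings,
# screened against the two-standard-deviation line (70 or lower per this
# case), with the practice-factor retest excluded. Labels are hypothetical.

THRESHOLD = 100 - 2 * 15  # scores of 70 or lower are two or more SDs below the mean

iq_history = [
    ("April 86 (age 9.9)", 69, False),
    ("June 86 (age 9.11)", 72, True),   # retest within six months: practice factor
    ("March 91 (age 14.8)", 69, False),
    ("July 93 (age 17.0)", 70, False),
]

qualifying = [(label, score) for label, score, practice_factor in iq_history
              if not practice_factor and score <= THRESHOLD]
print(qualifying)  # every non-practice-factor score since age 9.9 is at or below 70
```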

Recommendation Based on the foregoing Findings of Fact and Conclusions of Law, it is RECOMMENDED: That a Final Order be entered approving Petitioner's eligibility for developmental services. DONE and ENTERED this 23rd day of March, 1995, in Tallahassee, Florida. STEPHEN F. DEAN Hearing Officer Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-1550 (904) 488-9675 Filed with the Clerk of the Division of Administrative Hearings this 23rd day of March, 1995. APPENDIX TO RECOMMENDED ORDER Both parties submitted proposed findings, which were read and considered. The following states which of those findings were adopted, and which were rejected and why:
Petitioner's Recommended Order Findings:
Paragraph 1: Paragraph 1
Paragraph 2: Subsumed in Paragraph 14
Paragraph 3: Paragraph 15
Paragraph 4: Subsumed in Paragraph 14
Paragraph 5: Subsumed in Paragraph 16
Paragraph 6: Paragraph 17
Paragraph 7: Paragraph 2
Paragraph 8: Paragraph 3
Paragraph 9: Paragraph 4
Paragraph 10: Paragraph 5
Paragraph 11: Subsumed in Paragraph 9
Paragraph 12: Irrelevant
Paragraphs 13, 14: Subsumed in Paragraphs 16-19
Paragraphs 15-17: True, but made part of Statement of Case
Paragraphs 18-21: Subsumed in Paragraph 20
Paragraphs 22-25: Subsumed in Paragraphs 6-10, 21
Paragraph 26: Paragraph 11
Paragraph 27: Paragraph 22
Respondent's Recommended Order Findings:
Paragraph 1: Paragraph 1
Paragraph 2: Rejected as contrary to the more credible evidence summarized in Paragraph 20.
Paragraph 3: Paragraph 5, in which the typographical error regarding the test of October 1980 is corrected. The facts set forth in the footnotes are rejected, particularly the assertion that Dr. Cook's reference to a "recent" administration of an IQ test did not fix the date of the test sufficiently to say whether the practice effect would impact its administration.
Paragraph 5: Subsumed in Paragraphs 7 and 21
Paragraph 6: See comments for Paragraph 3. As stated in the findings, this premise was specifically rejected.
Paragraph 8: Paragraph 1
Paragraph 9: Irrelevant
Paragraph 10: Subsumed in various other findings.
Paragraph 11: True; however, the Petitioner's application is based solely upon his allegation that he is mentally retarded.
COPIES FURNISHED: Daniel W. Dobbins, Esquire 433 North Magnolia Drive Tallahassee, FL 32308 John R. Perry, Esquire Department of Health and Rehabilitative Services 2639 North Monroe Street, Suite 252A Tallahassee, FL 32399-2949 Robert L. Powell, Agency Clerk Department of Health and Rehabilitative Services 1323 Winewood Boulevard Tallahassee, FL 32399-0700 Kim Tucker, General Counsel Department of Health and Rehabilitative Services 1323 Winewood Boulevard Tallahassee, FL 32399-0700

Florida Laws (5) 120.57, 393.063, 393.065, 393.066, 7.10