THOMAS J. BARNETT, JR. vs DEPARTMENT OF HEALTH AND REHABILITATIVE SERVICES, 94-003904 (1994)
Division of Administrative Hearings, Florida Filed: Tallahassee, Florida Jul. 15, 1994 Number: 94-003904 Latest Update: Mar. 23, 1995

The Issue Is Petitioner entitled to receive supported living services from Respondent? See Section 393.066, Florida Statutes.

Findings Of Fact Petitioner is 18 years old. He lives with his paternal grandmother and step-grandfather at 3109 Brandywine Drive, Tallahassee, Florida. On March 23, 1994, the Petitioner applied for developmental services. Petitioner's natural mother was institutionalized for retardation for an indeterminate length of time at a Sunland Center. Shortly after the Petitioner's birth, his mother left the Petitioner and his father. Petitioner has lived with his paternal grandmother since he was 13 weeks old.

Petitioner's grandmother raised her own three children and has experience in child rearing and the development of children. She noticed that Petitioner's development was slow when Petitioner did not begin walking at age 17 months and did not begin to speak intelligible words until 30 months of age. Petitioner was taken to the Florida State University Psychology Clinic at age 4 years 3 months (4.3 years) in an effort to determine why his development was slow. This was the first time the Petitioner's Intelligence Quotient (IQ) was tested. He obtained a 77 on the Stanford-Binet L-M test, and an 87 on the Vineland Adaptive Behavior Scales. FSU advised the Petitioner's grandmother that Petitioner might have developmental problems and to observe him closely and retest him if he had problems in school. As a result, Petitioner's IQ was tested several times between ages 5 and 17. Testing dates and scores of these tests are as follows:

Date        Age    Test                 Administered by                Score
October 80  4.3    Stanford-Binet       FSU Psy. Clinic                IQ 77
                   Vineland Adaptive    FSU Psy. Clinic                87
July 81     5.0    Stanford-Binet       FSU Psy. Clinic                IQ 84
May 84      7.10   WISC-R                                              FSIQ 84-87
85          9.0    WISC-R                                              FSIQ 80
April 86    9.9    WISC-R               Psych. Assoc., Dr. Cook        FSIQ 69
June 86     9.11   WISC-R               Leon Cty. School, Barnes       FSIQ 72
March 91    14.8   WISC-R               Leon Cty. School, Popp         FSIQ 69
April 92    15.9   Vineland Adapt.      Psych. Assoc., Dr. Clark       62
July 93     17.0   WAIS-R               Psych. Assoc., Dr. Deitchman   70

Dr. Thomas Clark, who holds a doctorate in clinical psychology and is a board-certified clinical psychologist, testified regarding intelligence testing, his examination of the Petitioner, and the records of Petitioner's intelligence testing. The numbers in the far right column in Paragraph 5, above, all reflect the IQ of the Petitioner. IQ scores of 70 or lower placed a person two or more standard deviations below the mean on standardized intelligence tests.

Individuals with mental retardation, who may exhibit higher IQ test scores when they are younger, may have their scores decrease as they get older. This is a recognized phenomenon in the mildly retarded. Scores on IQ tests may also be inflated by a practice factor, which occurs when the test is administered more than once within a six-month period. The record reflects that the Petitioner was tested twice in 1986, and his second score of 72 was higher because of the practice factor. The increase in Petitioner's score was within 2 to 3 points above his general performance on the first test in 1986 and his subsequent tests in 1991, 1992, and 1993, which is the increase predicted by the practice factor. Since age 9.9, with the exception of the 72 attributable to the practice factor, the Petitioner has not scored above 70 on an IQ test.

Based upon his examination and testing of the Petitioner and his review of the Petitioner's records, Dr. Clark's professional opinion was that the Petitioner was more than two standard deviations below the average in intellectual performance. Although the Petitioner suffers from Attention Deficit Disorder and has some emotional problems, Dr. Clark stated this did not alter his opinion regarding the Petitioner's IQ or his intellectual performance. Dr. Clark found that Petitioner's adaptive behavior was low for Petitioner's IQ. The parties stipulated that the measurement of Petitioner's general intellectual functioning existed concurrently with deficits in his adaptive behavior as manifested during the period from conception to age 18.

Based upon its assessment, the Leon County Schools recommended that the Petitioner be placed in the community-based educational program which is designed for students who are mentally retarded within the educable range. The Petitioner has been awarded Supplemental Security Income under Title XVI of the Social Security Act upon a determination that he is mentally retarded. Since his completion of school, the Petitioner has been attending workshops conducted by Goodwill Industries to develop job skills and job coping skills. He has been unable to maintain employment and has been discharged from all of the positions to which he has been referred. Petitioner was referred to the Department of Health and Rehabilitative Services Developmental Services by officials of Vocational Rehabilitation (Composite Exhibit 1-C).

Petitioner's grandparents take him shopping, assist the Petitioner in maintaining his daily life, live with Petitioner on a daily basis, give him support, and try to assist him in controlling his "excessive loud talking". Without the care of his grandparents, the Petitioner would not be able to maintain the activities of daily living. Petitioner's friends include neighborhood children whose ages range from 3 years to 12 years. Their parents have requested that Petitioner no longer play with them due to his size, age and conduct. Petitioner's testimony and demeanor while testifying reveal a young adult who is mentally retarded and whose adaptive skills are consistent with his IQ. Petitioner's grandmother testified that even though he is 18 1/2 years old, the Petitioner acts like a boy between 9 and 10 years old.

The Respondent's position was that Petitioner's earlier test scores indicated that he was not two standard deviations below average intellectual performance, and that the Petitioner's later test scores were adversely impacted by his emotional and attention deficit problems; therefore, Petitioner was ineligible for developmental services. The testimony of Dr. Clark clearly refuted the assertion that the Petitioner's earlier, higher test scores indicated a higher IQ, and refuted the alleged negative impact of Petitioner's attention deficit and emotional disorder upon IQ testing. Petitioner presented competent evidence and expert testimony concerning Petitioner's intellectual function to establish that Petitioner's performance was two or more standard deviations from the mean score on a standardized intelligence test. Petitioner's showing was unrebutted by the Respondent.
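As an illustration only: the "two standard deviations below the mean" threshold that runs through these findings is simple arithmetic once a test's norms are fixed. The sketch below assumes the Wechsler-style norming convention (mean 100, standard deviation 15), under which the cutoff works out to an IQ of 70; the record does not state the norms used for each individual administration.

```python
# A minimal sketch of the "two standard deviations below the mean" arithmetic
# these findings turn on. It assumes the Wechsler-style norming convention
# (mean 100, standard deviation 15); the record does not state the norms
# applicable to each individual administration, so the figures are illustrative.

def deviations_below_mean(score: float, mean: float = 100.0, sd: float = 15.0) -> float:
    """Return how many standard deviations a score falls below the test mean."""
    return (mean - score) / sd

for score in (77, 84, 80, 69, 72, 69, 70):
    z = deviations_below_mean(score)
    status = "two or more SDs below the mean" if z >= 2.0 else "above the two-SD cutoff"
    print(f"IQ {score}: {z:.2f} SDs below the mean ({status})")
```

On that convention, the scores of 69 and 70 recorded from age 9.9 onward fall at or beyond the two-deviation mark, while the earlier scores of 77 to 87 do not.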

Recommendation Based on the foregoing Findings of Fact and Conclusions of Law, it is RECOMMENDED: That a Final Order be entered approving Petitioner's eligibility for developmental services.

DONE and ENTERED this 23rd day of March, 1995, in Tallahassee, Florida.

STEPHEN F. DEAN
Hearing Officer
Division of Administrative Hearings
The DeSoto Building
1230 Apalachee Parkway
Tallahassee, Florida 32399-1550
(904) 488-9675

Filed with the Clerk of the Division of Administrative Hearings this 23rd day of March, 1995.

APPENDIX TO RECOMMENDED ORDER

Both parties submitted proposed findings which were read and considered. The following states which of those findings were adopted, and which were rejected and why:

Petitioner's Findings    Recommended Order
Paragraph 1              Paragraph 1
Paragraph 2              Subsumed in Paragraph 14
Paragraph 3              Paragraph 15
Paragraph 4              Subsumed in Paragraph 14
Paragraph 5              Subsumed in Paragraph 16
Paragraph 6              Paragraph 17
Paragraph 7              Paragraph 2
Paragraph 8              Paragraph 3
Paragraph 9              Paragraph 4
Paragraph 10             Paragraph 5
Paragraph 11             Subsumed in Paragraph 9
Paragraph 12             Irrelevant
Paragraphs 13, 14        Subsumed in Paragraphs 16-19
Paragraphs 15-17         True, but made part of Statement of Case
Paragraphs 18-21         Subsumed in Paragraph 20
Paragraphs 22-25         Subsumed in Paragraphs 6-10, 21
Paragraph 26             Paragraph 11
Paragraph 27             Paragraph 22

Respondent's Findings    Recommended Order
Paragraph 1              Paragraph 1
Paragraph 2              Rejected as contrary to the more credible evidence summarized in Paragraph 20.
Paragraph 3              Paragraph 5, in which the typographical error regarding the test of October 1980 is corrected. The facts set forth in the footnotes are rejected, particularly the assertion that Dr. Cook's reference to a "recent" administration of an IQ test did not fix the date of the test sufficiently to say whether the practice effect would impact its administration.
Paragraph 5              Subsumed in Paragraphs 7 and 21
Paragraph 6              See comments for Paragraph 3. As stated in the findings, this premise was specifically rejected.
Paragraph 8              Paragraph 1
Paragraph 9              Irrelevant
Paragraph 10             Subsumed in various other findings.
Paragraph 11             True; however, the Petitioner's application is based solely upon his allegation that he is mentally retarded.

COPIES FURNISHED:

Daniel W. Dobbins, Esquire
433 North Magnolia Drive
Tallahassee, FL 32308

John R. Perry, Esquire
Department of Health and Rehabilitative Services
2639 North Monroe Street, Suite 252A
Tallahassee, FL 32399-2949

Robert L. Powell, Agency Clerk
Department of Health and Rehabilitative Services
1323 Winewood Boulevard
Tallahassee, FL 32399-0700

Kim Tucker, General Counsel
Department of Health and Rehabilitative Services
1323 Winewood Boulevard
Tallahassee, FL 32399-0700

Florida Laws (5) 120.57, 393.063, 393.065, 393.066, 7.10
SYED M. SAFDAR vs BOARD OF PROFESSIONAL ENGINEERS, 97-005941 (1997)
Division of Administrative Hearings, Florida Filed: Tallahassee, Florida Dec. 18, 1997 Number: 97-005941 Latest Update: Jan. 27, 1999

The Issue The issue presented is whether Petitioner achieved a passing score on the April 1997 civil/sanitary engineer examination.

Findings Of Fact Petitioner took the April 1997 examination for licensure as a civil engineer. The examination was purchased by Respondent from the National Council of Examiners for Engineering and Surveying (hereinafter "NCEES"). The minimum passing score on that examination was 70. Petitioner was advised by Respondent's Bureau of Testing that he had achieved a score of 65. Petitioner challenged his score on question numbered 125 only. The maximum points that could be awarded for the answer to that question were 10. Petitioner was awarded 6 points for his answer to that question.

There were three parts to question numbered 125. NCEES' scoring plan for that question provided that 10 points should be awarded where a candidate answered the question correctly in all aspects and the numerical results were within two percent, plus or minus, of the approved solution. In other words, a candidate could receive the maximum points for an answer which contained a mathematical error, as long as the error resulted in an answer within two percent of the correct mathematical answer. NCEES' scoring plan also provided that a candidate could receive only eight points out of ten where the candidate answered part (a) correctly but made one or more errors in part (b) or part (c). Petitioner answered parts (a) and (c) correctly, but he made a mathematical error in part (b) which caused his answer to be incorrect by 100 percent. When Petitioner's examination was initially scored, he was awarded only six points for his solution to question numbered 125, due to errors he was thought to have made in both parts (b) and (c). At the final hearing in this cause, Respondent's expert agreed with Petitioner that Petitioner's answer to part (c) should have been considered correct. Respondent's expert opined, therefore, that Petitioner should receive eight points rather than six. Petitioner cannot, however, be given full credit for his answers, because his mathematical error caused his answer to part (b) to be incorrect by 100 percent rather than within the two percent tolerance required for full credit. Accordingly, Petitioner does not qualify to receive full credit for his answer to question numbered 125.

Petitioner's raw score on the examination is now 45, due to the additional two points for question numbered 125. The scoring conversion scale converts a raw score of 45 to a scaled score of 67, less than a passing grade. The conversion scale is not a linear scale, i.e., it is not a percentage scale. The maximum raw score available on the examination is 80, with a maximum possible converted scaled score of 100. Computer calculations have established the converted scaled score based upon each possible total raw score. In other words, the conversion table converts every possible total raw score from one through 80. The conversion ratio varies along that table. For example, a total raw score of two becomes a scaled score of four, a raw score of four becomes a scaled score of eight, and a raw score of 45 becomes a scaled score of 67.
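As an illustration, the two scoring mechanics at issue can be sketched as follows. The conversion table here contains only the entries actually recited in the findings; the full 1-through-80 table was computer-generated and is not reproduced in the record.

```python
# A sketch of the two scoring mechanics described above: the plus-or-minus two
# percent tolerance that forgives small numerical errors, and the non-linear
# raw-to-scaled conversion. Only the table entries actually recited in the
# findings are included here.

def within_tolerance(candidate: float, approved: float, tolerance: float = 0.02) -> bool:
    """True if a numerical result is within +/-2 percent of the approved solution."""
    return abs(candidate - approved) <= tolerance * abs(approved)

RAW_TO_SCALED = {2: 4, 4: 8, 45: 67, 80: 100}  # entries recited in the findings

print(within_tolerance(101.5, 100.0))  # True  -- still eligible for full credit
print(within_tolerance(200.0, 100.0))  # False -- off by 100 percent, as in part (b)
print(RAW_TO_SCALED[45])               # 67, below the passing score of 70
```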

Recommendation Based upon the foregoing Findings of Fact and Conclusions of Law, it is RECOMMENDED that a final order be entered finding that Petitioner achieved a score of 67 and, therefore, failed to achieve a passing score on the April 1997 civil/sanitary engineer examination.

DONE AND ENTERED this 27th day of May, 1998, in Tallahassee, Leon County, Florida.

LINDA M. RIGOT
Administrative Law Judge
Division of Administrative Hearings
The DeSoto Building
1230 Apalachee Parkway
Tallahassee, Florida 32399-3060
(850) 488-9675 SUNCOM 278-9675
Fax Filing (850) 921-6847

Filed with the Clerk of the Division of Administrative Hearings this 27th day of May, 1998.

COPIES FURNISHED:

R. Beth Atchison, Esquire
Department of Business and Professional Regulation
1940 North Monroe Street
Tallahassee, Florida 32399-0750

Syed M. Safdar, pro se
1740 Northeast 125th Street, No. 1
Miami, Florida 33181

Lynda L. Goodgame, General Counsel
Department of Business and Professional Regulation
Northwood Centre
1940 North Monroe Street
Tallahassee, Florida 32399-0792

Angel Gonzalez, Executive Director
Board of Professional Engineers
Northwood Centre
1940 North Monroe Street
Tallahassee, Florida 32399-0755

Florida Laws (3) 120.569, 120.57, 471.015
DIVISION OF REAL ESTATE vs. SCARLETT P. FAULK, STANLEY MAC PHILLIPS, AND SCARLETT FAULK & ASSOCIATES, INC., 87-003847 (1987)
Division of Administrative Hearings, Florida Number: 87-003847 Latest Update: May 26, 1988

Findings Of Fact At all times relevant hereto, Scarlett P. Faulk and Scarlett Faulk and Associates, Inc. were licensed as a broker and corporate broker, respectively, by the Florida Board of Real Estate. Lily Nelson, broker at Sandpiper Realty, managed property at 1800 Gulf Boulevard, Bellaire Shores, owned by Larry and Sheena Bowa, who resided out of state. This property consisted of a residence which Ms. Nelson rented on behalf of the Bowas. Scarlett Faulk owned a residence located at 1720 Gulf Boulevard which she had purchased in June, 1986.

In late June, 1986, Faulk telephoned Lily Nelson to ask if the Bowas were interested in selling their property at 1800 Gulf Boulevard, as she might have a client interested in the property. Ms. Faulk's brother, Mac Phillips, was planning to move to Clearwater and was looking for a residence. Faulk also had another client, Clarence Trice, to whom she had sold several properties over the past few years. At the time, Trice was contemplating the purchase of property at 1420 Gulf Boulevard and had asked Faulk to join him in a joint venture to purchase this property. Faulk declined, but suggested that her brother, Mac Phillips, might be interested. On June 25, 1986, Phillips wired $62,500 to Faulk to participate in this purchase, but Trice opted to purchase the property by himself. Faulk held these funds in her escrow account (Exhibit 2).

Mrs. Bowa told Lily Nelson that she would talk it over with her husband and call back. When she did call back to say they were interested in selling, she inquired about prices in the neighborhood. Mrs. Bowa then agreed to have the property listed for $600,000. This was communicated to Ms. Faulk, who passed the information to Phillips. Phillips made an offer of $500,000 for the property, and Bowa countered with $525,000, which Phillips accepted. The contract to purchase the property at 1800 Gulf Boulevard was executed by the buyer on July 1, 1986, and by the sellers on July 7, 1986.

Rebecca Watson, at all times relevant, was registered as a real estate salesperson and associated with the Respondent. Ms. Watson had a client, Scane Bowler, whose wife was interested in having a house built on a lot facing the Gulf of Mexico. Lots on the west side of Gulf Boulevard face the Gulf of Mexico. Rebecca Watson asked Respondent Faulk if she could show the Bowlers the residence at 1720 Gulf Boulevard that Faulk had recently purchased. Faulk agreed, met Watson and her client, and allowed Watson to show the house. This was the occasion on which Faulk first met the Bowlers. This meeting occurred June 27, 1986, the day the Bowlers departed to attend the tennis matches at Wimbledon. Bowler told Watson the price Faulk was asking, $725,000, was more than the $600,000 he was willing to pay for gulf-front property. Bowler asked Watson to keep looking, and said he would contact her when they returned from Wimbledon in about ten days.

When Bowler returned to Clearwater from Wimbledon on July 10, 1986, he contacted Watson to inquire if any lots had become available. Watson showed the Bowlers 1800 Gulf Boulevard and told them that Phillips, the brother of Faulk, had a contract to purchase the property. The Bowlers liked the property and inquired if Phillips would sell the contract to them. Following some negotiation, Phillips sold the contract to the Bowlers for $100,000, and Bowler was the grantee on the deed executed by Bowa. When Bowa learned from Bowler that Bowler was paying $625,000 for the property for which Bowa was getting only $525,000, Ms. Bowa wrote a letter to the Florida Board of Real Estate. After the closing, Bowler instituted civil proceedings against Faulk.

Florida Laws (2) 475.01, 475.25
PATRICIA WILSON vs BARBER'S BOARD, 93-002524 (1993)
Division of Administrative Hearings, Florida Filed: Jacksonville, Florida May 05, 1993 Number: 93-002524 Latest Update: Jun. 11, 1996

The Issue Whether items 63, 74, 92, 119 and 124 of the January 1993 Barber Licensure Examination were valid and correctly graded as to Petitioner. Whether Petitioner's grade report correctly reflected the score achieved by Petitioner on the January 1993 Barber Licensure Examination.

Findings Of Fact Upon consideration of the evidence adduced at the hearing, the following relevant findings of fact are made: Petitioner, Patricia Wilson, was a candidate (Number 0100037) for the written portion of the January 1993 Barber Licensure Examination given on January 25, 1993. Petitioner questioned the validity of, and the answers supplied by Respondent's answer key for, items number 63, 74, 92, 119 and 124 on the January 1993 Barber Licensure Examination.

When Petitioner's witness, Yvette Stewart, a licensed barber in the state of Florida, was read each item in question, it was apparent that the witness clearly understood each item and that the items were neither misleading nor confusing to the witness. Likewise, when the witness was asked to choose an answer for each item from several possible answers, the witness chose the answer given in the Respondent's answer key as the correct answer. Because more than 50 per cent of the candidates taking the examination failed to correctly answer item 119, the Respondent reviewed item 119 to determine its validity. After reviewing item 119 and the study material from which the item was derived, the Department determined that item 119 was valid and that the answer to item 119 in the Respondent's answer key was correct. The Petitioner failed to present sufficient evidence to show that items 63, 74, 92, 119 and 124 were invalid or that the Respondent's answers for those items on the Respondent's answer key were incorrect.

There were 125 items to be answered by the examinee on the written portion of the examination. Petitioner answered 93 items correctly. The maximum score that could be achieved on the written portion of the examination was 100 per cent. The weight to be given each item was determined by dividing 100 (the maximum score) by 125 (the total number of items), which equals 0.8. The grade report on the written portion of the examination received by the Petitioner indicated that Petitioner's score was 74. This score was determined by multiplying 93 (total correct answers) by 0.8 (weight given each correct answer), which equals 74.4 per cent but, when rounded off in accordance with the Respondent's rules, becomes the 74.00 shown on the grade report as the score achieved by the Petitioner.

The grade report listed the different areas of study on which the examinees were required to be tested and the score achieved by the examinee in each area of study, as follows:

Hygiene and Ethics            7.00
Florida Law                   5.00
Skin Care and Function        9.00
Hair Services and Struct.     9.00
Cosmetic Chemistry           10.00
Scalp and Facial Treat.       8.00
Coloring and Bleaching       10.00
Permanent Waving             10.00
Hair Straightening            4.00
Implements                    3.00
Total of Individual Scores   75.00

This total score would meet the minimum score of 75.00 required for passing the examination. The individual scores shown above in Finding of Fact 9 and on the Grade Report were determined by multiplying the number of correct answers achieved by the Petitioner in each area of study by 0.8 (the weight given each correct answer) and rounding off in accordance with the Respondent's rules.

The individual scores as set out in the Grade Report are compared with the actual scores derived as set out in Finding of Fact 8 as follows:

Individual Score   Actual Score   Correct Answers
 7.00               7.2             9
 5.00               4.8             6
 9.00               8.8            11
 9.00               8.8            11
10.00               9.6            12
 8.00               8.0            10
10.00              10.4            13
10.00               9.6            12
 4.00               4.0             5
 3.00               3.2             4
Total 75.00        74.4            93

The Grade Report does not explain how the Respondent arrived at the score of 74.00, or that the total of the rounded-off individual scores is not to be considered the score achieved.
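The discrepancy the last finding describes, a reported total of 75.00 against an actual score of 74.4, is a pure rounding artifact and can be reproduced as follows. Python's built-in round() is used here as a stand-in for the Respondent's rounding rules, which the record does not spell out.

```python
# A sketch reproducing the grade report arithmetic described above. Each correct
# answer is worth 100/125 = 0.8 points. The reported overall score rounds the
# true total once; the reported area scores are each rounded before totaling,
# which is why the report's column total (75.00) exceeds the actual 74.4.

CORRECT_BY_AREA = [9, 6, 11, 11, 12, 10, 13, 12, 5, 4]  # correct answers per area
WEIGHT = 100 / 125  # 0.8 points per correct answer

overall = round(sum(CORRECT_BY_AREA) * WEIGHT)           # 93 * 0.8 = 74.4 -> 74
per_area = [round(n * WEIGHT) for n in CORRECT_BY_AREA]  # round each area first

print(overall)        # 74 -- the score actually achieved
print(sum(per_area))  # 75 -- rounding each area before summing inflates the total
```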

Recommendation Based upon the foregoing Findings of Fact and Conclusions of Law, it is recommended that the Respondent enter a final order denying the Petitioner's request for reconsideration of her grade on the written portion of the January 1993 Barbers' Examination.

RECOMMENDED this 29th day of September, 1993, at Tallahassee, Florida.

WILLIAM R. CAVE
Hearing Officer
Division of Administrative Hearings
The DeSoto Building
1230 Apalachee Parkway
Tallahassee, Florida 32399-1550
(904) 488-9675

Filed with the Clerk of the Division of Administrative Hearings this 29th day of September, 1993.

APPENDIX TO RECOMMENDED ORDER, CASE NO. 93-2524

The following constitutes my specific rulings, pursuant to Section 120.59(2), Florida Statutes, on all of the proposed findings of fact submitted by the parties in this case.

Petitioner's Proposed Findings of Fact. The first sentence of proposed finding of fact 1 is adopted in substance as modified in Finding of Fact 4. The second sentence is not supported by competent substantial evidence in the record. Proposed finding of fact 2 is not supported by competent substantial evidence in the record. Proposed finding of fact 3 is more of a statement than a finding of fact. Proposed finding of fact 4 is adopted in substance as modified in Finding of Fact 8. Proposed finding of fact 5 is more of a statement than a finding of fact. There was no showing that Petitioner should be given credit for her answer to item 119.

Respondent's Proposed Findings of Fact. Proposed findings of fact 1 and 2 are adopted in substance as modified in Findings of Fact 6 and 8, respectively. Proposed finding of fact 3 is adopted in substance as modified in Findings of Fact 3-5.

COPIES FURNISHED:

Patricia Wilson, pro se
1023 Huron Street
Jacksonville, Florida 32205

Robert A. Jackson, Esquire
Office of the General Counsel
Department of Business and Professional Regulation
1940 North Monroe Street
Tallahassee, Florida 32399-0792

Darlene F. Keller, Director
Division of Real Estate
400 West Robinson Street
Post Office Box 1900
Orlando, Florida 32802-1900

Jack McRay, Esquire, Acting General Counsel
Department of Business and Professional Regulation
1940 North Monroe Street
Tallahassee, Florida 32399-0792

Florida Laws (3) 120.57, 476.114, 476.144
FAMILY ARCADE ALLIANCE vs DEPARTMENT OF REVENUE, 91-005338RP (1991)
Division of Administrative Hearings, Florida Filed: Tallahassee, Florida Aug. 23, 1991 Number: 91-005338RP Latest Update: Mar. 17, 1992

The Issue The issues are whether proposed rules 12-18.008, 12A-15.001 and 12A-1.044, Florida Administrative Code, are valid exercises of delegated legislative authority.

Findings Of Fact The Parties The Family Arcade Alliance (Alliance) is a group composed primarily of businesses that operate amusement game machines in the State of Florida which are activated either by token or coin. The parties agree that the Alliance is a substantially affected person as that term is defined in Section 120.54(4)(a), Florida Statutes (1991), and has standing to maintain these proceedings. The Department of Revenue (Department) is the entity of state government charged with the administration of the revenue laws.

The Tax and the Implementing Rules Except for the period the services tax was in force, no sales tax had been imposed on charges made for the use of coin-operated amusement machines before the enactment of Chapter 91-112, Laws of Florida, which became effective on July 1, 1991. The Act imposed a 6 percent sales tax on each taxable transaction. Coin-operated amusement machines found in Florida are typical of those machines throughout the United States. The charges for consumer use of the machines are multiples of twenty-five-cent coins, i.e., 25 cents, 50 cents, 75 cents, and one dollar.

The sales tax is most often added to the sale price of goods, but it is not practicable for the sellers of all products or services to separately state and collect sales tax from consumers. For example, there is no convenient way to separately collect and account for the sales tax on items purchased from vending machines, such as snacks or beverages, or from newspaper racks. For these types of items, a seller reduces the price of the object or service sold, so that the tax is included in the receipts in the vending machine, newspaper rack or, here, the coin-operated amusement machine.

There are subtleties in the administration of the sales tax which are rarely noticed. The sales tax due on the purchase of goods or services is calculated at the rate of 6 percent only where the purchase price is a round dollar amount. For that portion of the sales price which is less than a dollar, the statute imposes not a 6 percent tax, but rather a tax computed according to a specific statutory schedule:

Amount above or below         Sales tax
whole dollar amount (cents)   statutorily imposed (cents)
 1-9                          0
10-16                         1
17-33                         2
34-50                         3
51-66                         4
67-83                         5
84-100                        6

Section 212.12(9)(a) through (h), Florida Statutes (1991).

In most transactions the effect of the schedule is negligible, and the consumer never realizes that the tax rate is greater than 6 percent for the portion of the sales price that is not a round dollar amount. Where a very large percentage of sales come from transactions of less than a dollar, the statutory schedule takes on a greater significance. For transactions above 9 cents and up to a dollar, the schedule's effective tax rate is never below the nominal tax rate of 6 percent, and may be as high as 11.76 percent. For example, the 1 cent sales tax on a 10 cent transaction yields an effective tax rate of 10 percent, not 6 percent.

Where it is impracticable for businesses in an industry to separately state the tax for each sale, the statutes permit sellers (who are called "dealers" in the language of the statute) to file their tax returns on a gross receipts basis. Rather than add the amount of the tax to each transaction, taxes are presumed to be included in all the transactions, and the dealer calculates the tax based on his gross receipts by using the effective tax rate promulgated by the Department in a rule. See Section 212.07(2), Florida Statutes (1991).
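A short sketch may make the mechanism concrete. The brackets below are those recited from Section 212.12(9); for each amount a consumer might deposit, the loop recovers the presumed selling price whose scheduled tax brings the total to the amount deposited, the same tax-included decomposition used for the industry price points discussed next.

```python
# A sketch of the bracket schedule in Section 212.12(9), Fla. Stat. (1991), as
# recited above: 6 cents per whole dollar of the sales price, plus a bracketed
# tax on the sub-dollar remainder.

BRACKETS = [(9, 0), (16, 1), (33, 2), (50, 3), (66, 4), (83, 5), (100, 6)]

def bracket_tax(remainder_cents: int) -> int:
    """Tax in cents on the portion of the sales price below a whole dollar."""
    for upper, tax in BRACKETS:
        if remainder_cents <= upper:
            return tax
    raise ValueError("remainder must be 0-100 cents")

def total_tax(price_cents: int) -> int:
    """Scheduled tax: 6 cents per whole dollar plus the bracket tax on the rest."""
    return 6 * (price_cents // 100) + bracket_tax(price_cents % 100)

print(bracket_tax(10))  # 1 -- i.e., 10% effective on a 10-cent price, as noted above

# Machine receipts are presumed to include the tax, so the presumed selling
# price is the price whose scheduled tax brings the total to the amount deposited.
for deposited in (25, 50, 75, 100):
    price = next(p for p in range(deposited, 0, -1) if p + total_tax(p) == deposited)
    print(f"{deposited} cents deposited: presumed price {price}, "
          f"effective rate {(deposited - price) / price:.2%}")
```

Run on the industry's four price points, this reproduces the effective rates in the table that follows.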
Businesses also have the option to prove to the Department that in their specific situation the tax due is actually lower than a rule's effective tax rate for the industry, but those businesses must demonstrate the accuracy of their contentions that a lower tax is due. Applying the statutory tax schedule to sales prices which are typical in the amusement game machine industry (sometimes referred to as "price points") generates the following effective tax rates at each price point:

Total Sales  Presumed        Presumed   Effective
Price        Selling Price   Sales Tax  Tax Rate
25 cents     23 cents        2 cents    8.7%
50 cents     47 cents        3 cents    6.38%
75 cents     70 cents        5 cents    7.14%
$1.00        94 cents        6 cents    6.38%

The determination of an effective tax rate for an industry as a whole also requires the identification of industry gross receipts from each of the price points. Once that effective tax rate is adopted as a rule, the Department treats dealers who pay tax using the effective tax rate as if they had remitted tax on each individual transaction.

Proposed Rule 12A-1.044 establishes an industry-wide effective tax rate for monies inserted into coin-operated amusement machines or token dispensing machines of 7.81 percent. For counties with a one-half or one percent surtax, the effective tax rates are 8.38 percent and 8.46 percent, respectively. These rates include allowances for multiple plays, i.e., where the consumer deposits multiple coins to activate the machine.

Proposed Rule 12A-1.044(1)(b) defines coin-operated amusement machines as: Any machine operated by coin, slug, token, coupon or similar device for the purpose of entertainment or amusement. Amusement machines include, but are not limited to, coin-operated radio and televisions, telescopes, pinball machines, music machines, juke boxes, mechanical games, video games, arcade games, billiard tables, moving picture viewers, shooting galleries, mechanical rides and all similar amusement devices.

Proposed Rule 12-18.008 contained a definition of "coin-operated amusement machines" when the rule was first published which was essentially similar, but that rule's nonexclusive list of amusement machines did not include radios, televisions or telescopes. The Department has prepared a notice to be filed with the Joint Administrative Procedures Committee conforming the definitions so they will be identical. The current differences found in the nonexclusive descriptive lists are so slight as to be inconsequential. The Petitioners have failed to prove any confusion or ambiguity resulting from the differences that would impede evenhanded enforcement of the rule. Proposed Rule 12A-15.011 did not contain a separate definition of coin-operated amusement machines.

Owners of amusement machines do not always own locations on which to place them. Machine owners may go to landowners and lease the right to place their machines on the landowner's property. The transaction becomes a lease of real property or a license to use real property. Sometimes owners of locations suitable for the placement of amusement machines lease machines from machine owners. Those transactions become leases of tangible personal property. Both transactions are subject to sales tax after July 1, 1991. Proposed rules 12A-1.044(9)(c), (d) and (10)(a), (c) prescribe which party to the leases of real estate or personal property will be responsible to collect, report and remit the tax.
Under subsection 9(d) of proposed rule 12A-1.044, sales tax will not be due on any payment made to an owner of an amusement machine by the owner of the location where that machine is placed if: a) the lease of tangible personalty is written, b) the lease was executed prior to July 1, 1991, and c) the machine involved was purchased by the lessor prior to July 1, 1991. The tax will be effective only upon the expiration or renewal of the written lease. Similarly, proposed rule 12A-1.044(10)(d) provides that sales tax will not be due on written agreements for the lease of locations to owners of amusement machines if: a) the agreement to rent the space to the machine owner is in writing, and b) was entered into before July 1, 1991. At the termination of the lease agreement, the transaction becomes taxable.

Changes to the proposed rules The Department published changes to proposed rule 12A-1.044(3)(e) on October 18, 1991, which prescribed additional bookkeeping requirements on any amusement machine operators who wished to avoid the effective tax rate established in the proposed rule and demonstrate instead a lower effective tax rate for their machines. The significant portions of the amendments read:

In order to substantiate a lower effective tax rate, an operator is required to maintain books and records which contain the following information: * * * b. For an amusement machine operator, a list identifying each machine by name and serial number, the cost per play on each machine, the total receipts from each machine and the date the receipts are removed from each machine. If an operator establishes a lower effective tax rate on a per vending or amusement machine basis, the operator must also establish an effective tax rate for any machine which produces a higher rate than that prescribed in this rule. Operators using an effective rate other than the applicable tax rate prescribed within this rule must recompute the rate on a monthly basis. (Exhibit 6, pg. 4-5)

There was also a change noticed to subsection (e) of proposed rule 12A-1.044, which reads:

(e) For the purposes of this rule, possession of an amusement or vending machine means either actual or constructive possession and control. To determine if a person has constructive possession and control, the following indicia shall be considered: right of access to the machine; duty to repair; title to the machine; risk of loss from damages to the machine; and the party possessing the keys to the money box. If, based on the indicia set out above, the owner of the machine has constructive possession and control, but the location owner has physical possession of the machine, then the operator shall be determined by who has the key to the money box and is responsible for removing the receipts. If both the owner of the machine and the location owner have keys to the money box and are responsible for removing the receipts, then they shall designate in writing who shall be considered the operator. Absent such designation, the owner of the machine shall be deemed to be the operator. (Exhibit 6, pg. 1-2)

The Amusement Game Machine Industry All operators must be aware of how much money an amusement machine produces in order to determine whether it should be replaced or rotated to another location when that is possible, for if games are not changed over time, patrons become bored and go elsewhere to play games on machines which are new to them. The sophistication with which operators track machine production varies.
It is in the economic self-interest of all operators to keep track of the revenues produced by each machine in some way. In general, amusement game machine businesses fall into one of three categories: free-standing independent operators, route vendors, and mall operators. Free-standing independent operators have game arcades located in detached buildings and offer patrons the use of amusement machines in much the same way that bowling alleys are usually free-standing amusement businesses. Like bowling alleys, they are designed to be destinations to which patrons travel with the specific purpose of recreation or amusement. They are usually independent businesses, not franchises or chains. Route operators place machines individually or in small numbers at other businesses, such as bars or convenience stores. People who use the machines are usually at the location for some other purpose. Those games are maintained on a regular basis by an operator who travels a route from game location to game location. The route operator or the location owner may empty the machine's money box. Mall operators tend to be parts of large chains of amusement game operators who rent store space in regional shopping malls. The mall is the patron's destination, and the game parlor is just one of the stores in the mall.

Amusement machines are operated by either coin or token. About 75 percent of independent amusement game operators use coin-operated machines. About 75 percent of the large chain operators found in malls use tokens. The cost of converting a coin-activated amusement machine to a token-activated amusement machine is about thirty dollars per machine. The mechanism costs $10 to $12; the rest of the cost comes from labor. Token operators must buy an original supply of tokens and periodically replenish that supply. The use of tokens enhances security because it gives the operator better control over cash and permits the operator to run "promotions," for example, offering 5 rather than 4 tokens for a dollar for a specific period in an attempt to increase traffic in the store. Depending on the number purchased, tokens cost operators between 5 and 10 cents each.

Token-activated machines accept only tokens. Coin-operated machines accept only a single denomination of coin. Change machines generally accept quarters and one, five and ten dollar bills. A change machine may be used either to provide players with quarters, which can be used to activate coin-operated machines, or it can be filled with tokens rather than quarters and become a token dispenser. In a token-operated amusement location, the only machines which contain money are the change machines used to dispense tokens. The game machines will contain only tokens.

Token machines record the insertion of each coin and bill by an internal meter as each denomination of coin or currency is inserted. Token dispensing machines record their receivables as follows: when one quarter is inserted, the machine records one transaction. When a fifty-cent piece is inserted, the machine records one transaction. When three quarters are inserted, the machine records three transactions. When a dollar bill is inserted, the machine records one transaction. When a five dollar bill is inserted, the machine records one transaction. When a ten dollar bill is inserted, the machine records one transaction. Token machine meters record separately, for each denomination, the total number of times coins or currency of that denomination are deposited in the machine.
The internal meters of token dispensing machines do not distinguish between the insertion of several coins or bills by one person and the insertion of single coins or bills by several persons. Token dispensing machines cannot distinguish the insertion of four quarters by one person on a single occasion from the insertion of one quarter by each of four persons at four different times. Similarly, the internal meters of amusement machines activated by coin rather than by token do not distinguish between the insertion of several coins or bills by one person and the insertion of single coins or bills by several persons. Coin-activated machines likewise cannot distinguish the insertion of four quarters by one person at one time from the insertion of one quarter by each of four persons at different times.

Coin operation has certain cost advantages. The operator avoids the cost of switching the machine from coin to token operation, for machines are manufactured to use coins, and avoids the cost of purchasing and replenishing a supply of tokens. The operator does not risk activation of his machine by tokens purchased at another arcade, which have no value to him, and can better take advantage of impulse spending. Coin-operated machines do not have a separate device for collecting tax, and it is not possible for an operator to fit games with machinery to collect an additional two cents on a transaction initiated by depositing a quarter in a machine.

There are alternative methods available to operators of amusement game machines to recapture the amount of the new sales tax they may otherwise absorb. One is to raise the price of games. This can be done either by setting the machines to produce a shorter play time, or by requiring more quarters or tokens to activate the machines. Raising prices will not necessarily increase an operator's revenues, because customers of coin-operated amusement businesses usually have a set amount of money budgeted to spend and will stop playing when they have spent that money. In economic terms, consumer demand for amusement play is inelastic. Amusement businesses could also sell tokens over the counter, and collect sales tax as an additional charge, much as they would if they sold small food items, such as candy bars, over the counter. Over-the-counter sales systems significantly increase labor costs. An amusement business open for 90 hours per week might well incur an additional $30,000 a year in operating costs by switching to an over-the-counter token sales system.

In a small coin-operated business, the operator often removes the receipts by emptying the contents of each machine into a larger cup or container, without counting the receipts from each machine separately, because it is too time-consuming to do so. But see Finding 17 above. With a token-operated business, the operator can determine the percentage of revenue derived from twenty-five-cent transactions, as distinct from token sales initiated by the insertion of one, five or ten dollar bills into token dispensing machines.

The proposed rule has the (unintended) effect of placing coin-operated amusement operators at a relative disadvantage in computing sales tax when compared to token-operated businesses. Token operators can establish that they are responsible for paying a tax rate lower than the 7.81 percent effective rate set in the rule because many of their sales are for one dollar, five dollars or ten dollars.
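How the per-denomination meter counts translate into the showing a token operator can make might be sketched as follows; the counts here are hypothetical, invented purely for illustration.

```python
# A sketch of the per-denomination metering described above, using hypothetical
# monthly counts (all figures invented for illustration). The point is that a
# token operator's meters can show what share of receipts came from insertions
# of one dollar or more -- the showing that supports a rate below the rule's
# 7.81 percent industry-wide figure.

meter = {25: 800, 100: 600, 500: 90, 1000: 40}  # denomination in cents -> insertions

receipts = {denom: denom * count for denom, count in meter.items()}
total = sum(receipts.values())
dollar_and_up = sum(v for denom, v in receipts.items() if denom >= 100)

print(f"total receipts: ${total / 100:,.2f}")
print(f"share from $1-and-up insertions: {dollar_and_up / total:.1%}")

# What no meter records: whether 800 quarter insertions were 800 patrons
# spending a quarter each or 200 patrons spending a dollar in quarters.
```

A coin route operator emptying each machine into a common container, as described above, has no equivalent record, which is the asymmetry the next finding addresses.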
The smaller businesses using coin-operated machines do not have the technological capacity to demonstrate that customers are spending dollars rather than single quarters. Consequently, coin operators will have an incentive to shift to token sales rather than pay the proposed rule's higher effective tax rate if a large percentage of their patrons spend dollars rather than single quarters.

For example, Mr. Scott Neslund is an owner of a small business which has 80 amusement machines at a freestanding token-operated location. He is atypical of small amusement game operators, because 75 percent of them use coin-operated machines rather than token-operated machines. Mr. Neslund can demonstrate that 92 percent of his sales are for one dollar or more. By applying the tax rate of six percent to those transactions, he pays substantially less than the proposed rule's effective tax rate of 7.81 percent. This is very significant to Mr. Neslund because, over the nine years from 1982 to 1990, his average profit margin was 7.77 percent. Although a flat 6 percent tax would have consumed 73 percent of that profit margin, if his businesses were on a coin-operated basis he would have been required to pay the proposed rule's 7.81 percent effective tax rate, which would have consumed 93 percent of his profit margin, leaving him with a very thin profit margin of 1/2 of 1 percent. The difference between a 1/2 of 1 percent profit margin and a 2 percent profit margin, on a percentage basis, is a four hundred percent difference. Mr. Neslund's average annual profit had been $24,000. The effective tax rate of 7.81 percent would take $22,300 of that amount, leaving an average annual profit of only $1,700. It is impossible to extrapolate from this single example and have confidence in the accuracy of the extrapolation, however.

The Department's Effective Tax Rate Study There is no data for the amusement game industry specific to Florida concerning the number of transactions occurring at specified price points, but there is national data available which the Department considered. There is no reason to believe that the Florida amusement game industry is significantly different from the national industry. Nationally, approximately 80 percent of all plays and 60 percent of all revenues come from single-quarter (twenty-five-cent) plays. The Department's study used the typical sale prices charged in the industry and the categories of coin-operated amusement games reported in the national survey. Using these, the Department derived an estimate of revenues by type of game for Florida. The effective tax rate the Department derived is the Department's best estimate of the price mix of transactions which occur through amusement machines. It is not itself an issue in this proceeding. Petitioners' counsel specifically agreed that they were not contesting the setting of the effective tax rate at 7.81 percent and presented no evidence that any other effective tax rate should have been set.

The Department's Economic Impact Statement Dr. Brian McGavin of the Department prepared, in July 1991, paragraphs 2, 3 and 5 of the economic impact statement for the proposed rules (Exhibits 14, 15 and 16), which concluded that proposed rules 12A-15.001, 12-18.008 and 12A-1.044 would have no effect on small businesses. The economic impact statements for all three proposed rules contain identical information and involve the same issues concerning economic impact. Before drafting the economic impact statement published with these rules, Dr.
McGavin had completed one other economic impact statement, had used a small manual which gave a general description of the process for developing economic impact statements, and had discussed the process with another economist, Al Friesen, and his supervisor, Dr. James Francis, the Department's director of tax research. Dr. Francis prepares or reviews more than a dozen economic impact statements annually and is well aware of the definition of small businesses found in Section 288.703(1), Florida Statutes. Dr. Francis reviewed Dr. McGavin's work and agreed with Dr. McGavin's conclusions.

Paragraphs 2, 3 and 5 of the economic impact statements for these rules state:

2. Estimated cost or economic benefits to persons directly affected by the proposed rule. The rule establishes effective tax rates for two categories of machines - 1) amusement machines, 2) vending machines. Amusement machines were not previously taxable (except during the Services tax period). * * * The costs of this rule are primarily compliance costs. The rules establish several compliance provisions: quarterly sales and use tax reports; submission of supporting information for these reports on electronic media; affixation of registration certificates to machines; and presentation of certificates by operators to wholesale dealers. The filing requirement is obviously an integral and necessary part of the sales tax collection process . . . . The costs of complying will be borne by operators. If the operators have previously computerized their records, the marginal compliance costs will be negligible. For a small operator who has not computerized his operations, the costs of minimally configured PC systems - including software and training - would be roughly $2,000. This could be a major expense for a small operator . . . . We do not have data which will permit us to estimate the proportion of non-computerized operators in this industry.

3. Effect of the proposed action on competition and on the open market for employment. * * * Given the low labor-intensity of this industry the overall effect should be very small. * * *

5. Impact of the proposed action on small business firms. Small business firms are not affected by the proposed action. (Exhibits 14, 15 and 16)

The Petitioners demonstrated that before Dr. McGavin prepared the economic impact statement he did not read section 120.54 on rulemaking, and he did not conduct any industry research or refer to any sources of information on the amusement game industry in Florida or nationally. He did not use any data to calculate or measure economic impact, consult textbooks, or refer to any outside sources or statistical information, nor did he talk with any industry experts or representatives. He did not obtain any information about the industry by distributing questionnaires to those in the industry, nor did he know whether there were differences in day-to-day operations between large and small amusement businesses or the different types of accounting and bookkeeping systems used by small businesses. He had not read Section 288.703, Florida Statutes, which defines a small business. He did not know the impact the 7.81 percent effective tax rate established by the rule would have on small business, and he did not analyze the cost difference businesses experienced between the sale of tokens by machine and the sale of tokens over-the-counter by an employee. To the extent it even entered into Dr. McGavin's thought process, Dr.
McGavin made the general assumption that token sales would either be made over the counter, in which case the sales tax could be separately collected, or possibly by selling fewer tokens per unit of currency.

When the Legislature enacted Chapter 91-112, Laws of Florida, and imposed the tax on the use of coin-operated amusement machines, it did not provide for any phasing in of the tax, nor for any tiering of the tax based on the size of the taxpayers. Nothing in the language of the statute imposing the tax indicates that the Legislature believed that there was a distinction to be made in the taxation of larger and smaller businesses which provide the same service, viz., use of amusement machines. The Department does permit certain accommodations to businesses which have a small volume of sales. A business may report quarterly rather than monthly if its tax liability is less than $100 for the preceding quarter, and if the tax liability is less than $200 for the previous six months, a dealer may request semiannual reporting periods. Regardless of size, a business with more than one location in a county may file one return. Both of these provisions may lessen the burden of complying with the tax imposed on the use of coin-operated amusement machines.

The Economic Impact Analysis Performed For The Challengers By Dr. Elton Scott Dr. Elton Scott is an economist and a professor at the Florida State University. The Petitioners engaged him to evaluate the economic impact statement the Department had prepared when these proposed rules were published. After conducting his own analysis, Dr. Scott wrote a report in which he determined that the Department's economic impact statement was deficient. According to Dr. Scott, one must understand an industry to determine whether an economic impact flows from a regulation and to determine the magnitude of any impact or the differential impact the regulation may have on large and small businesses. To prepare his own economic impact analysis, Dr. Scott first obtained information about the operational characteristics of the industry by speaking directly with a handful of industry members. He developed a questionnaire that tested the experience and background of operators so that he could evaluate the reliability or accuracy of information he received from them. He then asked additional questions about the operators' individual businesses and questions about differences between large and small operators within the industry.

Dr. Scott's testimony outlines the factors which should be used to make an economic impact statement as useful as possible, but his testimony does not, and cannot, establish minimum standards for what an economic impact analysis should contain. Those factors are controlled by the Legislature, and no doubt the requirements imposed on agencies could be more onerous and, if faithfully followed, could produce more useful economic impact statements. The economic impact small businesses will bear is caused by the statute, not by the implementing rule, with the possible exception of the electronic filing requirement, which has not been challenged in any of the three proceedings consolidated here. Large businesses have several advantages over smaller ones.
Large businesses have sophisticated accounting systems, whether they use token- or coin-operated machines, which allow tracking not only of gross receipts but also of kinds of plays, and which enhance the operator's ability to establish that the tax due is lower than the effective tax rate, while the less sophisticated systems of metering receipts in coin-operated small businesses require reliance on the effective tax rates. (Exhibit 9 pg. 4) Large businesses may extend the useful life of a game machine by rotating the machine from one location to another, and may deal directly with manufacturers in purchasing a larger number of games or machines and therefore obtain more favorable discounts. Small businesses cannot rotate games if they have only one location, and purchase at higher prices from manufacturers. In general, smaller businesses have lower profit margins than larger businesses. All of these advantages exist independently of any rule implementing the sales tax statute.

Florida Laws (10) 120.52, 120.54, 120.68, 212.02, 212.031, 212.05, 212.07, 212.12, 288.703, 689.01 Florida Administrative Code (5) 12-18.008, 12A-1.004, 12A-1.044, 12A-15.001, 12A-15.011
PSYCHOTHERAPEUTIC SERVICES OF FLORIDA, INC. vs DEPARTMENT OF CHILDREN AND FAMILY SERVICES, 05-002800BID (2005)
Division of Administrative Hearings, Florida Filed: Tallahassee, Florida Aug. 03, 2005 Number: 05-002800BID Latest Update: May 25, 2006

The Issue Whether Department of Children and Families' (DCF's) intent to award nine contracts for Florida Assertive Community Treatment (FACT), as set forth in Request for Proposal No. 01H05FP3 (RFP), to the Intervenors herein was contrary to that Agency's governing statutes, its rules or policies, or the specifications of the RFP, thereby being clearly erroneous, contrary to competition, arbitrary, or capricious.1/

Findings Of Fact General Facts On April 6, 2005, Respondent DCF's Mental Health Program Office issued a 215-page RFP 01H05FP3 for "Florida Assertive Community Treatment (FACT) Programs for Persons with Severe and Persistent Mental Illnesses Procurement of February and September 2001 Awards." The FACT program is Florida's version of a nationally known model of community mental health intervention for individuals with severe and persistent mental illnesses known as the Program for Assertive Community Treatment (PACT). The PACT model of intervention manual published by the National Association for the Mentally Ill was the basis for developing Florida's adherence to the PACT model. The RFP specifies that proposers commit to PACT's evidence-based team approach.

This RFP is not a statewide procurement. It is a single document seeking proposals for 17 separate agency districts/regions, of which Petitioner PSFI was then operating as the incumbent FACT provider in seven districts. The April 6, 2005, RFP contemplated that DCF would contract in each district for an initial three-year term, with a potential three-year renewal provision. The total cost for these contracts, if renewed, is in excess of $100,000,000.00, making this a procurement of substantial size for DCF's Mental Health Program Office. The April 6, 2005, RFP is DCF's second attempt to procure FACT contracts. DCF previously posted and withdrew an RFP for the same 17 contracts, due to concerns that its questions could give certain vendors an unfair advantage.

All vendors receiving the RFP had an opportunity to submit written questions about the RFP's contents. Several vendors submitted written questions. The questions and DCF's answers became part of the RFP and were published to all potential vendors prior to the submission of responses. No potential vendor or proposer protested the written specifications and terms of the instant RFP. Therefore, the specifications are not at issue herein. On May 23, 2005, DCF opened the proposals. On June 27, 2005, DCF posted the results of its evaluation(s) in a document entitled "Proposal Tabulation and Notice of Intent to Award," indicating each applicant's score in each of the 17 divisions/regions; indicating it would award to the proposer with the highest score; and providing a mechanism to resolve ties.

PSFI has protested DCF's Notice of Intent to Award for the following districts/regions in the April 6, 2005, RFP. The respective scores and intents to award are indicated:

District 4 - Jacksonville (highest score - MHRC)
Suncoast Region - New Port Richey (MHRC and Harbor tied for the highest score; Harbor is to be awarded the contract based on a tie-breaker procedure)
Suncoast Region - Pinellas (highest score - MHRC)
Suncoast Region - Hillsborough (highest score - MHRC)
District 7 - Rockledge (highest score - MHRC)
District 8 - North Fort Myers (highest score - Coastal)
District 8 - Naples (highest score - MHRC)
District 11 (south) - Miami (highest score - Bayview)
District 15 - Stuart (highest score - MHRC)

PSFI was the incumbent provider in seven of the foregoing nine protested districts. Bayview was the incumbent provider in the southern region of District 11. PSFI did not protest the District 3 - Gainesville Notice of Intent to Award, wherein PSFI was the successful responder/proposer. Therefore, despite rhetoric to the contrary at hearing, that region, where the same RFP and evaluation procedures accrued to PSFI's benefit, is not at issue herein.
Section 2 of the RFP provided: The department reserves the right to reject any and all proposals, withdraw this RFP or to waive minor irregularities when to do so would be in the best interest of the State of Florida. Minor irregularities are defined as a variation from the request for proposal terms and conditions that does not affect the price of the proposal, or give the prospective vendor an advantage or benefit not enjoyed by other prospective vendors, or does not adversely impact the interest of the agency. At its option, the department may correct minor irregularities but is under no obligation to do so whatsoever. Correction or waiver of minor irregularities shall in no way modify the RFP requirements. Stephen Poole is Senior Management Analyst and Supervisor of DCF's Mental Health Program Office. Mr. Poole has been involved with the FACT program since 2000, the first year DCF engaged in a statewide procurement of the program. At all times material to the instant RFP, Mr. Poole's main responsibility was to oversee the FACT initiative. He principally authored the RFP at issue. He had drafted three to four RFPs before this one. In developing the instant RFP, Mr. Poole followed DCF's established internal review procedure. He was the sole point of contact for the instant RFP. After reviewing an Agency for Health Care Administration RFP whose subject was closely aligned in the mental health area, Mr. Poole selected a 0-10 scoring range for the instant RFP, instead of DCF's historical 0-4 range, to allow individual reviewers more flexibility to score each item in a way that reflected that individual's assessment of each proposal. PSFI's objection to this 0-10 scoring range amounts to an argument that, "It's not the way we've always done it before," and is without merit. DCF used its expertise and discretion to design the 0-10 rating methodology to give qualified, unbiased scorers latitude to use their own expertise while scoring the proposals. The Agency intended evaluators to have "great latitude," based on their own individualized background and experience, to score each response to each question within each proposal. It was a goal of this RFP that each evaluator would exercise his or her specialized professional education, training, and experience, thereby getting the best result for the Agency. By averaging the scores of three evaluators for each district/region, DCF intended to blend areas of expertise and minimize any irregularities that might turn up in an individual evaluator's scoring. All responsive proposals were to be reviewed and rated for Fatal Criteria and Qualitative Requirements by a review panel of DCF personnel. Only proposals meeting the threshold test of Fatal Criteria were reviewed for Qualitative Requirements. Because the basic requirements for a FACT team are the same from area to area, proposers filing in multiple districts/regions submitted the same or almost identical answers to many of the questions asked of them in the RFP. For instance, PSFI submitted ten proposals in response to the RFP. These had identical text and appendices for RFP issues that were not district- or region-specific. Identical text applied to PSFI's responses to Qualitative Requirements 1, 2, 5, 7-14, 16-17, 19-23, 25, and 27-29. Other answers were tailored to the specific districts/regions.
MHRC's proposals for the FACT contracts in District 7 - Rockledge, District 15 - Stuart, Suncoast Region - Hillsborough, Suncoast Region - New Port Richey, Suncoast Region - Pinellas, and District 8 - Naples were identical, with the exception of the identification of the district/region number and the name of the FACT contract that is the subject of each proposal. PSFI also submitted a proposal for each of those contracts. MHRC's proposal to retain its FACT contract in District 4 - Jacksonville is essentially the same as the other six proposals it submitted, except that the District 4 proposal describes aspects of MHRC's current FACT team in the present tense, whereas the other six proposals describe aspects of the proposed FACT teams in the future tense. PSFI also submitted a proposal for that contract/district. Bayview submitted only one proposal, and that was for renewal of its FACT team in District 11 (South) - Miami. Coastal and PSFI submitted proposals for the FACT contract in District 8 - North Fort Myers. Coastal was the only proposer for District 8 - Charlotte. DCF appointed an Evaluation Team to review the 34 proposals received. The review was pursuant to the time line set forth in the RFP, as amended by an addendum issued by DCF. The team of three evaluators always included two of DCF's Central Office employees, Kim Munt and Jane Streit, who each reviewed all 34 proposals for all 17 FACT contracts. For each contract, the third evaluator was a DCF employee selected by the DCF FACT program supervisor in the district/region office where the respective contract would be carried out. The final score for each of 47 questions was the average score of the three evaluators. Ultimately, the district/region office evaluators were Diovelis Stone (District 1), Ken Birtman (District 2), Lisa Cue (District 3), Gene Costlow (District 4), Robert Parkinson (Suncoast Region - Pinellas), Michael Wade (Suncoast Region - New Port Richey), Geovanna Dominguez (District 7), Linda Pournaras and Marcie Gillis (District 8) (see Findings of Fact 40-42 and 95-98), Joanna Cardwell (District 11), and Carol Eldeen-Todesco (District 15). PSFI complains that all, or most, of the foregoing evaluators had never worked on an RFP before and were insufficiently trained to evaluate this particular RFP, or were not trained for it with mathematical precision. On the contrary, all the evaluators received the training specifically designed for this RFP; many had prior FACT experience; and some had prior RFP experience, as related infra. As to the several evaluators' individual abilities to analyze problems associated with FACT, no competent, credible evidence demonstrated that any reviewer was deficient in cognitive ability, thought processes, or reason; nor was it demonstrated that there was any specific bias or favoritism practiced by any evaluator. Specifically, although DCF evaluator Jane Streit's DCF employment did not deal directly with FACT teams, Ms. Streit has earned a Ph.D. in clinical psychology and has 12 years' experience working in the mental health field. DCF evaluator Kim Munt possesses a Master of Science degree and was an Operations Review Specialist in the combined contract management unit for DCF's program offices of Substance Abuse and Mental Health, where she reviewed the model contract attached to the RFP as Attachment One. She had also participated in three previous RFPs.
Other individual qualifications of the district evaluators specifically challenged in this proceeding are described infra. Although there is testimony that having the evaluators read the RFP before the Initial Meeting (when the evaluators received their specific instructions), followed by formalized training in how to evaluate the proposals, with feedback and testing of the evaluators' understanding of that training, might have been a more desirable approach than the one used, there is no legal requirement for such institutionalized training of bid evaluators, nor is there any other requirement that agencies use "professionalized" bid evaluators. Qualitative Requirement 26 related to the financial stability of the proposing vendors, and was scored by three other evaluators: Cindy Grammas, Janet Holley, and Phyllis McMillman. Petitioner has not challenged any of the scores given for Question 26. On or about May 23, 2005, Mr. Poole read instructions on the proper procedures for reviewing and scoring the proposals to the evaluators at an Initial Meeting of Evaluators. The attending evaluators had an opportunity to ask questions. There was an opportunity for discussion, but no detailed discussion ensued. Afterwards, the evaluators returned to their work locations and independently reviewed the proposals assigned to them. At the Initial Meeting, each evaluator certified that the instructions had been reviewed and discussed as follows:

Instructions and Certification for Evaluation Team For Request for Proposal (RFP) 01H05FP3, Released 4/06/05

I agree to read and apply the following list of instructions detailing my responsibilities as an evaluator for RFP 01H05FP3:

- I will read RFP #01H05FP3 and any addenda in preparation for scoring proposals in response to this RFP.
- I will review the scoring methodology contained in subsection 6.3 entitled, "RFP Rating Methodology," specifically the definition of values attributed to the scores, "0", "1-3", "4-6", "7-9", and "10", as applicable in the scoring of all questions.
- I understand a mandatory review of any scoring variance of more than a value of "7" will take place when it is reported.
- I understand the RFP is the sole source for evaluating all proposals.
- I understand the term "Considerations" is to be used as a guide to assist the evaluator.
- I understand that vendors not currently operating a FACT team will respond to questions as if they will be operating a FACT team in the future, or that they will give information about other programs they provide to demonstrate their responsiveness and understanding of the question at issue, even though it may not be directly linked to the operation of a FACT team.
- I will not use any personal or professional opinions, knowledge or perceptions, either positive or negative, that I may possess about any of the vendors submitting proposals that I am evaluating.
- I understand I have the authority to cease searching any proposal for responses to questions if the response is not in the section indicated.
- I understand that I am to evaluate ALL questions in the RFP with the exception of question number 26, which will be scored by auditors and/or accountants.
- I understand I must record a justification for each score that is to be included in the "Note to Evaluator" section of the Scoring Protocol, and to minimally include a page number reference and/or a brief, written rationale.
- I understand that I must sign a Conflict of Interest Questionnaire/Statement indicating that I have no conflict of interest with any of the vendors submitting a proposal.
- I understand that if a conflict of interest exists, I am required by these instructions to disclose such conflict and excuse myself from scoring any proposals in which a conflict exists.
- I understand that I must sign each and every scoring sheet, known as the Scoring Protocol.
- I understand that I am to begin the scoring of each proposal with a base score of "0" and build a higher score, such as a "1", "2", "3", "4", "5", "6", "7", "8", "9", or "10", as applicable, to be awarded based on the merits of the response.
- I understand the definitions of each of the values used in the scoring of the protocol and acknowledge a copy of those definitions was provided to me.
- I understand that I am to score the proposals independent of other proposals.
- I understand that I am not to discuss my scoring with other evaluators and that I am not to ask questions of other evaluators during the review and scoring of proposals.
- I understand I am permitted to direct questions to Stephen Poole, FACT procurement officer for this RFP, concerning the scoring of the proposals and that I was provided the following phone numbers to call should I have a question: Office phone: (850) 410-1188 or SC 210-1188; home phone: (850) 422-1109.
- I understand that I must attend the Debriefing Meeting scheduled for June 23 and June 24, 2005, in person.
- I understand that ALL written documents that I have in my possession concerning RFP #01H05FP3 must be returned at the Debriefing Meeting. These documents include any and all copies of RFPs, any and all proposals, and any and all Scoring Protocols and notes that may have been made separately but not included on the Scoring Protocols.

I certify that these instructions were discussed openly in a publicly scheduled meeting and that I affix my signature to this certification indicating my understanding of and compliance with the instructions. Signature Date Representing

Each evaluator also had to sign a Conflict of Interest Form designed to assure that he/she had no conflict of interest with any of the vendors submitting proposals that he or she would evaluate. Robert (Rob) Parkinson, the district evaluator for the Suncoast Region - Pinellas, inadvertently failed to check "yes" or "no" to the question, "Are there any other conditions that may cause a conflict of interest?" Mr. Poole did not notice Mr. Parkinson's omission when he collected the conflict of interest statements. However, by signing the Instructions and Certification of the evaluation team, Mr. Parkinson certified that if a conflict of interest existed, he was required to disclose such conflict and to excuse himself from scoring any proposals in which a conflict existed. There is no affirmative evidence that Mr. Parkinson had any conflict of interest in his performance as an evaluator or was biased or prejudiced for or against any of the competing proposers. Therefore, the missing check mark is a minor irregularity which does not evidence bias, prejudice, or preference, and the check's absence should not discount Mr. Parkinson's participation as an evaluator or any scores he rendered. (See also Findings of Fact 79-84.)
Linda Pournaras and Marcie Gillis discussed the RFP they had downloaded from the DCF website during their car trip to Tallahassee for the Initial Meeting of the evaluators, before they had received any instructions or signed their certifications at the Initial Meeting with Mr. Poole. Ms. Pournaras related her prior RFP experiences, but was clear that Ms. Gillis should listen carefully at the Initial Meeting, follow only those instructions, score independently, and not consult anyone but the authorized contact person (Mr. Poole) after the Initial Meeting closed. Despite PSFI's characterization of this conversation and its speculation as to the conversation on Ms. Pournaras' and Ms. Gillis' return trip, the depositions of both women show that Ms. Gillis was not instructed to rely on Ms. Pournaras' interpretations of any RFP and did not do so, even though Ms. Pournaras was her supervisor. The evidence shows that what she was permitted to score, Ms. Gillis scored independently. However, Ms. Gillis, the originally-assigned district evaluator for her district, checked "yes" to the question, "Have you been employed by any of the potential bidders/entities listed within the last 24 months?" By way of full disclosure, she also wrote, "I worked for PSF[I] Naples from 2-04 to 2-05. I do not have a conflict of interest however." Mr. Poole did not replace Ms. Gillis with someone who had not worked for one of the proposers and did not substitute another District 8 representative. Ms. Pournaras testified that she thought she spoke with Mr. Poole about replacing Ms. Gillis, but considering the evidence as a whole, all that is clear is that Ms. Pournaras and her own supervisor decided that Ms. Gillis would not review any PSFI proposals and that Ms. Gillis would review the Coastal proposal. As a result, Ms. Gillis reviewed an unopposed Coastal proposal for the Charlotte County contract and reviewed the Coastal proposal for North Fort Myers. Ms. Pournaras reviewed the PSFI proposal competing with the Coastal proposal for North Fort Myers and the PSFI and MHRC proposals for Naples. At the Initial Meeting, all the evaluators were given blank Scoring Protocol sheets to use in recording their scores for each of the Qualitative Requirements. Each was 48 pages long. Page One contained a reprint of the Rating Methodology set forth in Section 6.3 of the RFP. Each of the remaining pages provided a scoring sheet for an individual Qualitative Requirement from Sections 6.3.1 to 6.3.10 of the RFP, comprised of (a) a reprint of the question and related considerations; (b) a place to record a numerical score; and (c) the "Note to Evaluator" section, which required the evaluator to state: "(2) Where in the proposal you relied upon for the score: (cite page number & paraphrase rationale for score)." (Emphasis supplied). Because of the detail of the foregoing items that went with each individual evaluator during each scoring, the fact that not every evaluator kept a copy of the Instructions and Certification to refer to while scoring the proposals is without any practical significance. The evaluators were also instructed in the Instructions and Certification to "record a justification for each score that is to be included in the 'Note to Evaluator' section of the Scoring Protocol, and to minimally include a page number reference and/or a brief, written rationale." (Emphasis supplied).
In fact, some evaluators provided a comprehensive written justification; some provided a page number only; some provided both; occasionally, someone slipped up and provided no justification; and one district evaluator only provided justifications where he felt a particular scoring range required it. The last was Mr. Costlow in District 4 - Jacksonville. Mr. Costlow felt encouraged to present as much detail as possible, but he also felt he was only required to justify scores he assigned below and above the average range of 4-6, and that because the midway range denoted an adequate or satisfactory proposal, justifications there were optional. Nonetheless, his comments in justification mostly relate to a proposal's being satisfactory. His justifications were not considered significant, nor were his scores at great variance with other scores during the Debriefing, when adjustments were made to resolve any irregularities in the scoring system. Mr. Poole testified that the Note to Evaluator Section should have included the same "and/or" language contained in the Instructions and Certification, but he did not explain this to the evaluators. He did not interpret what "and/or" meant in the Instructions and Certification. He felt at the Initial Meeting and at the Debriefing (see Findings of Fact 82, 105, 110, and 112-113) that it was up to the evaluators to justify their answers as they saw fit within the options given. There is no meaningful difference between the "Instructions and Certification" and the "Note to Evaluator" sufficient to invalidate the actual "scoring" of the several evaluators, despite one item using "&" and the other using "and/or." Therefore, it is also determined that the inconsistencies and occasional omissions of some evaluators on the "justification" portions are minor and waiveable irregularities. The RFP, in SECTION 6: PROPOSAL EVALUATION CRITERIA AND RATING SHEET, required DCF to review and rate each responsive proposal for Fatal Criteria and Qualitative Requirements in accordance with the evaluation criteria set forth in the RFP. The RFP Rating Methodology for the proposals was set forth in SECTION 6.3 of the RFP, in pertinent part as follows:

When vendors' proposals are screened and meet fatal criteria requirements, the qualitative requirements will be scored based on the factors listed below:

No Capability = No or little capability to meet RFP requirements. (Point Value = 0)
Poor Capability = Poor or marginal capability to meet RFP requirements. (Point Values = 1 through 3)
Average Capability = Average capability to meet RFP requirements. (Point Values = 4 through 6)
Above Average Capability = Above average capability to meet RFP requirements. (Point Values = 7 through 9)
Excellent Capability = Excellent capability to meet RFP requirements. (Point Value = 10)

The maximum number of points that can be scored is 470. Proposals failing to achieve at least 75 percent, or 353 points, of the 470 total points will not be eligible for a FACT contract. One of the flaws PSFI assigns to this RFP and the bidding process is that Mr. Poole provided the evaluators no definitions of the foregoing terms. Yet it seems this is precisely where the flexible nature of the RFP was intended to be addressed by each individual evaluator's specialized education, training, and experience.
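The tabulation arithmetic these provisions describe is simple to verify. The following is a minimal sketch, in Python, of the scoring rules stated above: 47 questions, each scored 0-10 by three evaluators whose scores are averaged, a 470-point maximum, and a 353-point (75 percent) eligibility floor. The evaluator scores shown are hypothetical illustrations, not figures from the record.

    import math

    # Scoring rules stated in SECTION 6.3 of RFP 01H05FP3.
    NUM_QUESTIONS = 47                       # Qualitative Requirements 1-47
    MAX_TOTAL = NUM_QUESTIONS * 10           # 470 points possible
    THRESHOLD = math.ceil(0.75 * MAX_TOTAL)  # 352.5 rounds up to 353 points

    def final_score(scores_by_evaluator):
        """Average the evaluators' scores question by question,
        then sum the 47 averaged scores into a final tabulation."""
        averages = [sum(q) / len(q) for q in zip(*scores_by_evaluator)]
        return sum(averages)

    # Three hypothetical evaluators, each scoring all 47 questions.
    evaluators = [[8] * 47, [7] * 47, [9] * 47]
    total = final_score(evaluators)
    print(f"{total:.2f} of {MAX_TOTAL}; eligible: {total >= THRESHOLD}")
    # -> 376.00 of 470; eligible: True

Averaging in this way is what allowed one evaluator's idiosyncratic scale (such as Ms. Streit's narrow starting range, discussed below) to be blended with, rather than dominate, the final tabulation.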
The Qualitative Requirements of the RFP are set forth in paragraph 6.3.1, sub-paragraphs 1-47 of the RFP. Those requirements required DCF to determine the existence and quality of "evidence" in each responsive proposal of the Qualitative Requirements set forth in the RFP. PSFI faults the RFP and the bidding process because the RFP contained no definition of "evidence." However, at the Initial Meeting of Evaluators, Mr. Poole had instructed the evaluators to review the Responses to Written Inquiries, which had also become part of the RFP (see Finding of Fact 6), and which contained the following explanation of "evidence" to be used by evaluators to score any question:

Written Inquiry No. 25: What would you consider "evidence" in cases where statistics or documents can't be provided? For example, in the case of "evidence that the individual is the focal point of all activity generated by the team?"

Response: Evidence does not necessarily need to be statistics or documents but a detailed explanation about the vendor's vision, values, policies, procedures, and how they directly relate to the individual being the focal point of all activity generated by the team. Any statistics or documents directly related to the issue would strengthen the response.

On its face, the first Consideration under Qualitative Requirements 33-38 and 40-41 sought evidence that the proposer could meet each respective performance measure. Some of the evaluators interpreted the thrust of these items to request that the vendor propose a plan for meeting the performance measure. Others looked in the proposals for evidence that the proposer had a good past performance record and a plan for performance of the present RFP. Although PSFI elicited a variety of explanations of how different evaluators' respective thought processes worked, no inconsistency by a single evaluator among proposals was affirmatively demonstrated. No inconsistency within a single district or region (except for North Fort Myers, see infra) was affirmatively demonstrated. No favoritism toward, or prejudice against, any proposer was affirmatively demonstrated. No scorer preference for incumbent providers with a "history" was demonstrated. PSFI alleges as a flaw in this RFP and its bidding process that multiple evaluators created scoring scales for themselves by restricting themselves to narrow portions of the 0-10 scale. Each of the 47 questions provided several delineated "Considerations" for the evaluator to use as guidelines in determining whether the evidence offered by the proposer demonstrated the specific information requested. None of the 47 questions could be answered "yes" or "no." Each of the questions required a narrative response, except for Question 26. (See Finding of Fact 33.) One of the RFP instructions (see Finding of Fact 35) stated that the evaluators were to begin scoring each question with a base score of zero and build up to 10, based on the merits of the response. PSFI established that Ms. Streit, one of the two central office evaluators, did not do this. Ms. Streit described her scoring as "narrow," and she believed that an experienced vendor would likely start with an average score, from which she would grade higher if the response merited more points. The scores she assigned ranged from six to nine. Although she did not begin scoring each question with a base score of zero, she did start each proposal's analysis at the average range (4-6) and adjusted her score based on the strength of the response. Her scores for PSFI's proposals ranged from 365 to 368 and for the MHRC proposals from 367 to 370.
This amounts to a difference of three points between these competitors. Ms. Streit's rationale was consistently applied for each of the 46 questions she reviewed. Her approach was technically contrary to the instructions, but it did not unbalance the scores. It did not tilt the playing field. It was a distinction without a significant difference.5/ Assuming, arguendo, but not finding, that PSFI were entitled to three additional points across the board, it would not alter the final tabulation in any of the districts/regions where MHRC was the high scorer. MHRC and PSFI were the only proposers scored in District 4 - Jacksonville, in District 7 - Rockledge, in District 8 - Naples, and in District 15 - Stuart. In Suncoast - New Port Richey, Harbor and MHRC were tied for first place against PSFI. The scoring in District 8 - North Fort Myers was an anomaly. (See Findings of Fact 95-98 and Conclusion of Law 135.) Kim Munt, the other central office reviewer, conceded that she might have become more lenient in her scoring over time and, due to the sheer size of the proposals and the need to adhere to the scoring process, she may have had a different "focus" from time to time. Ms. Munt initially reviewed proposals randomly, but halfway through the 34 proposals, she began scoring by district. Ms. Munt scored the following MHRC and PSFI proposals:

MHRC:
June 20 (Stuart) 380
June 20 (Naples) 380
June 18 (District 4) 379
June 16 (Pinellas) 388
June 12 (New Port Richey) 379
June 11 (Hillsborough) 379
May 24 (Rockledge) 362

PSFI:
June 21 (Stuart) 346
June 17 (District 4) 338
June 13 (Hillsborough) 345
June 10 (New Port Richey) 361
June 9 (Pinellas) 357
June 9 (Naples) 358
May 31 (Rockledge) 314

The above table indicates that Kim Munt gave her lowest score for MHRC on May 24 for Rockledge (362), and she gave her lowest score for PSFI on May 31 for Rockledge (314). While she may not have been "lenient" in scoring the Rockledge proposals, it appears that Ms. Munt was not "more lenient" as she evaluated the proposals. She consistently scored the MHRC proposals higher than the PSFI proposals for the numerous districts for which services were sought by DCF. In fact, MHRC received its lowest score (362) from Ms. Munt in the Rockledge competition, which score of 362 was higher than any score received by PSFI, even though every PSFI proposal was scored later, when Ms. Munt was allegedly "more lenient." Rearranging the foregoing information, Ms. Munt's scores display a scoring pattern that shows no consistent correlation to which proposer's proposal was scored first. Also, Harbor, which tied with MHRC in Suncoast - New Port Richey, was scored on June 11, 2005, in between Ms. Munt's scorings of MHRC and PSFI. (See Findings of Fact 73-78.)

PSFI-Pinellas June 9: 357; MHRC-Pinellas June 16: 388 (higher, 7 days later)
PSFI-New Port Richey June 10: 361; MHRC-New Port Richey June 12: 379 (higher, 2 days later)
MHRC-Stuart June 20: 380; PSFI-Stuart June 21: 346 (lower, 1 day later)
PSFI-Naples June 9: 358; MHRC-Naples June 20: 380 (higher, 11 days later)
PSFI-District 4 June 17: 338; MHRC-District 4 June 18: 379 (higher, 1 day later)
MHRC-Hillsborough June 11: 379; PSFI-Hillsborough June 13: 345 (lower, 2 days later)
MHRC-Rockledge May 24: 362; PSFI-Rockledge May 31: 314 (lower, 6 days later)

PSFI was scored before MHRC in four of the seven contracts in dispute, and in each of those instances, MHRC still scored better.
However, in two of those instances, and in four of the seven districts overall, there was only one to two days' difference in the scoring dates. Ms. Munt believed, and it is only logical, that any loss of focus or any margin for inconsistency would be less where there is less time between scorings (see Finding of Fact 106, n.6); additionally, Ms. Munt consistently scored MHRC higher than PSFI, whether her rating date for MHRC preceded, or was subsequent to, her rating date for PSFI. MHRC's scores by Ms. Munt are compact and vary only from 362 to 388; of these, five MHRC scores fall between 379 and 380. PSFI's scores by Ms. Munt are less compact. They vary from 314 to 361 points, and PSFI received its next-to-lowest score (346) on the last day of Ms. Munt's scoring. The narrow range of Ms. Munt's MHRC scores June 11-20, from 379 to 388 (nine points), is reasonable, given that MHRC's proposals were virtually identical. Her PSFI scores, June 9-21, range from 338 to 361 (a greater spread of 23 points), and present some cause for concern. Nonetheless, given the evidence as a whole and the fact that PSFI's proposals contained identical responses to only 22 questions, the diversity in her scores cannot be determined to be unfair, capricious, or arbitrary. PSFI's theory of "bias via the increased leniency of central office evaluator Ms. Munt" is not proven. Some evaluators' scores show variations in scoring for identical proposals with similar provisions, but these are explainable by the reasons stated above, by other differences by district, and by innocent human error or confusion in an evaluation as complex as this one. In the absence of some direct evidence of arbitrariness, capriciousness, or bias, or some clear demonstration that these variables could have altered the final tabulation in any district/region, these minor irregularities are of no practical significance and may be waived.

Facts Limited to District 4 - Jacksonville

MHRC and PSFI were the only providers who had proposals scored by DCF for District 4 - Jacksonville. For District 4 - Jacksonville, MHRC received an averaged score of 377.00 for its proposal, and PSFI received an averaged score of 366.00. DCF's District 4 - Jacksonville evaluator was Gene Costlow. Mr. Costlow had no preference for any vendor, scored all competitors similarly, and believed MHRC provided more information than had been requested in the RFP. The scores for MHRC and PSFI are as follows:

              MHRC   PSFI
Kim Munt      379    338
Jane Streit   370    365
Gene Costlow  355    325

Facts Limited to Suncoast Region - New Port Richey

DCF scored three proposals for the Suncoast Region - New Port Richey FACT contract, with Intervenor Harbor receiving a score of 379.00, MHRC receiving a score of 379.00, and PSFI receiving a score of 362.67. In Suncoast Region - New Port Richey, the scoring reflects a first place tie between Harbor and MHRC, making PSFI the third place provider. DCF notified the providers in the Proposal Tabulation and Notice of Intent to Award that it intended to post results of the tiebreaker evaluation on Monday, July 11, 2005. DCF broke the tie and later noticed its intent to award the Suncoast Region - New Port Richey FACT contract to Harbor. DCF's Suncoast Region - New Port Richey evaluator was Mike Wade. Ms. Munt scored PSFI on June 10, 2005 (361); scored Harbor on June 11, 2005 (393); and scored MHRC on June 12, 2005 (379). This is a tight period of "focus," so no "more lenient" trend is likely. The scores do not get progressively higher each day, so no "more lenient" trend is evident in her scores.
In Suncoast Region - New Port Richey, the scores for MHRC, Harbor, and PSFI are as follows:

              MHRC   Harbor   PSFI
Kim Munt      379    393      361
Jane Streit   367    369      370
Mike Wade     364    363      347

Facts Limited to Suncoast Region - Pinellas

DCF scored five provider proposals for the Suncoast Region - Pinellas FACT contract. MHRC received a first place score of 396.00. A provider known as "Suncoast Center" received a second place score of 370.00. PSFI received a third place score of 352.00. A fourth place score of 349.00 was assigned to "Northside." A fifth place score of 315.67 was assigned to "Directions for Mental Health." As such, DCF has noticed its intent to award the Suncoast Region - Pinellas FACT contract to MHRC, and PSFI is the third place proposer for that FACT contract. Neither Suncoast Center nor Northside has intervened. DCF's Suncoast Region - Pinellas evaluator was Robert (Rob) Parkinson. Mr. Parkinson had worked with PSFI FACT teams, but he had no preference among vendors. He read the RFP several times before the Initial Meeting. He evaluated consistently. Mr. Parkinson independently scored all the proposals he reviewed on June 17, 2005, except for Question 27. At the Debriefing Meeting, he discovered that the Question 27 Protocol Sheet was missing from his initial scoring packet. He got the necessary sheet. He took time to review relevant portions of each proposal and then left the meeting room to score Question 27 on all the proposals. He took about 25 minutes to score that one question on the several proposals assigned to him, and he did this before any debriefing of scores began for his area of the state. He did not discuss his, or anyone else's, score at any time other than as permitted at the part of the Debriefing Meeting for his part of the state. He did not hear any other person's scores for his part of the state called out before he had scored all the proposals for Question 27. He is found to have scored independently. For this part of the state, Ms. Munt scored all the ranked proposers between June 8, 2005, and June 16, 2005, so that she was scoring every one to three days in this area, and her "focus" was therefore fairly tight. The scores for MHRC and PSFI are as follows:

               MHRC   PSFI
Kim Munt       388    357
Jane Streit    368    370
Rob Parkinson  405    319

Facts Limited to Suncoast Region - Hillsborough

DCF scored four proposals for the Suncoast Region - Hillsborough FACT contract, which included MHRC with a score of 376.00, an entity known as "Mental Health Care" with a score of 362.00, PSFI with a score of 358.00, and an entity known as "Northside" with a score of 344.67. DCF noticed its intent to award the Suncoast Region - Hillsborough FACT contract to MHRC, and PSFI is the third place proposer for that FACT contract. Neither Mental Health Care nor Northside has intervened. DCF's Suncoast Region - Hillsborough evaluator was Mike Wade. Ms. Munt scored PSFI after MHRC, and scored Northside after she scored PSFI, so no increasing leniency is shown by her scores in this locale. The scores for MHRC and PSFI are as follows:

              MHRC   PSFI
Kim Munt      372    345
Jane Streit   367    371
Mike Wade     362    348

Facts Limited to District 7 - Rockledge

Intervenor MHRC and PSFI were the only providers that had proposals scored by DCF for District 7 - Rockledge, with MHRC receiving a first place score of 395.42 and PSFI receiving a second place score of 378.67. DCF has noticed its intent to award the District 7 FACT contract to MHRC, and PSFI is the second place proposer for that FACT contract.
DCF's District 7 - Rockledge evaluator was Geovanna Dominguez, who is an adult mental health specialist in District 7, where she acts as a FACT team liaison. The scores for MHRC and PSFI are as follows:

                    MHRC   PSFI
Kim Munt            361    314
Jane Streit         367    369
Geovanna Dominguez  431    443

Facts Limited to District 8 - North Fort Myers

Intervenor Coastal and PSFI were the only providers that had proposals scored by DCF for the District 8 FACT contract, with Coastal receiving a first place score of 399.67 and PSFI receiving a second place score of 350.00. As such, DCF has noticed its intent to award the District 8 - North Fort Myers FACT contract to Coastal Behavioral, and PSFI is the second place proposer for that FACT contract. DCF's District 8 - North Fort Myers evaluators were Linda Pournaras, who evaluated and scored PSFI's proposal, and Marcie Gillis, who evaluated and scored Coastal's proposal. The scores for Coastal and PSFI are as follows:

                 Coastal   PSFI
Kim Munt         386       341
Jane Streit      373       370
Marcie Gillis    413       --
Linda Pournaras  --        332

Facts Limited to District 8 - Naples

Intervenor MHRC and PSFI were the only providers who had proposals scored by DCF for the District 8 - Naples FACT contract, with MHRC receiving a first place score of 381.33 and PSFI receiving a second place score of 356.67. As such, DCF has noticed its intent to award the District 8 - Naples FACT contract to MHRC, and PSFI is the second place proposer for that FACT contract. DCF's District 8 - Naples evaluator was Linda Pournaras. The scores for MHRC and PSFI are as follows:

                 MHRC   PSFI
Kim Munt         380    358
Jane Streit      367    369
Linda Pournaras  370    333

Facts Limited to District 11 - Miami

Intervenor Bayview and PSFI were the only providers who had proposals scored by DCF for the District 11 FACT contract, with Bayview receiving a first place score of 393.33 and PSFI receiving a second place score of 377.67. As such, DCF has noticed its intent to award the District 11 FACT contract to Bayview, and PSFI is the second place proposer for that FACT contract. DCF's District 11 evaluator was Joanna Cardwell. Like Mr. Parkinson in Suncoast - Pinellas, Ms. Cardwell also was missing a Question 27 protocol sheet and discovered it was missing upon her arrival at the Debriefing Meeting. She got the necessary sheets from Mr. Poole while in the room set aside for the debriefing. She then independently reviewed and scored the proposals assigned to her with regard to that question during a break and before any scores for her part of the state were called out. She is found to have scored independently. Ms. Streit and Ms. Munt had never previously dealt with PSFI or Bayview. Ms. Munt reviewed the Bayview and PSFI proposals back-to-back on the last two days of the evaluation period, June 21, 2005, for PSFI and June 22, 2005, for Bayview. She did not believe there could be much change in her focus in that short period, and it is found that there was not.6/ The scores for Bayview (June 22, 2005) and for PSFI (June 21, 2005), excluding Criterion 26, are as follows:

                 Bayview   PSFI
Kim Munt         372       348
Jane Streit      366       365
Joanna Cardwell  411       410

Facts Limited to District 15 - Stuart

Intervenor MHRC and PSFI were the only providers who had proposals scored by DCF for the District 15 FACT contract, with MHRC receiving a first place score of 379.69 and PSFI receiving a second place score of 366.00. As such, DCF has noticed its intent to award the District 15 FACT contract to MHRC, and PSFI is the second place proposer for that FACT contract.
DCF's District 15 evaluator was Carol Eldeen-Todesco. Ms. Eldeen-Todesco had some problems scoring all the proposals and even started over once. She, like Ms. Streit, started scoring in the middle range but was consistent. She considered PSFI's proposals harder to read than MHRC's proposals. She could not find one answer concerning daily nursing staffing in the format of the PSFI proposal. Therefore, she gave PSFI a score of "one" on that question. Because her score on that question was so far deviant from those of the other two evaluators on her team, the process described in the RFP's instructions for scoring variances greater than seven was used during the Debriefing Meeting. After a team caucus, Ms. Eldeen-Todesco changed her score from "one" to "eight" in favor of PSFI. Petitioner has suffered no inequity in this bid procedure through the foregoing process. The scores for MHRC and PSFI are as follows:

                      MHRC   PSFI
Kim Munt              380    346
Jane Streit           367    372
Carol Eldeen-Todesco  365    370

Debriefing, Totaling-up, and Expert Testimony

After the evaluators finished scoring their proposals, they met again for a Debriefing Meeting in Tallahassee. Mr. Poole tabulated the scores and averaged them to produce a final score for each proposal. The Agency's methodology for averaging the three independent scores had, as intended, effectively leveled and blended the divergent independent opinions. The following results were posted by the Respondent for the nine districts that Petitioner is challenging:

District 4: Mental Health Resource Center 377; PSFI 346 (below the 353 threshold)

Suncoast Region, Hillsborough: MHRC 376; Mental Healthcare 362; PSFI 358; Northside 344 (below the 353 threshold)

Suncoast Region, New Port Richey: The Harbor 379; MHRC 379; PSFI 362. Under a tie-breaker evaluation process, Harbor was declared the winner.

Suncoast Region, Pinellas: MHRC 396; Suncoast 370; PSFI 352; Northside 349; Directions for Mental Health 315 (the last three vendors were below the 353 threshold)

District 7, Rockledge: MHRC 395.42; PSFI 378.67

District 8, North Fort Myers: Coastal Behavioral 399.67; PSFI 350 (below the 353 threshold)

District 8, Naples: MHRC 381.33; PSFI 356.67

District 11: Bayview 393.33; PSFI 377.67

District 15: MHRC 379.69; PSFI 366

The foregoing scores include the scores given for Question 26, which asked for the financial resources required to successfully operate a FACT team. PSFI was permitted to present the opinion of an expert statistician concerning the divergences of all the independent evaluators' scores. However, statistical analysis of divergent bid scoring is not generally accepted as probative of anything.7/ Herein, Petitioner's expert applied a concept called the Intraclass Correlation Coefficient (ICC), which purports to measure agreement among all the independent raters in this case. It does not measure capriciousness, fairness, arbitrariness, or any other deficiency of the public entity bid process recognized by custom, rule, policy, or statute. Previously, it has been applied mostly to psychiatric diagnoses/studies and has never been tested as to public procurement.
Petitioner's expert acknowledged that ICC rests on the assumption that getting all reviewers' scores close to a mean, so that they are "repeatable," suggests what a "true score" might be; that a "true score" is a purely theoretical concept; and that divergence of scores between reviewers does not necessarily indicate unfair competition. His process does not even determine whether the outcome of the scoring would have been different if measurement error were as represented. Therefore, Petitioner's expert's calculations and testimony are discredited for this case.
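For context only: the order does not identify which ICC variant Petitioner's expert computed, but a commonly used form for this design (each proposal rated by the same set of raters) is the two-way random-effects, single-rater coefficient of Shrout and Fleiss, ICC(2,1), built from analysis-of-variance mean squares:

    \[
      \mathrm{ICC}(2,1) \;=\;
      \frac{MS_{R}-MS_{E}}
           {MS_{R}+(k-1)\,MS_{E}+\frac{k}{n}\,(MS_{C}-MS_{E})}
    \]

where \(MS_R\) is the mean square for targets (proposals), \(MS_C\) the mean square for raters, \(MS_E\) the error mean square, \(k\) the number of raters, and \(n\) the number of proposals. As the findings above note, a coefficient of this kind speaks only to how repeatable the raters' scores are; it says nothing about arbitrariness, capriciousness, or unfair competition.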

Recommendation Based on the foregoing Findings of Fact and Conclusions of Law, it is RECOMMENDED that the Department of Children and Family Services enter a Final Order that discards all bids in District 8 - North Fort Myers, and awards a FACT team contract to the declared highest scorer in each of the other districts challenged in this case. DONE AND ENTERED this 21st day of February, 2006, in Tallahassee, Leon County, Florida. S ELLA JANE P. DAVIS Administrative Law Judge Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-3060 (850) 488-9675 SUNCOM 278-9675 Fax Filing (850) 921-6847 www.doah.state.fl.us Filed with the Clerk of the Division of Administrative Hearings this 21st day of February, 2006.

Florida Laws (3) 120.57, 287.001, 287.012
KETURA BOUIE | K. B. vs DEPARTMENT OF HEALTH AND REHABILITATIVE SERVICES, 96-004200 (1996)
Division of Administrative Hearings, Florida Filed:Tallahassee, Florida Sep. 04, 1996 Number: 96-004200 Latest Update: Jun. 09, 1997

The Issue Whether Ketura Bouie suffers from “retardation”, as that term is defined by Section 393.063(43), Florida Statutes, and therefore qualifies for developmental services offered by the Respondent agency under Chapter 393, Florida Statutes.

Findings Of Fact Ketura Bouie is 15 years old. She currently resides in Tallahassee, Florida. She is enrolled in a new school after transferring from Chattahoochee. Ketura has had several "social" promotions from grade to grade over the years. Her application for developmental services has been denied by the Respondent agency. Wallace Kennedy, Ph.D., is a Board-certified and Florida-licensed clinical psychologist. He was accepted as an expert in clinical psychology and the testing of children. He conducted a psychological evaluation of Ketura on April 12, 1995, for which he has provided a written narrative dated April 13, 1995. His narrative was admitted in evidence. Ketura was 13 years old at the time of Dr. Kennedy's evaluation. He administered three standardized tests which are recognized and accepted for determining applicants' eligibility for developmental services. These tests were: the Wide Range Achievement Test, the Wechsler Intelligence Scale for Children-Revised (WISC-R), and the Vineland Adaptive Behavior Scale (Vineland). The Wide Range Achievement Test generally measures literacy. Ketura recognized only half of the upper-case letters of the alphabet and only a few three-letter kindergarten words. Her results indicated that she has the achievement level expected of a five-and-a-half-year-old kindergarten student, even though she was then placed in the seventh grade. In Dr. Kennedy's view, there is "no chance Ketura will become functionally literate". The WISC-R measures intellectual functioning and academic aptitude without penalizing the child for handicaps. The mean score on this test is 100. To score two or more standard deviations below this mean, a subject must score 70 or below. All of Ketura's WISC-R scores on the test administered by Dr. Kennedy in April 1995 were well below 70. They consisted of a verbal score of 46, a performance score of 46, and a full scale score of 40. Ketura's full scale IQ of 40 is in the lowest tenth of the first percentile and represents a low moderate level of mental retardation. Ketura's full scale score of 40 is the lowest result that the WISC-R can measure. The Vineland measures communication, daily living skills, and socialization. Ketura's composite score for Dr. Kennedy on the Vineland was 42. In conducting the Vineland test, Dr. Kennedy relied on information obtained through his own observation of Ketura and information obtained from Ketura's mother. It is typical in the field of clinical psychology to rely on information supplied by parents and caregivers, provided they are determined to be reliable observers. Dr. Kennedy assessed Ketura's mother to be a reliable observer. Dr. Kennedy's Vineland test revealed that Ketura has a social maturity level of about six years of age. Her verbal and written communication skills are poor. Ketura has poor judgment regarding her personal safety. She cannot consistently remember to use a seatbelt and cannot safely use a knife. She has poor domestic skills. She has no concept of money or of dates. She does not help with the laundry or any other household task. She cannot use the phone. Ketura's socialization skills are also poor. She does not have basic social manners. Her table manners and social interactive skills are poor. She has no friends, and at the time of Dr. Kennedy's evaluation, she was unhappy due to classmates making fun of her for being unable to recite the alphabet. Dr. Kennedy rendered an ultimate diagnosis of moderate mental retardation and opined that Ketura's retardation is permanent. Although Dr.
Kennedy observed that Ketura was experiencing low levels of depression and anxiety during his April 1995 tests and interview, he did not make a clinical psychological diagnosis to that effect. He attributed these emotional components to Ketura's lack of confidence in being able to perform the tasks required during testing. In his opinion, Ketura did not have any behavioral or emotional problems which interfered with the reliability of the tests he administered. Also, there were no other conditions surrounding his evaluation which interfered with the validity or reliability of the test scores, his evaluation, or his determination that Ketura suffers from a degree of retardation which would qualify her for developmental services. In Dr. Kennedy's expert opinion, even if all of Ketura's depression and anxiety had been eliminated during testing, her WISC-R scores would not have placed her above the retarded range in April 1995. The retardation range for qualifying for developmental services is 68 or below. Ketura's I.Q. was tested several times between 1990 and April 1995, with resulting full scale scores ranging from 40 to 74. All or some of these tests and/or reports on the 1990-1995 tests were submitted to the agency with Ketura's application for developmental services. Also included with Ketura's application to the agency were mental health reports documenting depression, a recognized mental disorder. The most recent of these was done in May of 1996. However, none of these reports were offered or admitted in evidence at formal hearing. Respondent's sole witness and agency representative was Ms. JoAnne Braun. She is an agency Human Service Counselor III. Ms. Braun is not a Florida-licensed psychologist, and she was not tendered as an expert witness in any field. As part of the application process, she visited with Ketura and her mother in their home. She also reviewed Petitioner's application and the mental health records described above. She reviewed the fluctuating psychological test scores beginning in 1990, one of which placed Ketura at 70 and another of which placed her at 74 on a scale of 100. Ms. Braun also reviewed a March 1995 psychological testing series that showed Ketura had a verbal 50, performance 60, and full scale 62 on the WISC-R test, one month before Dr. Kennedy's April 1995 evaluation described above. However, none of these items which she reviewed was offered or admitted in evidence. The agency has guidelines for assessing eligibility for developmental services. The guidelines were not offered or admitted in evidence. Ms. Braun interpreted the agency's guidelines as requiring her to eliminate the mental health aspect if she felt it could depress Ketura's standard test scores. Because Ms. Braun "could not be sure that the mental health situation did not depress her scores," and because the fluctuation of Ketura's test scores over the years caused Ms. Braun to think that Ketura's retardation might not "reasonably be expected to continue indefinitely," as required by the controlling statute, she opined that Ketura was not eligible for developmental services. Dr.
Kennedy's assessment and expert psychological opinion was that if Ketura's scores were once higher and she now tests with lower scores, it might be the result of better testing today; it might be due to what had been required and observed of her during prior school testing situations; it might even be because she was in a particularly good mood on the one day she scored 70 or 74; but his current testing clearly shows she will never again do significantly better on standard tests than she did in April 1995. In his education, training, and experience, it is usual for test scores to deteriorate, due to a retarded person's difficulties in learning, as that person matures. I do not consider Ms. Braun's opinion, though in evidence, as sufficient to rebut the expert opinion of Dr. Kennedy. This is particularly so since the items she relied upon are not in evidence and are not the sort of hearsay which may be relied upon for making findings of fact pursuant to Section 120.58(1)(a), Florida Statutes. See Bellsouth Advertising & Publishing Corp. v. Unemployment Appeals Commission and Robert Stack, 654 So.2d 292 (Fla. 5th DCA 1995); and Tenbroeck v. Castor, 640 So.2d 164 (Fla. 1st DCA 1994). Particularly, there is no evidence that the "guidelines" (also not in evidence) she relied upon have any statutory or rule basis. Therefore, the only test scores and psychological evaluation upon which the undersigned can rely in this de novo proceeding are those of Dr. Kennedy. However, I do accept as binding on the agency Ms. Braun's credible testimony that the agency does not find that the presence of a mental disorder in and of itself precludes an applicant, such as Ketura, from qualifying to receive developmental services; that Ketura is qualified to receive agency services under another program for alcohol, drug, and mental health problems which Ketura also may have; and that Ketura's eligibility under that program and under the developmental services program, if she qualifies for both, are not mutually exclusive.
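The two-standard-deviation criterion recited throughout these findings reduces to simple arithmetic. Assuming the Wechsler scales' published norms of a mean of 100 and a standard deviation of 15 (the standard deviation figure is supplied here for illustration; it is not stated in the order):

    \[
      \text{cutoff} = \mu - 2\sigma = 100 - 2(15) = 70,
    \]

so a full scale score of 70 or below lies two or more standard deviations below the mean, and Ketura's April 1995 full scale score of 40 lies four standard deviations below it: \((100-40)/15 = 4\).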

Recommendation Upon the foregoing findings of fact and conclusions of law, it is RECOMMENDED that the Department of Children and Families issue a Final Order awarding Ketura Bouie appropriate developmental services for so long as she qualifies under the statute. RECOMMENDED this 24th day of February, 1997, at Tallahassee, Florida. ELLA JANE P. DAVIS Administrative Law Judge Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-1550 (904) 488-9675 SUNCOM 278-9675 Fax Filing (904) 921-6847 Filed with the Clerk of the Division of Administrative Hearings this 24th day of February, 1997. COPIES FURNISHED: Gregory D. Venz, Agency Clerk Department of Children and Families Building 2, Room 204 1317 Winewood Blvd. Tallahassee, FL 32399-0700 Richard A. Doran General Counsel Building 2, Room 204 1317 Winewood Blvd. Tallahassee, FL 32399-0700 Marla Ruth Butler Qualified Representative Children's Advocacy Center Florida State University Tallahassee, FL 32302-0287 Marian Alves, Esquire Department of Health and Rehabilitative Services 2639 North Monroe Street Suite 100A Tallahassee, FL 32399-2949

Florida Laws (2) 120.57, 393.063
KPMG CONSULTING, INC. vs DEPARTMENT OF REVENUE, 02-001719BID (2002)
Division of Administrative Hearings, Florida Filed:Tallahassee, Florida May 01, 2002 Number: 02-001719BID Latest Update: Oct. 15, 2002

The Issue The issue to be resolved in this proceeding concerns whether the Department of Revenue (Department, DOR) acted clearly erroneously, contrary to competition, arbitrarily or capriciously when it evaluated the Petitioner's submittal in response to an Invitation to Negotiate (ITN) for a child support enforcement automated management system-compliance enforcement (CAMS CE) in which it awarded the Petitioner a score of 140 points out of a possible 230 points and disqualified the Petitioner from further consideration in the invitation to negotiate process.

Findings Of Fact Procurement Background: The Respondent, the DOR, is a state agency charged with the responsibility of administering the Child Support Enforcement Program (CSE) for the State of Florida, in accordance with Section 20.21(h), Florida Statutes. The DOR issued an ITN for the CAMS Compliance Enforcement implementation on February 1, 2002. This procurement is designed to give the Department a "state of the art system" that will meet all Federal and State Regulations and Policies for Child Support Enforcement, improve the effectiveness of collections of child support, and automate enforcement to the greatest extent possible. It will automate data processing and other decision-support functions and allow rapid implementation of changes in regulatory requirements resulting from revised Federal and State Regulation Policies and Florida initiatives, including statutory initiatives. CSE services suffer from dependence on an inadequate computer system known as the "FLORIDA System," which was not originally designed for CSE and is housed and administered in another agency. The current FLORIDA System cannot meet the Respondent's needs for automation, does not meet its management and reporting requirements, and does not give it the flexibility it needs. The DOR needs a system that will ensure the integrity of its data, will allow the Respondent to consolidate some of the "stand-alone" systems it currently has in place to remedy certain deficiencies of the FLORIDA System, and will help the Child Support Enforcement system and program secure needed improvements. The CSE is also governed by Federal policy, rules, and reporting requirements concerning performance. It has become apparent that, in order to improve its effectiveness in responding to its business partners (the court system, the Department of Children and Family Services, the sheriffs' departments, employers, financial institutions, and workforce development boards) as well as to Federal requirements, the CSE agency and system needs a new computer system with the flexibility to respond to the complete requirements of the CSE program. In order to accomplish its goal of acquiring a new computer system, the CSE began the procurement process. The Department hired a team from the Northrop Grumman Corporation, headed by Dr. Edward Addy, to lead the procurement development process. Dr. Addy began a process of defining CSE needs and then developing an ITN which reflected those needs. The process included many individuals in CSE who would be the daily users of the new system. These individuals included Andrew Michael Ellis, Revenue Program Administrator III for Child Support Enforcement Compliance Enforcement; Frank Doolittle, Process Manager for Child Support Enforcement Compliance Enforcement; and Harold Bankirer, Deputy Program Director for the Child Support Enforcement Program. There are two alternative strategies for implementing a large computer system such as CAMS CE: a customized system developed especially for CSE, or a Commercial Off-The-Shelf/Enterprise Resource Planning (COTS/ERP) solution. A COTS/ERP system is a pre-packaged software program, which is implemented as a system-wide solution. Because there is no existing COTS/ERP product for child support programs, the team recognized that customization would be required to make the product fit its intended use.
The team recognized that other system attributes were also important, such as the ability to convert "legacy data" and to address such factors as data base complexity and data base size. The Evaluation Process: The CAMS CE ITN put forth a tiered process for selecting vendors for negotiation. The first tier involved an evaluation of key proposal topics. The key topics were the vendors' past corporate experience (past projects) and their key staff. A vendor was required to score 150 out of a possible 230 points to enable it to continue to the next stage or tier of consideration in the procurement process. The evaluation team wanted to remove, at an early stage, vendors who did not have a serious chance of becoming the selected vendor. This would prevent an unnecessary expenditure of time and resources by both the CSE and the vendor. The ITN required that the vendors provide three corporate references showing their past corporate experience for evaluation. In other words, the references involved past jobs they had done for other entities which showed relevant experience in relation to the ITN specifications. The Department provided forms to the vendors, who in turn provided them to the corporate references that they themselves had selected. The vendors also included in their proposals a summary of their corporate experience, drafted by the vendors themselves. Table 8.2 of the ITN provided positive and negative criteria by which the corporate references would be evaluated. The list in Table 8.2 is not meant to be exhaustive and is in the nature of an "including but not limited to" standard. The vendors had the freedom to select references whose projects the vendors believed best fit the criteria upon which each proposal was to be evaluated. For the key staff evaluation standard, the vendors provided summary sheets as well as résumés for each person filling a lead role as a key staff member on their proposed project team. Having a competent project team was deemed by the Department to be critical to the success of the procurement and implementation of a large project such as the CAMS CE. Table 8.2 of the ITN provided the criteria by which the key staff would be evaluated. The Evaluation Team: The CSE selected an evaluation team which included Dr. Addy, Mr. Ellis, Mr. Bankirer, Mr. Doolittle and Mr. Esser. Although Dr. Addy had not previously performed the role of an evaluator, he has responded to several procurements for Florida government agencies. He is familiar with Florida's procurement process and has a doctorate in Computer Science as well as seventeen years of experience in information technology. Dr. Addy was the leader of the Northrop Grumman team which primarily developed the ITN with the assistance of personnel from the CSE program itself. Mr. Ellis, Mr. Bankirer and Mr. Doolittle participated in the development of the ITN as well. Mr. Bankirer and Mr. Doolittle had previously been evaluators in other procurements for Federal and State agencies prior to joining the CSE program. Mr. Esser is the Chief of the Bureau of Information Technology at the Department of Highway Safety and Motor Vehicles and has experience in similar, large computer system procurements at that agency. The evaluation team selected by the Department thus has extensive experience in computer technology, as well as knowledge of the requirements of the subject system.
The Department provided training regarding the evaluation process to the evaluators, as well as a copy of the ITN, the Source Selection Plan and the Source Selection Team Reference Guide. Section 6 of the Source Selection Team Reference Guide, entitled "Scoring Concepts," provided guidance to the evaluators for scoring proposals. Section 6.1, entitled "Proposal Evaluation Specification in ITN Section 8," states: Section 8 of the ITN describes the method by which proposals will be evaluated and scored. SST evaluators should be consistent with the method described in the ITN, and the source selection process documented in the Reference Guide and the SST tools are designed to implement this method. All topics that are assigned to an SST evaluator should receive at the proper time an integer score between 0 and 10 (inclusive). Each topic is also assigned a weight factor that is multiplied by the given score in order to place a greater or lesser emphasis on specific topics. (The PES workbook is already set to perform this multiplication upon entry of the score.) Tables 8-2 through 8-6 in the ITN Section 8 list the topics by which the proposals will be scored along with the ITN reference and evaluation and scoring criteria for each topic. The ITN reference points to the primary ITN section that describes the topic. The evaluation and scoring criteria list characteristics that should be used to affect the score negatively or positively. While these characteristics should be used by each SST evaluator, each evaluator is free to emphasize each characteristic more or less than any other characteristic. In addition, the characteristics are not meant to be inclusive, and evaluators may consider other characteristics that are not listed . . . (Emphasis supplied). The preponderant evidence demonstrates that all the evaluators followed these instructions in conducting their evaluations and that none used a criterion that was not contained in the ITN, either expressly or implicitly. Scoring Method: The ITN used a 0 to 10 scoring system. The Source Selection Team Guide required that the evaluators use whole integer scores. They were not required to start at "7," which was the average score necessary to achieve a passing 150 points, and then score up or down from 7. The Department also did not provide guidance to the evaluators regarding the relative value of any score, i.e., what is a "5" as opposed to a "6" or a "7." There is no provision in the ITN which establishes a baseline score or starting point from which the evaluators were required to adjust their scores. The procurement development team had decided to give very little structure to the evaluators, as they wanted each evaluator to score based upon his or her understanding of what was in the proposal. The development team could not, within the ITN, sufficiently characterize every potential requirement in every form in which it might be submitted and still provide the consistency of scoring that one would want in a competitive environment. This open-ended approach is a customary method of scoring, particularly in more complex procurements, in which generally less guidance is given to evaluators.
Providing precise guidance regarding the relative value of any score, imposing a baseline score or starting point from which evaluators were required to adjust their scores, instructing evaluators on the weighting of scores, and providing other indicia of precise structure would be more appropriate where the evaluators themselves were not sophisticated, trained and experienced in the type of computer system desired and in the field of information technology and data retrieval generally. The evaluation team, however, was shown to be experienced and trained in information technology and data retrieval and experienced in complex computer system procurement. Mr. Barker is the former Bureau Chief of Procurement for the Department of Management Services. He has 34 years of procurement experience and has participated in many procurements for technology systems similar to CAMS CE. He established that the scoring system used by the Department at this initial stage of the procurement process is a common method. It is customary to leave the numerical value of scores to the discretion of the evaluators based upon each evaluator's experience and review of the relevant documents. According wider discretion to evaluators in such a complex procurement process tends to produce more objective scores. The evaluators scored past corporate experience (references) and key staff according to the criteria in Table 8.2 of the ITN. The evaluators then used different scoring strategies within the discretion accorded to them by the 0 to 10 point scale. Mr. Bankirer established a midrange of 4 to 6 and added or subtracted points based upon how well the proposal addressed the CAMS CE requirements. Evaluator Ellis used 6 as his baseline and added or subtracted points from there. Dr. Addy evaluated the proposals as a composite, without a starting point. Mr. Doolittle started with 5 as an average score and then added or subtracted points. For key staff, Mr. Esser gave points for each attribute in Table 8.2 and added the points together for the score. For the corporate reference criterion, he subtracted a point for each attribute the reference lacked. As each of the evaluators used the same methodology for the evaluation of each separate vendor's proposal, each vendor was treated the same, and thus no specific prejudice to KPMG was demonstrated. Corporate Reference Evaluation: KPMG submitted three corporate references: Duke University Health System (Duke), SSM Health Care (SSM), and Armstrong World Industries (Armstrong). Mr. Bankirer gave the Duke reference a score of 6, the SSM reference a score of 5, and the Armstrong reference a score of 7. Michael Strange, the KPMG Business Development Manager, believed that 6 was a low score. He contended that an average score of 7 was required to make the 150-point threshold for passage to the next level of the ITN consideration. Therefore, a score of 7 would represent minimum compliance, according to Mr. Strange. However, neither the ITN nor the Source Selection Team Guide identified 7 as a minimally compliant score. Mr. Strange's designation of 7 as a minimally compliant score is not provided for in the specifications or the scoring instructions. Mr. James Focht, Senior Manager for KPMG, testified that 6 was a low score, based upon the quality of the reference that KPMG had provided. However, Mr.
Bankirer found that the Duke reference was actually a small-sized project, with few system development attributes, and that it did not include information regarding the number of records, the data base size involved, the estimated and actual costs, or the attributes of data base conversion. Mr. Bankirer determined that the Duke reference had little similarity to the CAMS CE procurement requirements and did not show training or data conversion as attributes of the Duke project, both of which are attributes necessary to the CAMS CE procurement. Mr. Strange and Mr. Focht admitted that the Duke reference did not specifically contain the element of data conversion and that, under Table 8.2, omission of this information would negatively affect the score. Mr. Focht admitted that there was no information in the Duke Health reference regarding the number of records and the data base size, all of which factors diminish the quality of Duke as a reference and thus the score accorded to it. Mr. Strange opined that Mr. Bankirer had erred in determining that the Duke project was a significantly small-sized project since it had only 1,500 users. Mr. Focht believed that the only size criterion in Table 8.2 was the five million dollar cost threshold and that, because KPMG indicated that the project cost was greater than five million dollars, KPMG had met the size criterion. Mr. Focht believed that evaluators had difficulty in evaluating the size of the projects in the references due to a lack of training. Mr. Focht was of the view that the evaluators should have been instructed to make "binary choices" on issues such as size. He conceded, however, that evaluators may have looked at other criteria in Table 8.2 to determine the size of the project, such as database size and number of users. However, the corporate reference scores were composite scores by the evaluators, as the ITN did not require separate scores for each factor in Table 8.2. Therefore, Mr. Focht's focus on binary scoring for size, to the exclusion of other criteria, misstated the objective of the scoring process. The score given to the corporate references was a composite of all of the factors in Table 8.2, and not merely monetary value size. Although KPMG apparently contends that size, in terms of dollar value, is the critical factor in determining the score for a corporate reference, the vendor questions and answers provided at the pre-proposal conference addressed the issue of relevant criteria. Question 40 of the vendor questions and answers, Volume II, did not single out "project greater than five million dollars" as the only size factor or criterion. QUESTION: Does the state require that each reference provided by the bidder have a contract value greater than $5 million; and serve a large number of users; and include data conversion from a legacy system; and include training development? ANSWER: To get a maximum score for past corporate experience, each reference must meet these criteria. If the criteria are not fully met, the reference will be evaluated, but will be assigned a lower score depending upon the degree to which the referenced project falls short of these required characteristics. Therefore, the cost of the project is shown to be only one component of a composite score. Mr. Strange opined that Mr. Bankirer's comment regarding the Duke reference, "little development, mostly SAP implementation," was irrelevant. Mr.
Strange's view was that the CAMS CE was not a development project and that Table 8.2 did not specifically list development as a factor on which proposals would be evaluated. Mr. Focht stated his belief that Mr. Bankirer's comment suggested that Mr. Bankirer did not understand the link between the qualifications in the reference and the nature of KPMG's proposal. Both Mr. Strange and Mr. Focht believed that the ITN called for a COTS/ERP solution. Mr. Focht stated that the ITN references a COTS/ERP approach numerous times. Although many of the references to COTS/ERP in the ITN also refer to development, Mr. Strange admitted that the ITN was open to a number of approaches. Furthermore, both the ITN and the Source Selection Team Guide stated that the items in Table 8.2 are not all-inclusive and that the evaluators may look to other factors in the ITN. Mr. Bankirer noted that there is no current CSE COTS/ERP product on the market. Therefore, some development will be required to adapt an off-the-shelf product to its intended use as a child support case management system. Mr. Bankirer testified that the Duke project was a small-sized project with little development. Duke has three sites, while CSE has over 150 sites. Therefore, the Duke project is smaller than CAMS. There was no information provided in the KPMG submittal regarding data base size and number of records with regard to the Duke project. Mr. Bankirer did not receive the information he needed to infer a larger-sized project from the Duke reference. Mr. Esser also gave the Duke reference a score of 6. The reference did not provide the data base information required, which was the number of records in the data base and the number of "gigabytes" of disc storage needed to store the data, and there was no element of legacy conversion. Dr. Addy gave the Duke reference a score of 5. He accepted the dollar value as greater than five million dollars. He thought that the Duke project may have included some data conversion, but it was not explicitly stated. The Duke customer evaluated training, so he presumed training was provided with the Duke project. The customer ratings for Duke were high, as he expected they would be, but similarity to the CAMS CE system was not well explained. He looked at size in terms of number of users, number of records and database size. The numbers that were listed were for a relatively small-sized project. There was not much description of the methodology used, and so he gave it an overall score of 5. Mr. Doolittle gave the Duke reference a score of 6. He felt that it was an average response. He noted the number of users, the number of locations, and that the project was on time and on budget, but found that there was no mention of data conversion, database size or number of records (consistent with the findings of the other evaluators). A review of the evaluators' comments makes it apparent that KPMG's scores are more a product of a paucity of information provided by KPMG's corporate references than of a lack of evaluator knowledge of the material being evaluated. Mr. Ellis gave a score of 6 for the Duke reference. He used 6 as his baseline. He found the required elements but nothing more that, in his mind, justified raising the score above 6. Mr. Focht and Mr. Strange expressed the same concerns regarding Mr. Bankirer's "little development" comment for the SSM Health Care reference as they had for the Duke Health reference. However, both Mr. Strange and Mr. Focht admitted that the reference provided no information regarding training. Mr.
Strange admitted that the reference had no information regarding data conversion. Training and data conversion are criteria contained in Table 8.2. Mr. Strange also admitted that KPMG had access to Table 8.2 before the proposal was submitted and could have included the information in the proposal. Mr. Bankirer gave the SSM reference a score of 5. He commented that the SAP implementation was not relevant to what the Department was attempting to do with the CAMS CE system. CAMS CE does not have any materials management or procurement components, which were the functions of the SAP components in the SSM reference project. Additionally, there was no training indicated in the SSM reference. Mr. Esser gave the SSM reference a score of 3. His comments were "no training provided, no legacy data conversion, project evaluation was primarily for SAP not KPMG." However, it was KPMG's responsibility in responding to the ITN to provide project information concerning a corporate reference in a clear manner, rather than requiring that an evaluator infer compliance with the specifications. Mr. Focht believed that legacy data conversion could be inferred from the reference's description of the project. Mr. Strange opined that Mr. Esser's comment was inaccurate, as KPMG installed SAP and made the software work. Mr. Esser gave the SSM reference a score of 3 because the reference described SAP's role, but not KPMG's role, in the installation of the software. When providing information in the reference, SSM gave answers relating to SAP to the questions regarding system capability, system usability and system reliability, but did not state KPMG's role in the installation. SAP is a large enterprise software package. This answer created an impression of little KPMG involvement in the project. Dr. Addy gave the SSM reference a score of 6. Dr. Addy found that the size was over five million dollars and that the customer ratings were high, except for a 7 for usability with reference to a "long learning curve" for users. Data conversion was implied. There was no strong explanation of similarity to CAMS CE. It was generally a small-sized project. He could reason some similarity into it, even though it was not well described in the submittal. Mr. Doolittle gave the SSM reference a score of 6. Mr. Doolittle noted, as positive factors, that the total cost of the project was greater than five million dollars and that it supported 24 sites and 1,500 users, as well as "migration from a mainframe." However, there were negative factors, such as training not being mentioned and a long learning curve for its users. Mr. Ellis gave a score of 6 for SSM, feeling that KPMG met all of the requirements but did not offer more than the basic requirements. Mr. Strange opined that Mr. Bankirer, Dr. Addy and Mr. Ellis (evaluators 1, 5 and 4) were inconsistent with each other in their evaluation of the SSM reference. He stated that this inconsistency showed a flaw in the evaluation process in that the evaluators did not have enough training to uniformly evaluate past corporate experience, thereby, in his view, creating an arbitrary evaluation process. Mr. Bankirer gave the SSM reference a score of 5, Mr. Ellis a score of 6, and Dr. Addy a score of 6. Even though the scores were similar, Mr. Strange contended that the evaluators gave conflicting comments regarding the size of the project. Mr.
Ellis stated that the size of the project was hard to determine, as the cost was listed only as greater than five million dollars and the database size was given, but the number of records was not. Mr. Bankirer found that the project was low in cost, and Dr. Addy stated that over five million dollars was a positive factor in his consideration. However, the evaluators looked at all of the factors in Table 8.2 in scoring each reference. Other factors that detracted from KPMG's score for the SSM reference were: similarity to the CAMS system not being explained, according to Dr. Addy; no indication of training (all of the evaluators); the number of records not being provided (evaluator Ellis); little development shown (Mr. Bankirer); and usability problems (Dr. Addy). Mr. Strange admitted that the evaluators may have been looking at other factors besides the dollar value size in order to score the SSM reference. Mr. Esser gave the Armstrong reference a score of 6. He felt that the reference did not contain any database information or cost data and that there was no legacy conversion shown. Dr. Addy also gave Armstrong a score of 6. He inferred that this reference had data conversion as well as training and a high dollar volume, which were all positive factors. He could not tell, however, from the project description, what role KPMG actually had in the project. Mr. Ellis gave a score of 7 for the Armstrong reference, stating that the Armstrong reference offered more information regarding the nature of the project than had the SSM and Duke references. Mr. Bankirer gave KPMG a score of 7 for the Armstrong reference. He found that the positive factors were that the reference had more site locations and offered training but, on the negative side, was not specific regarding KPMG's role in the project. Mr. Focht opined that the evaluators did not understand the nature of the product and services the Department was seeking to obtain, as the Department's training did not cover the nature of the procurement and the products and services DOR was seeking. However, when he made this statement he admitted he did not know the evaluators' backgrounds. In fact, Mr. Bankirer, Mr. Ellis, Dr. Addy and Mr. Doolittle were part of a group that developed the ITN and clearly knew what CSE was seeking to procure. Further, Mr. Esser stated that he was familiar with COTS and described it as a commercial off-the-shelf software package. Mr. Esser explained that an ERP solution, or Enterprise Resource Plan, is a package that is designed to do a series of tasks, such as produce standard reports and perform standard operations. He did not believe that he needed further training in COTS/ERP to evaluate the proposals. Mr. Doolittle was also familiar with COTS/ERP and believed, based on the amount of funding, that it was a likely response to the ITN. Dr. Addy's doctoral dissertation research was in the area of software re-use. COTS is one of the components that comprise a development activity involving re-use. He became aware during his research of how COTS packages are used in software engineering. He has also been exposed to ERP packages. ERP is only one form of a COTS package. In regard to the development of the ITN and the expectations of the development team, Dr. Addy stated that they were amenable to any solution that met the requirements of the ITN. They fully expected that the compliance solutions would be comprised mostly of COTS and ERP packages. Furthermore, the ITN, in Section 1.1, on page 1-2, states, ". . .
FDOR will consider an applicable Enterprise Resource Planning (ERP) or Commercial Off the Shelf (COTS) based solution in addition to custom development." Clearly, this ITN was an open procurement, and to train the evaluators on only one of the alternative solutions would have biased the evaluation process. Mr. Doolittle gave each of the KPMG corporate references a score of 6. Mr. Strange and Mr. Focht questioned the appropriateness of these scores, as the corporate references themselves gave KPMG average ratings of 8.3, 8.2 and 8.0. However, Mr. Focht admitted that Mr. Doolittle's comments regarding the corporate references were a mixture of positive and negative comments. Mr. Focht believed, however, that because the reference corporations considered the same factors in providing ratings on the reference forms, it was inconsistent for Mr. Doolittle to separately evaluate the same factors that the corporations had already rated. However, there is no evidence in the record that KPMG provided Table 8.2 to the companies completing the reference forms or that the companies consulted the table when completing their reference forms. Therefore, KPMG did not prove that it had taken all measures available to it to improve its scores. Moreover, Mr. Focht's criticism would impose a requirement on Mr. Doolittle's evaluation which was not supported by the ITN. Mr. Focht admitted that there were no criteria in the ITN which limited the evaluators' discretion in scoring to the ratings given by the corporate reference customers themselves. All of the evaluators used Table 8.2 as their guide for scoring the corporate references. As part of his evaluation, Dr. Addy looked at the methodology used by the proposers in each of the corporate references to implement the solution for that reference company. He was looking at methodology to determine its degree of similarity to CAMS CE. While methodology is not specifically listed in Table 8.2 as a measure of similarity to CAMS, Table 8.2 states that the list is not all-inclusive. Clearly, methodology is a measure of similarity and therefore is not an arbitrary criterion. Moreover, as Dr. Addy used the same process and criteria in evaluating all of the proposals, there was no prejudice to KPMG by use of this criterion, since all vendors were subjected to it. Mr. Strange stated that KPMG appeared to receive lower scores for SAP applications than other vendors did. For example, evaluator 1 gave a score of 7 to Deloitte's reference for Suntax. Suntax is an SAP implementation. It is difficult to draw comparisons across vendors, yet the evaluators consistently found that KPMG's references lacked key elements such as data conversion, information on starting and ending costs, and information on database size. All of these missing elements contributed to a reduction in KPMG's scores. Nevertheless, KPMG received average scores of 5.5 for Duke, 5.7 for SSM and 6.3 for Armstrong, compared with the score of 7 received by Deloitte for Suntax. There is a gap of only 0.7 to 1.5 points between Deloitte's and KPMG's scores for SAP implementations, despite the deficient information within KPMG's corporate references. Key Staff Criterion: The proposals contained a summary of the experience of key staff and attached résumés. KPMG's proposed key staff person for Testing Lead was Frank Traglia. Mr. Traglia's summary showed that he had 25 years' experience in each of the areas of child support enforcement, information technology, project management and testing.
Mr. Strange and Mr. Focht admitted that Mr. Traglia's résumé did not specifically list any testing experience. Mr. Focht further admitted that it was not unreasonable for evaluators to give the Testing Lead a lower score due to the lack of specific testing information in Mr. Traglia's résumé. Mr. Strange explained that the résumé was from a database of résumés. The summary sheet, however, was prepared by those KPMG employees who prepared the proposal. All of the evaluators resolved the conflicting information between the summary sheet and the résumé by crediting the résumé as more accurate. Each evaluator thought that the résumé was more specific and expected to see specific information regarding testing experience on the résumé of someone proposed as the Testing Lead person. Evaluators Addy and Ellis gave the Testing Lead criterion scores of 4 and 5, respectively. Mr. Ron Vandenberg (evaluator 8) gave the Testing Lead a score of 9. Mr. Vandenberg was the only evaluator to give the Testing Lead a high score; the other evaluators gave the Testing Lead an average score of 4.2. The Vandenberg score thus appears anomalous. The other evaluators gave the Testing Lead a lower score because the résumé did not specifically list testing experience. Dr. Addy found that the summary sheet listed 25 years of experience in each of child support enforcement, information technology, project management and system testing. As he did not believe this person had 100 years of experience, he assumed those experience categories ran concurrently. A strong candidate for Testing Lead should demonstrate a combination of testing experience, education and certification, according to Dr. Addy. Mr. Doolittle also expected to see testing experience mentioned in the résumé. When evaluating the Testing Lead, Mr. Bankirer first looked at the team skills matrix and found it interesting that testing was not one of the categories of skills listed for the Testing Lead. He then looked at the summary sheet and résumé of Mr. Traglia. He gave Mr. Traglia a lower score, as he thought that KPMG should have put forward someone with demonstrable testing experience. The evaluators gave a composite score to key staff based on the criteria in Table 8.2. To derive the composite score that he gave each staff person, Mr. Esser created a scoring system wherein he awarded points for each attribute in Table 8.2 and then added the points together. Among the criteria he rated, Mr. Esser awarded points for CSE experience. Mr. Focht and Mr. Strange contended that, since the term CSE experience is not actually listed in Table 8.2, Mr. Esser was incorrect in awarding points for CSE experience in his evaluation. Table 8.2 does refer to relevant experience. There is no specific definition provided in Table 8.2 for relevant experience. Mr. Focht stated that relevant experience is limited to COTS/ERP experience, system development, life cycle and project management methodologies. However, these factors are also not listed in Table 8.2. Mr. Strange limited relevance to experience in the specific role for which the key staff person was proposed. This is a limitation that also is not imposed by Table 8.2. CSE experience is no more or less relevant than the factors posited by KPMG as relevant experience. Moreover, KPMG included in its own descriptive table of key staff a column for CSE experience.
Inclusion of this information in its proposal demonstrated that KPMG must have believed CSE experience was relevant at the time it submitted its proposal. Mr. Strange held the view that, at the bidders' conference, in a reply to a vendor question, the Department representative had stated that CSE experience was not required and that, therefore, Mr. Esser could not use such experience to evaluate key staff. Question 47 of the Vendor Questions and Answers, Volume 2, stated: QUESTION: In scoring the Past Corporate Experience section, Child Support experience is not mentioned as a criterion. Would the State be willing to modify the criteria to include at least three Child Support implementations as a requirement? ANSWER: No. However, a child support implementation that also meets the other characteristics (contract value greater than $5 million, serves a large number of users, includes data conversion from a legacy system and includes training development) would be considered "similar to CAMS CE." The Department's statement involved the scoring of corporate experience, not key staff. It was inapplicable to Mr. Esser's scoring system. Mr. Esser gave the Training Lead a score of 1. According to Mr. Esser, the Training Lead did not have a ten-year résumé, for which he deducted one point. The Training Lead had no specialty certification or extensive experience and had no child support experience, and so received no points for those attributes. Mr. Esser added one point for the minimum of four years of specific experience and one point for the relevance of his education. Mr. Esser gave the Project Manager a score of 5. The Project Manager had a ten-year résumé and the required references and received a point for each. He gave two points for exceeding the minimum required information technology experience. The Project Manager had twelve years of project management experience, for a score of one point, but lacked certification, a relevant education and child support enforcement experience, for which he was accorded no points. Mr. Esser gave the Project Liaison person a score of According to Mr. Focht, the Project Liaison should have received a higher score since she has a professional history of having worked for the state technology office. Mr. Esser, however, stated that she did not have four years of specific experience and did not have extensive experience in the field, although she had a relevant education. Mr. Esser gave the Software Lead person a score of 4. The Software Lead, according to Mr. Focht, had a long set of experiences with implementing SAP solutions for a wide variety of different clients and should have received a higher score. Mr. Esser gave a point each for having a ten-year résumé, four years of specific experience in software, extensive experience in this area and relevant education. According to Mr. Focht, the Database Lead had experience with database pools, including the Florida Retirement System, and should have received more points. Mr. Strange concurred with Mr. Focht in stating that Mr. Esser had given low scores to key staff, and stated that the staff had good experience, which should have generated more points. Mr. Strange believed that Mr. Esser's scoring was inconsistent but provided no basis for that conclusion. Other evaluators also gave key staff positions scores of less than 7. Dr. Addy gave the Software Lead person a score of 5. The Software Lead had 16 years of experience and SAP development experience as positive factors, but had no development lead experience.
He had a Bachelor of Science and a Master of Science in Mechanical Engineering and a Master's in Business Administration, which were not good educational matches for the role of a Software Lead person. Dr. Addy gave the Training Lead person a score of 5. The Training Lead had six years of consulting experience, a background in SAP consulting and some training experience, but did not have certification or education in training. His educational background also was electrical engineering, which is not a strong background for a training person. Dr. Addy gave the subcontractor managers a score of 5. Two of the subcontractors did not list managers at all, which detracted from the score. Mr. Doolittle gave the Training Lead person a He believed that, based on his experience and training, it was an average response. Table 8.2 contained an item under which a proposer could have points deducted from a score if the key staff person's references were not excellent. The Department did not check references at this stage in the evaluation process. As a result, the evaluators simply did not consider that item when scoring. No proposer's score was adversely affected thereby. KPMG contends that checking references would have given the evaluators greater insight into the work done by those individuals and their relevance and capabilities on the project team. Mr. Focht admitted, however, that any claimed effect on KPMG's score is conjectural. Mr. Strange stated that, without reference checks, information in the proposals could not be validated, but he provided no basis for his opinion that reference checking was necessary at this preliminary stage of the evaluation process. Dr. Addy stated that the process called for checking references during the timeframe of oral presentations. The team did not expect the references to change any scores at this point in the process. KPMG asserted that references should be checked to ascertain the veracity of the information in the proposals. However, even if the information in some other proposal was inaccurate, it would not change the outcome for KPMG. KPMG would still not have the required number of points to advance to the next evaluation tier. Divergency in Scores The Source Selection Plan established a process for resolving divergent scores. Any item receiving scores with a range of 5 or more was determined to be divergent. The plan provided that the Coordinator identify divergent scores and then report to the evaluators that there were divergent scores for that item. The Coordinator was precluded from telling an evaluator whether his score was the divergent one, i.e., the highest or lowest score. Evaluators would then review that item, but were not required to change their scores. The purpose of the divergent score process was to have evaluators review their scores to see if there were any misperceptions or errors that skewed the scores. The team wished to avoid having any influence on the evaluators' scores. Mr. Strange testified that the Department did not follow the divergent score process in the Source Selection Plan because the coordinator did not tell the evaluators why the scores were divergent. Mr. Strange stated that the evaluators should have been informed which scores were divergent. The Source Selection Plan, however, merely instructed the coordinator to inform the evaluators of the reason why the scores were divergent. Inherently, scores were divergent if there was a five-point score spread. The reason for the divergence was self-explanatory.
The evaluators stated that they scored the proposals and submitted the scores, and that each received an e-mail from Debbie Stephens informing him that there were divergent scores and that he should consider re-scoring. None of the evaluators ultimately changed their scores. Mr. Esser's scores were the lowest of the divergent scores, but he did not re-score his proposals, as he had spent a great deal of time on the initial scoring and felt his scores to be valid. Neither Mr. Focht nor Mr. Strange, KPMG's witnesses, provided more than speculation regarding the effect of the divergent scores on KPMG's ultimate score or any role the divergent scoring process may have had in KPMG's failure to attain the 150-point passing score. Deloitte - Suntax Reference: Susan Wilson, a Child Support Enforcement employee connected with the CAMS project, signed a reference for Deloitte Consulting regarding the Suntax System. Mr. Focht was concerned that the evaluators were influenced by her signature on the reference form. Mr. Strange further stated that having someone who is heavily involved in the project sign a reference did not appear to be fair. He was not able, however, to identify any positive or negative effect on KPMG from Ms. Wilson's reference for Deloitte. Evaluator Esser had met Susan Wilson but had no significant professional interaction with her. He could not recall anything that he knew about Ms. Wilson that would favorably influence him in scoring the Deloitte reference. Dr. Addy also was not influenced by Ms. Wilson. Mr. Doolittle had worked with Ms. Wilson for only a very short time and did not know her well. He had also evaluated other proposals in which department employees were a reference and was not influenced by that either. Mr. Ellis had known Ms. Wilson for only two to four months. Her signature on the reference form did not influence him either positively or negatively. Mr. Bankirer had not known Ms. Wilson for long when he evaluated the Suntax reference. He took the reference at face value and was not influenced by Ms. Wilson's signature. It is not unusual for someone within an organization to create a reference for a company that is competing for work to be done for the organization.

Recommendation Having considered the foregoing Findings of Fact, Conclusions of Law, the evidence of record and the pleadings and arguments of the parties, it is, therefore, RECOMMENDED that a final order be entered by the State of Florida Department of Revenue upholding the proposed agency action which disqualified KPMG from further participation in the evaluation process regarding the subject CAMS CE Invitation to Negotiate. DONE AND ENTERED this 26th day of September, 2002, in Tallahassee, Leon County, Florida. P. MICHAEL RUFF Administrative Law Judge Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-3060 (850) 488-9675 SUNCOM 278-9675 Fax Filing (850) 921-6847 www.doah.state.fl.us Filed with the Clerk of the Division of Administrative Hearings this 26th day of September, 2002. COPIES FURNISHED: Cindy Horne, Esquire Earl Black, Esquire Department of Revenue Post Office Box 6668 Tallahassee, Florida 32399-0100 Robert S. Cohen, Esquire D. Andrew Byrne, Esquire Cooper, Byrne, Blue & Schwartz, LLC 1358 Thomaswood Drive Tallahassee, Florida 32308 Seann M. Frazier, Esquire Greenberg Traurig, P.A. 101 East College Avenue Tallahassee, Florida 32302 Bruce Hoffmann, General Counsel Department of Revenue 204 Carlton Building Tallahassee, Florida 32399-0100 James Zingale, Executive Director Department of Revenue 104 Carlton Building Tallahassee, Florida 32399-0100

Florida Laws (3) 120.569, 120.57, 20.21
THE FLORIDA INSURANCE COUNCIL, INC.; THE AMERICAN INSURANCE ASSOCIATION; PROPERTY CASUALTY INSURERS ASSOCIATION OF AMERICA; AND NATIONAL ASSOCIATION OF MUTUAL INSURANCE COMPANIES vs DEPARTMENT OF FINANCIAL SERVICES, OFFICE OF INSURANCE REGULATION, AND THE FINANCIAL SERVICES COMMISSION, 05-002803RP (2005)
Division of Administrative Hearings, Florida Filed:Tallahassee, Florida Aug. 03, 2005 Number: 05-002803RP Latest Update: May 17, 2007

The Issue At issue in this proceeding is whether proposed Florida Administrative Code Rule 69O-125.005 is an invalid exercise of delegated legislative authority.

Findings Of Fact Petitioners AIA is a trade association made up of 40 groups of insurance companies. AIA member companies annually write $6 billion in property, casualty, and automobile insurance in Florida. AIA's primary purpose is to represent the interests of its member insurance groups in regulatory and legislative matters throughout the United States, including Florida. NAMIC is a trade association consisting of 1,430 members, mostly mutual insurance companies. NAMIC member companies annually write $10 billion in property, casualty, and automobile insurance in Florida. NAMIC represents the interests of its member insurance companies in regulatory and legislative matters throughout the United States, including Florida. PCI is a national trade association of property and casualty insurance companies consisting of 1,055 members. PCI members include mutual insurance companies, stock insurance companies, and reciprocal insurers that write property and casualty insurance in Florida. PCI members annually write approximately $15 billion in premiums in Florida. PCI participated in the OIR's workshops on the Proposed Rule. PCI's assistant vice president and regional manager, William Stander, testified that, if the Proposed Rule is adopted, PCI's member companies would be required either to withdraw from the Florida market or to drastically reorganize their business models. FIC is an insurance trade association made up of 39 insurance groups that represent approximately 250 insurance companies writing all lines of insurance. All of FIC's members are licensed in Florida and write approximately $27 billion in premiums in Florida. FIC has participated in rule challenges in the past, and participated in the workshop and public hearing process conducted by OIR for this Proposed Rule. FIC President Guy Marvin testified that FIC's property and casualty members use credit scoring and would be affected by the Proposed Rule. A substantial number of Petitioners' members are insurers writing property and casualty insurance and/or motor vehicle insurance coverage in Florida. These members use credit-based insurance scoring in their underwriting and rating processes. They would be directly regulated by the Proposed Rule in their underwriting and rating methods and in the rate filing processes set forth in Sections 627.062 and 627.0651, Florida Statutes. Fair Isaac originated credit-based insurance scoring and is a leading provider of credit-based insurance scoring information in the United States and Canada. Fair Isaac has invested millions of dollars in the development and maintenance of its credit-based insurance models. Fair Isaac concedes that it is not an insurer and, thus, would not be directly regulated by the Proposed Rule. However, Fair Isaac would be directly affected by any negative impact that the Proposed Rule would have in setting limits on the use of credit-based insurance score models in Florida. Lamont Boyd, a manager in Fair Isaac's global scoring division, testified that, if the Proposed Rule goes into effect, Fair Isaac would, at a minimum, lose all of the revenue it currently generates from insurance companies that use its scores in the State of Florida, because Fair Isaac's credit-based insurance scoring model cannot meet the requirements of the Proposed Rule regarding racial, ethnic, and religious categorization. Mr. Boyd also testified that enactment of the Proposed Rule could cause a "ripple effect" of similar regulations in other states, further impairing Fair Isaac's business.
The Statute and Proposed Rule During the 1990s, insurance companies' use of consumer credit information for underwriting and rating automobile and residential property insurance policies greatly increased. Insurance regulators expressed concern that the use of consumer credit reports, credit histories and credit-based insurance scoring models could have a negative effect on consumers' ability to obtain and keep insurance at appropriate rates. Of particular concern was the possibility that the use of credit scoring would especially hurt minorities, people with low incomes, and young people, because those persons would be more likely to have poor credit scores. On September 19, 2001, Insurance Commissioner Tom Gallagher appointed a task force to examine the use of credit reports and develop recommendations for the Legislature or for the promulgation of rules regarding the use of credit scoring by the insurance industry. The task force met on four separate occasions throughout the state in 2001, and issued its report on January 23, 2002. The task force report conceded that the evidence supporting the negative impact of the use of credit reports on specific groups is "primarily anecdotal," and that the insurance industry had submitted anecdotal evidence to the contrary. Among its nine recommendations, the task force recommended the following: A comprehensive and independent investigation of the relationship between insurers' use of consumer credit information and risk of loss, including the impact by race, income, geographic location and age. A prohibition against the use of credit reports as the sole basis for making underwriting or rating decisions. That insurers using credit as an underwriting or rating factor be required to provide regulators with sufficient information to independently verify that use. That insurers be required to send a copy of the credit report to those consumers whose adverse insurance decision is a result of their consumer credit information, together with a simple explanation of the specific credit characteristics that caused the adverse decision. That insurers not be permitted to draw a negative inference from a bad credit score that is due to medical bills, little or no credit information, or other special circumstances that are clearly not related to an applicant's or policyholder's insurability. That the impact of credit reports be mitigated by imposing limits on the weight that insurers can give to them in the decision to write a policy and limits on the amount the premium can be increased due to credit information. No evidence was presented that the "comprehensive and independent investigation" of insurers' use of credit information was undertaken by the Legislature. However, the other recommendations of the task force were addressed in Senate Bills 40A and 42A, enacted by the Legislature and signed by the governor on June 26, 2003. These companion bills, each with an effective date of January 1, 2004, were codified as Sections 626.9741 and 626.97411, Florida Statutes, respectively. Chapters 2003-407 and 2003-408, Laws of Florida. Section 626.9741, Florida Statutes, provides: The purpose of this section is to regulate and limit the use of credit reports and credit scores by insurers for underwriting and rating purposes.
This section applies only to personal lines motor vehicle insurance and personal lines residential insurance, which includes homeowners, mobile home owners' dwelling, tenants, condominium unit owners, cooperative unit owners, and similar types of insurance. As used in this section, the term: "Adverse decision" means a decision to refuse to issue or renew a policy of insurance; to issue a policy with exclusions or restrictions; to increase the rates or premium charged for a policy of insurance; to place an insured or applicant in a rating tier that does not have the lowest available rates for which that insured or applicant is otherwise eligible; or to place an applicant or insured with a company operating under common management, control, or ownership which does not offer the lowest rates available, within the affiliate group of insurance companies, for which that insured or applicant is otherwise eligible. "Credit report" means any written, oral, or other communication of any information by a consumer reporting agency, as defined in the federal Fair Credit Reporting Act, 15 U.S.C. ss. 1681 et seq., bearing on a consumer's credit worthiness, credit standing, or credit capacity, which is used or expected to be used or collected as a factor to establish a person's eligibility for credit or insurance, or any other purpose authorized pursuant to the applicable provision of such federal act. A credit score alone, as calculated by a credit reporting agency or by or for the insurer, may not be considered a credit report. "Credit score" means a score, grade, or value that is derived by using any or all data from a credit report in any type of model, method, or program, whether electronically, in an algorithm, computer software or program, or any other process, for the purpose of grading or ranking credit report data. "Tier" means a category within a single insurer into which insureds with substantially similar risk, exposure, or expense factors are placed for purposes of determining rate or premium. An insurer must inform an applicant or insured, in the same medium as the application is taken, that a credit report or score is being requested for underwriting or rating purposes. An insurer that makes an adverse decision based, in whole or in part, upon a credit report must provide at no charge, a copy of the credit report to the applicant or insured or provide the applicant or insured with the name, address, and telephone number of the consumer reporting agency from which the insured or applicant may obtain the credit report. The insurer must provide notification to the consumer explaining the reasons for the adverse decision. The reasons must be provided in sufficiently clear and specific language so that a person can identify the basis for the insurer's adverse decision. Such notification shall include a description of the four primary reasons, or such fewer number as existed, which were the primary influences of the adverse decision. The use of generalized terms such as "poor credit history," "poor credit rating," or "poor insurance score" does not meet the explanation requirements of this subsection. A credit score may not be used in underwriting or rating insurance unless the scoring process produces information in sufficient detail to permit compliance with the requirements of this subsection. 
It shall not be deemed an adverse decision if, due to the insured's credit report or credit score, the insured continues to receive a less favorable rate or placement in a less favorable tier or company at the time of renewal except for renewals or reunderwriting required by this section. (4)(a) An insurer may not request a credit report or score based upon the race, color, religion, marital status, age, gender, income, national origin, or place of residence of the applicant or insured. An insurer may not make an adverse decision solely because of information contained in a credit report or score without consideration of any other underwriting or rating factor. An insurer may not make an adverse decision or use a credit score that could lead to such a decision if based, in whole or in part, on: The absence of, or an insufficient, credit history, in which instance the insurer shall: Treat the consumer as otherwise approved by the Office of Insurance Regulation if the insurer presents information that such an absence or inability is related to the risk for the insurer; Treat the consumer as if the applicant or insured had neutral credit information, as defined by the insurer; Exclude the use of credit information as a factor and use only other underwriting criteria; Collection accounts with a medical industry code, if so identified on the consumer's credit report; Place of residence; or Any other circumstance that the Financial Services Commission determines, by rule, lacks sufficient statistical correlation and actuarial justification as a predictor of insurance risk. An insurer may use the number of credit inquiries requested or made regarding the applicant or insured except for: Credit inquiries not initiated by the consumer or inquiries requested by the consumer for his or her own credit information. Inquiries relating to insurance coverage, if so identified on a consumer's credit report. Collection accounts with a medical industry code, if so identified on the consumer's credit report. Multiple lender inquiries, if coded by the consumer reporting agency on the consumer's credit report as being from the home mortgage industry and made within 30 days of one another, unless only one inquiry is considered. Multiple lender inquiries, if coded by the consumer reporting agency on the consumer's credit report as being from the automobile lending industry and made within 30 days of one another, unless only one inquiry is considered. An insurer must, upon the request of an applicant or insured, provide a means of appeal for an applicant or insured whose credit report or credit score is unduly influenced by a dissolution of marriage, the death of a spouse, or temporary loss of employment. The insurer must complete its review within 10 business days after the request by the applicant or insured and receipt of reasonable documentation requested by the insurer, and, if the insurer determines that the credit report or credit score was unduly influenced by any of such factors, the insurer shall treat the applicant or insured as if the applicant or insured had neutral credit information or shall exclude the credit information, as defined by the insurer, whichever is more favorable to the applicant or insured. An insurer shall not be considered out of compliance with its underwriting rules or rates or forms filed with the Office of Insurance Regulation or out of compliance with any other state law or rule as a result of granting any exceptions pursuant to this subsection.
A rate filing that uses credit reports or credit scores must comply with the requirements of s. 627.062 or s. 627.0651 to ensure that rates are not excessive, inadequate, or unfairly discriminatory. An insurer that requests or uses credit reports and credit scoring in its underwriting and rating methods shall maintain and adhere to established written procedures that reflect the restrictions set forth in the federal Fair Credit Reporting Act, this section, and all rules related thereto. (7)(a) An insurer shall establish procedures to review the credit history of an insured who was adversely affected by the use of the insured's credit history at the initial rating of the policy, or at a subsequent renewal thereof. This review must be performed at a minimum of once every 2 years or at the request of the insured, whichever is sooner, and the insurer shall adjust the premium of the insured to reflect any improvement in the credit history. The procedures must provide that, with respect to existing policyholders, the review of a credit report will not be used by the insurer to cancel, refuse to renew, or require a change in the method of payment or payment plan. (b) However, as an alternative to the requirements of paragraph (a), an insurer that used a credit report or credit score for an insured upon inception of a policy, who will not use a credit report or score for reunderwriting, shall reevaluate the insured within the first 3 years after inception, based on other allowable underwriting or rating factors, excluding credit information if the insurer does not increase the rates or premium charged to the insured based on the exclusion of credit reports or credit scores. The commission may adopt rules to administer this section. The rules may include, but need not be limited to: Information that must be included in filings to demonstrate compliance with subsection (3). Statistical detail that insurers using credit reports or scores under subsection (5) must retain and report annually to the Office of Insurance Regulation. Standards that ensure that rates or premiums associated with the use of a credit report or score are not unfairly discriminatory, based upon race, color, religion, marital status, age, gender, income, national origin, or place of residence. Standards for review of models, methods, programs, or any other process by which to grade or rank credit report data and which may produce credit scores in order to ensure that the insurer demonstrates that such grading, ranking, or scoring is valid in predicting insurance risk of an applicant or insured. Section 626.97411, Florida Statutes, provides: Credit scoring methodologies and related data and information that are trade secrets as defined in s. 688.002 and that are filed with the Office of Insurance Regulation pursuant to a rate filing or other filing required by law are confidential and exempt from the provisions of s. 119.07(1) and s. 24(a), Art. I of the State Constitution. Following extensive rule development workshops and industry comment, proposed Florida Administrative Code Rule 69O-125.005 was initially published in the Florida Administrative Weekly on February 11, 2005. The Proposed Rule states as follows: 69O-125.005 Use of Credit Reports and Credit Scores by Insurers.
(1) For the purpose of this rule, the following definitions apply:

(a) "Applicant", for purposes of Section 626.9741, F.S., means an individual whose credit report or score is requested for underwriting or rating purposes relating to personal lines motor vehicle or personal lines residential insurance and shall not include individuals who have merely requested a quote.

(b) "Credit scoring methodology" means any methodology that uses credit reports or credit scores, in whole or in part, for underwriting or rating purposes.

(c) "Data cleansing" means the correction or enhancement of presumed incomplete, incorrect, missing, or improperly formatted information.

(d) "Personal lines motor vehicle" insurance means insurance against loss or damage to any motorized land vehicle or any loss, liability, or expense resulting from or incidental to ownership, maintenance or use of such vehicle if the contract of insurance shows one or more natural persons as named insureds. The following are not included in this definition:
1. Vehicles used as public livery or conveyance;
2. Vehicles rented to others;
3. Vehicles with more than four wheels;
4. Vehicles used primarily for commercial purposes; and
5. Vehicles with a net vehicle weight of more than 5,000 pounds designed or used for the carriage of goods (other than the personal effects of passengers) or drawing a trailer designed or used for the carriage of such goods.
The following are specifically included, inter alia, in this definition:
1. Motorcycles;
2. Motor homes;
3. Antique or classic automobiles; and
4. Recreational vehicles.

(e) "Unfairly discriminatory" means that adverse decisions resulting from the use of a credit scoring methodology disproportionately affects persons belonging to any of the classes set forth in Section 626.9741(8)(c), F.S.

(2) Insurers may not use any credit scoring methodology that is unfairly discriminatory. The burden of demonstrating that the credit scoring methodology is not unfairly discriminatory is upon the insurer.

(3) An insurer may not request or use a credit report or credit score in its underwriting or rating method unless it maintains and adheres to established written procedures that reflect the restrictions set forth in the federal Fair Credit Reporting Act, Section 626.9741, F.S., and these rules.

(4) Upon initial use or any change in that use, insurers using credit reports or credit scores for underwriting or rating personal lines residential or personal lines motor vehicle insurance shall include the following information in filings submitted pursuant to Section 627.062 or 627.0651, F.S.:

(a) A listing of the types of individuals whose credit reports or scores the company will use or attempt to use to underwrite or rate a given policy. For example: Person signing application; Named insured or spouse; and All listed operators.

(b) How those individual reports or scores will be combined if more than one is used. For example: Average score used; Highest score used.

(c) The name(s) of the consumer reporting agencies or any other third party vendors from which the company will obtain or attempt to obtain credit reports or scores.

(d) Precise identifying information specifying or describing the credit scoring methodology, if any, the company will use, including:
1. Common or trade name;
2. Version, subtype, or intended segment of business the system was designed for; and
3. Any other information needed to distinguish a particular credit scoring methodology from other similar ones, whether developed by the company or by a third party vendor.
(e) The effect of particular scores or ranges of scores (or, for companies not using scores, the effect of particular items appearing on a credit report) on any of the following, as applicable:
1. Rate or premium charged for a policy of insurance;
2. Placement of an insured or applicant in a rating tier;
3. Placement of an applicant or insured in a company within an affiliated group of insurance companies;
4. Decision to refuse to issue or renew a policy of insurance or to issue a policy with exclusions or restrictions or limitations in payment plans.

(f) The effect of the absence or insufficiency of credit history (as referenced in Section 626.9741(4)(c)1., F.S.) on any items listed in paragraph (e) above.

(g) The manner in which collection accounts identified with a medical industry code (as referenced in Section 626.9741(4)(c)2., F.S.) on a consumer's credit report will be treated in the underwriting or rating process or within any credit scoring methodology used.

(h) The manner in which collection accounts that are not identified with a medical industry code, but which an applicant or insured demonstrates are the direct result of significant and extraordinary medical expenses, will be treated in the underwriting or rating process or within any credit scoring methodology used.

(i) The manner in which the following will be treated in the underwriting or rating process, or within any credit scoring methodology used:
1. Credit inquiries not initiated by the consumer;
2. Requests by the consumer for the consumer's own credit information;
3. Multiple lender inquiries, if coded by the consumer reporting agency on the consumer's credit report as being from the automobile lending industry or the home mortgage industry and made within 30 days of one another;
4. Multiple lender inquiries that are not coded by the consumer reporting agency on the consumer's credit report as being from the automobile lending industry or the home mortgage industry and made within 30 days of one another, but that an applicant or insured demonstrates are the direct result of such inquiries;
5. Inquiries relating to insurance coverage, if so identified on a consumer's credit report; and
6. Inquiries relating to insurance coverage that are not so identified on a consumer's credit report, but which an applicant or insured demonstrates are the direct result of such inquiries.

(j) The list of all clear and specific primary reasons that may be cited to the consumer as the basis or explanation for an adverse decision under Section 626.9741(3), F.S., and the criteria determining when each of those reasons will be so cited.

(k) A description of the process that the insurer will use to correct any error in premium charged the insured, or in underwriting decision made concerning the insured, if the basis of the premium charged or the decision made is a disputed item that is later removed from the credit report or corrected, provided that the insured first notifies the insurer that the item has been removed or corrected.

(l) A certification that no use of credit reports or scores in rating insurance will apply to any component of a rate or premium attributed to hurricane coverage for residential properties as separately identified in accordance with Section 627.0629, F.S.
(5) Insurers desiring to make adverse decisions for personal lines motor vehicle policies or personal lines residential policies based on the absence or insufficiency of credit history shall either:

(a) Treat such consumers or applicants as otherwise approved by the Office of Insurance Regulation if the insurer presents information that such an absence or inability is related to the risk for the insurer and does not result in a disparate impact on persons belonging to any of the classes set forth in Section 626.9741(8)(c), F.S. This information will be held as confidential if properly so identified by the insurer and eligible under Section 626.97411, F.S. The information shall include:
1. Data comparing experience for each category of those with absent or insufficient credit history to each category of insureds separately treated with respect to credit and having sufficient credit history;
2. A statistically credible method of analysis that concludes that the relationship between absence or insufficiency and the risk assumed is not due to chance;
3. A statistically credible method of analysis that concludes that absence or insufficiency of credit history does not disparately impact persons belonging to any of the classes set forth in Section 626.9741(8)(c), F.S.;
4. A statistically credible method of analysis that confirms that the treatment proposed by the insurer is quantitatively appropriate; and
5. Statistical tests establishing that the treatment proposed by the insurer is warranted for the total of all consumers with absence or insufficiency of credit history and for at least two subsets of such consumers.

(b) Treat such consumers as if the applicant or insured had neutral credit information, as defined by the insurer. Should an insurer fail to specify a definition, neutral is defined as the average score that a stratified random sample of consumers or applicants having sufficient credit history would attain using the insurer's credit scoring methodology; or

(c) Exclude credit as a factor and use other criteria. These other criteria must be specified by the insurer and must not result in average treatment for the totality of consumers with an absence of or insufficiency of credit history any less favorable than the treatment of average consumers or applicants having sufficient credit history.

(6) Insurers desiring to make adverse decisions for personal lines motor vehicle or personal lines residential insurance based on information contained in a credit report or score shall file with the Office information establishing that the results of such decisions do not correlate so closely with the zip code of residence of the insured as to constitute a decision based on place of residence of the insured in violation of Section 626.9741(4)(c)3., F.S.

(7)(a) Insurers using credit reports or credit scores for underwriting or rating personal lines residential or personal lines motor vehicle insurance shall develop, maintain, and adhere to written procedures consistent with Section 626.9741(4)(e), F.S., providing appeals for applicants or insureds whose credit reports or scores are unduly influenced by dissolution of marriage, death of a spouse, or temporary loss of employment.

(b) These procedures shall be subject to examination by the Office at any time.
(8)(a)1. Insurers using credit reports or credit scoring in rating personal lines motor vehicle or personal lines residential insurance shall develop, maintain, and adhere to written procedures to review the credit history of an insured who was adversely affected by such use at initial rating of the policy or subsequent renewal thereof. These procedures shall be subject to examination by the Office at any time.

2. The procedures shall comply with the following:

a. A review shall be conducted:
(I) No later than 2 years following the date of any adverse decision, or
(II) Any time, at the request of the insured, but no more than once per policy period without insurer assent.

b. The insurer shall notify the named insureds annually of their right to request the review in (II) above. Renewal notices issued 120 days or less after the effective date of this rule are not included in this requirement.

c. The insurer shall adjust the premium to reflect any improvement in credit history no later than the first renewal date that follows a review of credit history. The renewal premium shall be subject to other rating factors lawfully used by the insurer.

d. The review shall not be used by the insurer to cancel, refuse to renew, or require a change in the method of payment or payment plan based on credit history.

(b)1. As an alternative to the requirements in paragraph (8)(a), insurers using credit reports or scores at the inception of a policy but not for re-underwriting shall develop, maintain, and adhere to written procedures. These procedures shall be subject to examination by the Office at any time.

2. The procedures shall comply with the following:

a. Insureds shall be reevaluated no later than 3 years following policy inception based on allowable underwriting or rating factors, excluding credit information.

b. The rate or premium charged to an insured shall not be greater, solely as a result of the reevaluation, than the rate or premium charged for the immediately preceding policy term. This shall not be construed to prohibit an insurer from applying regular underwriting criteria (which may result in a greater premium) or general rate increases to the premium charged.

c. For insureds that received an adverse decision notification at policy inception, no residual effects of that adverse decision shall survive the reevaluation. This means that the reevaluation must be complete enough to make it possible for insureds adversely impacted at inception to attain the lowest available rate for which comparable insureds are eligible, considering only allowable underwriting or rating factors (excluding credit information) at the time of the reevaluation.

(9) No credit scoring methodology shall be used for personal lines motor vehicle or personal lines residential property insurance unless that methodology has been demonstrated to be a valid predictor of the insurance risk to be assumed by an insurer for the applicable type of insurance. The demonstration of validity detailed below need only be provided with the first rate, rule, or underwriting guidelines filing following the effective date of this rule and at any time a change is made in the credit scoring methodology. Other such filings may instead refer to the most recent prior filing containing a demonstration. Information supplied in the context of a demonstration of validity will be held as confidential if properly so identified by the insurer and eligible under Section 626.97411, F.S.
A demonstration of validity shall include:

(a) A listing of the persons that contributed substantially to the development of the most current version of the method, including resumes of the persons, if obtainable, indicating their qualifications and experience in similar endeavors.

(b) An enumeration of all data cleansing techniques that have been used in the development of the method, which shall include:
1. The nature of each technique;
2. Any biases the technique might introduce; and
3. The prevalence of each type of invalid information prior to correction or enhancement.

(c) All data that was used by the model developers in the derivation and calibration of the model parameters. Data shall be in sufficient detail to permit the Office to conduct multiple regression testing for validation of the credit scoring methodology. Data, including field definitions, shall be supplied in electronic format compatible with the software used by the Office.

(d) Statistical results showing that the model and parameters are predictive and not overlapping or duplicative of any other variables used to rate an applicant to such a degree as to render their combined use actuarially unsound. Such results shall include the period of time for which each element from a credit report is used.

(e) A precise listing of all elements from a credit report that are used in scoring, and the formula used to compute the score, including the time period during which each element is used. Such listing is confidential if properly so identified by the insurer.

(f) An assessment by a qualified actuary, economist, or statistician (whether or not employed by the insurer) other than persons who contributed substantially to the development of the credit scoring methodology, concluding that there is a significant statistical correlation between the scores and frequency or severity of claims. The assessment shall:
1. Identify the person performing the assessment and show his or her educational and professional experience qualifications; and
2. Include a test of robustness of the model, showing that it performs well on a credible validation data set. The validation data set may not be the one from which the model was developed.

(g) Documentation consisting of statistical testing of the application of the credit scoring model to determine whether it results in a disproportionate impact on the classes set forth in Section 626.9741(8)(c), F.S. A model that disproportionately affects any such class of persons is presumed to have a disparate impact and is presumed to be unfairly discriminatory.
1. Statistical analysis shall be performed on the current insureds of the insurer using the proposed credit scoring model, and shall include the raw data and detailed results on each classification set forth in Section 626.9741(8)(c), F.S. In lieu of such analysis insurers may use the alternative in 2. below.
2. Alternatively, insurers may submit statistical studies and analyses that have been performed by educational institutions, independent professional associations, or other reputable entities recognized in the field, that indicate that there is no disproportionate impact on any of the classes set forth in Section 626.9741(8)(c), F.S., attributable to the use of credit reports or scores. Any such studies or analyses shall have been done concerning the specific credit scoring model proposed by the insurer.
The Office will utilize generally accepted statistical analysis principles in reviewing studies submitted which support the insurer's analysis that the credit scoring model does not disproportionately impact any class based upon race, color, religion, marital status, age, gender, income, national origin, or place of residence. The Office will permit reliance on such studies only to the extent that they permit independent verification of the results.

(h) The testing or validation results obtained in the course of the assessment in paragraphs (d) and (f) above.

(i) Internal insurer data that validates the premium differentials proposed based on the scores or ranges of scores.
1. Industry or countrywide data may be used to the extent that the Florida insurer data lacks credibility based upon generally accepted actuarial standards. Insurers using industry or countrywide data for validation shall supply Florida insurer data and demonstrate that generally accepted actuarial standards would allow reliance on each set of data to the extent the insurer has done so.
2. Validation data including claims on personal lines residential insurance policies that are the result of acts of God shall not be used unless such acts occurred prior to January 1, 2004.
3. The mere copying of another company's system will not fulfill the requirement to validate proposed premium differentials unless the filer has used a method or system for less than 3 years and demonstrates that it is not cost effective to retrospectively analyze its own data. Companies under common ownership, management, and control may copy to fulfill the requirement to validate proposed premium differentials if they demonstrate that the characteristics of the business to be written by the affiliate doing the copying are sufficiently similar to the affiliate being copied to presume common differentials will be accurate.

(j) The credibility standards and any judgmental adjustments, including limitations on effects, that have been used in the process of deriving premium differentials proposed and validated in paragraph (i) above.

(k) An explanation of how the credit scoring methodology treats discrepancies in the information that could have been obtained from different consumer reporting agencies: Equifax, Experian, or TransUnion. This shall not be construed to require insurers to obtain multiple reports for each insured or applicant.

(l)1. The date that each of the analyses, tests, and validations required in paragraphs (d) through (j) above was most recently performed, and a certification that the results continue to be applicable.
2. Any item not reviewed in the previous 5 years is unacceptable.

Specific Authority 624.308(1), 626.9741(8) FS. Law Implemented 624.307(1), 626.9741 FS. History-- New .

The Petition

1. Statutory Definitions of "Unfairly Discriminatory"

The main issue raised by Petitioners is that the Proposed Rule's definition of "unfairly discriminatory," and those portions of the Proposed Rule that rely on this definition, are invalid because they are vague and because they enlarge, modify, and contravene the provisions of the law implemented and other provisions of the insurance code. Section 626.9741, Florida Statutes, does not define "unfairly discriminatory." Subsection 626.9741(5), Florida Statutes, provides that a rate filing using credit reports or scores "must comply with the requirements of s. 627.062 or s. 627.0651 to ensure that rates are not excessive, inadequate, or unfairly discriminatory."
Subsection 626.9741(8)(c), Florida Statutes, provides that the FSC may adopt rules, including standards to ensure that rates or premiums "associated with the use of a credit report or score are not unfairly discriminatory, based upon race, color, religion, marital status, age, gender, income, national origin, or place of residence." Chapter 627, Part I, Florida Statutes, is referred to as the "Rating Law." § 627.011, Fla. Stat. The purpose of the Rating Law is to "promote the public welfare by regulating insurance rates . . . to the end that they shall not be excessive, inadequate, or unfairly discriminatory." § 627.031(1)(a), Fla. Stat. The Rating Law provisions referenced by Subsection 626.9741(5), Florida Statutes, in relation to ensuring that rates are not "unfairly discriminatory" are Sections 627.062 and 627.0651, Florida Statutes. Section 627.062, Florida Statutes, titled "Rate standards," provides that "[t]he rates for all classes of insurance to which the provisions of this part are applicable shall not be excessive, inadequate, or unfairly discriminatory." § 627.062(1), Fla. Stat. Subsection 627.062(2)(e)6., Florida Statutes, provides:

A rate shall be deemed unfairly discriminatory as to a risk or group of risks if the application of premium discounts, credits, or surcharges among such risks does not bear a reasonable relationship to the expected loss and expense experience among the various risks.

Section 627.0651, Florida Statutes, titled "Making and use of rates for motor vehicle insurance," provides, in relevant part:

One rate shall be deemed unfairly discriminatory in relation to another in the same class if it clearly fails to reflect equitably the difference in expected losses and expenses. Rates are not unfairly discriminatory because different premiums result for policyholders with like loss exposures but different expense factors, or like expense factors but different loss exposures, so long as rates reflect the differences with reasonable accuracy. Rates are not unfairly discriminatory if averaged broadly among members of a group; nor are rates unfairly discriminatory even though they are lower than rates for nonmembers of the group. However, such rates are unfairly discriminatory if they are not actuarially measurable and credible and sufficiently related to actual or expected loss and expense experience of the group so as to assure that nonmembers of the group are not unfairly discriminated against. Use of a single United States Postal Service zip code as a rating territory shall be deemed unfairly discriminatory.

Petitioners point out that each of these statutory examples describing "unfairly discriminatory" rates has an actuarial basis, i.e., rates must be related to the actual or expected loss and expense factors for a given group or class, rather than to any extraneous factors. If two risks have the same expected losses and expenses, the insurer must charge them the same rate. If the risks have different expected losses and expenses, the insurer must charge them different rates. Michael Miller, Petitioners' expert actuary, testified that the term "unfairly discriminatory" has been used in the insurance industry for well over 100 years and has always had this cost-based definition.
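To make this cost-based standard concrete, the following sketch (in Python) computes indicated rates for hypothetical risks. The figures, the expense loading, and the indicated_rate function are invented for illustration only; they are not drawn from the record or from any filing.

    # Illustration only: the cost-based ("actuarial") standard in miniature.
    # Pure premium = expected claim frequency x expected claim severity;
    # the indicated rate grosses the pure premium up for expenses and profit.

    def indicated_rate(frequency: float, severity: float, expense_ratio: float) -> float:
        """Indicated rate per policy under a standard expense-loading formula."""
        pure_premium = frequency * severity        # expected loss per policy
        return pure_premium / (1 - expense_ratio)  # gross up for expenses

    risk_a = indicated_rate(frequency=0.05, severity=8_000, expense_ratio=0.25)
    risk_b = indicated_rate(frequency=0.05, severity=8_000, expense_ratio=0.25)
    risk_c = indicated_rate(frequency=0.10, severity=8_000, expense_ratio=0.25)

    # Risks A and B have identical expected losses and expenses, so they must
    # receive identical rates; risk C has double the expected frequency, so a
    # higher rate for C is not "unfairly discriminatory" on this definition.
    print(risk_a, risk_b, risk_c)  # 533.33..., 533.33..., 1066.66...

On this definition, the only question is whether premium differences track differences in expected cost; the demographic composition of the groups producing those expectations is not itself examined.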
Mr. Miller is a fellow of the Casualty Actuarial Society ("CAS"), a professional organization whose purpose is the advancement of the body of knowledge of actuarial science, including the promulgation of industry standards and a code of professional conduct. Mr. Miller was chair of the CAS ratemaking committee when it developed the CAS "Statement of Principles Regarding Property and Casualty Insurance Ratemaking," a guide for actuaries to follow when establishing rates.5 Principle 4 of the Statement of Principles provides: "A rate is reasonable and not excessive, inadequate, or unfairly discriminatory if it is an actuarially sound estimate of the expected value of all future costs associated with an individual risk." In layman's terms, Mr. Miller explained that different types of risks are reflected in a rate calculation. To calculate the expected cost of a given risk, and thus the rate to be charged, the insurer must determine the expected losses for that risk during the policy period. The loss portion reflects the risk associated with an occurrence and the severity of a claim. While the loss portion does not account for the entirety of the rate charged, it is the most important component in terms of magnitude. Mr. Miller cautioned that the calculation of risk is a quantification of expected loss, but not an attempt to predict who is going to have an accident or make a claim. There is some likelihood that every insured will make a claim, though most never do, and this uncertainty is built into the incurred loss portion of the rate. No single risk factor is a complete measure of a person's likelihood of having an accident or of the severity of the ensuing claim. The prediction of losses is determined through a risk classification plan that takes into consideration many risk factors (also called rating factors) to determine the likelihood of an accident and the extent of the claim. As to automobile insurance, Mr. Miller listed such risk factors as the age, gender, and marital status of the driver; the type, model, and age of the car; the liability limits of the coverage; and the geographical location where the car is garaged. As to homeowners insurance, Mr. Miller listed such risk factors as the location of the home, its value and type of construction, the age of the utilities and electrical wiring, and the amount of insurance to be carried.

2. Credit Scoring as a Rating Factor

In the current market, the credit score of the applicant or insured is a rating factor common to automobile and homeowners insurance. Subsection 626.9741(2)(c), Florida Statutes, defines "credit score" as follows:

a score, grade, or value that is derived by using any or all data from a credit report in any type of model, method, or program, whether electronically, in an algorithm, computer software or program, or any other process, for the purpose of grading or ranking credit report data.

"Credit scores" (more accurately termed "credit-based insurance scores") are derived from credit data that have been found to be predictive of loss. Lamont Boyd, Fair Isaac's insurance market manager, explained the manner in which Fair Isaac produced its credit scoring model. The company obtained information from various insurance companies on millions of customers. This information included the customers' names and addresses, the premiums earned by the companies on those policies, and the losses incurred. Fair Isaac next requested the credit reporting agencies to review their archived files for the credit information on those insurance company customers. The credit agencies matched the credit files with the insurance customers, then "depersonalized" the files so that there was no way for Fair Isaac to know the identity of any particular customer.
According to Mr. Boyd, the data were "color blind" and "income blind." Fair Isaac's analysts took these files from the credit reporting agencies and studied the data in an effort to find the most predictive characteristics of future loss propensity. The model was developed to account for all the predictive characteristics identified by Fair Isaac's analysts, and to give weight to those characteristics in accordance with their relative accuracy as predictors of loss. Fair Isaac does not directly sell its credit scores to insurance companies. Rather, Fair Isaac's models are implemented by the credit reporting agencies. When an insurance company wants Fair Isaac's credit score, it purchases access to the model's results from the credit reporting agency. Other vendors offer similar credit scoring models to insurance companies, and in recent years, some insurance companies have developed their own scoring models. Several academic studies of credit scoring were admitted and discussed at the final hearing in these cases. There appears to be no serious debate that credit scoring is a valid and important predictor of losses. The controversy over the use of credit scoring arises over its possible "unfairly discriminatory" impact "based upon race, color, religion, marital status, age, gender, income, national origin, or place of residence." § 626.9741(8)(c), Fla. Stat. Mr. Miller was one of two principal authors of a June 2003 study titled "The Relationship of Credit-Based Insurance Scores to Private Passenger Automobile Insurance Loss Propensity." This study was commissioned by several insurance industry trade organizations, including AIA and NAMIC. The study addressed three questions: whether credit-based insurance scores are related to the propensity for loss; whether credit-based insurance scores measure risk that is already measured by other risk factors; and what relative importance the use of credit-based insurance scores has to accurate risk assessment. The study was based on a nationwide random sample of private passenger automobile policy and claim records. Records from all 50 states were included in roughly the same proportion as each state's registered motor vehicles bear to total registered vehicles in the United States. The data samples were provided by seven insurers, and represented approximately 2.7 million automobiles, each insured for 12 months.6 The study examined all major automobile coverages: bodily injury liability, property damage liability, medical payments coverage, personal injury protection coverage, comprehensive coverage, and collision coverage. The study concluded that credit-based insurance scores were correlated with loss propensity. The study found that insurance scores overlap to some degree with other risk factors, but that after fully accounting for the overlaps, insurance scores significantly increase the accuracy of the risk assessment process. The study found that, for each of the six automobile coverages examined, insurance scores are among the three most important risk factors.7 Mr. Miller's study did not examine the question of causality, i.e., why credit-based insurance scores are predictive of loss propensity. Dr. Patrick Brockett testified for Petitioners as an expert in actuarial science, risk management and insurance, and statistics. Dr. Brockett is a professor in the departments of management science and information systems, finance, and mathematics at the University of Texas at Austin. He occupies the Gus S.
Wortham Memorial Chair in Risk Management and Insurance, and is the director of the university's risk management and insurance program. Dr. Brockett is the former director of the University of Texas' actuarial science program and continues to direct the study of students seeking their doctoral degrees in actuarial science. His areas of academic research are actuarial science, risk management and insurance, statistics, and general quantitative methods in business. Dr. Brockett has written more than 130 publications, most of which relate to actuarial science and insurance. He has spent his entire career in academia, and has never been employed by an insurance company. In 2002, Lieutenant Governor Bill Ratliff of Texas asked the Bureau of Business Research ("BBR") of the University of Texas' McCombs School of Business to provide an independent, nonpartisan study to examine the relationship between credit history and insurance losses in automobile insurance. Dr. Brockett was one of four named authors of this BBR study, issued in March 2003 and titled, "A Statistical Analysis of the Relationship between Credit History and Insurance Losses." The BBR research team solicited data from insurance companies representing the top 70 percent of the automobile insurers in Texas, and compiled a database of more than 173,000 automobile insurance policies from the first quarter of 1998 that included the following 12 months' premium and loss history. ChoicePoint was then retained to match the named insureds with their credit histories and to supply a credit score for each insured person. The BBR research team then examined the credit score and its relationship with prospective losses for the insurance policy. The results were summarized in the study as follows: Using logistic and multiple regression analyses, the research team tested whether the credit score for the named insured on a policy was significantly related to incurred losses for that policy. It was determined that there was a significant relationship. In general, lower credit scores were associated with larger incurred losses. Next, logistic and multiple regression analyses examined whether the revealed relationship between credit score and incurred losses was explainable by existing underwriting variables, or whether the credit score added new information about losses not contained in the existing underwriting variables. It was determined that credit score did yield new information not contained in the existing underwriting variables. What the study does not attempt to explain is why credit scoring adds significantly to the insurer's ability to predict insurance losses. In other words, causality was not investigated. In addition, the research team did not examine such variables as race, ethnicity, and income in the study, and therefore this report does not speculate about the possible effects that credit scoring may have in raising or lowering premiums for specific groups of people. Such an assessment would require a different study and different data. At the hearing, Dr. Brockett testified that the BBR study demonstrated a "strong and significant relationship between credit scoring and incurred losses," and that credit scoring retained its predictive power even after the other risk variables were accounted for. Dr. Brockett further testified that credit scoring has a disproportionate effect on the classifications of age and marital status, because the very young tend to have credit scores that are lower than those of older people. 
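The regression analyses described in these studies can be sketched in miniature. The following Python fragment is a hypothetical illustration only (simulated data, invented variable names, and the statsmodels library), not the BBR study's actual code or data. It fits a logistic regression of claim occurrence on a credit score together with conventional rating variables; a statistically significant coefficient on the score, with the other variables held fixed, is the kind of result both studies report.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 10_000

    # Simulated policy-level data (all names and magnitudes are invented).
    credit_score = rng.normal(700, 80, n)
    driver_age = rng.uniform(18, 80, n)
    prior_claims = rng.poisson(0.2, n)

    # Simulate claim occurrence so that lower scores raise claim probability.
    true_logit = (-2.0 - 0.004 * (credit_score - 700)
                  + 0.30 * prior_claims - 0.01 * (driver_age - 40))
    had_claim = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(int)

    # Logistic regression of claim occurrence on the score plus other factors.
    X = sm.add_constant(np.column_stack([credit_score, driver_age, prior_claims]))
    fit = sm.Logit(had_claim, X).fit(disp=False)
    print(fit.summary(xname=["const", "credit_score", "driver_age", "prior_claims"]))

    # A significant coefficient on credit_score after controlling for the other
    # variables indicates the score adds information they do not already carry.

In this framework, the score "adds new information" in precisely the sense the BBR study describes: its coefficient remains significant even after the other underwriting variables are included in the model.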
If the question is simply whether the use of credit scores will have a greater impact on the young and the single, the answer would be in the affirmative. However, Dr. Brockett also noted that young, single people will also have higher losses than older, married people, and, thus, the use of credit scores is not "unfairly discriminatory" in the sense that term is employed in the insurance industry.8 Mr. Miller testified that nothing in the actuarial standards of practice requires that a risk factor be causally related to a loss. The Actuarial Standards Board's Standard of Practice 12,9 dealing with risk classification, states that a risk factor is appropriate for use if there is a demonstrated relationship between the risk factor and the insurance losses, and that this relationship may be established by statistical or other mathematical analysis of data. If the risk characteristic is shown to be related to an expected outcome, the actuary need not establish a cause-and-effect relationship between the risk characteristic and the expected outcome. As an example, Mr. Miller offered the fact that past automobile accidents do not cause future accidents, although past accidents are predictive of future risk. Past traffic violations, the age of the driver, the gender of the driver, and the geographical location are all risk factors in automobile insurance, though none of these factors can be said to cause future accidents. They help insurers predict the probability of a loss, but do not predict who will have an accident or why the accident will occur. Mr. Miller opined that credit scoring is a similar risk factor. It is demonstrably significant as a predictor of risk, though there is no causal relationship between credit scores and losses and only an incomplete understanding of why credit scoring works as a predictor of loss. At the hearing, Dr. Brockett discussed a study that he has co-authored with Linda Golden, a business professor at the University of Texas at Austin. Titled "Biological and Psychobehavioral Correlates of Risk Taking, Credit Scores, and Automobile Insurance Losses: Toward an Explication of Why Credit Scoring Works," the study has been peer-reviewed and at the time of the hearing had been accepted for publication in the Journal of Risk and Insurance. In this study, the authors conducted a detailed review of existing scientific literature concerning the biological, psychological, and behavioral attributes of risky automobile drivers and insured losses, and a similar review of literature concerning the biological, psychological, and behavioral attributes of financial risk takers. The study found that basic chemical and psychobehavioral characteristics, such as a sensation-seeking personality type, are common to individuals exhibiting both higher insured automobile losses and poorer credit scores. Dr. Brockett testified that this study provides a direction for future research into the reasons why credit scoring works as an insurance risk characteristic.

3. The Proposed Rule's Definition of "Unfairly Discriminatory"

Petitioners contend that the Proposed Rule's definition of the term "unfairly discriminatory" expands upon and is contrary to the statutory definition of the term discussed in section C.1. supra, and that this expanded definition operates to impose a ban on the use of credit scoring by insurance companies. As noted above, Section 626.9741, Florida Statutes, does not define the term "unfairly discriminatory."
The provisions of the Rating Law10 define the term as it is generally understood by the insurance industry: a rate is deemed "unfairly discriminatory" if the premium charged does not equitably reflect the differences in expected losses and expenses between policyholders. Two provisions of Section 626.9741, Florida Statutes, employ the term "unfairly discriminatory": (5) A rate filing that uses credit reports or credit scores must comply with the requirements of s. 627.062 or s. 627.0651 to ensure that rates are not excessive, inadequate, or unfairly discriminatory. * * * (8) The commission may adopt rules to administer this section. The rules may include, but need not be limited to: * * * (c) Standards that ensure that rates or premiums associated with the use of a credit report or score are not unfairly discriminatory, based upon race, color, religion, marital status, age, gender, income, national origin, or place of residence. Petitioners contend that the statute's use of the term "unfairly discriminatory" is unexceptionable, that the Legislature simply intended the term to be used and understood in the traditional sense of actuarial soundness alone. Respondents agree that Subsection 626.9741(5), Florida Statutes, calls for the agency to apply the traditional definition of "unfairly discriminatory" as that term is employed in the statutes directly referenced, Sections 627.062 and 627.0651, Florida Statutes, the relevant texts of which are set forth in Findings of Fact 18 and 19 above. However, Respondents contend that Subsection 626.9741(8)(c), Florida Statutes, calls for more than the application of the Rating Law's definition of the term. Respondents assert that in the context of this provision, "unfairly discriminatory" contemplates not only the predictive function, but also "discrimination" in its more common sense, as the term is employed in state and federal civil rights law regarding race, color, religion, marital status, age, gender, income, national origin, or place of residence. At the hearing, OIR General Counsel Steven Parton testified as to the reasons why the agency chose the federal body of law using the term "disparate impact" as the test for unfair discrimination in the Proposed Rule: Well, first of all, what we were looking for is a workable definition that people would have some understanding as to what it meant when we talked about unfair discrimination. We were also looking for a test that did not require any willfulness, because it was not our concern that, in fact, insurance companies were engaging willfully in unfair discrimination. What we believed is going on, and we think all of the studies that are out there suggest, is that credit scoring is having a disparate impact upon various people, whether it be income, whether it be race. . . . Respondents' position is that Subsection 626.9741(8)(c), Florida Statutes, requires that a proposed rate or premium be rejected if it has a "disproportionately" negative effect on one of the named classes of persons, even though the rate or premium equitably reflects the differences in expected losses and expenses between policyholders. In the words of Mr. Parton, "This is not an actuarial rule." Mr. Parton explained the agency's rationale for employing a definition of "unfairly discriminatory" that is different from the actuarial usage employed in the Rating Law. 
Subsection 626.9741(5), Florida Statutes, already provides that an insurer's rate filings may not be "excessive, inadequate, or unfairly discriminatory" in the actuarial sense. To read Subsection 626.9741(8)(c), Florida Statutes, as simply a reiteration of the actuarial "unfair discrimination" rule would render the provision, "a nullity. There would be no force and effect with regards to that." Thus, the Proposed Rule defines "unfairly discriminatory" to mean "that adverse decisions resulting from the use of a credit scoring methodology disproportionately affects persons belonging to any of the classes set forth in Section 626.9741(8)(c), F.S." Proposed Florida Administrative Code Rule 69O-125.005(1)(e). OIR's actuary, Howard Eagelfeld, explained that "disproportionate effect" means "having a different effect on one group . . . causing it to pay more or less premium than its proportionate share in the general population or than it would have to pay based upon all other known considerations." Mr. Eagelfeld's explanation is not incorporated into the language of the Proposed Rule. Consistent with the actuarial definition of "unfairly discriminatory," the Proposed Rule requires that any credit scoring methodology must be "demonstrated to be a valid predictor of the insurance risk to be assumed by an insurer for the applicable type of insurance," and sets forth detailed criteria through which the insurer can make the required demonstration. Proposed Florida Administrative Code Rule 69O-125.005(9)(a)-(f) and (h)-(l). Proposed Florida Administrative Code Rule 69O-125.005(9)(g) sets forth Respondents' "civil rights" usage of the term "unfairly discriminatory." The insurer's demonstration of the validity of its credit scoring methodology must include: [d]ocumentation consisting of statistical testing of the application of the credit scoring model to determine whether it results in a disproportionate impact on the classes set forth in Section 626.9741(8)(c), F.S. A model that disproportionately affects any such class of persons is presumed to have a disparate impact and is presumed to be unfairly discriminatory.11 Mr. Parton, who testified in defense of the Proposed Rule as one of its chief draftsmen, stated that the agency was concerned that the use of credit scoring may be having a disproportionate effect on minorities. Respondents believe that credit scoring may simply be a surrogate measure for income, and that using income as a basis for setting rates would have an obviously disparate impact on lower-income persons, including the young and the elderly. Mr. Parton testified that "neither the insurance industry nor anyone else" has researched the theory that credit scoring may be a surrogate for income. Mr. Miller referenced a 1998 analysis performed by AIA indicating that the average credit scores do not vary significantly according to the income group. In fact, the lowest income group (persons making less than $15,000 per year) had the highest average credit score, and the average credit scores actually dropped as income levels rose until the income range reached $50,000 to $74,000 per year, when the credit scores began to rise. Mr. Miller testified that a credit score is no more predictive of income level than a coin flip. However, Respondents introduced a January 2003 report to the Washington State Legislature prepared by the Social & Economic Sciences Research Center of Washington State University, titled "Effect of Credit Scoring on Auto Insurance Underwriting and Pricing." 
The purpose of the study was to determine whether credit scoring has unequal impacts on specific demographic groups. For this study, the researchers received data from three insurance companies on several thousand randomly chosen customers, including the customers' age, gender, residential zip code, and their credit scores and/or rate classifications. The researchers contacted about 1,000 of each insurance company's customers and obtained information about their ethnicity, marital status, and income levels. The study's findings were summarized as follows:

The demographic patterns discerned by the study are:

Age is the most significant factor. In almost every analysis, older drivers have, on average, higher credit scores, lower credit-based rate assignments, and less likelihood of lacking a valid credit score.

Income is also a significant factor. Credit scores and premium costs improve as income rises. People in the lowest income categories-- less than $20,000 per year and between $20,000 and $35,000 per year-- often experienced higher premiums and lower credit scores. More people in lower income categories also lacked sufficient credit history to have a credit score.

Ethnicity was found to be significant in some cases, but because of differences among the three firms studied and the small number of ethnic minorities in the samples, the data are not broadly conclusive. In general, Asian/Pacific Islanders had credit scores more similar to whites than to other minorities. When other minority groups had significant differences from whites, the differences were in the direction of higher premiums. In the sample of cases where insurance was cancelled based on credit score, minorities who were not Asian/Pacific Islanders had greater difficulty finding replacement insurance, and were more likely to experience a lapse in insurance while they searched for a new policy.

The analysis also considered gender, marital status, and location, but for these factors, significant unequal effects were far less frequent. (emphasis added)

The evidence appears equivocal on the question of whether credit scoring is a surrogate for income. The Washington study seems to indicate that ethnicity may be a significant factor in credit scoring, but that significant unequal effects are infrequent regarding gender and marital status. The evidence demonstrates that the use of credit scores by insurers would tend to have a negative impact on young people. Mr. Miller testified that persons between ages 25 and 30 have lower credit scores than older people. Petitioners argue that by defining "unfairly discriminatory" to mean "disproportionate effect," the Proposed Rule effectively prohibits insurers from using credit scores, if only because all the parties recognize that credit scores have a "disproportionate effect" on young people. Petitioners contend that this prohibition is in contravention of Section 626.9741(1), Florida Statutes, which states that the purpose of the statute is to "regulate and limit" the use of credit scores, not to ban them outright. Respondents counter that if the use of credit scores is "unfairly discriminatory" toward one of the listed classes of persons in contravention of Subsection 626.9741(8)(c), Florida Statutes, then the "limitation" allowed by the statute must include prohibition.
This point is obviously true but sidesteps the real issues: whether the statute's undefined prohibition on "unfair discrimination" authorizes the agency to employ a "disparate impact" or "disproportionate effect" definition in the Proposed Rule, and, if so, whether the Proposed Rule sufficiently defines any of those terms to permit an insurer to comply with the rule's requirements. Proposed Florida Administrative Code Rule 69O-125.005(2) provides that the insurer bears the burden of demonstrating that its credit scoring methodology does not disproportionately affect persons based upon their race, color, religion, marital status, age, gender, income, national origin, or place of residence. Petitioners state that no insurer can demonstrate, consistent with the Proposed Rule, that its credit scoring methodology does not have a disproportionate effect on persons based upon their age. Therefore, no insurer will ever be permitted to use credit scores under the terms of the Proposed Rule. As discussed more fully in Findings of Fact 73 through 76 below, Petitioners also contend that the Proposed Rule provides no guidance as to what "disproportionate effect" and "disparate impact" mean, and that this lack of definitional guidance will permit the agency to reject any rate filing that uses credit scoring, based upon an arbitrary determination that it has a "disproportionate effect" on one of the classes named in Subsection 626.9741(8)(c), Florida Statutes. Petitioners also presented evidence that no insurer collects data on race, color, religion, or national origin from applicants or insureds. Mr. Miller testified that there is no reliable independent source for race, color, religious affiliation, or national origin data. Mr. Eagelfeld agreed that there is no independent source from which insurers can obtain credible data on race or religious affiliation. Mr. Parton testified that this lack of data can be remedied by the insurance companies commencing to request race, color, religion, and national origin information from their customers, because there is no legal impediment to their doing so. Mr. Miller testified that he would question the reliability of the method suggested by Mr. Parton because many persons will refuse to answer such sensitive questions or may not answer them correctly. Mr. Miller stated that, as an actuary, he would not certify the results of a study based on demographic data obtained in this manner and would qualify any resulting actuarial opinion due to the unreliability of the database. Petitioners also object to the vagueness of the broad categories of "race, color, religion and national origin." Mr. Miller testified that the Proposed Rule lacks "operational definitions" for those terms that would enable insurers to perform the required calculations. The Proposed Rule places the burden on the insurer to demonstrate no disproportionate effect on persons based on these categories, but offers no guidance as to how these demographic classes should be categorized by an insurer seeking to make such a demonstration. Petitioners point out that even if the insurer is able to ascertain the categories sought by the regulators, the Proposed Rule gives no guidance as to whether the "disproportionate effect" criterion mandates perfect proportionality among all races, colors, religions, and national origins, or whether some degree of difference is tolerable. 
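For context, the following sketch shows one conventional way a statistician might test whether a group's share of an unfavorable rating tier differs significantly from its share of the overall book: a one-sample proportion z-test, using invented figures that echo the deposition hypothetical discussed below. The Proposed Rule prescribes neither this test nor any other, and it fixes no sample size or significance threshold, which is precisely the definitional gap Petitioners identify.

    from math import erf, sqrt

    def two_sided_p(z: float) -> float:
        """Two-sided p-value for a standard normal test statistic."""
        return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))

    # Invented figures echoing the hypothetical below: a group is 29 percent of
    # the insurer's book but 35 percent of the least favorable rating tier.
    p0 = 0.29          # group's share of the overall book
    n_tier = 5_000     # policies in the lowest tier (assumed sample size)
    p_hat = 0.35       # group's observed share of that tier

    z = (p_hat - p0) / sqrt(p0 * (1.0 - p0) / n_tier)
    print(f"z = {z:.2f}, two-sided p = {two_sided_p(z):.3g}")

    # With n_tier = 5,000 the six-point gap is overwhelmingly "significant";
    # with n_tier = 100 it would not be (p is roughly 0.19). Significance thus
    # turns on sample-size and threshold choices the Proposed Rule never makes.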
Petitioners contend that this lack of guidance provides unbridled discretion to the regulator to reject any disproportionate effect study submitted by an insurer. At his deposition, Mr. Parton was asked how an insurer should break down racial classifications in order to show that there is no disproportionate effect on race. His answer was as follows: There is African-American, Cuban-American, Spanish-American, African-American, Haitian- American. Are you-- you know, whatever the make-up of your book of business is-- you're the one in control of it. You can ask these folks what their ethnic background is. At his deposition, Mr. Parton frankly admitted that he had no idea what "color" classifications an insurer should use, yet he also stated that an insurer must demonstrate no disproportionate effect on each and every listed category, including "color." At the final hearing, when asked to list the categories of "color," Mr. Parton responded, "I suppose Indian, African-American, Chinese, Japanese, all of those."12 At the final hearing, Mr. Parton was asked whether the Proposed Rule contemplates requiring insurers to demonstrate distinctions between such groups as "Latvian-Americans" and "Czech-Americans." Mr. Parton's reply was as follows: No. And I don't think it was contemplated by the Legislature. . . . The question is race by any other name, whether it be national origin, ethnicity, color, is something that they're concerned about in terms of an impact. What we would anticipate, and what we have always anticipated, is the industry would demonstrate whether or not there is an adverse effect against those folks who have traditionally in Florida been discriminated against, and that would be African-Americans and certain Hispanic groups. In our opinion, at least, if you could demonstrate that the credit scoring was not adversely impacting it, it may very well answer the questions to any other subgroup that you may want to name. At the hearing, Mr. Parton was also questioned as to distinctions between religions and testified as follows: The impact of credit scoring on religion is going to be in the area of what we call thin files, or no files. That is to say people who do not have enough credit history from which credit scores can be done, or they're going to be treated somehow differently because of that lack of history. A simple question that needs to be asked by the insurance company is: "Do you, as a result of your religious belief or whatever [sect] you are in, are you forbidden as a precept of your religious belief from engaging in the use of credit?" When cross-examined on the subject, Mr. Parton could not confidently identify any religious group that forbids the use of credit. He thought that Muslims and Quakers may be such groups. Mr. Parton concluded by stating, "I don't think it is necessary to identify those groups. The question is whether or not you have a religious group that you prescribe to that forbids it." Petitioners contend that, in addition to failing to define the statutory terms of race, color, religion, and national origin in a manner that permits insurer compliance, the Proposed Rule fails to provide an operational definition of "disproportionate effect." The following is a hypothetical question put to Mr. Parton at his deposition, and Mr. Parton's answer: Q: Let's assume that African-Americans make up 10 percent of the population. Let's just use two groups for the sake of clarity. Caucasians make up 90 percent. 
If the application of credit scoring in underwriting results in African-Americans paying 11 percent of the premium and Caucasians paying 89 percent of the premium, is that, in your mind, a disproportionate affect [sic]?

A: It may be. I think it would give rise under this rule that perhaps there is a presumption that it is, but that presumption is not [an irrebuttable] one.[13] For instance, if you then had testimony that a 1 percent difference between the two was statistically insignificant, then I would suggest that that presumption would be overridden.

This answer led to a lengthy discussion regarding a second hypothetical in which African-Americans made up 29 percent of the population, and also made up 35 percent of the lowest, or most unfavorable, tier of an insurance company's risk classifications. Mr. Parton ultimately opined that if the difference in the two numbers was found to be "statistically significant" and attributable only to the credit score, then he would conclude that the use of credit scoring unfairly discriminated against African-Americans. As to whether his answer would be the same if the hypothetical were adjusted to state that African-Americans made up 33 percent of the lowest tier, Mr. Parton responded: "That would be up to expert testimony to be provided on it. That's what trials are all about."14 Aside from expert testimony to demonstrate that the difference was "statistically insignificant," Mr. Parton could think of no way that an insurer could rebut the presumption that the difference was unfairly discriminatory under the "disproportionate effect" definition set forth in the Proposed Rule. He stated, "I can't anticipate, nor does the rule propose to anticipate, doing the job of the insurer of demonstrating that its rates are not unfairly discriminatory." Mr. Parton testified that an insurer's showing that the credit score was a valid and important predictor of risk would not be sufficient to rebut the presumption of disproportionate effect.

Summary Findings

Credit-based insurance scoring is a valid and important predictor of risk, significantly increasing the accuracy of the risk assessment process. The evidence is still inconclusive as to why credit scoring is an effective predictor of risk, though a study co-authored by Dr. Brockett has found that basic chemical and psychobehavioral characteristics, such as a sensation-seeking personality type, are common to individuals exhibiting both higher insured automobile losses and poorer credit scores. Though the evidence was equivocal on the question of whether credit scoring is simply a surrogate for income, the evidence clearly demonstrated that the use of credit scores by insurance companies has a greater negative overall effect on young people, who tend to have lower credit scores than older people. Petitioners and Fair Isaac emphasized their contention that compliance with the Proposed Rule would be impossible, and thus that the Proposed Rule would in fact operate as a prohibition on the use of credit scoring by insurance companies. At best, Petitioners demonstrated that compliance with the Proposed Rule would be impracticable at first, given the industry's current business practices concerning the collection of customer data on race and religion. The evidence indicated no legal barriers to the collection of such data by the insurance companies. Questions as to the reliability of the data are speculative until a methodology for the collection of the data is devised.
Subsection 626.9741(8)(c), Florida Statutes, authorizes the FSC to adopt rules that may include:

Standards that ensure that rates or premiums associated with the use of a credit report or score are not unfairly discriminatory, based upon race, color, religion, marital status, age, gender, income, national origin, or place of residence.

Petitioners' contention that the statute's use of "unfairly discriminatory" contemplates nothing more than the actuarial definition of the term as employed by the Rating Law is rejected. As Respondents pointed out, Subsection 626.9741(5), Florida Statutes, provides that a rate filing using credit scores must comply with the Rating Law's requirement that rates not be "unfairly discriminatory" in the actuarial sense. If Subsection 626.9741(8)(c), Florida Statutes, merely reiterated that actuarial requirement, it would be, in Mr. Parton's words, "a nullity."15 Thus, it is found that the Legislature contemplated some level of scrutiny beyond actuarial soundness to determine whether the use of credit scores "unfairly discriminates" against the classes listed in Subsection 626.9741(8)(c), Florida Statutes. It is found that the Legislature empowered the FSC to adopt rules establishing standards to ensure that an insurer's rates or premiums associated with the use of credit scores meet this added level of scrutiny.

However, it must be found that the term "unfairly discriminatory" as employed in the Proposed Rule is essentially undefined. The FSC has not adopted a "standard" by which insurers can measure their rates and premiums, and the statutory term "unfairly discriminatory" is thus subject to arbitrary enforcement by the regulating agency. Proposed Florida Administrative Code Rule 69O-125.005(1)(e) defines "unfairly discriminatory" in terms of adverse decisions that "disproportionately affect" persons in the classes set forth in Subsection 626.9741(8)(c), Florida Statutes, but does not define what constitutes a "disproportionate effect." At Subsection (9)(g), the Proposed Rule requires "statistical testing" of the credit scoring model to determine whether it results in a "disproportionate impact" on the listed classes. That subsection attempts to define its terms as follows:

A model that disproportionately affects any such class of persons is presumed to have a disparate impact and is presumed to be unfairly discriminatory.

Thus, the Proposed Rule provides that a "disproportionate effect" equals a "disparate impact" equals "unfairly discriminatory," without defining any of these terms in a way that would give an insurer any clear notion, before the regulator's pronouncement on its rate filing, whether its credit scoring methodology complies with the rule. Indeed, Mr. Parton's testimony evinced a disinclination on the part of the agency to offer guidance to insurers attempting to understand this circular definition. The tenor of his testimony indicated that the agency itself is unsure exactly what an insurer could submit to satisfy the "disproportionate effect" test, aside from perfect proportionality, which all parties concede is impossible at least as to young people, or a showing that any lack of perfect proportionality is "statistically insignificant," whatever that means. Mr. Parton seemed to say that OIR will know a valid use of credit scoring when it sees one, though it cannot describe such a use beforehand.
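The "statistical testing" that Subsection (9)(g) requires is likewise never reduced to a named procedure. One plausible reading, offered here only as a hedged illustration with invented counts, is a chi-square test of independence between class membership and tier assignment:

```python
# A sketch of one test an insurer might submit under subsection (9)(g):
# a chi-square test of independence between group membership and rate tier.
# All counts are invented; the Proposed Rule specifies no test or threshold.
from scipy.stats import chi2_contingency

# Rows: two hypothetical groups; columns: rate tiers (best -> worst).
observed = [
    [4_200, 3_100, 1_400],  # group A
    [1_100, 1_300,   900],  # group B
]
chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p_value:.3g}")
# A small p-value indicates tier assignment is not independent of group,
# but the rule never says what p-value or effect size would mark a
# "disproportionate effect" -- which is the definitional gap found above.
```

Even if an insurer ran such a test, the rule supplies no threshold at which the result becomes a "disproportionate effect," leaving that conclusion to the regulator's after-the-fact judgment.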
Mr. Eagelfeld offered what might be a workable definition of "disproportionate effect," but his definition is not incorporated into the Proposed Rule. Mr. Parton attempted to assure Petitioners that OIR would take a reasonable view of the endless racial and ethnic categories that could be subsumed under the literal language of the Proposed Rule, but again, Mr. Parton's assurances are not part of the Proposed Rule.

Mr. Parton's testimony referenced federal and state civil rights laws as the source of the term "disparate impact." Federal case law under Title VII of the Civil Rights Act of 1964, 42 U.S.C. § 2000e-2, has defined a "disparate impact" claim as "one that 'involves employment practices that are facially neutral in their treatment of different groups, but that in fact fall more harshly on one group than another and cannot be justified by business necessity.'" Adams v. Florida Power Corporation, 255 F.3d 1322, 1324 n.4 (11th Cir. 2001), quoting Hazen Paper Co. v. Biggins, 507 U.S. 604, 609, 113 S. Ct. 1701, 1705, 123 L. Ed. 2d 338 (1993). The Proposed Rule does not reference this definition, nor did Mr. Parton explain how OIR proposes to apply or modify it in enforcing the Proposed Rule.

Without further definition, all three terms in this circular definition are conclusions, not "standards" upon which the insurer and the regulator can agree at the outset of the statistical and analytical process leading to approval or rejection of the insurer's rates. Absent definitional guidance, a conclusory term such as "disparate impact" can mean anything the regulator wishes it to mean in a given case.

The confusion is compounded by the Proposed Rule's failure to refine the broad terms "race," "color," and "religion" in a manner that would allow an insurer to prepare a meaningful rate submission utilizing credit scoring. In his testimony, Mr. Parton attempted to limit the Proposed Rule's impact to those groups "who have traditionally in Florida been discriminated against," but the actual language of the Proposed Rule makes no such distinction. Mr. Parton also attempted to limit the reach of "religion" to groups whose beliefs forbid them from engaging in the use of credit, but the language of the Proposed Rule does not support that distinction.
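By contrast, the employment-law setting from which "disparate impact" is borrowed does contain one widely used operational yardstick that the Proposed Rule could have adopted but did not: the EEOC's "four-fifths" guideline, 29 C.F.R. § 1607.4(D), under which a group's selection rate below 80 percent of the most favored group's rate is generally regarded as evidence of adverse impact. The sketch below, with invented rates, shows how mechanical such a standard is to apply:

```python
# Editor's illustration of the EEOC "four-fifths" guideline
# (29 C.F.R. s. 1607.4(D)) from the employment context; the Proposed Rule
# adopts nothing comparable, and the rates below are invented.

def four_fifths_ratio(rate_group: float, rate_best: float) -> float:
    """Ratio of a group's favorable-outcome rate to the most favored group's."""
    return rate_group / rate_best

# Hypothetical: 62% of group A versus 80% of group B receive a top-tier rate.
ratio = four_fifths_ratio(0.62, 0.80)
print(f"ratio = {ratio:.2f}; adverse impact indicated: {ratio < 0.8}")
```

Whatever its merits, a bright-line ratio of this kind illustrates the sort of operational "standard" whose absence the findings above identify.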

USC (1): 42 U.S.C. § 2000e
Florida Laws (18): 119.07, 120.52, 120.536, 120.54, 120.56, 120.57, 120.68, 624.307, 624.308, 626.9741, 627.011, 627.031, 627.062, 627.0629, 627.0651, 688.002, 760.10, 760.11
Florida Administrative Code (1): 69O-125.005