BERNADETTE S. WOODS vs BOARD OF OPTOMETRY, 91-002353 (1991)
Division of Administrative Hearings, Florida Filed: Orlando, Florida Apr. 18, 1991 Number: 91-002353 Latest Update: Oct. 28, 1991

The Issue

Petitioner has challenged her grade on the written portion of the September 1990 Optometry licensing examination. The issue for determination is whether she is entitled to a passing grade.

BACKGROUND, FINDINGS OF FACT AND RECOMMENDED DISPOSITION

The hearing was convened as scheduled, and the parties proceeded with their argument and presentation of exhibits and testimony. Respondent stipulated that Ms. Woods had passed the clinical and practical portions of the examination, but received a 68.5% score on the written portion of the examination. A passing score is 70%. Ms. Woods required three additional raw score points to pass.

During the course of the hearing it became apparent that the text of one of the questions challenged by Ms. Woods was misleading, as the correct answer in a multiple-choice series was misspelled. The misspelling was such that the proper spelling could have been either the term intended by the test or another term that would have been an incorrect answer. Ms. Woods selected the next best answer in the series. After a brief recess in the hearing, Respondent stipulated that Petitioner should be given credit for her answer on that question, as well as on ensuing questions that were part of the same hypothetical example. As stipulated by Respondent on the record, this results in a passing score for Petitioner. It was agreed that a Recommended Order would be entered, consistent with this stipulation, and that the examination questions received in evidence would be forwarded to the Board, appropriately sealed.

Recommendation

Based on the foregoing, it is hereby RECOMMENDED that the Board of Optometry enter its final order granting a passing score on the September 1990 Optometry examination to Petitioner, Bernadette Susan Woods.

RECOMMENDED this 19th day of July, 1991, in Tallahassee, Leon County, Florida.

MARY CLARK
Hearing Officer
Division of Administrative Hearings
The DeSoto Building
1230 Apalachee Parkway
Tallahassee, Florida 32399-1550
(904) 488-9675

Filed with the Clerk of the Division of Administrative Hearings this 19th day of July, 1991.

COPIES FURNISHED:

Bernadette S. Woods, 315 Lakepointe Drive, #104, Altamonte Springs, FL 32701

Vytas J. Urba, Asst. General Counsel, Department of Professional Regulation, 1940 N. Monroe Street, Tallahassee, FL 32399-0792

Patricia Guilford, Executive Director, Board of Optometry, Department of Professional Regulation, 1940 N. Monroe Street, Tallahassee, FL 32399-0792

Jack McRay, General Counsel, Department of Professional Regulation, 1940 N. Monroe Street, Tallahassee, FL 32399-0792

# 1
ERIC SOBEL vs DEPARTMENT OF BUSINESS AND PROFESSIONAL REGULATION, BOARD OF CONSTRUCTION, 03-001642 (2003)
Division of Administrative Hearings, Florida Filed: Clearwater, Florida May 07, 2003 Number: 03-001642 Latest Update: Nov. 06, 2019

The Issue

The issues in this case are whether certain questions within the June 2002 construction building contractor examination are invalid, and whether Petitioner should receive credit for certain answers scored as incorrect.

Findings Of Fact

In June 2002, Petitioner sat for the construction building contractor examination. Shortly following the exam, Petitioner was advised that he had incorrectly answered 17 of the 50 exam questions and had not attained the minimum passing score of 70 percent, receiving instead a failing scaled score of 66 percent. Petitioner timely challenged the validity and scoring of eight questions: questions 8, 14, 17, 33, 34, 38, 43, and 44. In order for Petitioner to acquire a passing score, Petitioner must prove that certain challenged questions are invalid or demonstrate that he is entitled to receive credit for his answers. Specifically, Petitioner must demonstrate that three questions should be stricken from the exam (providing Petitioner with 70.2 percent), that two questions should be stricken and one answer scored as correct (providing Petitioner with 70.8 percent), or that two answers should be scored as correct (providing Petitioner with 70 percent). (This arithmetic is checked in the sketch following these findings.)

QUESTION 8

Exam Question 8 asks, "According to AIA-A201, who determines the interest rate that the contractor can charge on due and unpaid payments?" Petitioner's expert, Mr. Uman, argues that the parties to the contract are not defined within the question and that it is therefore misleading. However, the credited answer D, "all the parties must agree on the rate," is within the provided reference material and is clearly the best answer. It is not misleading, and Petitioner's argument lacks merit. In addition, 89.47 percent of the test-takers correctly answered Question 8.

QUESTION 14

Exam Question 14 is wordy and involves computations. It requires the test-taker to calculate the number of "labor" hours required per 100 pieces to build a wall, given certain pricing and wall construction information. Question 14 is ambiguous and confusing on its face. While the question asks for labor hours, the facts provide a fixed combined hourly cost for a mason's and laborer's hour. No distinction is made between "labor" hours and a "laborer's" hours. Mr. Collier admitted that there is some apparent confusion between "labor" costs and the "laborer's" costs. Mr. Palm further agreed and indicated that he fully understood Petitioner's rationale for dividing the labor costs in half and choosing answer A. Furthermore, it is clear that Petitioner's perception of the question was not unique. In fact, only 46.62 percent of the test-takers correctly answered Question 14.

QUESTION 17

Exam Question 17 asks, "During the bid process, which document has priority in the event of conflicting information?" Clearly, the correct answer is B, "addenda." Petitioner's argument regarding "competitively bid projects" is without merit. Mr. Palm succinctly explained that Petitioner's selection was obviously incorrect because "plans don't change during the bid process unless there is an addenda issued." Moreover, 75.56 percent of the test-takers correctly answered Question 17.

QUESTION 33

Exam Question 33 identifies a situation where drawings differ from written specifications and where there is no legal precedent that one is more binding than the other. The question specifically calls for the best procedure according to the listed and available reference. While Mr. Uman argues that the answer does not appear within the reference material in a clear manner, the exact text of the question and answer are in fact within the material. Petitioner's argument lacks credibility.

QUESTION 34

Exam Question 34 asks the test-taker "what is the EARLIEST workday for completing the masonry work?" given the number of crew, the number of hours required, and the ratio constant of the crew. Although 80.45 percent of the test-takers correctly answered Question 34, Mr. Uman argues that the question could have been answered without reference to the Builder's Guide to Accounting material and was therefore misleading. Petitioner's argument is devoid of common sense.

QUESTION 38

Exam Question 38 asks the test-taker to identify the activity that "a specialty structural contractor is qualified" to perform. Petitioner's expert, Mr. Uman, again argues that the question is misleading since the credited correct answer, "perform non-structural work," is not written verbatim in the provided reference material. To the contrary, however, all of the alternative choices are clearly listed in the reference material as activities specifically prohibited to specialty structure contractors. Furthermore, page 2B17 to 61G415.015 of the Contractor's Manual specifically states that:

The specialty structure contractor whose services are limited shall not perform any work that alters the structural integrity of the building including but not limited to roof trusses.

Respondent's experts, Mr. Collier and Mr. Palm, agree that Question 38 is clear. Moreover, 53.38 percent of test-takers correctly answered the question. While the question appears to require enhanced reasoning skills and is generally more difficult, it is not misleading. Petitioner's assertions are without merit.

QUESTION 43

Exam Question 43 asks, "Which accounting method should be used by a contractor if the contractor is unable to reasonably estimate the amount of progress to date on a job or the total costs remaining to fulfill the contract?" Mr. Uman argues that the question is ambiguous and the reference material is "not terribly clear." He further alleges that when a contractor cannot estimate progress, the contractor cannot establish a "completed contract method," the credited correct answer. Respondent's experts disagree. While Mr. Palm agreed that all of the choices are accounting methods, which is inconsistent with Mr. Collier's testimony, the reference material is clear. In fact, 58.65 percent of the test-takers correctly answered Question 43. Petitioner presented insufficient evidence that he should receive credit for his answer or that Question 43 should be invalidated.

QUESTION 44

Exam Question 44 provides detailed information regarding a standard form contract and asks, "Based ONLY on the information given above, what is the amount of the current payment due?" As Mr. Uman points out, however, the standard form referred to in the problem was misidentified as Form 201 instead of Form 702. While it is clear that the referenced form was mislabeled, the precise form number was incidental, unrelated to the question, and unnecessary to compute the answer. In fact, Mr. Palm explained that the problem was "just a mathematical exercise." According to Mr. Collier, the question was not misleading, and the incorrect reference was irrelevant: "It's simple math, industry knowledge." Furthermore, Petitioner's answer is clearly incorrect because "he failed to deduct the retainage." Finally, 54.89 percent of the test-takers correctly answered Question 44.
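The percentage outcomes recited in these findings follow from simple arithmetic, assuming, as the 66 percent scaled score suggests, that the scaled score tracks the raw percentage of correct answers (33 of 50). A minimal sketch in Python; the function name is illustrative:

```python
# Petitioner answered 33 of 50 questions correctly (17 wrong).
def percent(correct, total):
    """Score as a percentage of the questions remaining on the exam."""
    return round(100.0 * correct / total, 1)

print(percent(33, 50))  # 66.0 -- the score as graded
print(percent(33, 47))  # 70.2 -- three challenged questions stricken
print(percent(34, 48))  # 70.8 -- two stricken, one answer credited
print(percent(35, 50))  # 70.0 -- two answers credited
print(percent(33, 49))  # 67.3 -- only Question 14 stricken
```

The last line also shows why the relief ultimately recommended, invalidating Question 14 alone, still leaves the recomputed score below the 70 percent passing mark, consistent with the recommendation that the challenge be dismissed.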

Recommendation Based on the foregoing Findings of Fact and Conclusions of Law, it is RECOMMENDED that a final order be entered invalidating only Question 14, re-computing Petitioner's examination score, and dismissing his challenge. DONE AND ENTERED this 1st day of October, 2003, in Tallahassee, Leon County, Florida. S WILLIAM R. PFEIFFER Administrative Law Judge Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-3060 (850) 488-9675 SUNCOM 278-9675 Fax Filing (850) 921-6847 www.doah.state.fl.us Filed with the Clerk of the Division of Administrative Hearings this 1st day of October, 2003. COPIES FURNISHED: Nickolas Ekonomides, Esquire 791 Bayway Boulevard Clearwater, Florida 33767 Charles F. Tunnicliff, Esquire Department of Business and Professional Regulation 1940 North Monroe Street, Suite 60 Tallahassee, Florida 32399-2202 Nancy P. Campiglia, General Counsel Department of Business and Professional Regulation Northwood Centre 1940 North Monroe Street Tallahassee, Florida 32399-2202 Robert Crabill, Executive Director Construction Industry Licensing Board Department of Business and Professional Regulation Northwood Centre 1940 North Monroe Street Tallahassee, Florida 32399-0792

Florida Laws (3) 120.57, 120.68, 455.217
# 2
MARTIN MARQUEZ vs BOARD OF PROFESSIONAL ENGINEERS, 93-004472 (1993)
Division of Administrative Hearings, Florida Filed: Tampa, Florida Aug. 11, 1993 Number: 93-004472 Latest Update: Jun. 03, 1996

Findings Of Fact

On the October 1992 examination for licensure as a Professional Engineer, Petitioner received an overall grade of 69.1 on Principles and Practice. An overall grade of 70 is required to pass the examination. The examination for Professional Engineer is a national examination prepared and graded by the National Council of Engineering Examiners (NCEE).

On question 124, Petitioner correctly calculated an angle to be 25.717 degrees, but when this angle was used to solve a later part of the problem it was miscopied as 21.717 degrees. The calculation performed with the miscopied number was correct, but because the wrong number of degrees was used, the final answer was incorrect. On question 124 Petitioner received a score of 8 out of a possible 10. Petitioner contends he should have received a score of 10.

The Final Scoring Plan for problem 124 provides that a score of 10 demonstrates the applicant is EXCEPTIONALLY COMPETENT (it is not necessary that the solution be perfect): correct approach to horizontal curve geometry and coordinate computations, with all answers to the nearest plus or minus 0.01 foot. To receive a grade of 8, the scoring plan provides: MORE THAN MINIMUM BUT LESS THAN EXCEPTIONAL COMPETENCE -- generally correct approach to the problem solution, but a solution with one math error or one error in logic, or a solution with answers outside of the plus or minus 0.01 foot range, or a correct solution to parts (b) and (c) only. It was in part (c) of the problem that Petitioner used the miscopied angle.

All of the answers produced by the approximately 1,000 applicants who took question 124 were graded by one grader. When Petitioner requested that this grade be reviewed, it was sent back to NCEE, where Petitioner's answer again received a grade of 8. The grading system for all of the problems on the NCEE examinations is a scale of 0-10 at two-point intervals; no odd-numbered scores are given on any question.

Problem 120, which Petitioner also challenges, involved calculating the cost of fill material received from two separate sources with different haul distances, different prices per cubic yard, and fill having different void ratios. Petitioner's calculations were accurate except that Petitioner added two figures together rather than subtracting one from the other as he should have done. This was done twice in solving the problem, nearly doubling the computed cost differential (from $298,000 to $580,000). Petitioner's expert witness opined that the two-point deduction on problem 124 was excessive; however, he concurred that the nationwide examination prepared and graded by NCEE is the best approach to qualifying Professional Engineers.
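The two-point deduction turns on the plus-or-minus 0.01-foot answer tolerance in the scoring plan. A toy Python illustration, using a hypothetical radius and offset formula rather than the actual exam geometry, shows how a single miscopied digit in the angle overwhelms that tolerance:

```python
import math

R = 1000.0  # hypothetical curve radius in feet (not from the exam)
correct_angle = math.radians(25.717)    # the angle Petitioner computed
miscopied_angle = math.radians(21.717)  # the angle he carried forward

# Any coordinate derived from the angle, e.g. an offset R * sin(theta),
# inherits the error:
error_ft = abs(R * math.sin(correct_angle) - R * math.sin(miscopied_angle))
print(f"{error_ft:.2f} ft")  # about 64 ft -- far outside +/- 0.01 ft
```

Under the scoring plan quoted above, a solution with one such error, or with answers outside the 0.01-foot range, earns an 8 rather than a 10; and because NCEE grades at two-point intervals, two points is the smallest possible deduction from a perfect score.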

Recommendation

Based on the foregoing, it is hereby RECOMMENDED that a final order be entered dismissing the challenge by Martin Marquez to the final grade he was given on the October 1992 examination for licensure as a professional engineer.

DONE AND RECOMMENDED this 4th day of January, 1994, in Tallahassee, Leon County, Florida.

K. N. AYERS
Hearing Officer
Division of Administrative Hearings
The DeSoto Building
1230 Apalachee Parkway
Tallahassee, Florida 32399-1550
(904) 488-9675

Filed with the Clerk of the Division of Administrative Hearings this 4th day of January, 1994.

COPIES FURNISHED:

Vytas J. Urba, Assistant General Counsel, Department of Business and Professional Regulation, 1940 North Monroe Street, Tallahassee, Florida 32399-0792

Martin Marquez, 5412 Walstone Court, Tampa, Florida 33624

Jack McRay, Acting General Counsel, Department of Business and Professional Regulation, 1940 North Monroe Street, Tallahassee, Florida 32399-0792

Angel Gonzalez, Executive Director, Department of Business and Professional Regulation, 1940 North Monroe Street, Tallahassee, Florida 32399-0792

Florida Laws (2) 471.011, 471.015
# 3
NATIONAL COMPUTER SYSTEMS, INC. vs DEPARTMENT OF EDUCATION, 99-001226BID (1999)
Division of Administrative Hearings, Florida Filed: Tallahassee, Florida Mar. 17, 1999 Number: 99-001226BID Latest Update: Jul. 19, 1999

The Issue

The primary issue is whether the process used by the Department of Education (Department) for evaluating and ranking the proposals submitted in response to Request For Proposal (RFP) 99-03 for the Florida Comprehensive Assessment Test (FCAT) administration contract was contrary to the provisions of the RFP in a way that was clearly erroneous, contrary to competition, arbitrary, or capricious.

Findings Of Fact

The RFP for the FCAT describes a five-stage process for evaluating proposals. In Stage I, the Department's Purchasing Office determined whether a proposal contained certain mandatory documents and statements and was sufficiently responsive to the requirements of the RFP to permit a complete evaluation. Stage II involved the Department's evaluation of a bidder's corporate qualifications to determine whether the bidder has the experience and capability to do the type of work that will be required in administering the FCAT. Stage III was the Department's evaluation of a bidder's management plan and production proposal. In Stage IV, the Department evaluated a bidder's cost proposal. Stage V involved the ranking of proposals based on points awarded in Stages II-IV. If a proposal did not meet the requirements at any one stage of the evaluation process, it was not to be evaluated in the following stage. Instead, it was to be disqualified from further consideration.

Stages II and III of the evaluation process were conducted by an evaluation team comprised of six Department employees: Dr. Debby Houston, Ms. Lynn Joszefczyk, Dr. Peggy Stillwell, Dr. Cornelia Orr, Dr. Laura Melvin, and Ms. Karen Bennett. Dr. Thomas Fisher, head of the Department's Assessment and Evaluation Services Section, and Dr. Mark Heidorn, Administrator for K-12 Assessment Programs within the Department's Assessment and Evaluation Services Section, served as non-voting co-chairs of the evaluation team. The focus of this proceeding is Stage II of the evaluation process, addressing a bidder's corporate qualifications.

RFP Provisions Regarding Corporate Qualification

The FCAT administration contractor will be required to administer tests to approximately one and a half million students each year in a variety of subject areas at numerous grade levels. The FCAT program involves a complex set of interrelated work activities requiring specialized human resources, technological systems and procedures. The FCAT must be implemented annually within limited time periods. The FCAT administration contractor must meet critical deadlines for the delivery of test materials to school districts and the delivery of student scores prior to the end of the school year.

In developing the RFP, the Department deliberately established a set of minimum requirements for corporate qualifications that a bidder was to demonstrate in order for its proposal to be eligible for further evaluation. The purpose of the RFP's minimum corporate qualifications requirements was to limit bidding to qualified vendors who have demonstrated prior experience in successfully administering large-scale assessment projects like the FCAT, thereby providing the Department with some degree of assurance that the winning bidder could successfully administer the FCAT.

The instructions to bidders regarding the minimum requirements for corporate qualifications are contained in RFP Section 10, which gives directions on proposal preparation. Section 10.1, which lists certain mandatory documents and statements to be included in the bidder's proposal, requires that a transmittal letter contain "[a] statement certifying that the bidder has met the minimum corporate qualifications as specified in the RFP." These "minimum corporate qualifications" are set forth in RFP Appendix J. RFP Section 10.2 identifies what a bidder is required to include in its proposal with respect to corporate qualifications.
The first paragraph of Section 10.2 directs a bidder generally to describe its qualifications and experience performing tasks similar to those it would perform in administering the FCAT, in order to demonstrate that the bidder is qualified, where it states:

Part II of a bidder's proposal shall be entitled Corporate Qualifications. It shall provide a description of the bidder's qualifications and prior experience in performing tasks similar to those required in this RFP. The discussion shall include a description of the bidder's background and relevant experience that qualifies it to provide the products and services required by the RFP.

RFP Section 10.2, however, is not limited to a directive that qualifications and past experience be described generally. Instead, Section 10.2 also communicates, in plain and unambiguous terms, that there are specific minimum corporate qualifications a bidder must demonstrate:

The minimum expectations for corporate qualifications and experience are shown in Appendix J. There are two separate sets of factors, one set of eight for the developmental contractor and another set of nine for the administration contractor. Bidders must demonstrate their Corporate Qualifications in terms of the factors that are applicable to the activities for which a bid is being submitted -- development or administration. For each criterion, the bidder must demonstrate that the minimum threshold of experience has been achieved with prior completed projects. (Emphasis added.)

Moreover, Section 10.2 singles out for emphasis, in relation to the administration component of the RFP, the importance placed on a bidder's ability to demonstrate experience processing a large volume of tests:

The [bidder's prior completed] projects must have included work tasks similar to those described herein, particularly in test development or processing a comparable number of tests. The bidder will provide a description of the contracted services; the contract period; and the name, address, and telephone number of a contact person for each of the contracting agencies. This description shall (1) document how long the organization has been providing similar services; (2) provide details of the bidder's experience relevant to the services required by this RFP; and (3) describe the bidder's other testing projects, products, and services that are similar to those required by this RFP. (Emphasis added.)

The Department thus made clear its concern that bidders demonstrate experience with large-scale projects. RFP Appendix J sets forth nine different criteria (C1 through C9) for the administration contractor. As stated in RFP Section 10.2, "[f]or each criterion, the bidder must demonstrate that the minimum threshold of experience has been achieved with prior completed projects . . . ." (emphasis added). Appendix J contains a chart which lists for each criterion: (1) a summary of the related FCAT work task, (2) the detailed criteria for the bidder's experience related to that work task, and (3) the necessary documentation a bidder must provide. Criterion C4 and Criterion C6 include work tasks that involve the use of image-based scoring technology. C4 and C6 are the only corporate qualifications criteria at issue in this proceeding.

RFP Provisions Involving Corporate Qualifications for Image-Based Scoring

"Handscoring" is the test administration activity in which open-ended or performance-based student responses are assessed.
This practice involves a person reading something the student has written as part of the test, as distinguished from machine scoring of multiple-choice responses (i.e., the filled-in "bubbles" on an answer sheet). There are two types of handscoring: (1) paper-based handscoring, and (2) image-based handscoring. Paper-based handscoring requires that a student response paper be sent to a reader, who then reviews the student's response as written on the paper and enters a score on a separate score sheet. Image-based handscoring involves a scanned image of the student's response being transmitted to a reader electronically. The student's response is then projected on a computer screen, where the reader reviews it and assigns a score using the computer.

The RFP requires that the reading and math portions of the FCAT be handscored on-line using imaging technology beginning with the February 2000 FCAT administration. The RFP provides that the writing portion of the FCAT may be handscored using either the paper-based method or on-line imaging technology during the February 2000 and 2001 FCAT administrations. However, on-line image-based scoring of the writing portion of the FCAT is required for all FCAT administrations after February 2001.

An image-based scoring system involves complex computer technology. William Bramlett, an expert in designing and implementing large-scale imaging computer systems and networks, presented unrefuted testimony that an image-based scoring system will be faced with special challenges when processing large volumes of tests. These challenges involve the need to automate image quality control, to manage the local and wide area network load, to assure adequate server performance and storage requirements, and to manage the work flow in a distributed environment. In particular, having an image-based scoring system process an increasing volume of tests is not simply a matter of adding more components. Rather, the system's basic software architecture must be able to understand and manage the added elements and volume involved in a larger operation.

According to Bramlett, there are two ways that the Department could assess the ability of a bidder to perform a large-scale, image-based scoring project such as the FCAT from a technological perspective: (1) have the bidder provide enough technological information about its system to be able to model or simulate the system and predict its performance for the volumes involved, or (2) require demonstrated ability through completion of prior similar projects.

Dr. Mark Heidorn, Administrator for Florida's K-12 Statewide Assessment Programs, was the primary author of RFP Sections 1-8, which describe the work tasks for the FCAT -- the goods and services vendors are to provide and to which they respond in their technical proposals. Dr. Heidorn testified that in the Department's testing procurements involving complex technology, the Department has never required specific descriptions of the technology to be used. Instead, the Department has relied on the bidder's experience in performing similar projects. Thus, the RFP does not specifically require that bidders describe in detail the particular strategies and approaches they intend to employ when designing and implementing an image-based scoring system for FCAT. Instead, the Department relied on the RFP requirements calling for demonstrated experience as a basis to understand that the bidder could implement such an image-based scoring system.
Approximately 717,000 to 828,000 student tests will be scored annually by the FCAT administration contractor using imaging technology. The RFP, however, does not require that bidders demonstrate image-based scoring experience at that magnitude. Instead, the RFP requires bidders to demonstrate only a far less demanding minimum level of experience using image-based scoring technology. Criterion C4 and Criterion C6 in Appendix J of the RFP each require that a bidder demonstrate prior experience administering "a minimum of two" assessment programs using image-based scoring that involved "at least 200,000 students annually." The requirements for documenting a "minimum of two" programs or projects for C4 and C6 involving "at least 200,000 students annually" are material because they are intended to provide the Department with assurance that the FCAT administration contractor can perform the large-scale, image-based scoring requirements of the contract from a technological perspective. Such experience would indicate that the bidder would have been required to address the sort of system issues described by Bramlett.

Dr. Heidorn testified that the number 200,000 was used in C4 and C6 "to indicate the level of magnitude of experience which represented for us a comfortable level to show that a contractor had enough experience to ultimately do the project that we were interested in completing." Dr. Fisher, who authored Appendix J, testified that the 200,000 figure was included in C4 and C6 because it was a number judged sufficiently characteristic of large-scale programs to be relevant for C4 and C6. Dr. Fisher further testified that the Department was interested in having information that a bidder's experience included projects of a sufficient magnitude so that the bidder would have experienced the kinds of processing issues and concerns that arise in a large-scale testing program.

The Department emphasized this specific quantitative minimum requirement in response to a question raised at the Bidder's Conference held on November 13, 1998:

Q9: In Appendix J, the criteria for evaluating corporate quality for the administration operations C4, indicates that the bidder must have experience imaging as indicated. Does this mean that the bid [sic] must bid for using [sic] imaging technology for reading and mathematics tests?

A: Yes. The writing assessment may be handscored for two years, and then it will be scored using imaging technology. To be responsive, a bid must be for imaging. The corporate experience required (200,000 students annually for which reports were produced in three months) could be the combined experience of the primary contractor and the subcontractors. (Emphasis added.)

Criterion C4 addresses the RFP work tasks relating to handscoring, including both the image-based handscoring of the reading and math portions of the FCAT for all administrations and the writing portions of the FCAT for later administrations. The "Work Task" column for C4 in Appendix J of the RFP states:

Design and implement efficient and effective procedures for handscoring student responses to performance tasks within the limited time constraints of the assessment schedule. Handscoring involves image-based scoring of reading and mathematics tasks for all administrations and writing tasks for later administrations at secure scoring sites.
Retrieve and score student responses from early district sample schools and deliver required data to the test development contractor within critical time periods for calibration and scaling.

The "Necessary Documentation" column for C4 in Appendix J states:

Bidder must document successful completion of a minimum of two performance item scoring projects for statewide assessment programs during the last four years for which the bidder was required to perform as described in the Criteria column. (Emphasis added.)

The "Criteria" column for C4 in Appendix J, like the related work tasks in the RFP, addresses both image-based handscoring of reading and math, as well as paper-based or image-based handscoring of writing. In connection with all handscoring work tasks, "[t]he bidder must demonstrate completion of test administration projects for a statewide program for which performance items were scored using scoring rubrics and associated scoring protocols." With respect to the work tasks for handscoring the reading and math portions of the FCAT, "[t]he bidder must demonstrate completion of statewide assessment programs involving scoring multiple-choice and performance items for at least 200,000 students annually for which reports were produced in three months." In addition, for the reading and math work tasks, "[e]xperience must been [sic] shown in the use of imaging technology and hand-scoring student written responses with completion of scoring within limited time restrictions." This provision dealing with "imaging technology" experience self-evidently addresses the reading and math components, because separate language addresses imaging experience in connection with the writing component. The relevant handscoring experience for the reading and math aspects of the program is experience using image-based technology. By contrast, with respect to the work tasks for scoring the writing portions of the FCAT, "the bidder must also demonstrate completion of statewide assessment programs involving paper-based or imaged scoring student responses to writing assessment prompts for at least 200,000 students annually for which reports were produced in three months." (Emphasis added.)

Criterion C6 addresses work tasks relating to designing and implementing systems for processing, scanning, imaging and scoring student responses to mixed-format tests within limited time constraints. The "Work Task" column for C6 in RFP Appendix J states:

Design and implement systems for the processing, scanning, imaging, and scoring of student responses to test forms incorporating both multiple-choice and constructed response items (mixed-format) within the limited time constraints of the assessment schedule. Scoring of student responses involves implementation of IRT scoring tables and software provided by the development contractor within critical time periods.

The "Necessary Documentation" column for C6 in Appendix J states:

Bidder must document successful completion of a minimum of two test administration projects for statewide assessment programs during the last four years in which the bidder was required to perform as described in the Criteria column. (Emphasis added.)
The Criteria column for C6 in Appendix J states:

The bidder must demonstrate completion of test administration projects for statewide assessment programs or other large-scale assessment programs that required the bidder to design and implement systems for processing, scanning, imaging, and scoring responses to mixed-format tests for at least 200,000 students annually for which reports were produced in three months. Experience must be shown in use of imaging student responses for online presentation to readers during handscoring. (Emphasis added.)

RFP Provisions Per Corporate Qualifications

The procedure for evaluating a bidder's corporate qualifications is described in RFP Section 11.3:

The Department will evaluate how well the resources and experience described in each bidder's proposal qualify the bidder to provide the services required by the provisions of this RFP. Consideration will be given to the length of time and the extent to which the bidder and any proposed subcontractors have been providing services similar or identical to those requested in this RFP. The bidder's personnel resources as well as the bidder's computer, financial, and other technological resources will be considered in evaluating a bidder's qualifications to meet the requirements of this RFP. Client references will be contacted and such reference checks will be used in judging a bidder's qualifications. The criteria to be used to rate a bidder's corporate qualifications to meet the requirements of this RFP are shown in Appendix J and will be applied as follows:

* * *

Administrative Activities. Each of the nine administration activities criteria in Appendix J will be individually rated by members of the evaluation team. The team members will use the rating scale shown in Figure 1 below. Individual team members will review the bidder's corporate qualifications and rate the response with a rating of one to five. The ratings across all evaluators for each factor will be averaged, rounded to the nearest tenth, and summed across all criteria. If each evaluator assigns the maximum number of points for each criterion, the total number of points will be 45. To meet the requirements of Stage II, the proposal must achieve a minimum rating of 27 points and have no individual criterion for which the number of points averaged across evaluators and then rounded is less than 3.0. Each proposal that receives a qualifying score based on the evaluation of the bidder's qualifications will be further evaluated in Stage III.

Figure 1 -- Evaluation Scale for Corporate Qualifications:

5 (Excellent): The bidder has demonstrated exceptional experience and capability to perform the required tasks.
4
3 (Satisfactory): The bidder has demonstrated that it meets an acceptable level of experience and capability to perform the required tasks.
2
1 (Unsatisfactory): The bidder either has not established its corporate qualifications or does not have adequate qualifications.

RFP Section 11.3 provides that each of the nine corporate qualifications criteria for administration operations in Appendix J (C1 through C9) will be individually rated by the six members of the evaluation team using a scale of one to five. A rating of three is designated as "satisfactory," which means that "[t]he bidder has demonstrated that it meets an acceptable level of experience and capability to perform the required tasks."
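The Stage II arithmetic described in Section 11.3 is mechanical. The sketch below (Python) uses hypothetical ratings for most criteria, but for C4 and C6 it uses the ratings the evaluators actually assigned Harcourt, as recited later in these findings:

```python
# Six evaluators each rate nine criteria from 1 to 5. Per-criterion
# averages are rounded to the nearest tenth, then summed. Stage II is
# passed only if the sum is at least 27 (of a possible 45) and no
# rounded per-criterion average falls below 3.0.
ratings = {
    "C1": [4, 4, 3, 4, 3, 4],  # hypothetical
    "C2": [3, 3, 3, 3, 3, 4],  # hypothetical
    "C3": [5, 4, 4, 4, 4, 4],  # hypothetical
    "C4": [3, 3, 3, 3, 4, 4],  # Harcourt: four 3.0s and two 4.0s
    "C5": [4, 3, 3, 4, 3, 3],  # hypothetical
    "C6": [3, 3, 3, 3, 3, 4],  # Harcourt: five 3.0s and one 4.0
    "C7": [4, 4, 4, 3, 3, 3],  # hypothetical
    "C8": [3, 3, 4, 3, 3, 3],  # hypothetical
    "C9": [4, 3, 3, 3, 4, 3],  # hypothetical
}
avg = {c: round(sum(r) / len(r), 1) for c, r in ratings.items()}
total = sum(avg.values())
passes = total >= 27 and min(avg.values()) >= 3.0
print(avg["C4"], avg["C6"], round(total, 1), passes)  # 3.3 3.2 30.9 True
```

On these numbers the proposal clears Stage II; the dispute in this case is whether ratings of 3.0 or higher were permissible at all for C4 and C6 given the minimum-threshold language.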
In order to be further evaluated, Section 11.3 provides that there must be no individual corporate qualifications criterion for which the bidder's proposal receives a score less than 3.0 (average points across evaluators). Dr. Fisher, the primary author of Section 11.3 of the RFP, referred to the 3.0 rating as the "cut score." (Emphasis added.)

The RFP's clear and unambiguous terms thus establish the "minimum threshold" of experience that a bidder "must demonstrate" in its proposal for Criterion C1 through Criterion C9. The "minimum threshold" of experience that a bidder must demonstrate for each criterion is described in Appendix J of the RFP. If a proposal failed to demonstrate that the bidder meets the minimum threshold of experience for a particular criterion in Appendix J, the bidder obviously would not have demonstrated "that it meets an acceptable level of experience and capability to perform the required tasks." Thus, in that setting, an evaluator was to have assigned the proposal a rating of less than "satisfactory," or less than three, for that criterion. (Emphasis added.)

The fact that a score less than "3" was expected for -- and would eliminate -- proposals that did not demonstrate the "minimum threshold" of experience does not render meaningless the potential scores of "1" and "2." Those scores may reflect the degree to which a bidder's demonstrated experience was judged to fall below the threshold. Although some corporate capability minimums were stated quantitatively (i.e., "minimum of two," or "at least 200,000"), others were open to a more qualitative assessment (i.e., "large-scale," "systems," or "reports"). Moreover, a proposal that included demonstrated experience in some manner responsive to each aspect of Appendix J might nevertheless be assigned a score of less than "3," based on how an evaluator assessed the quality of the experience described in the proposal. By the terms of the RFP, however, an average score across evaluators of less than 3 represented essentially a decision that the minimum threshold of experience was not demonstrated.

Had the Department truly intended Appendix J to reflect only general targets or guidelines, there were many alternative ways to communicate such an intent without giving mandatory direction about what bidders "must demonstrate" or without establishing quantitative minimums (i.e., "a minimum of two," or "at least 200,000"). RFP Appendix K, for instance, sets forth the evaluation criteria for technical proposals in broad terms that do not require the bidder to provide anything in particular. Even within Appendix J, other than in Criterion C4 and Criterion C6, bidders were to show experience with "large-scale" projects rather than experience at a quantified level. Pursuant to the RFP's plain language, in order to meet the "minimum threshold" of experience for Criterion C4 and Criterion C6, a bidder "must demonstrate," among other things, successful completion of a "minimum of two" projects, each involving the use of image-based scoring technology in administering tests to "at least 200,000 students annually."

Department's Evaluation of Corporate Qualifications

In evaluating Harcourt's proposal, the Department failed to give effect to the plain RFP language stating that a bidder "must document" successful completion of a "minimum of two" testing projects involving "at least 200,000 students annually" in order to meet the "minimum threshold" of experience for C4 and C6.
Dr. Fisher was the primary author of Sections 10, 11 and Appendix J of the RFP. He testified that during the Stage II evaluation of corporate qualifications, the evaluation team applied a "holistic" approach, like that used in grading open-ended written responses in student test assessments. Under the holistic approach that Dr. Fisher described, each member of the evaluation team was to study the proposals, compare the information in the proposals to everything contained in Appendix J, and then assign a rating for each criterion in Appendix J based on "how well" the evaluator felt the proposal meets the needs of the agency.

Notwithstanding Dr. Fisher's present position, the RFP's terms and their context demonstrate that the minimum requirements for corporate qualifications are in RFP Appendix J. During the hearing, Dr. Fisher was twice asked to identify language in the RFP indicating that the Department would apply a "holistic" approach when evaluating corporate qualifications. Both times, Dr. Fisher was unable to point to any explicit RFP language putting bidders on notice that the Department would be using a "holistic" approach to evaluating proposals and treating the Appendix J thresholds merely as targets. In addition, Dr. Fisher testified that the Department did not engage in any discussion at the bidders' conference about the evaluation method that was going to be used other than drawing the bidders' attention to the language in the RFP.

As written, the RFP establishes minimum thresholds of experience to be demonstrated. Where, as in the RFP, certain of those minimum thresholds are spelled out in quantitative terms that are not open to interpretation or judgment, it is neither reasonable nor logical to rate a proposal as having demonstrated "an acceptable level of experience" when it has not demonstrated the specified minimum levels, even if other requirements with which it was grouped were satisfied. The plain RFP language unambiguously indicates that an analytic method, not a "holistic" method, will be applied in evaluating corporate qualifications. Dr. Fisher acknowledged that, in an assessment using an analytic method, there is considerable effort placed up front in deciding the specific factors that will be analyzed, and those factors are listed and explained. Dr. Fisher admitted that the Department went into considerable detail in Appendix J of the RFP to explain to the bidders the minimums they had to demonstrate and the documentation that was required. In addition, Dr. Orr, who served as a member of the evaluation team and who herself develops student assessment tests, stated that in assessments using the "holistic" method there is a scoring rubric applied, but that rubric does not contain minimum criteria like those found in the RFP for FCAT. The holistic method applied by the Department ignores very specific RFP language which spells out minimum requirements for corporate qualifications.

Harcourt's Corporate Qualifications for C4 and C6

Harcourt's proposal lists the same three projects administered by Harcourt for both Criterion C4 and Criterion C6: the Connecticut Mastery Test ("CMT"), the Connecticut Academic Performance Test ("CAPT") and the Delaware Student Testing Program ("DSTP"). Harcourt's proposal also lists for Criterion C4 projects administered by its proposed scoring subcontractors, Measurement Incorporated ("MI") and Data Recognition Corporation ("DRC"). However, none of the projects listed for MI or DRC involve image-based scoring.
Thus, the MI and DRC projects do not demonstrate any volume of image-based scoring as required by C6 and by the portion of C4 which relates to the work task for the image-based scoring of the math and reading portions of the FCAT.

Harcourt's proposal states that "[a]pproximately 35,000 students per year in grade 10 are tested with the CAPT." Harcourt's proposal states that "[a]pproximately 120,000 students per year in grades 4, 6 and 8 are tested with the CMT." Harcourt's proposal states that "[a]pproximately 40,000 students in grades 3, 5, 8, and 10" are tested with the DSTP. Although the descriptions of the CMT and the CAPT in Harcourt's proposal discuss image-based scoring, there is nothing in the description of the DSTP that addresses image-based scoring. There is no evidence that the evaluators were ever made aware that the DSTP involved image-based scoring. Moreover, although the Department called the Delaware Department of Education ("DDOE") as a reference for Harcourt's development proposal, the Department did not discuss Harcourt's administration of the DSTP (including whether the DSTP involves image-based scoring) with the DDOE.

Harcourt overstated the number of students tested in the projects it referenced to demonstrate experience with image-based scoring. Harcourt admitted at hearing that, prior to submitting its proposal, Harcourt had never tested 120,000 students with the CMT. In fact, the total number of students tested by Harcourt on an annual basis under the CMT has ranged from 110,273 in the 1996-97 school year to 116,679 in the 1998-99 school year. Harcourt also admitted at hearing that, prior to submitting its proposal, Harcourt had never tested 35,000 students in grade 10 with the CAPT. Instead, the total number of grade 10 students tested by Harcourt on an annual basis with the CAPT ranged from 30,243 in 1997 to 31,390 in 1998. In addition, Harcourt admitted at hearing that, prior to submitting its proposal, it had conducted only one "live" administration of the DSTP (as distinguished from field testing). That administration of the DSTP involved only 33,051, not 40,000, students in grades 3, 5, 8 and 10. Harcourt itself recognized that "field tests" of the DSTP are not responsive to C4 and C6, as evidenced by Harcourt's own decision not to include in its proposal the number of students field tested under the DSTP.

Even assuming that the numbers in Harcourt's proposal are accurate, and that the description of the DSTP in Harcourt's proposal reflected image-based scoring, Harcourt's proposal on its face does not document any single project administered by Harcourt for C4 or C6 involving image-based testing of more than 120,000 students annually. When the projects are aggregated, the total number of students claimed as tested annually still does not reach the level of "at least 200,000"; it comes to only 195,000, and it reaches that level only once due to the single administration of the DSTP. Moreover, even if that 195,000 were considered "close enough" to the 200,000 level required, it was achieved only one time, while Appendix J plainly directs that there be a minimum of two times that testing at that level has been performed. The situation worsens for Harcourt when using the true numbers of students tested under the CMT, CAPT, and DSTP, because Harcourt cannot document any single image-based scoring project it has administered involving testing more than 116,679 students annually.
Moreover, when the true numbers of students tested are aggregated, the total rises only to 181,120 students tested annually on one occasion, and no more than 141,663 tested annually on any other occasion. (This arithmetic is verified in the sketch following these findings.) Despite this shortfall from the minimum threshold of experience, under the Department's holistic approach the evaluators assigned Harcourt's proposal four ratings of 3.0 and two ratings of 4.0 for C4, for an average of 3.3 on C4; and five ratings of 3.0 and one rating of 4.0 for C6, for an average of 3.2 on C6. Applying the plain language of the RFP in Sections 10 and 11 and Appendix J, Harcourt did not demonstrate that it meets an acceptable level of experience and capability for C4 or C6, because Harcourt did not satisfy the minimum threshold for each criterion by demonstrating a minimum of two prior completed projects involving image-based scoring requiring testing of at least 200,000 students annually. Harcourt's proposal should not have received any rating of 3.0 or higher on C4 or C6 and should have been disqualified from further evaluation due to failure to demonstrate the minimum experience that the Department required in order to be assured that Harcourt can successfully administer the FCAT program.

NCS's Compliance With RFP Requirements

Even though the NCS proposal did not meet all of the mandatory requirements, and despite the requirement of Section 11.2 that the proposal be automatically disqualified under such circumstances, the Department waived NCS's noncompliance as a minor irregularity. The factors in C4 and C6 were set, minimal requirements with which NCS did not comply. For example, one of the two programs NCS submitted in response to Criteria C4 and C6 was the National Assessment of Educational Progress program ("NAEP"). NAEP, however, is not a "statewide assessment program" within the meaning of that term as used in Criteria C4 and C6. Indeed, NCS admitted that NAEP is not a statewide assessment program and that, without consideration of that program, NCS's proposal is not responsive to Criteria C4 and C6 because NCS has not submitted the required proof of having administered two statewide assessment programs. This error cannot be cured by relying on the additional experience of NCS's subcontractor, because that experience does not show that its subcontractor produced reports within three months, and so such experience does not demonstrate compliance with Criterion C4.

The Department deliberately limited the competition for the FCAT contract to firms with specified minimum levels of experience. As opined at final hearing, if the Department in the RFP had announced to potential bidders that the types of experience it asked vendors to describe were only targets, goals and guidelines, and that a failure to demonstrate target levels of experience would not be disqualifying, then the competitive environment for this procurement would have differed, since only 2.06 evaluation points (out of a possible 150) separated the NCS and Harcourt scores. Dr. Heidorn conceded that multiple companies with experience in different aspects of the FCAT program -- a computer/imaging company and a firm experienced in educational testing -- might combine to perform a contract like the FCAT. Yet, that combination of firms would be discouraged from bidding because they could not demonstrate the minimum experience spelled out in the RFP. Language in the RFP indicating the "holistic" evaluation that was to be applied could have resulted in a different field of potential and actual bidders.
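The enrollment arithmetic recited above can be verified directly (Python; all figures are taken from these findings):

```python
THRESHOLD = 200_000  # "at least 200,000 students annually" (C4 and C6)

claimed = {"CMT": 120_000, "CAPT": 35_000, "DSTP": 40_000}
actual_best = {"CMT": 116_679, "CAPT": 31_390, "DSTP": 33_051}

print(sum(claimed.values()))      # 195,000 -- short of 200,000 even as claimed
print(sum(actual_best.values()))  # 181,120 -- the best true one-year aggregate
print(110_273 + 31_390)           # 141,663 -- the next-best annual aggregate
print(max(actual_best.values()))  # 116,679 -- the largest single project
```

Even taken at face value, the claimed programs aggregate to 5,000 students below the threshold, and only once; the true figures fall nearly 19,000 short.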

Recommendation

Based on the foregoing Findings of Fact and Conclusions of Law, it is recommended that Respondent, State of Florida, Department of Education, enter a Final Order rejecting the bids submitted by Harcourt and NCS for the administration component of the RFP. The Department should then seek new proposals.

DONE AND ENTERED this 25th day of May, 1999, in Tallahassee, Leon County, Florida.

DON W. DAVIS
Administrative Law Judge
Division of Administrative Hearings
The DeSoto Building
1230 Apalachee Parkway
Tallahassee, Florida 32399-3060
(850) 488-9675, SUNCOM 278-9675
Fax Filing (850) 921-6847
www.doah.state.fl.us

Filed with the Clerk of the Division of Administrative Hearings this 25th day of May, 1999.

COPIES FURNISHED:

Karen D. Walker, Esquire, Holland and Knight, LLP, Post Office Drawer 810, Tallahassee, Florida 32302

Mark D. Colley, Esquire, Holland and Knight, LLP, Suite 400, 2100 Pennsylvania Avenue, Northwest, Washington, D.C. 20037

Charles S. Ruberg, Esquire, Department of Education, The Capitol, Suite 1701, Tallahassee, Florida 32399-0400

Paul R. Ezatoff, Jr., Esquire, and Christopher B. Lunny, Esquire, Katz, Kutter, Haigler, Alderman, Bryant and Yon, P.A., 106 East College Avenue, Suite 1200, Tallahassee, Florida 32302-7741

Tom Gallagher, Commissioner of Education, Department of Education, The Capitol, Plaza Level 08, Tallahassee, Florida 32399-0400

Michael H. Olenick, General Counsel, Department of Education, The Capitol, Suite 1701, Tallahassee, Florida 32399-0400

Florida Laws (3) 120.57, 287.012, 287.057
# 4
CHRISTOPHER NATHANIEL LOVETT vs DEPARTMENT OF BUSINESS AND PROFESSIONAL REGULATION, BOARD OF PROFESSIONAL ENGINEERS, 03-004013RP (2003)
Division of Administrative Hearings, Florida Filed: Tallahassee, Florida Oct. 29, 2003 Number: 03-004013RP Latest Update: May 26, 2005

The Issue

The ultimate issue in this proceeding is whether proposed Florida Administrative Code Rule 61G15-21 is an invalid exercise of delegated legislative authority.

Findings Of Fact

Florida Administrative Code Rule 61G15-21.004, in relevant part, states:

The criteria for determining the minimum score necessary for passing the Engineering Fundamentals Examination shall be developed through the collective judgment of qualified experts appointed by NCEES to set the raw score that represents the minimum amount of knowledge necessary to pass the examination. The judges shall use a Modified Angoff Method in determining the minimally acceptable raw score necessary to pass the Fundamentals of Engineering Examination.

Using the above mentioned Modified Angoff Method, the judges will indicate the probability that a minimally knowledgeable Fundamentals of Engineering examinee would answer any specific questions correctly. The probability of a correct response is then assigned to each question. Each judge will then make an estimate of the percentage of minimally knowledgeable examinees who would know the answer to each question. The totals each of the judges is added together and divided by the number of judges to determine the overall estimate of the minimum standards necessary. The minimum number of correct answers required to achieve a passing score will take into account the relative difficulty of each examination through scaling and equating each examination to the base examination. The raw score necessary to show competence shall be deemed to be a 70 on a scale of 100.

A passing grade on Part Two of the examination is defined as a grade of 70 or better. The grades are determined by a group of knowledgeable professional engineers, who are familiar with engineering practice and with what is required for an applicable engineering practice and with what is required for an applicable engineering task. These professional engineers will establish a minimum passing score on each individual test item (i.e., examination problem). An Item Specific Scoring Plan (ISSP) will be prepared for each examination item based upon the NCEES standard scoring plan outline form. An ISSP will be developed by persons who are familiar with each discipline including the item author, the item scorer, and other NCEES experts. On a scale of 0-10, six (6) will be a minimum passing standard and scores between six (6) and ten (10) will be considered to be passing scores for each examination item. A score of five (5) or lower will be considered an unsatisfactory score for that item and examinee will be considered to have failed that item. To pass, an examinee must average six (6) or greater on his/her choice of eight (8) exam items, that is, the raw score must be forty-eight (48) or greater based on a scale of eighty (80). This raw score is then converted to a base 100 on which, as is noted above, a passing grade will be seventy (70).

The proposed changes to Florida Administrative Code Rule 61G15-21.004, in relevant part, state:

The passing grade for the Engineering Fundamentals Examination is 70 or better. The criteria for determining the minimum score necessary for passing the Engineering Fundamentals Examination shall be developed through the collective judgment of qualified experts appointed by NCEES to set the raw score that represents the minimum amount of knowledge necessary to pass the examination. The judges shall use a Modified Angoff Method in determining the minimally acceptable raw score necessary to pass the Fundamentals of Engineering Examination.
Using the above mentioned Modified Angoff Method, the judges will indicate the probability that a minimally knowledgeable Fundamentals of Engineering examinee would answer any specific questions correctly. The probability of a correct response is then assigned to each question. Each judge will then make an estimate of the percentage of minimally knowledgeable examinees who would know the answer to each question. The totals each of the judges is added together and divided by the number of judges to determine the overall estimate of the minimum standards necessary. The minimum number of correct answers required to achieve a passing score will take into account the relative difficulty of each examination through scaling and equating each examination to the base examination. The raw score necessary to show competence shall be deemed to be a 70 on a scale of 100.

The passing grade for the Principles and Practice Examination is 70 or better. A passing grade on Part Two of the examination is defined as a grade of 70 or better. The grades are determined by a group of knowledgeable professional engineers, who are familiar with engineering practice and with what is required for an applicable engineering practice and with what is required for an applicable engineering task. These professional engineers will establish a minimum passing score on each individual test item (i.e., examination problem). An Item Specific Scoring Plan (ISSP) will be prepared for each examination item based upon the NCEES standard scoring plan outline form. An ISSP will be developed by persons who are familiar with each discipline including the item author, the item scorer, and other NCEES experts. On a scale of 0-10, six (6) will be a minimum passing standard and scores between six (6) and ten (10) will be considered to be passing scores for each examination item. A score of five (5) or lower will be considered an unsatisfactory score for that item and examinee will be considered to have failed that item. To pass, an examinee must average six (6) or greater on his/her choice of eight (8) exam items, that is, the raw score must be forty-eight (48) or greater based on a scale of eighty (80). This raw score is then converted to a base 100 on which, as is noted above, a passing grade will be seventy (70). (A sketch of the scoring arithmetic described in the rule appears after these findings.)

Petitioner resides in Tampa, Florida. On April 11, 2003, Petitioner took a national examination that Petitioner must pass to be licensed by the state as a professional engineer. On July 1, 2003, Petitioner received a letter from the Board advising Petitioner that he had received a failing grade on the examination. On July 2, 2003, Petitioner unsuccessfully requested the raw scores on his examination from a representative of the National Council of Examiners for Engineering and Surveying (NCEES). The NCEES is the national testing entity that conducts examinations and determines scores for the professional engineer examination required by the state.

On July 9, 2003, Petitioner submitted a formal request to the Board for all of the raw scores related to Petitioner "and all past P.E. Exams that the Petitioner had taken." A representative of the Board denied Petitioner's request, explaining that the raw scores are kept by the NCEES and "it is not their policy to release them." The Board's representative stated that the Board was in the process of adopting new rules "that were in-line with the policies of the NCEES."
On July 31, 2003, Petitioner requested the Board to provide Petitioner with any statute or rule that authorized the Board to deny Petitioner's request for raw scores pursuant to Section 119.07(1)(a), Florida Statutes (2003). On the same day, counsel for the Board explained to Petitioner that the Board is not denying the request. The Board is unable to comply with the request because the Board does not have physical possession of the raw scores. Petitioner and counsel for Respondent engaged in subsequent discussions that are not material to this proceeding. On August 6, 2003, Petitioner requested counsel for Respondent to provide Petitioner with copies of the proposed rule changes that the Board intended to consider on August 8, 2003. On August 27, 2003, Petitioner filed a petition with the Board challenging existing Florida Administrative Code Rule 61G15-21.004. The petition alleged that parts of the existing rule are invalid. Petitioner did not file a challenge to the existing rule with DOAH. The Petition for Hearing states that Petitioner is filing the Petition for Hearing pursuant to Subsections 120.56(1) and (3)(b), Florida Statutes (2003). However, the statement of how Petitioner's substantial interests are affected is limited to the proposed changes to the existing rule. During the hearing conducted on January 29, 2004, Petitioner explained that he does not assert that the existing rule is invalid. Rather, Petitioner argues that the Board deviates from the existing rule by not providing examinees with copies of their raw scores and by failing to use raw scores in the determination of whether an applicant achieved a passing grade on the exam. Petitioner further argues that the existing rule benefits Petitioner by purportedly requiring the Board to use raw scores in the determination of passing grades. The elimination of that requirement in the proposed rule arguably will adversely affect Petitioner's substantial interests. The Petition for Hearing requests several forms of relief. The Petition for Hearing seeks an order granting Petitioner access to raw scores, a determination that Petitioner has met the minimum standards required under the existing rule, and an order that the Board grant a license to Petitioner. The Petition for Hearing does not request an order determining that the proposed rule changes constitute an invalid exercise of delegated legislative authority.

Florida Laws (4): 119.07, 120.56, 120.68, 455.217
# 5
THE FLORIDA INSURANCE COUNCIL, INC.; THE AMERICAN INSURANCE ASSOCIATION; PROPERTY CASUALTY INSURERS ASSOCIATION OF AMERICA; AND NATIONAL ASSOCIATION OF MUTUAL INSURANCE COMPANIES vs DEPARTMENT OF FINANCIAL SERVICES, OFFICE OF INSURANCE REGULATION, AND THE FINANCIAL SERVICES COMMISSION, 05-002803RP (2005)
Division of Administrative Hearings, Florida Filed:Tallahassee, Florida Aug. 03, 2005 Number: 05-002803RP Latest Update: May 17, 2007

The Issue At issue in this proceeding is whether proposed Florida Administrative Code Rule 69O-125.005 is an invalid exercise of delegated legislative authority.

Findings Of Fact Petitioners AIA is a trade association made up of 40 groups of insurance companies. AIA member companies annually write $6 billion in property, casualty, and automobile insurance in Florida. AIA's primary purpose is to represent the interests of its member insurance groups in regulatory and legislative matters throughout the United States, including Florida. NAMIC is a trade association consisting of 1,430 members, mostly mutual insurance companies. NAMIC member companies annually write $10 billion in property, casualty, and automobile insurance in Florida. NAMIC represents the interests of its member insurance companies in regulatory and legislative matters throughout the United States, including Florida. PCI is a national trade association of property and casualty insurance companies consisting of 1,055 members. PCI members include mutual insurance companies, stock insurance companies, and reciprocal insurers that write property and casualty insurance in Florida. PCI members annually write approximately $15 billion in premiums in Florida. PCI participated in the OIR's workshops on the Proposed Rule. PCI's assistant vice president and regional manager, William Stander, testified that if the Proposed Rule is adopted, PCI's member companies would be required either to withdraw from the Florida market or drastically reorganize their business model. FIC is an insurance trade association made up of 39 insurance groups that represent approximately 250 insurance companies writing all lines of insurance. All of FIC's members are licensed in Florida and write approximately $27 billion in premiums in Florida. FIC has participated in rule challenges in the past, and participated in the workshop and public hearing process conducted by OIR for this Proposed Rule. FIC President Guy Marvin testified that FIC's property and casualty members use credit scoring and would be affected by the Proposed Rule. A substantial number of Petitioners' members are insurers writing property and casualty insurance and/or motor vehicle insurance coverage in Florida. These members use credit-based insurance scoring in their underwriting and rating processes. They would be directly regulated by the Proposed Rule in their underwriting and rating methods and in the rate filing processes set forth in Sections 627.062 and 627.0651, Florida Statutes. Fair Isaac originated credit-based insurance scoring and is a leading provider of credit-based insurance scoring information in the United States and Canada. Fair Isaac has invested millions of dollars in the development and maintenance of its credit-based insurance models. Fair Isaac concedes that it is not an insurer and, thus, would not be directly regulated by the Proposed Rule. However, Fair Isaac would be directly affected by any negative impact that the Proposed Rule would have in setting limits on the use of credit-based insurance score models in Florida. Lamont Boyd, a manager in Fair Isaac's global scoring division, testified that if the Proposed Rule goes into effect Fair Isaac would, at a minimum, lose all of the revenue it currently generates from insurance companies that use its scores in the State of Florida, because Fair Isaac's credit-based insurance scoring model cannot meet the requirements of the Proposed Rule regarding racial, ethnic, and religious categorization. Mr. Boyd also testified that enactment of the Proposed Rule could cause a "ripple effect" of similar regulations in other states, further impairing Fair Isaac's business. 
The Statute and Proposed Rule During the 1990s, insurance companies' use of consumer credit information for underwriting and rating automobile and residential property insurance policies greatly increased. Insurance regulators expressed concern that the use of consumer credit reports, credit histories and credit-based insurance scoring models could have a negative effect on consumers' ability to obtain and keep insurance at appropriate rates. Of particular concern was the possibility that the use of credit scoring would particularly hurt minorities, people with low incomes, and young people, because those persons would be more likely to have poor credit scores. On September 19, 2001, Insurance Commissioner Tom Gallagher appointed a task force to examine the use of credit reports and develop recommendations for the Legislature or for the promulgation of rules regarding the use of credit scoring by the insurance industry. The task force met on four separate occasions throughout the state in 2001, and issued its report on January 23, 2002. The task force report conceded that the evidence supporting the negative impact of the use of credit reports on specific groups is "primarily anecdotal," and that the insurance industry had submitted anecdotal evidence to the contrary. Among its nine recommendations, the task force recommended the following: A comprehensive and independent investigation of the relationship between insurers' use of consumer credit information and risk of loss including the impact by race, income, geographic location and age. A prohibition against the use of credit reports as the sole basis for making underwriting or rating decisions. That insurers using credit as an underwriting or rating factor be required to provide regulators with sufficient information to independently verify that use. That insurers be required to send a copy of the credit report to those consumers whose adverse insurance decision is a result of their consumer credit information and a simple explanation of the specific credit characteristics that caused the adverse decision. That insurers not be permitted to draw a negative inference from a bad credit score that is due to medical bills, little or no credit information, or other special circumstances that are clearly not related to an applicant's or policyholder's insurability. That the impact of credit reports be mitigated by imposing limits on the weight that insurers can give to them in the decision to write a policy and limits on the amount the premium can be increased due to credit information. No evidence was presented that the "comprehensive and independent investigation" of insurers' use of credit information was undertaken by the Legislature. However, the other recommendations of the task force were addressed in Senate Bills 40A and 42A, enacted by the Legislature and signed by the governor on June 26, 2003. These companion bills, each with an effective date of January 1, 2004, were codified as Sections 626.9741 and 626.97411, Florida Statutes, respectively. Chapters 2003-407 and 2003-408, Laws of Florida. Section 626.9741, Florida Statutes, provides: The purpose of this section is to regulate and limit the use of credit reports and credit scores by insurers for underwriting and rating purposes. 
This section applies only to personal lines motor vehicle insurance and personal lines residential insurance, which includes homeowners, mobile home owners' dwelling, tenants, condominium unit owners, cooperative unit owners, and similar types of insurance. As used in this section, the term: "Adverse decision" means a decision to refuse to issue or renew a policy of insurance; to issue a policy with exclusions or restrictions; to increase the rates or premium charged for a policy of insurance; to place an insured or applicant in a rating tier that does not have the lowest available rates for which that insured or applicant is otherwise eligible; or to place an applicant or insured with a company operating under common management, control, or ownership which does not offer the lowest rates available, within the affiliate group of insurance companies, for which that insured or applicant is otherwise eligible. "Credit report" means any written, oral, or other communication of any information by a consumer reporting agency, as defined in the federal Fair Credit Reporting Act, 15 U.S.C. ss. 1681 et seq., bearing on a consumer's credit worthiness, credit standing, or credit capacity, which is used or expected to be used or collected as a factor to establish a person's eligibility for credit or insurance, or any other purpose authorized pursuant to the applicable provision of such federal act. A credit score alone, as calculated by a credit reporting agency or by or for the insurer, may not be considered a credit report. "Credit score" means a score, grade, or value that is derived by using any or all data from a credit report in any type of model, method, or program, whether electronically, in an algorithm, computer software or program, or any other process, for the purpose of grading or ranking credit report data. "Tier" means a category within a single insurer into which insureds with substantially similar risk, exposure, or expense factors are placed for purposes of determining rate or premium. An insurer must inform an applicant or insured, in the same medium as the application is taken, that a credit report or score is being requested for underwriting or rating purposes. An insurer that makes an adverse decision based, in whole or in part, upon a credit report must provide, at no charge, a copy of the credit report to the applicant or insured or provide the applicant or insured with the name, address, and telephone number of the consumer reporting agency from which the insured or applicant may obtain the credit report. The insurer must provide notification to the consumer explaining the reasons for the adverse decision. The reasons must be provided in sufficiently clear and specific language so that a person can identify the basis for the insurer's adverse decision. Such notification shall include a description of the four primary reasons, or such fewer number as existed, which were the primary influences of the adverse decision. The use of generalized terms such as "poor credit history," "poor credit rating," or "poor insurance score" does not meet the explanation requirements of this subsection. A credit score may not be used in underwriting or rating insurance unless the scoring process produces information in sufficient detail to permit compliance with the requirements of this subsection.
It shall not be deemed an adverse decision if, due to the insured's credit report or credit score, the insured continues to receive a less favorable rate or placement in a less favorable tier or company at the time of renewal except for renewals or reunderwriting required by this section. (4)(a) An insurer may not request a credit report or score based upon the race, color, religion, marital status, age, gender, income, national origin, or place of residence of the applicant or insured. An insurer may not make an adverse decision solely because of information contained in a credit report or score without consideration of any other underwriting or rating factor. An insurer may not make an adverse decision or use a credit score that could lead to such a decision if based, in whole or in part, on: The absence of, or an insufficient, credit history, in which instance the insurer shall: Treat the consumer as otherwise approved by the Office of Insurance Regulation if the insurer presents information that such an absence or inability is related to the risk for the insurer; Treat the consumer as if the applicant or insured had neutral credit information, as defined by the insurer; Exclude the use of credit information as a factor and use only other underwriting criteria; Collection accounts with a medical industry code, if so identified on the consumer's credit report; Place of residence; or Any other circumstance that the Financial Services Commission determines, by rule, lacks sufficient statistical correlation and actuarial justification as a predictor of insurance risk. An insurer may use the number of credit inquiries requested or made regarding the applicant or insured except for: Credit inquiries not initiated by the consumer or inquiries requested by the consumer for his or her own credit information. Inquiries relating to insurance coverage, if so identified on a consumer's credit report. Collection accounts with a medical industry code, if so identified on the consumer's credit report. Multiple lender inquiries, if coded by the consumer reporting agency on the consumer's credit report as being from the home mortgage industry and made within 30 days of one another, unless only one inquiry is considered. Multiple lender inquiries, if coded by the consumer reporting agency on the consumer's credit report as being from the automobile lending industry and made within 30 days of one another, unless only one inquiry is considered. An insurer must, upon the request of an applicant or insured, provide a means of appeal for an applicant or insured whose credit report or credit score is unduly influenced by a dissolution of marriage, the death of a spouse, or temporary loss of employment. The insurer must complete its review within 10 business days after the request by the applicant or insured and receipt of reasonable documentation requested by the insurer, and, if the insurer determines that the credit report or credit score was unduly influenced by any of such factors, the insurer shall treat the applicant or insured as if the applicant or insured had neutral credit information or shall exclude the credit information, as defined by the insurer, whichever is more favorable to the applicant or insured. An insurer shall not be considered out of compliance with its underwriting rules or rates or forms filed with the Office of Insurance Regulation or out of compliance with any other state law or rule as a result of granting any exceptions pursuant to this subsection.
A rate filing that uses credit reports or credit scores must comply with the requirements of s. 627.062 or s. 627.0651 to ensure that rates are not excessive, inadequate, or unfairly discriminatory. An insurer that requests or uses credit reports and credit scoring in its underwriting and rating methods shall maintain and adhere to established written procedures that reflect the restrictions set forth in the federal Fair Credit Reporting Act, this section, and all rules related thereto. (7)(a) An insurer shall establish procedures to review the credit history of an insured who was adversely affected by the use of the insured's credit history at the initial rating of the policy, or at a subsequent renewal thereof. This review must be performed at a minimum of once every 2 years or at the request of the insured, whichever is sooner, and the insurer shall adjust the premium of the insured to reflect any improvement in the credit history. The procedures must provide that, with respect to existing policyholders, the review of a credit report will not be used by the insurer to cancel, refuse to renew, or require a change in the method of payment or payment plan. (b) However, as an alternative to the requirements of paragraph (a), an insurer that used a credit report or credit score for an insured upon inception of a policy, who will not use a credit report or score for reunderwriting, shall reevaluate the insured within the first 3 years after inception, based on other allowable underwriting or rating factors, excluding credit information if the insurer does not increase the rates or premium charged to the insured based on the exclusion of credit reports or credit scores. The commission may adopt rules to administer this section. The rules may include, but need not be limited to: Information that must be included in filings to demonstrate compliance with subsection (3). Statistical detail that insurers using credit reports or scores under subsection (5) must retain and report annually to the Office of Insurance Regulation. Standards that ensure that rates or premiums associated with the use of a credit report or score are not unfairly discriminatory, based upon race, color, religion, marital status, age, gender, income, national origin, or place of residence. Standards for review of models, methods, programs, or any other process by which to grade or rank credit report data and which may produce credit scores in order to ensure that the insurer demonstrates that such grading, ranking, or scoring is valid in predicting insurance risk of an applicant or insured. Section 626.97411, Florida Statutes, provides: Credit scoring methodologies and related data and information that are trade secrets as defined in s. 688.002 and that are filed with the Office of Insurance Regulation pursuant to a rate filing or other filing required by law are confidential and exempt from the provisions of s. 119.07(1) and s. 24(a), Art. I of the State Constitution.3 Following extensive rule development workshops and industry comment, proposed Florida Administrative Code Rule 69O-125.005 was initially published in the Florida Administrative Weekly, on February 11, 2005.4 The Proposed Rule states, as follows: 69O-125.005 Use of Credit Reports and Credit Scores by Insurers. 
For the purpose of this rule, the following definitions apply: "Applicant", for purposes of Section 626.9741, F.S., means an individual whose credit report or score is requested for underwriting or rating purposes relating to personal lines motor vehicle or personal lines residential insurance and shall not include individuals who have merely requested a quote. "Credit scoring methodology" means any methodology that uses credit reports or credit scores, in whole or in part, for underwriting or rating purposes. "Data cleansing" means the correction or enhancement of presumed incomplete, incorrect, missing, or improperly formatted information. "Personal lines motor vehicle" insurance means insurance against loss or damage to any motorized land vehicle or any loss, liability, or expense resulting from or incidental to ownership, maintenance or use of such vehicle if the contract of insurance shows one or more natural persons as named insureds. The following are not included in this definition: Vehicles used as public livery or conveyance; Vehicles rented to others; Vehicles with more than four wheels; Vehicles used primarily for commercial purposes; and Vehicles with a net vehicle weight of more than 5,000 pounds designed or used for the carriage of goods (other than the personal effects of passengers) or drawing a trailer designed or used for the carriage of such goods. The following are specifically included, inter alia, in this definition: Motorcycles; Motor homes; Antique or classic automobiles; and Recreational vehicles. "Unfairly discriminatory" means that adverse decisions resulting from the use of a credit scoring methodology disproportionately affects persons belonging to any of the classes set forth in Section 626.9741(8)(c), F.S. Insurers may not use any credit scoring methodology that is unfairly discriminatory. The burden of demonstrating that the credit scoring methodology is not unfairly discriminatory is upon the insurer. An insurer may not request or use a credit report or credit score in its underwriting or rating method unless it maintains and adheres to established written procedures that reflect the restrictions set forth in the federal Fair Credit Reporting Act, Section 626.9741, F.S., and these rules. Upon initial use or any change in that use, insurers using credit reports or credit scores for underwriting or rating personal lines residential or personal lines motor vehicle insurance shall include the following information in filings submitted pursuant to Section 627.062 or 627.0651, F.S. A listing of the types of individuals whose credit reports or scores the company will use or attempt to use to underwrite or rate a given policy. For example: Person signing application; Named insured or spouse; and All listed operators. How those individual reports or scores will be combined if more than one is used. For example: Average score used; Highest score used. The name(s) of the consumer reporting agencies or any other third party vendors from which the company will obtain or attempt to obtain credit reports or scores. Precise identifying information specifying or describing the credit scoring methodology, if any, the company will use including: Common or trade name; Version, subtype, or intended segment of business the system was designed for; and Any other information needed to distinguish a particular credit scoring methodology from other similar ones, whether developed by the company or by a third party vendor. 
The effect of particular scores or ranges of scores (or, for companies not using scores, the effect of particular items appearing on a credit report) on any of the following as applicable: Rate or premium charged for a policy of insurance; Placement of an insured or applicant in a rating tier; Placement of an applicant or insured in a company within an affiliated group of insurance companies; Decision to refuse to issue or renew a policy of insurance or to issue a policy with exclusions or restrictions or limitations in payment plans. The effect of the absence or insufficiency of credit history (as referenced in Section 626.9741(4)(c)1., F.S.) on any items listed in paragraph (e) above. The manner in which collection accounts identified with a medical industry code (as referenced in Section 626.9741(4)(c)2., F.S.) on a consumer's credit report will be treated in the underwriting or rating process or within any credit scoring methodology used. The manner in which collection accounts that are not identified with a medical industry code, but which an applicant or insured demonstrates are the direct result of significant and extraordinary medical expenses, will be treated in the underwriting or rating process or within any credit scoring methodology used. The manner in which the following will be treated in the underwriting or rating process, or within any credit scoring methodology used: Credit inquiries not initiated by the consumer; Requests by the consumer for the consumer's own credit information; Multiple lender inquiries, if coded by the consumer reporting agency on the consumer's credit report as being from the automobile lending industry or the home mortgage industry and made within 30 days of one another; Multiple lender inquiries that are not coded by the consumer reporting agency on the consumer's credit report as being from the automobile lending industry or the home mortgage industry and made within 30 days of one another, but that an applicant or insured demonstrates are the direct result of such inquiries; Inquiries relating to insurance coverage, if so identified on a consumer's credit report; and Inquiries relating to insurance coverage that are not so identified on a consumer's credit report, but which an applicant or insured demonstrates are the direct result of such inquiries. The list of all clear and specific primary reasons that may be cited to the consumer as the basis or explanation for an adverse decision under Section 626.9741(3), F.S. and the criteria determining when each of those reasons will be so cited. A description of the process that the insurer will use to correct any error in premium charged the insured, or in underwriting decision made concerning the insured, if the basis of the premium charged or the decision made is a disputed item that is later removed from the credit report or corrected, provided that the insured first notifies the insurer that the item has been removed or corrected. A certification that no use of credit reports or scores in rating insurance will apply to any component of a rate or premium attributed to hurricane coverage for residential properties as separately identified in accordance with Section 627.0629, F.S. 
Insurers desiring to make adverse decisions for personal lines motor vehicle policies or personal lines residential policies based on the absence or insufficiency of credit history shall either: Treat such consumers or applicants as otherwise approved by the Office of Insurance Regulation if the insurer presents information that such an absence or inability is related to the risk for the insurer and does not result in a disparate impact on persons belonging to any of the classes set forth in Section 626.9741(8)(c), F.S. This information will be held as confidential if properly so identified by the insurer and eligible under Section 626.9711, F.S. The information shall include: Data comparing experience for each category of those with absent or insufficient credit history to each category of insureds separately treated with respect to credit and having sufficient credit history; A statistically credible method of analysis that concludes that the relationship between absence or insufficiency and the risk assumed is not due to chance; A statistically credible method of analysis that concludes that absence or insufficiency of credit history does not disparately impact persons belonging to any of the classes set forth in Section 626.9741(8)(c), F.S.; A statistically credible method of analysis that confirms that the treatment proposed by the insurer is quantitatively appropriate; and Statistical tests establishing that the treatment proposed by the insurer is warranted for the total of all consumers with absence or insufficiency of credit history and for at least two subsets of such consumers. Treat such consumers as if the applicant or insured had neutral credit information, as defined by the insurer. Should an insurer fail to specify a definition, neutral is defined as the average score that a stratified random sample of consumers or applicants having sufficient credit history would attain using the insurer's credit scoring methodology; or Exclude credit as a factor and use other criteria. These other criteria must be specified by the insurer and must not result in average treatment for the totality of consumers with an absence of or insufficiency of credit history any less favorable than the treatment of average consumers or applicants having sufficient credit history. Insurers desiring to make adverse decisions for personal lines motor vehicle or personal lines residential insurance based on information contained in a credit report or score shall file with the Office information establishing that the results of such decisions do not correlate so closely with the zip code of residence of the insured as to constitute a decision based on place of residence of the insured in violation of Section 626.9741(4)(c)(3), F.S. (7)(a) Insurers using credit reports or credit scores for underwriting or rating personal lines residential or personal lines motor vehicle insurance shall develop, maintain, and adhere to written procedures consistent with Section 626.9741(4)(e), F.S. providing appeals for applicants or insureds whose credit reports or scores are unduly influenced by dissolution of marriage, death of a spouse, or temporary loss of employment. (b) These procedures shall be subject to examination by the Office at any time. (8)(a)1.
Insurers using credit reports or credit scoring in rating personal lines motor vehicle or personal lines residential insurance shall develop, maintain, and adhere to written procedures to review the credit history of an insured who was adversely affected by such use at initial rating of the policy or subsequent renewal thereof. These procedures shall be subject to examination by the Office at any time. The procedures shall comply with the following: A review shall be conducted: (I) No later than 2 years following the date of any adverse decision, or (II) Any time, at the request of the insured, but no more than once per policy period without insurer assent. The insurer shall notify the named insureds annually of their right to request the review in (II) above. Renewal notices issued 120 days or less after the effective date of this rule are not included in this requirement. The insurer shall adjust the premium to reflect any improvement in credit history no later than the first renewal date that follows a review of credit history. The renewal premium shall be subject to other rating factors lawfully used by the insurer. The review shall not be used by the insurer to cancel, refuse to renew, or require a change in the method of payment or payment plan based on credit history. (b)1. As an alternative to the requirements in paragraph (8)(a), insurers using credit reports or scores at the inception of a policy but not for re-underwriting shall develop, maintain, and adhere to written procedures. These procedures shall be subject to examination by the Office at any time. The procedures shall comply with the following: Insureds shall be reevaluated no later than 3 years following policy inception based on allowable underwriting or rating factors, excluding credit information. The rate or premium charged to an insured shall not be greater, solely as a result of the reevaluation, than the rate or premium charged for the immediately preceding policy term. This shall not be construed to prohibit an insurer from applying regular underwriting criteria (which may result in a greater premium) or general rate increases to the premium charged. For insureds that received an adverse decision notification at policy inception, no residual effects of that adverse decision shall survive the reevaluation. This means that the reevaluation must be complete enough to make it possible for insureds adversely impacted at inception to attain the lowest available rate for which comparable insureds are eligible, considering only allowable underwriting or rating factors (excluding credit information) at the time of the reevaluation. No credit scoring methodology shall be used for personal lines motor vehicle or personal lines residential property insurance unless that methodology has been demonstrated to be a valid predictor of the insurance risk to be assumed by an insurer for the applicable type of insurance. The demonstration of validity detailed below need only be provided with the first rate, rule, or underwriting guidelines filing following the effective date of this rule and at any time a change is made in the credit scoring methodology. Other such filings may instead refer to the most recent prior filing containing a demonstration. Information supplied in the context of a demonstration of validity will be held as confidential if properly so identified by the insurer and eligible under Section 626.9711, F.S.
A demonstration of validity shall include: A listing of the persons that contributed substantially to the development of the most current version of the method, including resumes of the persons, if obtainable, indicating their qualifications and experience in similar endeavors. An enumeration of all data cleansing techniques that have been used in the development of the method, which shall include: The nature of each technique; Any biases the technique might introduce; and The prevalence of each type of invalid information prior to correction or enhancement. All data that was used by the model developers in the derivation and calibration of the model parameters. Data shall be in sufficient detail to permit the Office to conduct multiple regression testing for validation of the credit scoring methodology. Data, including field definitions, shall be supplied in electronic format compatible with the software used by the Office. Statistical results showing that the model and parameters are predictive and not overlapping or duplicative of any other variables used to rate an applicant to such a degree as to render their combined use actuarially unsound. Such results shall include the period of time for which each element from a credit report is used. A precise listing of all elements from a credit report that are used in scoring, and the formula used to compute the score, including the time period during which each element is used. Such listing is confidential if properly so identified by the insurer. An assessment by a qualified actuary, economist, or statistician (whether or not employed by the insurer) other than persons who contributed substantially to the development of the credit scoring methodology, concluding that there is a significant statistical correlation between the scores and frequency or severity of claims. The assessment shall: Identify the person performing the assessment and show his or her educational and professional experience qualifications; and Include a test of robustness of the model, showing that it performs well on a credible validation data set. The validation data set may not be the one from which the model was developed. Documentation consisting of statistical testing of the application of the credit scoring model to determine whether it results in a disproportionate impact on the classes set forth in Section 626.9741(8)(c), F.S. A model that disproportionately affects any such class of persons is presumed to have a disparate impact and is presumed to be unfairly discriminatory. Statistical analysis shall be performed on the current insureds of the insurer using the proposed credit scoring model, and shall include the raw data and detailed results on each classification set forth in Section 626.9741(8)(c), F.S. In lieu of such analysis, insurers may use the alternative in 2. below. Alternatively, insurers may submit statistical studies and analyses that have been performed by educational institutions, independent professional associations, or other reputable entities recognized in the field, that indicate that there is no disproportionate impact on any of the classes set forth in Section 626.9741(8)(c), F.S. attributable to the use of credit reports or scores. Any such studies or analyses shall have been done concerning the specific credit scoring model proposed by the insurer.
The Office will utilize generally accepted statistical analysis principles in reviewing studies submitted which support the insurer's analysis that the credit scoring model does not disproportionately impact any class based upon race, color, religion, marital status, age, gender, income, national origin, or place of residence. The Office will permit reliance on such studies only to the extent that they permit independent verification of the results. The testing or validation results obtained in the course of the assessment in paragraphs (d) and (f) above. Internal insurer data that validates the premium differentials proposed based on the scores or ranges of scores. Industry or countrywide data may be used to the extent that the Florida insurer data lacks credibility based upon generally accepted actuarial standards. Insurers using industry or countrywide data for validation shall supply Florida insurer data and demonstrate that generally accepted actuarial standards would allow reliance on each set of data to the extent the insurer has done so. Validation data including claims on personal lines residential insurance policies that are the result of acts of God shall not be used unless such acts occurred prior to January 1, 2004. The mere copying of another company's system will not fulfill the requirement to validate proposed premium differentials unless the filer has used a method or system for less than 3 years and demonstrates that it is not cost effective to retrospectively analyze its own data. Companies under common ownership, management, and control may copy to fulfill the requirement to validate proposed premium differentials if they demonstrate that the characteristics of the business to be written by the affiliate doing the copying are sufficiently similar to the affiliate being copied to presume common differentials will be accurate. The credibility standards and any judgmental adjustments, including limitations on effects, that have been used in the process of deriving premium differentials proposed and validated in paragraph (i) above. An explanation of how the credit scoring methodology treats discrepancies in the information that could have been obtained from different consumer reporting agencies: Equifax, Experian, or TransUnion. This shall not be construed to require insurers to obtain multiple reports for each insured or applicant. 1. The date that each of the analyses, tests, and validations required in paragraphs (d) through (j) above was most recently performed, and a certification that the results continue to be applicable. 2. Any item not reviewed in the previous 5 years is unacceptable. Specific Authority 624.308(1), 626.9741(8) FS. Law Implemented 624.307(1), 626.9741 FS. History-- New . The Petition 1. Statutory Definitions of "Unfairly Discriminatory" The main issue raised by Petitioners is that the Proposed Rule's definition of "unfairly discriminatory," and those portions of the Proposed Rule that rely on this definition, are invalid because they are vague, and enlarge, modify, and contravene the provisions of the law implemented and other provisions of the insurance code. Section 626.9741, Florida Statutes, does not define "unfairly discriminatory." Subsection 626.9741(5), Florida Statutes, provides that a rate filing using credit reports or scores "must comply with the requirements of s. 627.062 or s. 627.0651 to ensure that rates are not excessive, inadequate, or unfairly discriminatory."
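By way of illustration only, the "demonstration of validity" contemplated by subsection (9) of the Proposed Rule, quoted above, is in substance an exercise in out-of-sample statistical testing. The sketch below is hypothetical: the data are simulated, and the variable names and figures are assumptions rather than anything prescribed by the rule or filed with the Office. It shows one conventional way to test whether scores correlate with claim frequency on a holdout set that was not used to build the model, in the spirit of the correlation and robustness showings the subsection requires.

```python
# Hypothetical out-of-sample validation in the spirit of subsection (9).
# The data are simulated; every name and number here is invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stand-in holdout data: credit-based scores and observed claim counts for
# policies that, per the rule, were not used to develop the model.
n = 10_000
score = rng.uniform(300, 900, n)
# Assume a modestly higher claim frequency at lower scores, plus noise.
freq = np.clip(0.25 - (score - 300) / 600 * 0.15, 0.05, None)
claims = rng.poisson(freq)

# Correlation between score and claims on the validation set.
r, p = stats.pearsonr(score, claims)
print(f"correlation r = {r:.3f}, p = {p:.2e}")

# Average claim frequency by score decile: the kind of exhibit used to
# show the relationship is material and roughly monotone.
edges = np.quantile(score, np.linspace(0, 1, 11))
for lo, hi in zip(edges[:-1], edges[1:]):
    band = (score >= lo) & (score < hi)
    print(f"scores {lo:5.0f}-{hi:5.0f}: frequency {claims[band].mean():.3f}")
```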
Subsection 626.9741(8)(c), Florida Statutes, provides that the FSC may adopt rules, including standards to ensure that rates or premiums "associated with the use of a credit report or score are not unfairly discriminatory, based upon race, color, religion, marital status, age, gender, income, national origin, or place of residence." Chapter 627, Part I, Florida Statutes, is referred to as the "Rating Law." § 627.011, Fla. Stat. The purpose of the Rating Law is to "promote the public welfare by regulating insurance rates . . . to the end that they shall not be excessive, inadequate, or unfairly discriminatory." § 627.031(1)(a), Fla. Stat. The Rating Law provisions referenced by Subsection 626.9741(5), Florida Statutes, in relation to ensuring that rates are not "unfairly discriminatory" are Sections 627.062 and 627.0651, Florida Statutes. Section 627.062, Florida Statutes, titled "Rate standards," provides that "[t]he rates for all classes of insurance to which the provisions of this part are applicable shall not be excessive, inadequate, or unfairly discriminatory." § 627.062(1), Fla. Stat. Subsection 627.062(2)(e)6., Florida Statutes, provides: A rate shall be deemed unfairly discriminatory as to a risk or group of risks if the application of premium discounts, credits, or surcharges among such risks does not bear a reasonable relationship to the expected loss and expense experience among the various risks. Section 627.0651, Florida Statutes, titled "Making and use of rates for motor vehicle insurance," provides, in relevant part: One rate shall be deemed unfairly discriminatory in relation to another in the same class if it clearly fails to reflect equitably the difference in expected losses and expenses. Rates are not unfairly discriminatory because different premiums result for policyholders with like loss exposures but different expense factors, or like expense factors but different loss exposures, so long as rates reflect the differences with reasonable accuracy. Rates are not unfairly discriminatory if averaged broadly among members of a group; nor are rates unfairly discriminatory even though they are lower than rates for nonmembers of the group. However, such rates are unfairly discriminatory if they are not actuarially measurable and credible and sufficiently related to actual or expected loss and expense experience of the group so as to assure that nonmembers of the group are not unfairly discriminated against. Use of a single United States Postal Service zip code as a rating territory shall be deemed unfairly discriminatory. Petitioners point out that each of these statutory examples describing "unfairly discriminatory" rates has an actuarial basis, i.e., rates must be related to the actual or expected loss and expense factors for a given group or class, rather than any extraneous factors. If two risks have the same expected losses and expenses, the insurer must charge them the same rate. If the risks have different expected losses and expenses, the insurer must charge them different rates. Michael Miller, Petitioners' expert actuary, testified that the term "unfairly discriminatory" has been used in the insurance industry for well over 100 years and has always had this cost-based definition. Mr. Miller is a fellow of the Casualty Actuarial Society ("CAS"), a professional organization whose purpose is the advancement of the body of knowledge of actuarial science, including the promulgation of industry standards and a code of professional conduct. Mr. 
Miller was chair of the CAS ratemaking committee when it developed the CAS "Statement of Principles Regarding Property and Casualty Insurance Ratemaking," a guide for actuaries to follow when establishing rates.5 Principle 4 of the Statement of Principles provides: "A rate is reasonable and not excessive, inadequate, or unfairly discriminatory if it is an actuarially sound estimate of the expected value of all future costs associated with an individual risk." In layman's terms, Mr. Miller explained that different types of risks are reflected in a rate calculation. To calculate the expected cost of a given risk, and thus the rate to be charged, the insurer must determine the expected losses for that risk during the policy period. The loss portion reflects the risk associated with an occurrence and the severity of a claim. While the loss portion does not account for the entirety of the rate charged, it is the most important in terms of magnitude. Mr. Miller cautioned that the calculation of risk is a quantification of expected loss, but not an attempt to predict who is going to have an accident or make a claim. There is some likelihood that every insured will make a claim, though most never do, and this uncertainty is built into the incurred loss portion of the rate. No single risk factor is a complete measure of a person's likelihood of having an accident or of the severity of the ensuing claim. The prediction of losses is determined through a risk classification plan that takes into consideration many risk factors (also called rating factors) to determine the likelihood of an accident and the extent of the claim. As to automobile insurance, Mr. Miller listed such risk factors as the age, gender, and marital status of the driver, the type, model, and age of the car, the liability limits of the coverage, and the geographical location where the car is garaged. As to homeowners insurance, Mr. Miller listed such risk factors as the location of the home, its value and type of construction, the age of the utilities and electrical wiring, and the amount of insurance to be carried. 2. Credit Scoring as a Rating Factor In the current market, the credit score of the applicant or insured is a rating factor common to automobile and homeowners insurance. Subsection 626.9741(2)(c), Florida Statutes, defines "credit score" as follows: a score, grade, or value that is derived by using any or all data from a credit report in any type of model, method, or program, whether electronically, in an algorithm, computer software or program, or any other process, for the purpose of grading or ranking credit report data. "Credit scores" (more accurately termed "credit-based insurance scores") are derived from credit data that have been found to be predictive of a loss. Lamont Boyd, Fair Isaac's insurance market manager, explained the manner in which Fair Isaac produced its credit scoring model. The company obtained information from various insurance companies on millions of customers. This information included the customers' names, addresses, and the premiums earned by the companies on those policies as well as the losses incurred. Fair Isaac next requested the credit reporting agencies to review their archived files for the credit information on those insurance company customers. The credit agencies matched the credit files with the insurance customers, then "depersonalized" the files so that there was no way for Fair Isaac to know the identity of any particular customer. According to Mr.
Boyd, the data were "color blind" and "income blind." Fair Isaac's analysts took these files from the credit reporting agencies and studied the data in an effort to find the most predictive characteristics of future loss propensity. The model was developed to account for all the predictive characteristics identified by Fair Isaac's analysts, and to give weight to those characteristics in accordance with their relative accuracy as predictors of loss. Fair Isaac does not directly sell its credit scores to insurance companies. Rather, Fair Isaac's models are implemented by the credit reporting agencies. When an insurance company wants Fair Isaac's credit score, it purchases access to the model's results from the credit reporting agency. Other vendors offer similar credit scoring models to insurance companies, and in recent years, some insurance companies have developed their own scoring models. Several academic studies of credit scoring were admitted and discussed at the final hearing in these cases. There appears to be no serious debate that credit scoring is a valid and important predictor of losses. The controversy over the use of credit scoring arises over its possible "unfairly discriminatory" impact "based upon race, color, religion, marital status, age, gender, income, national origin, or place of residence." § 626.9741(8)(c), Fla. Stat. Mr. Miller was one of two principal authors of a June 2003 study titled, "The Relationship of Credit-Based Insurance Scores to Private Passenger Automobile Insurance Loss Propensity." This study was commissioned by several insurance industry trade organizations, including AIA and NAMIC. The study addressed three questions: whether credit-based insurance scores are related to the propensity for loss; whether credit-based insurance scores measure risk that is already measured by other risk factors; and what is the relative importance to accurate risk assessment of the use of credit-based insurance scores. The study was based on a nationwide random sample of private passenger automobile policy and claim records. Records from all 50 states were included in roughly the same proportion as each state's registered motor vehicles bear to total registered vehicles in the United States. The data samples were provided by seven insurers, and represented approximately 2.7 million automobiles, each insured for 12 months.6 The study examined all major automobile coverages: bodily injury liability, property damage liability, medical payments coverage, personal injury protection coverage, comprehensive coverage, and collision coverage. The study concluded that credit-based insurance scores were correlated with loss propensity. The study found that insurance scores overlap to some degree with other risk factors, but that after fully accounting for the overlaps, insurance scores significantly increase the accuracy of the risk assessment process. The study found that, for each of the six automobile coverages examined, insurance scores are among the three most important risk factors.7 Mr. Miller's study did not examine the question of causality, i.e., why credit-based insurance scores are predictive of loss propensity. Dr. Patrick Brockett testified for Petitioners as an expert in actuarial science, risk management and insurance, and statistics. Dr. Brockett is a professor in the departments of management science and information systems, finance, and mathematics at the University of Texas at Austin. He occupies the Gus S.
Wortham Memorial Chair in Risk Management and Insurance, and is the director of the university's risk management and insurance program. Dr. Brockett is the former director of the University of Texas' actuarial science program and continues to direct the study of students seeking their doctoral degrees in actuarial science. His areas of academic research are actuarial science, risk management and insurance, statistics, and general quantitative methods in business. Dr. Brockett has written more than 130 publications, most of which relate to actuarial science and insurance. He has spent his entire career in academia, and has never been employed by an insurance company. In 2002, Lieutenant Governor Bill Ratliff of Texas asked the Bureau of Business Research ("BBR") of the University of Texas' McCombs School of Business to provide an independent, nonpartisan study to examine the relationship between credit history and insurance losses in automobile insurance. Dr. Brockett was one of four named authors of this BBR study, issued in March 2003 and titled, "A Statistical Analysis of the Relationship between Credit History and Insurance Losses." The BBR research team solicited data from insurance companies representing the top 70 percent of the automobile insurers in Texas, and compiled a database of more than 173,000 automobile insurance policies from the first quarter of 1998 that included the following 12 months' premium and loss history. ChoicePoint was then retained to match the named insureds with their credit histories and to supply a credit score for each insured person. The BBR research team then examined the credit score and its relationship with prospective losses for the insurance policy. The results were summarized in the study as follows: Using logistic and multiple regression analyses, the research team tested whether the credit score for the named insured on a policy was significantly related to incurred losses for that policy. It was determined that there was a significant relationship. In general, lower credit scores were associated with larger incurred losses. Next, logistic and multiple regression analyses examined whether the revealed relationship between credit score and incurred losses was explainable by existing underwriting variables, or whether the credit score added new information about losses not contained in the existing underwriting variables. It was determined that credit score did yield new information not contained in the existing underwriting variables. What the study does not attempt to explain is why credit scoring adds significantly to the insurer's ability to predict insurance losses. In other words, causality was not investigated. In addition, the research team did not examine such variables as race, ethnicity, and income in the study, and therefore this report does not speculate about the possible effects that credit scoring may have in raising or lowering premiums for specific groups of people. Such an assessment would require a different study and different data. At the hearing, Dr. Brockett testified that the BBR study demonstrated a "strong and significant relationship between credit scoring and incurred losses," and that credit scoring retained its predictive power even after the other risk variables were accounted for. Dr. Brockett further testified that credit scoring has a disproportionate effect on the classifications of age and marital status, because the very young tend to have credit scores that are lower than those of older people. 
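A minimal sketch of the kind of analysis the BBR study describes may be useful here. Everything below is assumed rather than drawn from the study: simulated data stand in for the study's policy database, the column names are invented, and the model is an ordinary logistic regression of claim occurrence on credit score alongside conventional underwriting variables, the form of test used to ask whether the score adds information beyond those variables.

```python
# Illustrative only: a logistic regression in the spirit of the BBR study.
# The data are simulated; column names and coefficients are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 50_000
df = pd.DataFrame({
    "credit_score": rng.normal(650, 80, n),
    "driver_age": rng.integers(16, 85, n),
    "territory": rng.choice(["urban", "suburban", "rural"], n),
})
# Simulate claims whose log-odds fall as credit score and driver age rise.
log_odds = -1.0 - 0.004 * (df.credit_score - 650) - 0.01 * (df.driver_age - 40)
df["had_claim"] = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(int)

# Does credit score remain significant once the other underwriting
# variables are controlled for? (The BBR team concluded that it did.)
model = smf.logit("had_claim ~ credit_score + driver_age + C(territory)",
                  data=df).fit(disp=False)
print(model.summary())
```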
If the question is simply whether the use of credit scores will have a greater impact on the young and the single, the answer would be in the affirmative. However, Dr. Brockett also noted that young, single people will also have higher losses than older, married people, and, thus, the use of credit scores is not "unfairly discriminatory" in the sense that term is employed in the insurance industry.8 Mr. Miller testified that nothing in the actuarial standards of practice requires that a risk factor be causally related to a loss. The Actuarial Standards Board's Standard of Practice 12,9 dealing with risk classification, states that a risk factor is appropriate for use if there is a demonstrated relationship between the risk factor and the insurance losses, and that this relationship may be established by statistical or other mathematical analysis of data. If the risk characteristic is shown to be related to an expected outcome, the actuary need not establish a cause-and-effect relationship between the risk characteristic and the expected outcome. As an example, Mr. Miller offered the fact that past automobile accidents do not cause future accidents, although past accidents are predictive of future risk. Past traffic violations, the age of the driver, the gender of the driver, and the geographical location are all risk factors in automobile insurance, though none of these factors can be said to cause future accidents. They help insurers predict the probability of a loss, but do not predict who will have an accident or why the accident will occur. Mr. Miller opined that credit scoring is a similar risk factor. It is demonstrably significant as a predictor of risk, though there is no causal relationship between credit scores and losses and only an incomplete understanding of why credit scoring works as a predictor of loss. At the hearing, Dr. Brockett discussed a study that he has co-authored with Linda Golden, a business professor at the University of Texas at Austin. Titled "Biological and Psychobehavioral Correlates of Risk Taking, Credit Scores, and Automobile Insurance Losses: Toward an Explication of Why Credit Scoring Works," the study has been peer-reviewed and at the time of the hearing had been accepted for publication in the Journal of Risk and Insurance. In this study, the authors conducted a detailed review of existing scientific literature concerning the biological, psychological, and behavioral attributes of risky automobile drivers and insured losses, and a similar review of literature concerning the biological, psychological, and behavioral attributes of financial risk takers. The study found that basic chemical and psychobehavioral characteristics, such as a sensation-seeking personality type, are common to individuals exhibiting both higher insured automobile losses and poorer credit scores. Dr. Brockett testified that this study provides a direction for future research into the reasons why credit scoring works as an insurance risk characteristic. 3. The Proposed Rule's Definition of "Unfairly Discriminatory" Petitioners contend that the Proposed Rule's definition of the term "unfairly discriminatory" expands upon and is contrary to the statutory definition of the term discussed in section C.1. supra, and that this expanded definition operates to impose a ban on the use of credit scoring by insurance companies. As noted above, Section 626.9741, Florida Statutes, does not define the term "unfairly discriminatory." 
The provisions of the Rating Law10 define the term as it is generally understood by the insurance industry: a rate is deemed "unfairly discriminatory" if the premium charged does not equitably reflect the differences in expected losses and expenses between policyholders. Two provisions of Section 626.9741, Florida Statutes, employ the term "unfairly discriminatory": (5) A rate filing that uses credit reports or credit scores must comply with the requirements of s. 627.062 or s. 627.0651 to ensure that rates are not excessive, inadequate, or unfairly discriminatory. * * * (8) The commission may adopt rules to administer this section. The rules may include, but need not be limited to: * * * (c) Standards that ensure that rates or premiums associated with the use of a credit report or score are not unfairly discriminatory, based upon race, color, religion, marital status, age, gender, income, national origin, or place of residence. Petitioners contend that the statute's use of the term "unfairly discriminatory" is unexceptionable, that the Legislature simply intended the term to be used and understood in the traditional sense of actuarial soundness alone. Respondents agree that Subsection 626.9741(5), Florida Statutes, calls for the agency to apply the traditional definition of "unfairly discriminatory" as that term is employed in the statutes directly referenced, Sections 627.062 and 627.0651, Florida Statutes, the relevant texts of which are set forth in Findings of Fact 18 and 19 above. However, Respondents contend that Subsection 626.9741(8)(c), Florida Statutes, calls for more than the application of the Rating Law's definition of the term. Respondents assert that in the context of this provision, "unfairly discriminatory" contemplates not only the predictive function, but also "discrimination" in its more common sense, as the term is employed in state and federal civil rights law regarding race, color, religion, marital status, age, gender, income, national origin, or place of residence. At the hearing, OIR General Counsel Steven Parton testified as to the reasons why the agency chose the federal body of law using the term "disparate impact" as the test for unfair discrimination in the Proposed Rule: Well, first of all, what we were looking for is a workable definition that people would have some understanding as to what it meant when we talked about unfair discrimination. We were also looking for a test that did not require any willfulness, because it was not our concern that, in fact, insurance companies were engaging willfully in unfair discrimination. What we believed is going on, and we think all of the studies that are out there suggest, is that credit scoring is having a disparate impact upon various people, whether it be income, whether it be race. . . . Respondents' position is that Subsection 626.9741(8)(c), Florida Statutes, requires that a proposed rate or premium be rejected if it has a "disproportionately" negative effect on one of the named classes of persons, even though the rate or premium equitably reflects the differences in expected losses and expenses between policyholders. In the words of Mr. Parton, "This is not an actuarial rule." Mr. Parton explained the agency's rationale for employing a definition of "unfairly discriminatory" that is different from the actuarial usage employed in the Rating Law. 
Subsection 626.9741(5), Florida Statutes, already provides that an insurer's rate filings may not be "excessive, inadequate, or unfairly discriminatory" in the actuarial sense. To read Subsection 626.9741(8)(c), Florida Statutes, as simply a reiteration of the actuarial "unfair discrimination" rule would render the provision, "a nullity. There would be no force and effect with regards to that." Thus, the Proposed Rule defines "unfairly discriminatory" to mean "that adverse decisions resulting from the use of a credit scoring methodology disproportionately affects persons belonging to any of the classes set forth in Section 626.9741(8)(c), F.S." Proposed Florida Administrative Code Rule 69O-125.005(1)(e). OIR's actuary, Howard Eagelfeld, explained that "disproportionate effect" means "having a different effect on one group . . . causing it to pay more or less premium than its proportionate share in the general population or than it would have to pay based upon all other known considerations." Mr. Eagelfeld's explanation is not incorporated into the language of the Proposed Rule. Consistent with the actuarial definition of "unfairly discriminatory," the Proposed Rule requires that any credit scoring methodology must be "demonstrated to be a valid predictor of the insurance risk to be assumed by an insurer for the applicable type of insurance," and sets forth detailed criteria through which the insurer can make the required demonstration. Proposed Florida Administrative Code Rule 69O-125.005(9)(a)-(f) and (h)-(l). Proposed Florida Administrative Code Rule 69O-125.005(9)(g) sets forth Respondents' "civil rights" usage of the term "unfairly discriminatory." The insurer's demonstration of the validity of its credit scoring methodology must include: [d]ocumentation consisting of statistical testing of the application of the credit scoring model to determine whether it results in a disproportionate impact on the classes set forth in Section 626.9741(8)(c), F.S. A model that disproportionately affects any such class of persons is presumed to have a disparate impact and is presumed to be unfairly discriminatory.11 Mr. Parton, who testified in defense of the Proposed Rule as one of its chief draftsmen, stated that the agency was concerned that the use of credit scoring may be having a disproportionate effect on minorities. Respondents believe that credit scoring may simply be a surrogate measure for income, and that using income as a basis for setting rates would have an obviously disparate impact on lower-income persons, including the young and the elderly. Mr. Parton testified that "neither the insurance industry nor anyone else" has researched the theory that credit scoring may be a surrogate for income. Mr. Miller referenced a 1998 analysis performed by AIA indicating that the average credit scores do not vary significantly according to the income group. In fact, the lowest income group (persons making less than $15,000 per year) had the highest average credit score, and the average credit scores actually dropped as income levels rose until the income range reached $50,000 to $74,000 per year, when the credit scores began to rise. Mr. Miller testified that a credit score is no more predictive of income level than a coin flip. However, Respondents introduced a January 2003 report to the Washington State Legislature prepared by the Social & Economic Sciences Research Center of Washington State University, titled "Effect of Credit Scoring on Auto Insurance Underwriting and Pricing." 
The purpose of the study was to determine whether credit scoring has unequal impacts on specific demographic groups. For this study, the researchers received data from three insurance companies on several thousand randomly chosen customers, including the customers' age, gender, residential zip code, and their credit scores and/or rate classifications. The researchers contacted about 1,000 of each insurance company's customers and obtained information about their ethnicity, marital status, and income levels. The study's findings were summarized as follows: The demographic patterns discerned by the study are: Age is the most significant factor. In almost every analysis, older drivers have, on average, higher credit scores, lower credit-based rate assignments, and less likelihood of lacking a valid credit score. Income is also a significant factor. Credit scores and premium costs improve as income rises. People in the lowest income categories-- less than $20,000 per year and between $20,000 and $35,000 per year-- often experienced higher premiums and lower credit scores. More people in lower income categories also lacked sufficient credit history to have a credit score. Ethnicity was found to be significant in some cases, but because of differences among the three firms studied and the small number of ethnic minorities in the samples, the data are not broadly conclusive. In general, Asian/Pacific Islanders had credit scores more similar to whites than to other minorities. When other minority groups had significant differences from whites, the differences were in the direction of higher premiums. In the sample of cases where insurance was cancelled based on credit score, minorities who were not Asian/Pacific Islanders had greater difficulty finding replacement insurance, and were more likely to experience a lapse in insurance while they searched for a new policy. The analysis also considered gender, marital status and location, but for these factors, significant unequal effects were far less frequent. (emphasis added) The evidence appears equivocal on the question of whether credit scoring is a surrogate for income. The Washington study seems to indicate that ethnicity may be a significant factor in credit scoring, but that significant unequal effects are infrequent regarding gender and marital status. The evidence demonstrates that the use of credit scores by insurers would tend to have a negative impact on young people. Mr. Miller testified that persons between ages 25 and 30 have lower credit scores than older people. Petitioners argue that by defining "unfairly discriminatory" to mean "disproportionate effect," the Proposed Rule effectively prohibits insurers from using credit scores, if only because all the parties recognize that credit scores have a "disproportionate effect" on young people. Petitioners contend that this prohibition is in contravention of Section 626.9741(1), Florida Statutes, which states that the purpose of the statute is to "regulate and limit" the use of credit scores, not to ban them outright. Respondents counter that if the use of credit scores is "unfairly discriminatory" toward one of the listed classes of persons in contravention of Subsection 626.9741(8)(c), Florida Statutes, then the "limitation" allowed by the statute must include prohibition. 
This point is obviously true but sidesteps the real issues: whether the statute's undefined prohibition on "unfair discrimination" authorizes the agency to employ a "disparate impact" or "disproportionate effect" definition in the Proposed Rule, and, if so, whether the Proposed Rule sufficiently defines any of those terms to permit an insurer to comply with the rule's requirements. Proposed Florida Administrative Code Rule 69O-125.005(2) provides that the insurer bears the burden of demonstrating that its credit scoring methodology does not disproportionately affect persons based upon their race, color, religion, marital status, age, gender, income, national origin, or place of residence. Petitioners state that no insurer can demonstrate, consistent with the Proposed Rule, that its credit scoring methodology does not have a disproportionate effect on persons based upon their age. Therefore, no insurer will ever be permitted to use credit scores under the terms of the Proposed Rule. As discussed more fully in Findings of Fact 73 through 76 below, Petitioners also contend that the Proposed Rule provides no guidance as to what "disproportionate effect" and "disparate impact" mean, and that this lack of definitional guidance will permit the agency to reject any rate filing that uses credit scoring, based upon an arbitrary determination that it has a "disproportionate effect" on one of the classes named in Subsection 626.9741(8)(c), Florida Statutes. Petitioners also presented evidence that no insurer collects data on race, color, religion, or national origin from applicants or insureds. Mr. Miller testified that there is no reliable independent source for race, color, religious affiliation, or national origin data. Mr. Eagelfeld agreed that there is no independent source from which insurers can obtain credible data on race or religious affiliation. Mr. Parton testified that this lack of data can be remedied by the insurance companies commencing to request race, color, religion, and national origin information from their customers, because there is no legal impediment to their doing so. Mr. Miller testified that he would question the reliability of the method suggested by Mr. Parton because many persons will refuse to answer such sensitive questions or may not answer them correctly. Mr. Miller stated that, as an actuary, he would not certify the results of a study based on demographic data obtained in this manner and would qualify any resulting actuarial opinion due to the unreliability of the database. Petitioners also object to the vagueness of the broad categories of "race, color, religion and national origin." Mr. Miller testified that the Proposed Rule lacks "operational definitions" for those terms that would enable insurers to perform the required calculations. The Proposed Rule places the burden on the insurer to demonstrate no disproportionate effect on persons based on these categories, but offers no guidance as to how these demographic classes should be categorized by an insurer seeking to make such a demonstration. Petitioners point out that even if the insurer is able to ascertain the categories sought by the regulators, the Proposed Rule gives no guidance as to whether the "disproportionate effect" criterion mandates perfect proportionality among all races, colors, religions, and national origins, or whether some degree of difference is tolerable. 
Petitioners contend that this lack of guidance provides unbridled discretion to the regulator to reject any disproportionate effect study submitted by an insurer. At his deposition, Mr. Parton was asked how an insurer should break down racial classifications in order to show that there is no disproportionate effect on race. His answer was as follows: There is African-American, Cuban-American, Spanish-American, African-American, Haitian-American. Are you-- you know, whatever the make-up of your book of business is-- you're the one in control of it. You can ask these folks what their ethnic background is. At his deposition, Mr. Parton frankly admitted that he had no idea what "color" classifications an insurer should use, yet he also stated that an insurer must demonstrate no disproportionate effect on each and every listed category, including "color." At the final hearing, when asked to list the categories of "color," Mr. Parton responded, "I suppose Indian, African-American, Chinese, Japanese, all of those."12 At the final hearing, Mr. Parton was asked whether the Proposed Rule contemplates requiring insurers to demonstrate distinctions between such groups as "Latvian-Americans" and "Czech-Americans." Mr. Parton's reply was as follows: No. And I don't think it was contemplated by the Legislature. . . . The question is race by any other name, whether it be national origin, ethnicity, color, is something that they're concerned about in terms of an impact. What we would anticipate, and what we have always anticipated, is the industry would demonstrate whether or not there is an adverse effect against those folks who have traditionally in Florida been discriminated against, and that would be African-Americans and certain Hispanic groups. In our opinion, at least, if you could demonstrate that the credit scoring was not adversely impacting it, it may very well answer the questions to any other subgroup that you may want to name. At the hearing, Mr. Parton was also questioned as to distinctions between religions and testified as follows: The impact of credit scoring on religion is going to be in the area of what we call thin files, or no files. That is to say people who do not have enough credit history from which credit scores can be done, or they're going to be treated somehow differently because of that lack of history. A simple question that needs to be asked by the insurance company is: "Do you, as a result of your religious belief or whatever [sect] you are in, are you forbidden as a precept of your religious belief from engaging in the use of credit?" When cross-examined on the subject, Mr. Parton could not confidently identify any religious group that forbids the use of credit. He thought that Muslims and Quakers may be such groups. Mr. Parton concluded by stating, "I don't think it is necessary to identify those groups. The question is whether or not you have a religious group that you prescribe to that forbids it." Petitioners contend that, in addition to failing to define the statutory terms of race, color, religion, and national origin in a manner that permits insurer compliance, the Proposed Rule fails to provide an operational definition of "disproportionate effect." The following is a hypothetical question put to Mr. Parton at his deposition, and Mr. Parton's answer: Q: Let's assume that African-Americans make up 10 percent of the population. Let's just use two groups for the sake of clarity. Caucasians make up 90 percent.
If the application of credit scoring in underwriting results in African-Americans paying 11 percent of the premium and Caucasians paying 89 percent of the premium, is that, in your mind, a disproportionate affect [sic]? A: It may be. I think it would give rise under this rule that perhaps there is a presumption that it is, but that presumption is not [an irrebuttable] one.[13] For instance, if you then had testimony that a 1 percent difference between the two was statistically insignificant, then I would suggest that that presumption would be overridden. This answer led to a lengthy discussion regarding a second hypothetical in which African-Americans made up 29 percent of the population, and also made up 35 percent of the lowest, or most unfavorable, tier of an insurance company's risk classifications. Mr. Parton ultimately opined that if the difference in the two numbers was found to be "statistically significant" and attributable only to the credit score, then he would conclude that the use of credit scoring unfairly discriminated against African-Americans. As to whether his answer would be the same if the hypothetical were adjusted to state that African-Americans made up 33 percent of the lowest tier, Mr. Parton responded: "That would be up to expert testimony to be provided on it. That's what trials are all about."14 Aside from expert testimony to demonstrate that the difference was "statistically insignificant," Mr. Parton could think of no way that an insurer could rebut the presumption that the difference was unfairly discriminatory under the "disproportionate effect" definition set forth in the proposed rule. He stated that, "I can't anticipate, nor does the rule propose to anticipate, doing the job of the insurer of demonstrating that its rates are not unfairly discriminatory." Mr. Parton testified that an insurer's showing that the credit score was a valid and important predictor of risk would not be sufficient to rebut the presumption of disproportionate effect. Summary Findings Credit-based insurance scoring is a valid and important predictor of risk, significantly increasing the accuracy of the risk assessment process. The evidence is still inconclusive as to why credit scoring is an effective predictor of risk, though a study co-authored by Dr. Brockett has found that basic chemical and psychobehavioral characteristics, such as a sensation-seeking personality type, are common to individuals exhibiting both higher insured automobile losses and poorer credit scores. Though the evidence was equivocal on the question of whether credit scoring is simply a surrogate for income, the evidence clearly demonstrated that the use of credit scores by insurance companies has a greater negative overall effect on young people, who tend to have lower credit scores than older people. Petitioners and Fair Isaac emphasized their contention that compliance with the Proposed Rule would be impossible, and thus the Proposed Rule in fact would operate as a prohibition on the use of credit scoring by insurance companies. At best, Petitioners demonstrated that compliance with the Proposed Rule would be impracticable at first, given the current business practices in the industry regarding the collection of customer data regarding race and religion. The evidence indicated no legal barriers to the collection of such data by the insurance companies. Questions as to the reliability of the data are speculative until a methodology for the collection of the data is devised. 
Subsection 626.9741(8)(c), Florida Statutes, authorizes the FSC to adopt rules that may include: Standards that ensure that rates or premiums associated with the use of a credit report or score are not unfairly discriminatory, based upon race, color, religion, marital status, age, gender, income, national origin, or place of residence. Petitioners' contention that the statute's use of "unfairly discriminatory" contemplates nothing more than the actuarial definition of the term as employed by the Rating Law is rejected. As Respondents pointed out, Subsection 626.9741(5), Florida Statutes, provides that a rate filing using credit scores must comply with the Rating Law's requirements that the rates not be "unfairly discriminatory" in the actuarial sense. If Subsection 626.9741(8)(c), Florida Statutes, merely reiterates the actuarial requirement, then it is, in Mr. Parton's words, "a nullity."15 Thus, it is found that the Legislature contemplated some level of scrutiny beyond actuarial soundness to determine whether the use of credit scores "unfairly discriminates" in the case of the classes listed in Subsection 626.9741(8)(c), Florida Statutes. It is found that the Legislature empowered FSC to adopt rules establishing standards to ensure that an insurer's rates or premiums associated with the use of credit scores meet this added level of scrutiny. However, it must be found that the term "unfairly discriminatory" as employed in the Proposed Rule is essentially undefined. FSC has not adopted a "standard" by which insurers can measure their rates and premiums, and the statutory term "unfairly discriminatory" is thus subject to arbitrary enforcement by the regulating agency. Proposed Florida Administrative Code Rule 69O-125.005(1)(e) defines "unfairly discriminatory" in terms of adverse decisions that "disproportionately affect" persons in the classes set forth in Subsection 626.9741(8)(c), Florida Statutes, but does not define what is a "disproportionate effect." At Subsection (9)(g), the Proposed Rule requires "statistical testing" of the credit scoring model to determine whether it results in a "disproportionate impact" on the listed classes. This subsection attempts to define its terms as follows: A model that disproportionately affects any such class of persons is presumed to have a disparate impact and is presumed to be unfairly discriminatory. Thus, the Proposed Rule provides that a "disproportionate effect" equals a "disparate impact" equals "unfairly discriminatory," without defining any of these terms in such a way that an insurer could have any clear notion, prior to the regulator's pronouncement on its rate filing, whether its credit scoring methodology was in compliance with the rule. Indeed, Mr. Parton's testimony evinced a disinclination on the part of the agency to offer guidance to insurers who attempt to understand this circular definition. The tenor of his testimony indicated that the agency itself is unsure of exactly what an insurer could submit to satisfy the "disproportionate effect" test, aside from perfect proportionality, which all parties concede is not possible at least as to young people, or a showing that any lack of perfect proportionality is "statistically insignificant," whatever that means. Mr. Parton seemed to say that OIR will know a valid use of credit scoring when it sees one, though it cannot describe such a use beforehand. Mr. 
Eagelfeld offered what might be a workable definition of "disproportionate effect," but his definition is not incorporated into the Proposed Rule. Mr. Parton attempted to assure the Petitioners that OIR would take a reasonable view of the endless racial and ethnic categories that could be subsumed under the literal language of the Proposed Rule, but again, Mr. Parton's assurances are not part of the Proposed Rule. Mr. Parton's testimony referenced federal and state civil rights laws as the source for the term "disparate impact." Federal case law under Title VII of the Civil Rights Act of 1964, 42 U.S.C. § 2000e-2, has defined a "disparate impact" claim as "one that 'involves employment practices that are facially neutral in their treatment of different groups, but that in fact fall more harshly on one group than another and cannot be justified by business necessity.'" Adams v. Florida Power Corporation, 255 F.3d 1322, 1324 n.4 (11th Cir. 2001), quoting Hazen Paper Co. v. Biggins, 507 U.S. 604, 609, 113 S. Ct. 1701, 1705, 123 L. Ed. 2d 338 (1993). The Proposed Rule does not reference this definition, nor did Mr. Parton detail how OIR proposes to apply or modify this definition in enforcing the Proposed Rule. Without further definition, all three of the terms employed in this circular definition are conclusions, not "standards" that the insurer and the regulator can agree upon at the outset of the statistical and analytical process leading to approval or rejection of the insurer's rates. Absent some definitional guidance, a conclusory term such as "disparate impact" can mean anything the regulator wishes it to mean in a specific case. The confusion is compounded by the Proposed Rule's failure to refine the broad terms "race," "color," and "religion" in a manner that would allow an insurer to prepare a meaningful rate submission utilizing credit scoring. In his testimony, Mr. Parton attempted to limit the Proposed Rule's impact to those groups "who have traditionally in Florida been discriminated against," but the actual language of the Proposed Rule makes no such distinction. Mr. Parton also attempted to limit the reach of "religion" to groups whose beliefs forbid them from engaging in the use of credit, but the language of the Proposed Rule does not support Mr. Parton's distinction.
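Neither the statute, the Proposed Rule, nor the testimony identifies the statistical test behind the phrase "statistically significant." Purely by way of illustration, the statistical testing contemplated by Proposed Florida Administrative Code Rule 69O-125.005(9)(g) could take the form of a one-proportion z-test applied to Mr. Parton's hypothetical; the tier size of 1,000 policies below is invented for the sketch, and nothing in the record prescribes this or any other method:

    import math

    def one_proportion_ztest(count, n, p0):
        # Tests whether an observed share (count / n) differs from a
        # benchmark share p0 by more than chance alone would explain.
        p_hat = count / n
        se = math.sqrt(p0 * (1 - p0) / n)   # standard error under the null
        z = (p_hat - p0) / se
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return z, p_value

    # Mr. Parton's hypothetical: a group makes up 29 percent of the book
    # of business but 35 percent of the least-favorable tier. The tier
    # size of 1,000 policies is assumed solely for illustration.
    z, p = one_proportion_ztest(350, 1000, 0.29)
    print(f"z = {z:.2f}, two-sided p = {p:.5f}")   # z near 4.2, p far below 0.05

On these invented numbers the disparity would be "statistically significant" by any conventional threshold; at 30 percent of the tier rather than 35, the same test would not reject chance. That is precisely the line-drawing the Proposed Rule leaves to the regulator.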

USC (1) 42 U.S.C. 2000e Florida Laws (18) 119.07, 120.52, 120.536, 120.54, 120.56, 120.57, 120.68, 624.307, 624.308, 626.9741, 627.011, 627.031, 627.062, 627.0629, 627.0651, 688.002, 760.10, 760.11 Florida Administrative Code (1) 69O-125.005
# 6
JOHN D. WATSON vs FLORIDA ENGINEERS MANAGEMENT CORPORATION, 98-004756 (1998)
Division of Administrative Hearings, Florida Filed:Tallahassee, Florida Oct. 26, 1998 Number: 98-004756 Latest Update: Apr. 20, 1999

The Issue The issue in this case is whether the Petitioner is entitled to additional credit for his response to question number 123 of the Principles & Practice Civil/Sanitary Engineer Examination administered on April 24, 1998.

Findings Of Fact Petitioner took the April 24, 1998, Principles & Practice Civil/Sanitary Engineer examination. A score of 70 is required to pass the exam. Petitioner obtained a score of 69. In order to achieve a score of 70, Petitioner needs a raw score of 48. Petitioner obtained a score of 69 which is a raw score of 47. Therefore, Petitioner is in need of one (1) additional raw score point. On question number 123, Petitioner received a score of six points out of a possible ten. Question nimber 123 is scored in increments of two raw points. Two additional raw score points awarded to the Petitioner would equal a raw score of 49, equating to a conversion score of seventy-one, a passing score. The National Council of Examiners for Engineering and Surveying (NCEES), the organization that produces the examination, provides a Solution and Scoring Plan which outlines the scoring process used in question number 123. The Petitioner is not allowed a copy of the examination question or the Solution and Scoring Plan for preparation of the Proposed Recommended Order. Question number 123 has three parts: part A, part B, and part C. For a score of ten on question number 123, the Solution and Scoring Plan states that the solution to part A must be correct within allowable tolerances; the solution to part B must state two variables that affect the answer in part A; and the solution to part C must state that anti-lock brakes do not leave skid marks thus making it very had to determine braking distance. For a score of eight points on question number 123, the Solution and Scoring Plan states that part A could contain one error and lists specific allowable errors, and that part B and part C must be answered correctly showing mastery of the concepts involved. Petitioner made an error in part A which falls into the allowable errors listed in the Solution and Scoring Plan under the eight-point scoring plan. Petitioner answered part B correctly. Petitioner contends that he also answered correctly part C, and should be awarded eight points. NCEES marked part C incorrect. Question number 123 is a problem involving a vehicle (vehicle number one) that skids on asphalt and hits another vehicle (vehicle number two). Part C asks "Explain how your investigation of this accident would have changed if vehicle one had been equipped with anti-lock brakes." The Petitioner's answer was as follows: If vehicle one does not "lock" its brakes, its deceleration will be dependent upon its brakes. (Not f). [Judge's note: f is used as the symbol for the co-efficient of friction between the tires and road surface in the problem.] The rate of deceleration (a) must be determined (from testing, mfg, [manufacturer,] etc.) As stated above, the Board accepts a solution that recognizes that the vehicle equipped with anti-lock brakes will not leave skid marks which can be used for computing initial speed using the skid distance equation. The Petitioner's answer pre-supposes that there are no skid marks because the vehicle's wheels do not lock because of the anti-lock brakes; therefore, if the co-efficient of friction of the tires, which generates the skid marks, has no effect. The Petitioner introduced a portion of a commonly used manual for preparation for examination (Petitioner's Exhibit 1), which states, regarding a vehicle that does not lock its brakes, "its decelerations will be dependent upon its brakes." 
The Board's expert recognized the statement by the Petitioner in response to part C as true, but indicated it was not responsive to the question in that it did not state specifically that the vehicle would not produce skid marks that could be measured for use in the skid distance equation. The solution sheet states regarding part C, "Part C is answered correctly by explaining that anti-lock brakes would not leave skid marks, thus making it very hard to determine the braking distance."
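For context only, the skid distance equation referred to by the Board's expert and by the solution sheet is conventionally written as

    v = \sqrt{2\,f\,g\,d}

where v is the initial speed, f the coefficient of friction between the tires and the road, g the gravitational acceleration, and d the measured length of the skid marks. This standard constant-deceleration form is supplied here for illustration and is not quoted from the examination. The point credited by the scoring plan follows directly from it: with anti-lock brakes there are no skid marks, d cannot be measured, and the equation cannot be applied, however accurate the Petitioner's observation about deceleration may be.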

Recommendation Based upon the foregoing Findings of Fact and Conclusions of Law set forth herein, it is, RECOMMENDED: That the Board of Professional Engineers enter a Final Order giving Petitioner credit for part C on the examination and passing the test. DONE AND ENTERED this 25th day of March, 1999, in Tallahassee, Leon County, Florida. STEPHEN F. DEAN Administrative Law Judge Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-3060 (850) 488-9675 SUNCOM 278-9675 Fax Filing (850) 921-6847 www.doah.state.fl.us Filed with the Clerk of the Division of Administrative Hearings this 25th day of March, 1999. COPIES FURNISHED: Natalie A. Lowe Vice President of Legal Affairs Florida Engineers Management Corporation 1208 Hays Street Tallahassee, Florida 32301 John D. Watson 88 Marine Street St. Augustine, Florida 32084 Dennis Barton, Executive Director Board of Professional Engineers 1208 Hays Street Tallahassee, Florida 32301

Florida Laws (1) 120.57
# 7
KRISTINA V. TIGNOR vs. BOARD OF PROFESSIONAL ENGINEERS, 87-005110 (1987)
Division of Administrative Hearings, Florida Number: 87-005110 Latest Update: Jun. 10, 1988

Findings Of Fact Petitioner herein, Kristina V. Tignor, took the Professional Engineers Examination for the State of Florida in Orlando on April 9 and 10, 1987. On July 22, 1987 she was advised by the Department of Professional Regulation's Office of Examination Services that she had failed the examination and was given a cummulative score of principles and practice of 69.1 percent. In her initial request for review and reconsideration, Petitioner objected to the points assigned to her solutions for three problems on the test, Numbers 425, 421, and 124. She contended that as a working engineer, certain criteria and assumptions must be made in approaching any engineering problem and, because the portion of the examination in issue is graded subjectively, her answered should be reconsidered and evaluated in that light. At the hearing, Petitioner contested only the grading of questions number 124 and 421, thereby accepting the grade given for question 425. With regard to Question 124, Ms. Tignor was awarded a score of 5 on her solution to this problem. The National Council of Engineering Examiners, in its Standard Scoring Plan Outline awards a "qualified" evaluation to scores from 10 down to 6 on this question. Scores from 5 to 0 are rated, "unqualified." A score of 5 indicates the applicant has failed to demonstrate adequate knowledge in one aspect of one category. Specifically, a rating of 5 in this question indicates that the examinee displayed an inadequate knowledge of weight/volume and concrete mix design. Her computations were displayed and an incomplete or erroneous solution was arrived at which gave a generally unrealistic result. Dr. Bruce A. Suprenant a civil engineer registered in four states and who teaches engineering at the University of South Florida, reviewed the question, the Petitioner's solution, the solution proposed by the examiners, and the grading scheme for this problem and found a number of illogical items in Petitioner's solution which, to him, were difficult to understand. He found several items which had no basis and which were possibly assumed. As to Part a of Petitioner's answer, a mixture of answers, (correction for moisture), which should have been in Part b, was located in Part a. As to density, the value used by Petitioner does not appear to be reasonable based on information provided in the problem. In Dr. Suprenant's opinion, there are at least three approaches to this problem. One is the water/cement ration method. Another is the weight method. The third is the absolute volume method. The water/cement ratio method would be difficult to apply here and neither Petitioner nor the examiners used it. As to the weight method, much the same problem exists. There is insufficient information provided to satisfactorily apply this method and while the examiners did not use it, Petitioner did. Petitioner's answer has a correction for moisture in the absolute volume method on the first page of the solution form at the top. The calculations by Petitioner are assumed information not known, (volume). In addition the correction for moisture in the second part of page one is included on the top of page two. It is not a part of the solution for subpart a and should not be there. Petitioner used 150 pounds per cubic foot for concrete density in her solution and this choice is not explained. 
Most publications utilized by engineers suggest using tables which were not provided to the examinees, and it is, therefore, illogical to assume a concrete density with no history for that assumption. Petitioner's answer of 5.41 cubic yards is only slightly off the suggested answer of 5.44 cubic yards, but the fact that the answers are close does not justify her assumption. It might well not come so close in other cases. As to Part b of the question, calling for the water/cement ratio, the corrections for moisture of fine and coarse aggregate on page one are acceptable. On the second page, a problem arises as to when the correction for moisture should decrease. Petitioner got the right factor but applied it in the wrong manner. As a result, her answer to Part b of the examination question is wrong. Her answer was 4.40 as opposed to the correct answer of 4.34. This small degree of error can be attributed to the smallness of the amount in question. Were the amounts greater, the error would be greater. As to Part c of the question, which deals with the cement factor in a yard of concrete, Petitioner's approach of dividing sacks by cubic yards is correct, but the cubic yard content was determined from Part a of the question, and Dr. Suprenant does not agree with how she got her solution. He therefore questions her carryover. The standard weight of a sack of cement is 94 pounds. The individual grading Petitioner's response to Question 124 indicates she displayed inadequate knowledge and reached a solution which gives "unrealistic results." Dr. Suprenant agrees, contending that Petitioner's performance in regard to this question indicates inadequate knowledge of the weight/volume relationship. She made inadequate assumptions in formulating her answer to the question. The fact that in this problem she arrived at a solution close to the correct one does not indicate that in other problems she would achieve the same closeness using the same procedure. In his opinion, Petitioner showed some confusion regarding the basis for solving this problem, and Dr. Suprenant believes that the grade of 5 as awarded by the examiner is correct. Petitioner questioned the fact that the various technical weights and volumes, such as 94 pounds in a sack of cement, 8.33 pounds for a gallon of water, and 27 cubic feet in a cubic yard, do not appear in the problem statement. This, in the opinion of Dr. Suprenant, compounds the gravity of Petitioner's deficiency. They are routine "givens" generally accepted in the practice by engineers, and it would be difficult to assume that anyone familiar with the practice of engineering would use different "givens" for these specifics. Petitioner's employer, Mr. Bishop, himself a registered civil engineer in Florida since 1958, also reviewed Petitioner's solution to Question 124. He admits that on the first page of the answer sheet, Petitioner began solving the problem in an inappropriate way. Her calculations for moisture content were correct, however. On the second page, the correction factor was put in with the wrong sign and the aggregate was given the wrong factor. As a result, the answer was off. In his practice, however, the error committed by Petitioner in these regards is both minimal and acceptable. Her choice of 150 pounds per cubic foot is reasonable and produced a close result, and while it is true that if the project were of a greater scale the error might be significant, for a test question, as here, the error, in his opinion, is insignificant.
He feels much the same way regarding the error in Part c of the examination question. While the factors used by Petitioner were wrong, the process used was correct and the answer was not unreasonably incorrect for a test solution. In an examination situation, the calculations are not being done on a continuous basis, and he feels the grade of 5 awarded is unduly harsh since the error was numerical rather than operational. In his opinion, a more reasonable grade would have been a 6 or 7. Petitioner began her solution to this problem by using a method similar to that used by the examiners in their publications. Shortly, however, she realized she would not get the answer she needed by doing so and abandoned her solution. She forgot to cross it out, however, and now recognizes she should have done so. She thereafter began to accomplish a series of new calculations on the first page of the answer sheet but did not necessarily utilize that data for her solution to Part a. She admits she made an error in calculation for moisture on the second page. In that calculation, she used the study manual and admits now that she should have cited the figure she used. As to Parts b and c, her use of some figures from Part a may have thrown her answer off somewhat. However, the 5 awarded her, indicating her solution was unrealistic, is, in her opinion, unfair, as she considers her answer to be quite realistic. The problem did not state what solution method to use, and she feels her use of givens from recognized manuals, such as the 150 pounds, should not be held against her. The 94 pounds for a sack of cement used by the grader was also not given, and her use of other accepted numbers should not, she contends, be held against her. Petitioner believes a grade of 7 would more accurately describe the quality of her answer. A 7 means that the examinee obtained an appropriate solution but chose a less than optimum approach. The solution is, therefore, awkward but nonetheless reasonable. Ms. Tignor believes that while her approach may have been awkward, she achieved a reasonable solution, as demonstrated by the fact that it was only slightly off the correct figure. Therefore, she believes a grade of 6 would be appropriate. This examination was an open book examination, and Petitioner had her manuals with her. She could have easily determined the appropriate weights and "givens" from these manuals without choosing those she used. Ms. Tignor's conclusions that her results are realistic are contradicted by the Board's expert. Realistic results are, in engineering practice, not only the figure reached but also the method used in arriving at that figure. Here, though Petitioner's results are close, the approach utilized in arriving at her solution is unrealistic. Her approach showed an inadequate knowledge of weight/volume and calculations. Consequently, it is found the grade is valid and was not arbitrarily assigned. According to the Standard Scoring Plan Outline, each score from 10 through 6 has an indispensable criterion that all categories must be satisfied. Since Ms. Tignor's examination response did not satisfy all categories, the best she can be given is a 5, and that award appears to be justified by the evidence presented. Question 421 was a four-part drainage problem. Petitioner used as a part of her solution calculations based on a 100-year storm, and this was determined by the examiners to be inappropriate. Ms. Tignor was awarded a grade of 8 and contends she was not given appropriate credit. She relates that even Mr.
Smith, the Executive Director of the Board of Professional Engineers, advised her she may not have been given full credit for her answer. She was given full credit for Part a but lost two points for Part c, which included a calculation error to which Petitioner admits. She contends, however, that the error was so minor that only one point should have been deducted. Were Petitioner to receive an additional one point on this question, she would pass the examination, which she failed by only one point. However, this issue must be resolved on the basis of lawfully admitted evidence, and Mr. Smith's comment, being unsupported hearsay evidence, cannot itself sustain the raising of the grade. The Standard Scoring Plan Outline for this question reflects that to receive an 8, the examinee must demonstrate that all categories are satisfied, that errors are attributable to misread tables or calculating devices, and that errors would be corrected by routine checking. The results must be reasonable if not correct. For a 9, the examinee must demonstrate that all categories are satisfied; that a correct solution is arrived at but the examinee has been excessively conservative in the choice of working values; and that the examinee's presentation is lacking in completeness of equations, diagrams, orderly steps in solution, etc. Subqualifications for a 9 indicate that the answer is correct but that the organization of the solution is not logical. One error in calculation in any of the Parts from a to d, which does not affect the other parts of the solution, is acceptable. Mr. Kenneth Weldon, the Assistant State Drainage Engineer for the Department of Transportation, an expert in the area of drainage to which this problem relates, reviewed the question and the Petitioner's answer thereto and would award a grade of 8 to her answer. He found various numerical mathematical errors which led to the wrong solution. In addition, Petitioner made various assumptions that, though supposedly supported, were, he felt, in error through her misinterpretation. In general, none of the actual solutions she arrived at were correct. Specifically, that portion of the problem to determine the cross-sectional area of the waterway for establishing normal depth flow was done incorrectly. Because the Petitioner used incorrect equations throughout the problem, the depth flow computed is high. Petitioner did no analysis to determine whether or not any of the several situations relating to flow control were pertinent. Mr. Weldon initially felt Petitioner's answer to the question merited a grade of 6. This means that the examinee knew all the proper steps but failed to interpret some of the criteria properly. He could not award her a grade of 9, which would indicate all categories were satisfied and the solution was correct, if conservative. Petitioner's solutions were incorrect. He subsequently changed his award to an 8, however, on the basis that the Petitioner's errors were attributable to a misread table or calculating device and would be corrected by routine checking. The result was reasonable, though not correct. Mr. Weldon did not like this question even though he believed it appropriate for a one-hour exam. As written, it involves establishing and making judgements beyond what someone minimally competent would be expected to do. It requires materials that are beyond what are normally available to someone taking the exam. However, Petitioner failed to make proper provision to protect herself in a case where the question is inappropriate or incomplete.
If she felt something was wrong with the question, she should have clearly stated the assumption she was making to solve the problem. This was her responsibility, and she failed to do so. In Mr. Weldon's opinion, Petitioner's answer might merit a grade slightly higher, but not significantly higher. His reasoning is that Petitioner misinterpreted the criteria she cited in working the problem. Her comment that the Department of Transportation uses 100-year storm criteria was incorrect, even though that statement is made in outdated Department of Transportation publications. The basis for her answer is not well established or correct, or based on engineering calculations or judgement, and at best he could award no more than an 8.
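By way of illustration only, the routine "givens" that ran through Question 124 (94 pounds to a sack of cement, 8.33 pounds to a gallon of water, 27 cubic feet to a cubic yard) combine in the absolute volume method roughly as sketched below; the batch weights and specific gravities are invented for the sketch and are not the examination's figures:

    CEMENT_SACK_LB = 94.0      # standard sack of cement
    WATER_LB_PER_GAL = 8.33    # weight of a gallon of water
    CUFT_PER_CUYD = 27.0       # cubic feet in a cubic yard
    WATER_LB_PER_CUFT = 62.4   # unit weight of water

    def batch_volume_cuft(cement, water, fine_agg, coarse_agg,
                          sg_cement=3.15, sg_agg=2.65):
        # Absolute volume method: each ingredient occupies
        # weight / (specific gravity * unit weight of water) cubic feet.
        return (cement / (sg_cement * WATER_LB_PER_CUFT)
                + water / WATER_LB_PER_CUFT
                + (fine_agg + coarse_agg) / (sg_agg * WATER_LB_PER_CUFT))

    # Hypothetical batch weights in pounds, not the exam's quantities:
    cement, water, fine, coarse = 564.0, 282.0, 1200.0, 1800.0
    cu_yd = batch_volume_cuft(cement, water, fine, coarse) / CUFT_PER_CUYD
    sacks = cement / CEMENT_SACK_LB
    print(f"yield: {cu_yd:.2f} cu yd")   # about 0.95
    print(f"water/cement: {(water / WATER_LB_PER_GAL) / sacks:.2f} gal per sack")
    print(f"cement factor: {sacks / cu_yd:.2f} sacks per cu yd")

These invented weights happen to imply a wet density near 150 pounds per cubic foot, which shows why an assumed density of that order is a shortcut rather than a necessity: once specific gravities are in hand, the batch volume follows without assuming any density at all.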

Recommendation Based on the foregoing Findings of Fact and Conclusions of Law, it is, therefore: RECOMMENDED that a Final Order be entered affirming the score awarded to Petitioner on questions 124 and 421, respectively, of the Civil Engineering Examination administered to her in April, 1987. RECOMMENDED this 10th day of June, 1988, at Tallahassee, Florida. ARNOLD H. POLLOCK, Hearing Officer Division of Administrative Hearings The Oakland Building 2009 Apalachee Parkway Tallahassee, Florida 32399-1550 (904) 488-9675 Filed with the Clerk of the Division of Administrative Hearings this 10th day of June, 1988. APPENDIX TO RECOMMENDED ORDER, CASE NO. 87-5110 The following constitutes my specific rulings pursuant to Section 120.59(2), Florida Statutes, on all of the Proposed Findings of Fact submitted by the parties to this case. For the Petitioner None For the Respondent Accepted and incorporated herein. Accepted and incorporated herein. Accepted and incorporated except for the characterization of several assumptions as guesses. No evidence exists to support such a characterization even though they are incorrect. Accepted and incorporated herein. Accepted and incorporated herein. Accepted and incorporated herein. Accepted and incorporated herein. Accepted and incorporated herein. COPIES FURNISHED: Kristina V. Tignor, pro se 2160 North Oval Drive Sarasota, Florida 34239 H. Reynolds Sampson, Esquire Department of Professional Regulation 130 North Monroe Street Tallahassee, Florida 32399-0750 Allen R. Smith, Jr. Executive Director DPR, Board of Professional Engineers 130 North Monroe Street Tallahassee, Florida 32399-0750

Florida Laws (1) 120.57
# 8
VADIM J. ALTSHULER vs BOARD OF PROFESSIONAL ENGINEERS, 98-002342 (1998)
Division of Administrative Hearings, Florida Filed:Tallahassee, Florida May 18, 1998 Number: 98-002342 Latest Update: Jan. 27, 1999

The Issue Whether Petitioner is entitled to additional credit for his response to Question Number 146 of the Principles and Practice of Engineering examination administered on October 31 through November 1, 1997.

Findings Of Fact Petitioner took the professional engineering licensing examination with emphasis in mechanical engineering on October 31, 1997. Passing score on the examination was 70. Petitioner obtained a score of 65 and a raw score of 43. A score of 70 would have generated a raw score of 48. Petitioner needed at least 5 additional raw score points to achieve a passing grade and a converted score of 70. Out of a possible 10 points on Question Number 146, Petitioner received a score of 4 points. The National Council of Examiners for Engineering and Surveying (NCEES), the organization that produces the examination, provides a Solution and Scoring Plan outlining the scoring process for question 146. Further, NCEES rescored Petitioner’s test but found no basis to award additional points. There are 5 categories to question 146. All six elements of question 146 must be completely and correctly answered to receive full credit of 10 points for the question. Instructions for the question provide: A perfect solution is not required, as the examinee is allowed minor psychometric chart reading errors (two maximum) or minor math errors (two maximum). The total number of minor errors allowed is two. Errors in solution methodology are not allowed. Examinee handles all concepts (i.e., sensible and total heat, sensible heat ratio, coil ADP and BF, adiabatic mixing, and coil heat transfer) correctly. (emphasis supplied.) Testimony at the final hearing of Petitioner’s expert in mechanical engineering establishes that Petitioner did not qualify for additional points for answers provided for question 146. Petitioner failed to use the definition of bypass factor indicated in the problem. Instead, Petitioner used the Lindenburg method rather than the Carrier method to calculate the bypass factor. The Carrier Method was implied in the problem due to the way the problem was structured. The system outlined in question 146 did not have the special configuration that would be listed if the Lindenburg method were utilized. Petitioner also missed the total coil capacity due to misreading the psychometric chart. By his own admission at the final hearing, Petitioner misread the data provided because they were printed one right above the other in the question. Petitioner read the point on the psychometric chart for an outdoor dry bulb temperature at 95 degrees and a 78 percent relative humidity as the outdoor air. The question required a dry bulb temperature of 95 degrees and a wet bulb temperature of 78 degrees. Petitioner’s misreading constituted an error in methodology as opposed to a minor chart reading error. Question Number 146 on the examination was properly designed to test the candidate’s competency, provided enough information for a qualified candidate to supply the correct answer, and was graded correctly and in accord with the scoring plan.

Recommendation Based on the foregoing, it is, hereby, RECOMMENDED: That a final order be entered confirming Petitioner’s score on the examination question which is at issue in this proceeding. DONE AND ENTERED this 25th day of August, 1998, in Tallahassee, Leon County, Florida. DON W. DAVIS Administrative Law Judge Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-3060 (850) 488-9675 SUNCOM 278-9675 Fax Filing (850) 921-6847 Filed with the Clerk of the Division of Administrative Hearings this 25th day of August, 1998. COPIES FURNISHED: Natalie A. Lowe, Esquire Board of Professional Engineers 1208 Hays Street Tallahassee, Florida 32301 Vadim J. Altshuler 9794 Sharing Cross Court Jacksonville, Florida 32257 Dennis Barton, Executive Director Board of Professional Engineers 1208 Hays Street Tallahassee, Florida 32301 Lynda L. Goodgame, General Counsel Department of Business and Professional Regulation 1940 North Monroe Street Tallahassee, Florida 32399-0792

Florida Laws (1) 120.57
# 9
RONALD R. CORUM vs BOARD OF PROFESSIONAL ENGINEERS, 91-003651 (1991)
Division of Administrative Hearings, Florida Filed:Tampa, Florida Jun. 11, 1991 Number: 91-003651 Latest Update: Nov. 26, 1991

Findings Of Fact Upon consideration of the oral and documentary evidence adduced at the hearing, the following relevant findings of fact are made: At all times pertinent to the issue herein, Petitioner, Ronald R. Corum, Examinee Identification No. 200619, was a candidate for licensure by examination as a professional engineer, and the Board of Professional Engineers was and is the state agency in Florida responsible for the licensing of Professional Engineers and the regulation of the practice of professional engineering in the state of Florida. Petitioner sat for the October 1990, Florida Professional Engineer Licensure Examination (Principles and Practice of Engineering). This part of the examination is divided into a morning session and an afternoon session. The morning session requires the examinee to choose four essay questions from a choice of twelve essay questions and produce a numerical solution to each question. The afternoon session is multiple choice and the examinee has to solve four questions from a choice of twelve questions. Each of the questions, both morning and afternoon, are worth ten points (raw score) for a total maximum raw score of 80 points, with a minimum passing raw score of 48 points. Petitioner received a raw score of 47 points. Question 124 was one of the essay question selected by Petitioner to solve in morning session of examination. Question 124 consisted three parts, 124A, B and C which required the examinee to: compute the area of traverse (in acres) a five-sided polygon; compute the net area (in acres) in the land parcel after adding sector area AB and excluding sector area DE; and compute the length of curve DE (in feet). The problems posed by Question 124 are not uncommon in the day to day practice of professional engineering and are not particularly difficult to solve. Petitioner attempted to solve Part A by using the method of coordinates which is an acceptable method of determining the area of a traverse. However, the Petitioner made a fundamental error in applying the method, not a simple mathematical error, in that he did not return to the beginning point of the traverse which resulted in an unrealistic answer. The correct answer to Part A was 16.946 acres. The Petitioner calculated the area to be 126.12 acres. In attempting to solve Part B, the Petitioner misapplied a correct methodology by erroneously expressing the central angle of the area in degrees rather than in radians. A radian is equal to approximately 57 degrees and this resulted in substantial error in Petitioner's calculation. The correct answer was 17.607 acres. Petitioner's answer was 219.63 acres which was not possible in relation to the area the Petitioner had already calculated for the traverse in Part A. This was a very serious error, a fundamental error, not a mathematical error. The maximum raw score for question 124 was ten points. Petitioner received a raw score of two points. On review, Petitioner was again granted only two points out of ten possible points. The examinee's identity is not known to the scorer during the initial scoring or the review. Both question 124 and the scoring plan used in grading question 124 were approved by the National Council of Examiners of Engineers and Surveyors (NCEES). The scoring of question 124 was weighted so that Parts A and B were worth four points each, and Part C was worth two points. Petitioner correctly answered Part C and received two points. The Petitioner did not receive any points for Part A or Part B. 
The examinee was not aware of this weighting policy at the time of the examination. The scoring plan for question 124 which was used by the NCEES grader was set up in six (6) categories from 0 - 10 in two-point increments as follows: 10 - Exceptionally competent. 8 - More than minimum competence but less than exceptionally competent. 6 - Minimum competence. 4 - More than rudimentary knowledge but insufficient to demonstrate competence. 2 - Rudimentary knowledge. 0 - Nothing presented to indicate significant knowledge of the problem. Petitioner's use of acceptable methodologies in attempting to solve the problems of Parts A and B may indicate at least rudimentary knowledge and possibly more than rudimentary knowledge but insufficient knowledge to demonstrate competence which would have entitled Petitioner to at least two points on Parts A and B each. However, the unreasonableness and the impossibility of his answers and his failure to recognize the unreasonableness and impossibility of his answers coupled with his fundamental error in solving the problems of Parts A and B were such that the Petitioner did not demonstrate significant knowledge of the problems for Parts A and B. Therefore, any credit that would have been given for using acceptable methodologies in attempting to solve the problems would be negated by this lack of significant knowledge of the problems. Because of this lack of significant knowledge of the problems the scorer correctly adjusted Petitioner's score on Part A and Part B each to zero. Unreasonable answers result in credit being deleted, and this policy is uniform among all of the states. However, the examinee is not made aware of this policy at the time of the examination. There was no instruction or guide to indicate to the examinee that if the examinee recognized that any answer was unrealistic that the examinee should so indicate on the answer sheet. Likewise, there was no instruction or guide to indicate that the examinee would be more heavily penalized if the examinee did not indicate on the answer sheet that the answer was unrealistic. An examinee's inability to recognize an unrealistic answer and to so indicate on the answer sheet without specific instruction goes to the examinee's competence as a professional engineer. Therefore, Petitioner has not been treated unfairly by the lack of instruction or guide advising him to indicate his ability to recognize an unrealistic answer on the answer sheet. The NCEES scorer for question 124 attempted to award the same score to all examinees of the October 1990 examination who gave similar unrealistic answers to question 124 as did Petitioner without noting on the answer sheet that the answer was unrealistic. The examinees are not informed of how the scoring plan will be applied in advance of the examination or that the essay question will be scored in two- point increments only. There was no evidence that this information would be of significant benefit to the examinee. In fact, the Petitioner did allocate his time in attempting to solve question 124 similar to the weighting of the scoring plan, spending only a small part of the time on Part C. Part B should have identified the curved areas to be computed as segments, rather than sectors. Petitioner attempted to solve Part B as though it referred to segments, and did not raise this issue in the request for review. Petitioner's use of degrees rather than radians would have been equally erroneous in determining the area of a sector. 
There was no evidence to show that identifying the curved area as a sector rather than a segment had any effect on Petitioner's attempt to solve the problem. The official solution to Part B contained a typographical error made during the transcription of the grader's handwritten solution. This had no effect on the scoring of Part B. The solution cannot affect the answer given by the examinees, as the solution is only available after the examinee has completed the examination and is challenging the scoring. There is a lack of competent substantial evidence in the record to establish that the scores which Petitioner received on Part A and Part B of question 124 of the October, 1990 Professional Engineering Licensure Examination were incorrect, unfair or invalid, or that the examination, and subsequent review, were administered in an arbitrary or capricious manner.
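The two fundamental errors described above, failing to close the traverse under the method of coordinates and expressing the central angle in degrees rather than radians, can be illustrated in miniature; the coordinates and dimensions below are invented and are not those of Question 124:

    import math

    SQFT_PER_ACRE = 43560.0

    def traverse_area_acres(points):
        # Method of coordinates (the shoelace formula). The index wraps
        # around to the first point, closing the traverse, which is the
        # step the hearing officer found Petitioner omitted.
        total = 0.0
        for i in range(len(points)):
            x1, y1 = points[i]
            x2, y2 = points[(i + 1) % len(points)]
            total += x1 * y2 - x2 * y1
        return abs(total) / 2.0 / SQFT_PER_ACRE

    def sector_area_acres(radius_ft, central_angle_deg):
        # Sector area is r**2 * theta / 2 with theta in radians; leaving
        # theta in degrees inflates the result by a factor of about 57.3.
        theta = math.radians(central_angle_deg)
        return radius_ft ** 2 * theta / 2.0 / SQFT_PER_ACRE

    # Hypothetical five-sided traverse, coordinates in feet:
    pentagon = [(0, 0), (600, 100), (800, 500), (400, 800), (-100, 400)]
    print(f"traverse area: {traverse_area_acres(pentagon):.3f} acres")   # about 10.33
    print(f"sector area:   {sector_area_acres(300.0, 40.0):.3f} acres")  # about 0.72

Dropping the wrap-around term reproduces the failure to close the traverse described in Part A, and leaving the angle in degrees multiplies the sector area by roughly 57.3, the sort of impossible result the scorer treated as fundamental rather than mathematical error.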

Recommendation Based upon the foregoing, it is RECOMMENDED: That Respondent enter a Final Order dismissing the Petitioner's challenge to the grading of his response to question 124 on the October 1990 Professional Engineer's Licensure Examination. DONE and ENTERED this 26th day of November, 1991, in Tallahassee, Florida. WILLIAM R. CAVE Hearing Officer Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, FL 32399-1550 (904) 488-9675 Filed with the Clerk of the Division of Administrative Hearings this 26th day of November, 1991. APPENDIX TO RECOMMENDED ORDER The following constitutes my specific rulings pursuant to Section 120.59(2), Florida Statutes, on all of the Proposed Findings of Fact submitted by the parties in the case. Rulings on Proposed Finding of Fact Submitted by the Petitioner Each of the following proposed findings of fact are adopted in substance as modified in the Recommended Order. The number in parenthesis is the Finding(s) of Fact which adopts the proposed finding of fact: 1 (2); 2 (3); 3 (3); 4 (5); 5 (5); 6 (7); 7 (9); 8 (9); 9 (13); 10 (13); 11 (12); 12 (9); 13 (5); 14 (6); 15 (9); 16 (11); 17 (7); 18 (1); 20 (16); 21 (16) and 22 (18). Proposed finding of fact 19 is not supported by substantial competent evidence in the record but see finding of fact 11. Rulings on Proposed Findings of Fact Submitted by the Respondent 1. Each of the following proposed findings of fact are adopted in substance as modified in the Recommended Order. The number in parenthesis is the Finding(s) of Fact which adopts the proposed findings of fact: 1 (2); 2 (3); 3 (4); 4 (5); 5 (6); 6 (6); 7 (7); 8 (8); 9 (9); 10 (12); 11 (11); 12 (15); 13 (9, 16); 14 (17); 15 (18). COPIES FURNISHED: Wellington H. Meffert, II, Esquire Department of Professional Regulation 1940 North Monroe Street, Suite 60 Tallahassee, FL 32399-0792 David W. Persky, Esquire Spicola & Larkin 806 Jackson Street Tampa, FL 33602 Angel Gonzalez, Executive Director Board of Professional Engineers 1940 North Monroe Street, Suite 60 Tallahassee, FL 32399-0792 Jack McRay, General Counsel Department of Professional Regulation 1940 North Monroe Street, Suite 60 Tallahassee, FL 32399-0792

Florida Laws (3) 120.57, 471.013, 471.015