XEROX CORPORATION vs. DEPARTMENT OF GENERAL SERVICES, 80-000553 (1980)
Division of Administrative Hearings, Florida Number: 80-000553 Latest Update: Oct. 10, 1980

Findings Of Fact It is the responsibility of the Department to coordinate the purchase of commodities for all state agencies. The Department annually enters into approximately 140 "term contracts" for commodities, of which nine are multiple awards. A "term contract" does not involve a definite quantity purchase but rather a guarantee that a purchase, if made during the contract period, will be from the term contract at a set price. No bidding is involved since the user agency accepts the terms agreed to in the contract. To implement a decision to establish a term contract, the Department issues an ITB which consists of general conditions, special conditions, and technical specifications. After issuance of an ITB, pre-bid conferences and addenda to the ITB may follow. Vendors respond to the ITB and ultimately a term contract is certified following approval by the Governor and Cabinet. Agencies may acquire from the term contract without prior approval of the Department. Term contracts may be for either a single competitive award or a multiple award. A competitive award involves a single vendor or awardee who is awarded the contract for a specified term on the basis of being the lowest responsive bidder. If the Department receives only one responsive bid, it will award the bid to the single bidder in lieu of making a multiple award. A "multiple award," as used herein, means an award to multiple vendors who are qualified by the Department to furnish a commodity. After a multiple award, the vendors compete for business at the user agency level under the terms of the state contract. A multiple award is often referred to as a "purchasing" or "pricing agreement." The Department utilizes single competitive bids when specifications can be drawn which reflect minimum agency needs and do not discriminate among vendors. The Department has no rules regarding when it will utilize a single or multiple award approach. Instead, this decision is made on a commodity-by-commodity basis. Generally, the Department considers a single award system the preferred approach when technical specifications can be adequately drafted. If the specifications are drafted too broadly, they will not meet specified needs. Similarly, if they are drafted too narrowly, they will unduly restrict a vendor's ability to compete in the bidding process, thereby defeating one of the basic purposes of a single award system. In soliciting competitive bids for copiers, the Department has attempted to standardize specifications to meet basic copying needs. Even though machine specifications may differ, the general specifications by type and class prescribe the minimum requirements a machine must possess to qualify. The standardized specifications are what determine whether a comparison based on price alone is fair. Individual copier machines are not equivalent due to the wide variety of features available from different manufacturers; however, the Department only seeks to acquire machines which meet basic performance specifications. In the specifications set forth in the ITB, the Department has established general categories based upon the type of paper utilized, machine features such as reduction and two-sided copying, and monthly volume ranges of between 1,000 and 60,000 copies per month. Through these specifications, the Department has not attempted to meet every copying need, but only the basic copying requirements of state agencies.
If an agency has a need which is not met by the proposed state contract, the Department may authorize an exception to the contract and the needed copier may be acquired either through sole source or competitive bid, depending on the situation. A basic disagreement between Xerox and the Department involves whether the specifications adequately address user needs. In drafting the specifications for the ITB, the Department utilized information obtained from vendors' quarterly reports, a review of previous years' exceptions, a review of other states' specifications, independent technological publications such as Data-Pro, certification forms completed by the user agencies, and conferences and discussions with user agency personnel and vendors. Each agency purchasing official who testified at the formal hearing stated that the competitive award contract utilized by the Department either met their needs or they were unaware of any needs which were not met by the contract. Witnesses for Savin supported this finding by stating that the specifications met the vast majority of the copier needs which had been identified as being required by government agencies. The Petitioner has failed to demonstrate what actual needs are not capable of being met by the proposed contract in conjunction with the exception process. The competitive award system is not inherently unfair to any vendor. By acquiring copier machines with basic features at the lowest cost, companies such as Xerox, which stress marketing, service, and added features, are placed at a competitive disadvantage against companies such as Savin, which emphasize low-cost, basic copiers. Conversely, under a multiple award system, a secondary vendor such as Savin, without a large established sales force and acknowledged user acceptance, would be placed in a similarly disadvantageous position. If a multiple award contract were utilized in Florida, pricing would be similar to General Services Administration (hereafter "GSA") price lists. These are the prices provided under the federal multiple award program administered by the GSA. Under this system, vendors do not compete generally on the basis of price; a multiple award system merely qualifies a vendor to sell a commodity directly to the user agency once it agrees to offer a set discount from its commercial pricing. Since direct price competition results to a far greater degree under a single award system, prices paid for commodities are generally lower than under a multiple award approach. Savin witnesses corroborated this by testifying that Savin prices submitted to the state under a competitive award approach were less than those submitted to the federal government for GSA contracts. Additionally, at the request of Congress, the General Accounting Office (GAO) reviewed the GSA's multiple award schedule program and issued a report outlining the federal experience with multiple awards. (See DGS Exhibit 3). In compiling its report, the GAO investigated state competitive procurement practices in certain states and the costs associated under both the state and federal models. The report concluded that states which use competitive procurement have been able to obtain significantly lower prices than GSA for identical products. The report also criticized the GSA for its "service oriented" approach to acquisitions which attempts to "... satisfy the unique need of each customer." Instead, the report recommended that GSA ". . .
should balance the interests of the Government as a whole by providing a reasonable range of items to satisfy agency needs through market research and analysis. . ." See DGS Exhibit 3 at 48. The multiple award approach is a more complicated and difficult system to administer because each vendor is permitted to include, as part of its response to an ITB, special terms and conditions and pricing plans. While this allows for more flexibility at the user level, it also creates difficulty for user agencies in evaluating the various plans and prices to determine which products meet needs at the lowest cost. While certain agency purchasing officials express a desire to make copier acquisition decisions at the user level rather than through a centralized procurement agency such as DGS, none of the officials who testified indicated that he or she had copier acquisition experience equal to that found in the Department. In the absence of a standardized contract with comparable pricing plans, it is difficult to ascertain how the actual prices of various copiers can legitimately be compared. Dependability of machinery is a primary factor in the acquisition of copiers. The Department's terms and conditions which specify maintenance and service requirements are critical since a copier must function in order to justify its acquisition. Since copiers are inherently unreliable, a vendor's record on service and maintenance is extremely important in awarding contracts. Although the standardized terms and conditions set forth by DGS in its ITB concerning service and maintenance are accepted by the vendors when they are awarded a bid, it is impossible for vendors in all situations to comply with such standards. Service varies among vendors and dealers. However, if an agency located in an isolated part of the state had a service problem and a critical need for a continually functioning machine, the exception process should be made available to accommodate such a user. Neither the competitive nor the multiple award approach has a clear advantage in the area of service, this being a function more of the individuals repairing the machine than of the type of award utilized. In drafting the technical specifications for the proposed contract, innovative or uncommon features are often excluded from consideration. The Department's goal is to provide specifications based on the "lowest common denominator" of perceived needs. The 1980-1981 draft specifications delineate twenty categories of copiers in four purchase plans, all designed to meet basic copying needs as determined by the Department. Thus, so long as a machine meets the minimum requirements of the technical specifications, it is eligible to compete for an award against all other copiers which also meet such threshold requirements. From a cost standpoint, the machines which come closest to meeting the minimum features should have the best chance of receiving an award. While, in some cases, copiers with added features were the low bidder over machines with fewer features, the result was simply that the state received added features at no extra cost. Xerox has disputed the Department's use of copies per minute and recommended volume ranges as the test for a copier to meet the requirements of Table I of DGS Exhibit 1. While the recommended volume ranges of some equipment were above or below those recommended by Data-Pro, this does not mean that the specifications are arbitrary or erroneous.
Rather, volume ranges should be evaluated on the basis of whether or not they reasonably reflect the volume needs of the user agencies. Other states which have drafted specifications for copiers utilize volume ranges comparable to those used in Florida. Although it is possible that vendors would have an incentive to bid copiers in higher volume ranges than recommended by Data-Pro when copiers are evaluated on the basis of price, this does not necessarily mean that the copier is not capable of performing satisfactorily. Some of the equipment evaluated by Data-Pro has been used successfully at volume ranges in excess of those recommended. Table II of the DGS proposed technical specifications describes the features required for a copier to qualify in the nineteen categories of Group I. Xerox contends that machines with specialized features such as size, console controls, computer self-diagnostics, job recovery and automatic self-cancelling controls, throughput abilities, platen type, toner, paper feed, first copy time, type of document feed and specialized reduction should not be compared to copiers without such features on the basis of price. However, while many of these features may indeed be useful to a user agency, the question is not whether they are useful but rather whether they are necessary to meet an agency need. For example, if an agency has a specific need for a compact copier because of space limitations, the Department could address such need by the exception process. Similarly, if an agency demonstrated a need for a copier with exceptional copy quality, the exception process could again be utilized. The Department's specifications require either a single or multiple automatic document feed. Although an automatic multipage document feed makes a difference in copier performance, no testimony was presented to show that a need was not being met by requiring one of the two types of feeds. The specifications do not distinguish between the sorting capabilities of machines. No distinction is made between a copier with ten bins of sorting and a copier with fifty bins of sorting. This feature, however, must be available as either a built-in or add-on accessory. If a sorter is built into a machine, the machine is then compared to one without a sorter, and the added cost of the built-in sorter is not considered. While this gives an agency the option of adding a sorter at a later date if needed, it also ignores the benefit of the built-in feature during the evaluation process. In order to evaluate the two machines fairly, the cost of the "must be available" feature should be included if a fair cost comparison is to be made between machines which provide built-in sorting and those on which it is a mandatory option. The Department has delineated features affecting performance in the technical specifications. To the extent that features were not delineated, the Department reasonably concluded, based upon its expertise and experience, that the added features, although desirable in many instances, were not necessary. The technical specifications meet the vast majority of user needs, and any specific or unusual need not addressed by the specifications can be met by a user agency through the exception process. Although each copier is unique in terms of the needs it serves, the Department has drafted technical specifications requiring that copiers be compared not on the basis of unique features but on the basis of performance capabilities.
It is possible for the Department to delineate user needs through specifications and permit vendors to market the machines which meet those needs at the lowest price. One of the primary advantages of the multiple award system is that it permits new technologically advanced equipment to enter the marketplace. However, through a workable exception process, equipment can be acquired if such equipment meets a need which is not addressed by the specifications. While each vendor has a variety of pricing plans, no testimony was presented from agency personnel that the state pricing plans, which are limited to monthly rental, annual rental, two-year lease, and outright purchase, failed to adequately address agency pricing plan needs. The Department's use of a median volume figure is generally representative of actual usage. Of necessity, any figure utilized is to some extent arbitrary; however, the median is a reasonable predictor of average usage. In prior years, DGS attempted to average by volume bands in determining the low bidder. This system proved administratively difficult for both the Department and vendors and was replaced by the present system. The cost formula utilized by the Department in its ITB was also challenged as an ineffective way to analyze the costs and productivity of individual copiers. The Petitioner contends that seven elements affecting productivity, including (1) positioning originals; (2) dialing the number of copies; (3) pressing the start button; (4) waiting for the first copy; (5) copying time; (6) replenishing the paper tray; and (7) key operator functions, should be considered by the Department either in whole or in part. Another suggested way of determining productivity would be to time various copying tasks. Additionally, Petitioner asserts that the Department's failure to consider the above factors and its reliance solely on rated machine speed to determine productivity cause labor costs to be understated. The Department's cost formula constitutes a reasonable method of comparing the costs of various copiers. Labor costs will of necessity vary widely depending upon, among other things, the operator, the location of the machine, etc. However, the constant primary cost factors are the cost of the machine and supplies. The productivity analysis suggested by the Respondents would clearly be valuable in high volume copying. In lower volumes, the importance of specifically addressing factors other than machine, labor, and supply costs decreases. In order to be awarded a state contract, a vendor is required to bid either statewide or by district. The state was divided into four districts by the Department to ensure that vendors who receive bids provide service in remote or rural areas. To an extent, this requirement is restrictive and eliminates service vendors who are not capable of providing service to remote areas but who are capable of providing service in and around urban centers. While the district bidding requirement assures service to isolated areas, it has also eliminated otherwise qualified vendors from the bidding process. Although the Department has presented a reasonable basis for dividing the state into four districts for bidding purposes, it is possible that alternatives, such as reducing the size of the districts, would encourage vendor participation in the bidding process and thus strengthen the existing single award system.
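To illustrate the kind of comparison described in these findings, the following is a minimal sketch of how a per-copy cost figure might be computed from machine, supply, and labor costs at a median monthly volume. The function name and all sample figures are hypothetical illustrations and are not the Department's actual ITB formula.

```python
def monthly_copier_cost(machine_rental, supply_cost_per_copy,
                        rated_copies_per_minute, labor_rate_per_hour,
                        median_monthly_volume):
    """Hypothetical per-copy cost comparison in the spirit of the findings.

    Labor is estimated solely from rated machine speed, as the findings
    say the Department did; the seven productivity factors urged by the
    Petitioner (positioning originals, dialing copies, etc.) are ignored.
    """
    supplies = supply_cost_per_copy * median_monthly_volume
    labor_hours = median_monthly_volume / (rated_copies_per_minute * 60)
    labor = labor_hours * labor_rate_per_hour
    total = machine_rental + supplies + labor
    return total / median_monthly_volume  # cost per copy

# Hypothetical comparison of two machines bid in the same category.
machine_a = monthly_copier_cost(300.00, 0.015, 20, 8.00, 10_000)
machine_b = monthly_copier_cost(250.00, 0.020, 15, 8.00, 10_000)
print(f"A: {machine_a:.4f} per copy, B: {machine_b:.4f} per copy")
```

Under this kind of calculation, the machine and supply costs dominate at low volumes, which is consistent with the finding that detailed productivity factors matter mainly in high volume copying.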
In order to ensure that a single award approach meets agency copying needs, it is imperative that a centralized procurement agency have information available by which to assess the actual work performed by the agencies in implementing their statutory responsibilities. Prior to October 1978, Department approval was required before the placement of any copiers. In the fall of 1978, the Governor and Cabinet, sitting as the head of the Department, amended Rule 13A-1.04(1), Florida Administrative Code, to permit agencies to acquire from the state contract without prior approval but with after-the-fact certification. This amendment significantly decentralized purchasing authority and permitted quicker placement of copiers. However, approximately fifty percent of the copier placements have not been certified as required by rule. This failure to provide the Department with copier placement information raises questions in two areas. First, without this information, it will become increasingly difficult for the Department to draft technical specifications which meet agency needs. The Department must obtain the information furnished through the certification process in order to assess the changing responsibilities of an agency. The needs of an agency could change yearly as the legislature reorganizes and creates, transfers and/or abolishes programs in executive departments. Second, the primary advantages of the single award, i.e., lower costs, could easily be circumvented without the certification process. For example, nothing prohibits an agency from acquiring from the contract a higher priced machine with more features than needed. The certification process operates as a check to ensure that agency requirements correspond to the acquisition. Without the post-acquisition certification, the potential for abuse increases and a basic advantage of the single award system is potentially eliminated. Therefore, because of the importance of the certification process to the Department's ability to draft specifications and fulfill its statutory duty to promote economy and efficiency, it is necessary that executive agencies certify their acquisitions to the Department as required by Rule 13A-1.04, Florida Administrative Code. A consistent issue in this proceeding has been whether the Department or user agencies are better equipped to make cost saving decisions regarding copiers. The agency purchasing officials who testified on behalf of the Petitioners were proponents of a decentralized procurement system in which acquisition and management decisions were made at the user level rather than through a centralized agency. Those who testified for the Department believe that centralized procurement is desirable not only from a cost savings standpoint, but also because the Department possesses specialized expertise to make decisions on technical equipment which is lacking in their agencies. Additionally, they preferred that DGS deal with vendors in a highly competitive market such as copiers, where frequent calls from salesmen attempting to place copiers are a standard marketing technique.
However, the agency purchasing officials who support the multiple award concept generally do so not because they believe that their respective agencies possess expertise greater than or equal to the Department's or that their present needs were not being met through the existing contract and exception process, but rather because the multiple award system gives user agencies greater management flexibility and discretion in acquiring a commodity. In actuality, the objections of user agencies are directed more toward the concept of centralized procurement than to the method of procurement utilized. The Department has not promulgated rules to explain when an exception to the contract will be granted or denied. Fewer exceptions are now being granted and the exception process has become more stringent since a personnel change occurred in the Department. However, in order for the single award system to work, some flexibility must be provided through the exception process. As stated previously, user agencies may have unique needs not addressed by the state contract which justify a deviation. Areas such as quality, service and specialized features could present peculiar problems for certain agencies which could be solved through the exception route. The Department's present position regarding exceptions is overly rigid and effectively precludes objective consideration of user agency requests. The Department's decision to utilize multiple or competitive awards is based upon whether specifications can be drawn which do not discriminate against vendors and allow the state to meet its commodity needs. With respect to copiers, the present specifications for 1980-1981, in conjunction with the exception process, do not discriminate among vendors and do meet agency needs. The Department's attempt to draft copier specifications for purposes of a single award system is relatively new and of necessity an evolving process. The trend in recent years among some of the larger states has been to competitively bid copier contracts in some form, either definite quantity or fixed term. It is anticipated that through actual experience and compilation of necessary data, the specifications will also change in order to reflect changing needs. Under the proposed single award system for the 1980-1981 copier contract, the Petitioner will receive fewer awards than in previous years when a multiple award approach was used. Xerox received no awards in 1979-1980 when a single award was used. Thus, the Petitioner can expect fewer copier placements in 1980-1981 as a direct result of the single award contract. Conversely, Savin, which has intentionally structured its marketing practices to compete in a competitive single award system, would place fewer copiers if the Department utilized a multiple award approach for 1980-1981.

Florida Laws (3) 120.57, 287.032, 287.042
# 1
VIRGINIA ANN DASSAW vs DEPARTMENT OF BANKING AND FINANCE, DEPARTMENT OF REVENUE, AND DEPARTMENT OF LOTTERY, 96-001786 (1996)
Division of Administrative Hearings, Florida Filed:Miami, Florida Apr. 12, 1996 Number: 96-001786 Latest Update: Jan. 15, 1999

The Issue The central issue in this case is whether the sum of $1,318.00 should be permanently withheld from Petitioner's lottery winning.

Findings Of Fact The Petitioner, Virginia Ann Dassaw, was formerly known as Virginia Ann Davis. In 1979, Petitioner was charged with a criminal violation of Section 409.325, Florida Statutes, welfare fraud. The information alleged Petitioner had received food stamps which she was not entitled to because financial assistance was not available to her. On May 29, 1979, Petitioner appeared before the court and entered a guilty plea to the charge. As a result of the negotiated plea, Petitioner received two years of unsupervised probation and adjudication was withheld. Petitioner received $1,318.00 in overpayments from the Department of Health and Rehabilitative Services for the period March, 1977 through June, 1978. Such overpayments, monthly assistance payments, were from aid to families with dependent children; benefits Petitioner was not entitled to receive. Petitioner did not believe she was required to repay the overpayment amount since the criminal court did not require restitution as part of the conditions of her probation in connection with the food stamp welfare fraud. Petitioner did not, however, aver that she had repaid the obligation at issue nor did she dispute that she had received an overpayment. She felt that the criminal proceeding had been sufficient to satisfy the question. The order granting probation and fixing terms thereof did not, however, excuse Petitioner from the amount claimed in the instant case. On or about February 26, 1996, Petitioner became a lottery prize winner in the amount of $2,500.00. In conjunction with its claim for the overpayment described above, the Department of Health and Rehabilitative Services notified the Department of the Lottery of its claim for reimbursement from Petitioner's winnings in the amount of $1,318.00. The Department of the Lottery transmitted the winning amount to the Office of the Comptroller. The winning amount, less the claim filed by the Department of Health and Rehabilitative Services, was issued to Petitioner by expense warrant number 4-17 700 616 on March 12, 1996, in the amount of $1,182.00. Petitioner timely contested the amount claimed by the Department of Health and Rehabilitative Services.
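For reference, the disbursed amount recited above is simply the prize less the claimed overpayment:

$$\$2{,}500.00 - \$1{,}318.00 = \$1{,}182.00$$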

Recommendation Based on the foregoing, it is, hereby, RECOMMENDED: That the Department of Banking and Finance, Office of the Comptroller, issue a final order finding the Department correctly reduced Petitioner's lottery prize winning by $1,318.00 and dismissing Petitioner's challenge to the amount disbursed. DONE AND ENTERED this 25th day of October, 1996, in Tallahassee, Leon County, Florida. JOYOUS D. PARRISH Administrative Law Judge Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-3060 (904) 488-9675 SUNCOM 278-9675 Fax Filing (904) 921-6847 Filed with the Clerk of the Division of Administrative Hearings this 25th day of October, 1996. APPENDIX Rulings on the proposed findings of fact submitted by Petitioner: None submitted. Rulings on the proposed findings of fact submitted by the Respondent, Department of Health and Rehabilitative Services: 1. Paragraphs 1 through 6 are accepted. COPIES FURNISHED: Virginia Ann Dassaw 10075 Southwest 170th Terrace Perrine, Florida 33157 Andre L. Williams Assistant District Legal Counsel Department of Health and Rehabilitative Services 401 Northwest Second Avenue, N-1014 Miami, Florida 33128 Josephine A. Schultz Chief Counsel Office of the Comptroller Department of Banking and Finance The Fletcher Building, Suite 526 101 East Gaines Street Tallahassee, Florida 32399-0350 Louisa Warren, Esquire Department of the Lottery 250 Marriott Drive Tallahassee, Florida 32399 Robert F. Milligan Office of the Comptroller Department of Banking and Finance The Capitol, Plaza Level Tallahassee, Florida 32399-0350 Harry Cooper General Counsel Department of Banking and Finance The Capitol, Room 1302 Tallahassee, Florida 32399-0350

Florida Laws (1) 24.115
# 2
LUCKY GRAHAM vs DEPARTMENT OF HEALTH AND REHABILITATIVE SERVICES, 92-003892 (1992)
Division of Administrative Hearings, Florida Filed:Miami, Florida Jun. 25, 1992 Number: 92-003892 Latest Update: Nov. 04, 1993

The Issue At issue in these proceedings is whether petitioner suffers from "retardation," as that term is defined by Section 393.063(41), Florida Statutes, and therefore qualifies for services under Chapter 393, Florida Statutes, the "Developmental Disabilities Prevention and Community Services Act."

Findings Of Fact Petitioner, Lucky Graham (Lucky), was born September 18, 1973, and was, at the time of hearing, 19 years of age. Lucky has resided his entire life with his grandmother, Susie Griggs, in Miami, Dade County, Florida, and has been effectively abandoned by his mother and father. When not attending the Dorsey Skill Center, a program offered by the Dade County Public School system to develop minimal skills necessary to acquire a vocational skill, Lucky spends most of his free time alone in his room, and does not interact socially or play with other children beyond his immediate family. Notwithstanding, Lucky does interact with members of his immediate family; attend family outings; contribute to minor chores around the house such as hanging laundry, washing dishes and mopping floors; maintain himself and his room in a neat manner; and prepare food and drink for himself, at least to some unspecified extent. Lucky cannot, however, without supervision, shop or make change, but can utilize public transportation to and from Dorsey Skill Center without supervision. Lucky's limited social skills are, likewise, apparent at the Dorsey Skill Center, where his interaction with other students is limited. Lucky's functional performance, as opposed to his learning ability, is also apparent from his past performance at school, where it was rated at the first grade level. As such, he is unable to read or write to any significant extent and cannot perform mathematical calculations beyond the most basic addition and subtraction; i.e., he cannot add two-digit numbers that require carrying and cannot perform subtraction that requires borrowing from another number (regrouping). He did, however, complete a vocational training program for auto body repair and was, as of October 8, 1992, and apparently at the time of hearing, enrolled in an auto mechanics program at Dorsey Skill Center. (Tr. p 46, Petitioner's Exhibit 9). The quality of Lucky's performance was not, however, placed of record. Current and past testing administered through the Dade County School System for functional ability (vocational ability), as opposed to learning ability, evidences that Lucky functions on a level comparable to mildly mentally retarded individuals. In this regard, he was found to be impulsive, disorganized and lacking concentration, and to be most appropriately placed in a sheltered workshop environment with direct supervision and below competitive employment capacity. During the course of his life, Lucky has been administered a number of intelligence assessment tests. In July 1977, at age 3 years 10 months, he was administered the Stanford Binet by the University of Miami Child Development Center and achieved an IQ score of 55. Lucky was described as "hesitant in coming into the testing room but . . . fairly cooperative throughout." Thereafter, he was administered the following intellectual assessment instruments by the Dade County Public Schools prior to his eighteenth birthday: in March 1980, at age 6 years 6 months, he was administered the Wechsler Intelligence Scale for Children--Revised (WISC-R) and received a verbal score of 65, a performance score of 55, and a full scale IQ score of 56; and, in October 1984, at age 11 years 1 month, he was administered the WISC-R and received a verbal score of 58, a performance score of 58, and a full scale IQ score of 54. During these testing sessions, Lucky was observed to have been minimally cooperative, with a low frustration level, and highly distractible.
If reliable, such tests would reflect a performance which was two or more standard deviations from the mean, and within the mild range of mental retardation. While not administered contemporaneously with the administration of intellectual assessment instruments, a Vineland Adaptive Behavior Scales (Vineland) was administered to Lucky through the Dade County Public Schools in January 1988, when he was 14 years 4 months. The results of such test reflected an adaptive behavior score of 51, and an age equivalent of 5 years. Such result would indicate a deficit in Lucky's adaptive behavior skills compared with other children his age. On August 8, 1991, pursuant to an order of the Circuit Court, Dade County, Florida, Lucky was evaluated by Walter B. Reid, Ph.D., a clinical psychologist associated with the Metropolitan Dade County Department of Human Resources, Office of Rehabilitative Services, Juvenile Court Mental Health Clinic. Dr. Reid administered the Wechsler Adult Intelligence Scale (WAIS) to Lucky, whose cooperation during such testing was observed to be good, and he achieved a verbal score of 68, a performance score of 70, and a full scale IQ of Dr. Reid concluded that Lucky suffered mild mental retardation and opined: . . . his [Lucky's] abilities should be thoroughly assessed by the Division of Vocational Rehabilitation as it is my opinion . . . this young man can function in a sheltered workshop and live in a group adult facility . . . Plans should be undertaken immediately to get this youth into appropriate training as soon as he gets out of high school in order for him to learn skills that will make it possible for him to work and to learn skills in the area of socialization. This is a pleasant young man, who, in my opinion, has the capability of working and living semi-independently. Thereafter, on August 26, 1991, apparently at the request of the Circuit Court, Juvenile Division, Lucky was assessed by the Department pursuant to the "Developmental Disabilities Prevention and Community Services Act," Chapter 393, Florida Statutes, to determine whether he was eligible for services as a consequence of a disorder or syndrome which was attributable to retardation. The Wechsler Adult Intelligence Scale-Revised (WAIS-R) was administered to Lucky, who was described as cooperative and motivated during the session, and he achieved a verbal score of 71, a performance score of 78, and a full scale IQ of 73. This placed Lucky within the borderline range of intellectual functioning, but not two or more standard deviations from the mean score of the WAIS-R. A subtest analysis revealed strengths in "the putting together" of concrete forms and psychomotor speed. Difficulties were noticed in verbal conceptualization and language abilities. In addition to the WAIS-R, Lucky was also administered the Vineland Adaptive Behavior Scales. He obtained a communication domain standard score of 30, a daily living skills domain standard score of 90, and a socialization domain score of 63. His Adaptive Behavior Composite Score was 56. This score placed Lucky within the moderate range of adaptive functioning. Based on the foregoing testing, the Department, following review by and the recommendation of its Diagnosis and Evaluation Team, advised the court that Lucky was not eligible for services of the Developmental Services Program Office under the category of mental retardation.
The basic reason for such denial was Lucky's failure to test two or more standard deviations from the mean score of the WAIS-R which was administered on August 26, 1991, as well as the failure of the Vineland to reliably reflect a significant deficit in adaptive behavior. Also considered was the questionable reliability of prior testing.1/ Following the Department's denial, a timely request for formal hearing pursuant to Section 120.57(1), Florida Statutes, was filed on behalf of Lucky to review, de novo, the Department's decision. Here, resolution of the issue as to whether Lucky has been shown to suffer from "retardation," as that term is defined by law, discussed infra, resolves itself to a determination of the reliability of the various tests that have been administered to Lucky, as well as the proper interpretation to be accorded those tests. In such endeavor, the testimony of Bill E. Mosman, Ph.D., Psychology, which was lucid, cogent, and credible, has been accorded deference. In the opinion of Dr. Mosman, accepted protocol dictates that an IQ score alone, derived from an intelligence assessment instrument, is not a reliable indicator of mental retardation unless it is a valid, reliable score. Such opinion likewise prevails with regard to adaptive behavior instruments. Here, Dr. Mosman opines that the IQ scores attributable to Lucky are not a reliable indication of mental retardation because Lucky's performance on most of the various parts of the tests reflects a performance level above that ascribed to those suffering retardation. In the opinion of Dr. Mosman, which is credited, the full scale scores ascribed to Lucky were artificially lowered because of his deficiencies in only a few parts of the tests. These deficiencies are reasonably attributable to a learning disability and, to a lesser extent, certain deficits in socialization, and not mental retardation. Consistent with such conclusion is the lack of cooperation and motivation exhibited by Lucky during earlier testing, and the otherwise inexplicable rise in his full scale IQ score over prior testing. Consequently, the test results do not reliably reflect a disorder attributable to retardation. The same opinion prevails regarding Lucky's performance on the adaptive behavior instruments which, when examined by their constituent parts, demonstrates that Lucky scores lower in the areas consistent with learning disabilities as opposed to retardation. In sum, although Lucky may be functioning at a low intelligence level, he is not mentally retarded. 2/
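To put the two-standard-deviation benchmark referenced above in concrete terms: assuming the conventional WAIS-R scaling of a mean of 100 and a standard deviation of 15 (an assumption, since the scaling itself is not stated in the findings), the cutoff works out to

$$100 - 2(15) = 70,$$

so the full scale score of 73 obtained on August 26, 1991, falls above, not at or below, that threshold.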

Recommendation Based on the foregoing findings of fact and conclusions of law, it is RECOMMENDED that a final order be rendered which denies petitioner's application for services for the developmentally disabled under the category of mental retardation. DONE AND ORDERED in Tallahassee, Leon County, Florida, this 10th day of August 1993. WILLIAM J. KENDRICK Hearing Officer Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-1550 (904) 488-9675 Filed with the Clerk of the Division of Administrative Hearings this 10th day of August, 1993.

Florida Laws (3) 120.57, 393.063, 393.065
# 3
DIVISION OF REAL ESTATE vs HOWARD SARVEN WILLIAMS, 98-003520 (1998)
Division of Administrative Hearings, Florida Filed:Shalimar, Florida Aug. 03, 1998 Number: 98-003520 Latest Update: Jul. 15, 2004

The Issue The issue is whether Respondent's license as a real estate salesperson should be disciplined for the reasons given in the Administrative Complaint filed on May 20, 1998.

Findings Of Fact Based upon all of the evidence, the following findings of fact are determined: In this disciplinary action, Petitioner, Department of Business and Professional Regulation, Division of Real Estate (Division), seeks to impose penal sanctions on the license of Respondent, Howard Sarven Williams, a licensed real estate salesperson, on the ground that he failed to disclose that he had pled guilty to a crime when he filed his application for licensure in September 1994. In his Election of Rights Form filed with the Division, Respondent disputed this allegation, contended that his incorrect response "was done with the mistaken belief that it could be answered that way," and requested a formal hearing. Respondent is subject to the regulatory jurisdiction of the Division, having been issued license no. SL 0617682 by the Division in late 1994. The license remained inactive from January 1, 1995, until February 8, 1995; on that date, Respondent became an active salesperson with J.A.S. Coastal Realty, Inc. in Destin, Florida, until June 20, 1998. Between then and December 1998, he had no employing broker. Whether he is currently employed as a realtor is not of record. It is undisputed that on November 9, 1994, Respondent pled no contest to 12 counts of keeping a gambling house, a felony of the third degree. The offenses related to the illicit placement by Respondent (and two other individuals now deceased) of video gambling machines in approximately 10 VFW clubs and American Legion posts in Northwest Florida. On November 10, 1994, the court withheld adjudication of guilt; it placed Respondent on 10 years' supervised probation; and it ordered him to pay a fine and investigative costs totaling in excess of $25,000.00. Respondent was arrested in late 1993. On September 23, 1994, or before he entered his plea of no contest, Respondent completed and filed with the Division an application for licensure as a real estate salesperson. Question 9 on the application asks in part the following: Have you ever been convicted of a crime, found guilty, or entered a plea of guilty or nolo contendere (no contest), even if adjudication was withheld? At the time the application was filled out, Respondent had not yet entered his plea of no contest. Therefore, he properly answered the foregoing question in the negative. Although Respondent was statutorily required to notify the Commission in writing of this matter within 30 days after entering his plea, he has not been charged with violating that statute. The record does not reveal how the Division learned that Respondent had pled no contest to the charges. In any event, in March 1998, or more than three years later, a Division investigator interviewed Respondent who readily admitted that he had pled no contest to the charges, that he was still on probation, and that he was making monthly payments on the substantial fine imposed in 1994. The issuance of the Administrative Complaint followed. Although the evidence does not support the charge, as narrowly drawn in the Administrative Complaint, it should be noted that Respondent says he mistakenly assumed (without the advice of counsel) that because he had pled no contest and adjudication of guilt was withheld, he had not been convicted of a crime. Thus, he believed that his record was clean. At the same time, the plea is a matter of public record, and Respondent did not intend to make a fraudulent statement in order to secure his license.

Recommendation Based on the foregoing Findings of Fact and Conclusions of Law, it is RECOMMENDED that the Florida Real Estate Commission enter a final order dismissing the Administrative Complaint, with prejudice. DONE AND ENTERED this 23rd day of November, 1999, in Tallahassee, Leon County, Florida. DONALD R. ALEXANDER Administrative Law Judge Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-3060 (850) 488-9675 SUNCOM 278-9675 Fax Filing (850) 921-6847 www.doah.state.fl.us Filed with the Clerk of the Division of Administrative Hearings this 23rd day of November, 1999. COPIES FURNISHED: Herbert S. Fecker, Director Division of Real Estate Department of Business and Professional Regulation Post Office Box 1900 Orlando, Florida 32802-1900 Laura McCarthy, Esquire Department of Business and Professional Regulation Post Office Box 1900 Orlando, Florida 32802-1900 Drew S. Pinkerton, Esquire Post Office Box 2379 Fort Walton Beach, Florida 32549-2379 Barbara D. Auger, General Counsel Department of Business and Professional Regulation 1940 North Monroe Street Tallahassee, Florida 32399-0792

Florida Laws (3) 120.569, 120.57, 475.25
# 4
ORVIL OWNBY vs. DEPARTMENT OF TRANSPORTATION AND CAREER SERVICE COMMISSION, 77-000261 (1977)
Division of Administrative Hearings, Florida Number: 77-000261 Latest Update: Jan. 17, 1978

Findings Of Fact Joe Francis, Orvil Ownby, and Roscoe Cleavenger are all permanent Career Service Commission employees with appeal rights to the Career Service Commission. The appellants each filed a timely appeal of their reduction in pay by the Department of Transportation with the Career Service Commission. The parties stipulated to the following facts: The reduction of the pay of the appellants was not a disciplinary action. Under protest, some employees have paid back money allegedly overpaid, and other employees are in the process of paying back money allegedly overpaid. The performance of all the affected employees was rated as satisfactory or above, and no basis existed for any reduction in pay due to unsatisfactory performance. All the affected employees initially had their pay reduced to the "current" maximum salary. Thereafter, those employees who did not elect to pay the money back in a lump sum had their pay reduced by a fixed amount to repay monies allegedly overpaid, or alternatively, the employees have made similar monthly payments by personal check to the State under protest. Exhibits A through E were admitted into the record together with the entire personnel file of each appellant. In 1972, Jay McGlon, then State Personnel Director, authorized employees in the classes of Maintenance Foreman II to be changed from pay class 16 to pay class 17. Similar authorization was given to change Sign Erector Foreman from pay class 16 to pay class 17. Pay class 17 had a pay range of $544.62 to $744.72. This adjustment in pay class was effective November 16, 1972, pursuant to McGlon's letter of authorization. See Exhibit A. In the instant case, the affected employees were being paid a geographical pay differential. When their pay was increased by the difference between the minimum salary of the class of which they had been a member and the minimum salary of the class to which they were raised, their adjusted pay, together with the geographical pay differential, exceeded the maximum pay range of the new class. On October 30, 1975, Conley Kennison, McGlon's successor as State Personnel Director, wrote David Ferguson, personnel officer of the Department of Transportation. This letter was in response to Ferguson's letter of May 23, 1975, requesting retroactive approval of a $16.00 biweekly pay adjustment, effective November 16, 1972, for all Dade County employees in the classes of Highway Maintenance Foreman II and Sign Erector Foreman II. In this letter, Kennison states that the pay increases were not in accordance with the final implementation instructions. However, from the text of this letter, it is unclear whether the instructions referred to relate to the salary increases or the geographical pay differentials discussed in the letter. Kennison, in this letter, denies the request made by Ferguson and directs that steps be initiated to recover the overpayments to employees. Two weeks were given for the Department of Transportation to inform Kennison of the method by which the overpayments would be recovered and of the amounts owed by the individual employees to whom overpayments had allegedly been made. It was determined that Cleavenger owed $971.74, Francis owed $821.30, and Ownby owed $600.01. The Department of Transportation reduced the pay of the affected employees by $16.00 per pay period in order to recover the amount of the overpayment. This reduction occurred effective the first pay period following December 5, 1975.

Recommendation Based upon the foregoing findings of fact and conclusions of law, the Hearing Officer recommends that the Career Service Commission rescind the action taken by the agency, and that all monies collected from the affected employees be returned to them. DONE and ORDERED this 17th day of January, 1978, in Tallahassee, Florida. STEPHEN F. DEAN, Hearing Officer Division of Administrative Hearings Room 530, Carlton Building Tallahassee, Florida 32304 (904) 488-9675 COPIES FURNISHED: Phillip S. Bennett, Esquire, Department of Transportation, Haydon Burns Building, Tallahassee, Florida 32304; Mrs. Dorothy Roberts, Appeals Coordinator, Career Service Commission, 530 Carlton Building, Tallahassee, Florida 32304; Ronald A. Silver, Esquire, 2020 Northeast 163rd Street, S204, North Miami Beach, Florida 33162; Joe Francis, 3830 Day Avenue, Coral Gables, Florida; Rosco Homer Cleavenger, 1901 N.W. 107th Street, Miami, Florida 33167

# 5
CITRUS COUNTY SCHOOL BOARD vs BETH STONE, 13-003340 (2013)
Division of Administrative Hearings, Florida Filed:Istachatta, Florida Sep. 05, 2013 Number: 13-003340 Latest Update: Apr. 14, 2014

The Issue Whether Respondent's employment as a teacher by the Citrus County School Board should be suspended or terminated for the reasons specified in the letter of notification of suspension and termination dated June 17, 2013.

Findings Of Fact Petitioner, Citrus County School Board (School Board or District), is the entity authorized to conduct public education in Citrus County, Florida. Respondent is employed as an instructor by the School Board pursuant to a professional services contract. She has taught third grade at Crystal River Primary School (the School) for seven years. Respondent previously taught in Marion County schools, and has taught school for a total of 22 years. Respondent is active in her community, serving as the choir director at her church and teaching Vacation Bible School. Respondent is also a member of the American Regional auxiliary. STAR Testing The District administers a number of standardized tests to elementary school students. Second- and third-grade students are administered a STAR test four times during the school year. The STAR test is an assessment tool to gauge student growth in reading.1/ The STAR test is given to students at the beginning of the school year (“the first test”), and at the end of the school year (“the last test”). The results of these two tests are compared to measure the students’ growth in reading. Student growth in reading is achieved when a student’s last test score is at least one point higher than his or her first test score.2/ Alternatively, if a student’s first test score is above the mean for the entire grade, growth is achieved, even if the last test score is lower than the first test score, as long as the last test score remains above the mean. The STAR test is also given two additional times during the school year to monitor student progress. The scores on these “interim” tests do not factor into a determination of a student’s reading growth for the school year. School policy instituted in the 2011-2012 school year requires both the first and last administrations of STAR to be conducted in the school’s computer lab under strict guidelines. The tests must be proctored. Both the teacher and the proctor must sign a Test Administration Agreement (Agreement) in which they agree not to engage in activities which may threaten the integrity of the test, such as explaining or reading passages for students, and changing or otherwise interfering with student responses to test items. The Agreement also binds the teacher and the proctor to follow test protocols, including testing only during the designated testing windows for the first and last tests. The District also requires the teacher to read a specific script to students prior to beginning the test. The teacher is prohibited from answering questions from students after the test begins. The “interim” tests are not proctored, but are administered in the school’s computer lab. Second- and third-grade students do not take the Florida Comprehensive Assessment Test (FCAT). One reason for administering STAR under strict guidelines is to prepare these students for the FCAT testing environment. While a student may not be retained in third grade for failing a STAR test, that student may be retained in fourth grade for not passing the FCAT. The record evidence conflicted as to whether 2012-2013 School policy prohibited teachers from administering the STAR test outside of the computer lab under any circumstances. During both the 2011-2012 and 2012-2013 school years, the STAR program was available to teachers on computers in their classrooms. Respondent testified that she and other teachers had given the STAR test in their classrooms.
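The growth rule described in the preceding findings lends itself to a brief illustration. The sketch below is a hypothetical restatement of that rule, not the STAR software's actual logic, and the grade-mean parameter is an assumption since the third-grade mean was never placed into evidence.

```python
def achieved_growth(first_score, last_score, grade_mean=None):
    """Hypothetical restatement of the STAR growth rule in the findings.

    Growth is achieved when the last test score is at least one point
    higher than the first test score, or, alternatively, when the first
    score was above the grade mean and the last score remains above it.
    """
    if last_score >= first_score + 1.0:
        return True
    if grade_mean is not None and first_score > grade_mean:
        return last_score > grade_mean
    return False

# Illustrative calls with made-up scores (not taken from the record):
print(achieved_growth(3.0, 4.2))                  # True: gain of at least one point
print(achieved_growth(3.7, 3.4))                  # False: no gain, grade mean unknown
print(achieved_growth(4.5, 4.1, grade_mean=3.8))  # True: both scores above the mean
```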
Valerie Komara, who runs the school’s computer lab, testified first that teachers had access to and did give STAR tests in their classrooms prior to the 2012-2013 school year:

Q. And the Star test, was that -- to your knowledge was the Star test ever given outside of the lab, even on the progress test?
A. Not this past year, no. No, it has not been.
Q. Meaning, do teachers -- to the best of your knowledge do teachers give the Star test in the classroom at any point in time?
A. Previous to this year, to the year that we’re talking? Yes.
A. I know that it was on their computers, yes, sir. It was available to them.[3/]

When the undersigned asked for clarification, Ms. Komara testified that teachers “were not to test in their room” during either the 2011-2012 or 2012-2013 school years.4/ Ms. Komara’s testimony is not competent substantial evidence on which to find that 2012-2013 School policy prohibited teachers from administering the STAR test in the classroom under any circumstances. Virginia George, the Teacher on Special Assignment (TOSA) in charge of test administration for the school, testified, “we do all our testing in our test tech labs.”5/ However, Ms. George was not aware, until after the events of May 1 and 2, 2013, that STAR was available to teachers on computers in their classrooms.6/ Thus, Ms. George’s testimony as to whether School policy prohibited teachers from giving the STAR test in their classrooms prior to May 1 and 2, 2013, is not reliable. Implementation of STAR has evolved since the 2011-2012 school year. Thus, School policy has been somewhat fluid. In 2011-2012, the School did not administer the first test until October. In subsequent years, the first test has been administered during a narrow test window in late August and early September. During both the 2011-2012 and 2012-2013 school years, STAR was available to, and utilized by, teachers in their classrooms. Following the 2012-2013 school year, the School removed STAR from computers in teachers’ classrooms. During both the 2011-2012 and 2012-2013 school years, STAR was administered to students four times during the school year. Currently, the District administers STAR only twice during the school year -- fall and spring. From the totality of the evidence, the undersigned finds that 2012-2013 School policy did not prohibit teachers from administering STAR to students in their classrooms in addition to the four STAR tests administered in the computer lab. STAR Factor in Teacher Evaluations Beginning with the 2011-2012 school year, the District began using students’ reading growth, based on STAR test results, as a factor in their teachers’ evaluations. Fifty percent of third-grade teachers’ evaluations is based on their students’ reading gains for the given school year. If 80 percent of the students achieve growth, the teacher may be rated either “effective” or “highly effective” on the Student Learning Growth/Performance Data portion (Student Learning Growth) of the evaluation.
If less than 80 percent of the students achieve growth, the teacher may receive a “needs improvement” or “unsatisfactory” rating.7/ The second portion of the evaluation is the Professional Standards portion, in which a school administrator (i.e., principal or assistant principal) rates the teacher based on factors such as the teacher’s leadership, support of the District, design and implementation of lesson plans, class work, and monitoring of student progress, as well as achievement of goals stated in his or her professional development plan. Teachers receive a final rating based on the following matrix, which combines both the Student Learning Growth portion and the Professional Standards portion (columns show the Student Learning Growth/Data Portion rating; rows show the Professional Standards Portion rating; HE = Highly Effective, E = Effective, D/NI = Developing/Needs Improvement, U = Unsatisfactory):

Professional Standards    Growth: HE    Growth: E    Growth: D/NI    Growth: U
HE                        HE            HE/E         E               E/D/NI
E                         HE/E          E            E               D/NI
D/NI                      E             D/NI         D/NI            D/NI
U                         D/NI          D/NI         U               U

If a teacher receives an “unsatisfactory” on the Student Learning Growth portion, and a “highly effective” rating on the Professional Standards portion, he or she may receive an overall rating of either “effective” or “needs improvement.” The school principal has the discretion to assign either rating under that factual scenario. The School has no discretion in assigning ratings for the Student Learning Growth portion. The students’ test results are reported to the District and the District assigns the rating based solely on the test results. The importance of STAR testing significantly increased in 2011-2012 when STAR results became a factor in teacher evaluations. Thus, testing protocols were introduced to protect the integrity of the first and last tests upon which the students’ growth determination is based. Requiring proctors, signed Agreements, and a limited timeframe in which to administer the first and last tests are measures which ensure consistent test conditions and comparable results. These measures likewise ensure fair evaluation of the teachers. Respondent’s Performance Evaluations For the 2011-2012 school year, Respondent received an “unsatisfactory” on the Student Learning Growth portion of her evaluation because less than 80 percent of her students achieved reading growth during the school year. Respondent did not agree with the “unsatisfactory” rating as a fair assessment of her teaching abilities. The first STAR test for the 2011-2012 school year was not given until October. Respondent noted on her evaluation that if growth had been measured from August to May, rather than October to May, she would have met, if not exceeded, the 80 percent growth standard. Respondent received a “highly effective” rating on the Professional Standards portion of the evaluation. Among the glowing comments noted in the Professional Standards portion of Respondent’s 2011-2012 evaluation are the following: Ms. Stone’s care and compassion for her students and classroom is evident in her day-to-day decision making. She is always looking for innovative ways to improve instruction. Her dedication included an active part in our school events. She adapted her learning environment to accommodate the needs of her students. Respondent received an overall “effective” rating for the 2011-2012 school year. No evaluation of Respondent prior to the 2011-2012 school year was introduced into evidence. Nor was any evidence introduced of prior disciplinary action against Respondent by either the School or the District.
For the 2012-2013 school year, Respondent was rated "highly effective" by the principal on the Professional Standards portion. The Student Learning Growth portion was dependent on the outcome of her students' STAR tests.

STAR testing May 1, 2013

On the morning of May 1, 2013, Respondent took her class to the computer lab for administration of the last STAR test of the year. This is the test that would determine her students' reading growth for the year. Valerie Komara runs the School's computer lab, and proctored the STAR test with Respondent that morning. Both Respondent and Ms. Komara signed the Agreement and followed all test protocols. The test was administered without incident. At the conclusion of the test, Ms. Komara generated a growth report and handed it to Respondent. A growth report shows the score of the first and last STAR test for each student in the class, and the calculated growth. Following the test, Respondent took her students to her classroom. After lunch, the students reported to an assembly. From the growth report, Respondent knew that her class did not achieve the 80 percent growth necessary for her to receive an "effective" rating on the Student Learning Growth portion of her evaluation. The evidence was insufficient to determine what percentage of Respondent's students did achieve growth. Of the 16 students in Respondent's class, only 13 took both the first and last proctored STAR test during the 2012-2013 school year.8/ Of those, only three students achieved at least a one point increase in their STAR score. The third-grade mean score was not introduced into evidence. It is impossible to determine how many, if any, of Respondent's students scored above the mean for the third grade such that growth was achieved despite a gain of less than one point. If none of the students' scores was above the mean for the third grade, only 23 percent achieved growth. Based on the preponderance of the evidence, the undersigned finds that the growth percentage for Respondent's class was very low.

Respondent was especially concerned about the scores of three students, K.K., F.F., and E.B. Each of these students' final test score was either lower than, or the same as, their first test score, despite progress having been made on interim tests. K.K.'s final score of 3.4 was lower than the 3.7 she received on the first test, and lower than the 3.6 and 3.9 scores recorded on her successive interim tests. F.F.'s final score of 3.6 was the same as her initial score. F.F. had scored 4.3 and 3.6 on the two interim tests. E.B.'s final score of 2.9 was lower than her initial score of 3.0, and lower than the scores of 3.2 and 3.8 recorded on her successive interim tests. Respondent testified, credibly, that she knew each of these students could do better. Respondent explained her belief that students are tested so often during the school year that they "burn out" by the end of the year and do not perform as they should.

Respondent pulled students K.K., F.F., and E.B. out of the assembly and took them to her classroom. Respondent told the students that they had not scored well on the STAR test that morning, that they were going to take it again, and that they needed to try harder. Respondent seated the students side-by-side at computer terminals, logged them into the STAR program in her classroom, and proceeded to administer the exam. Respondent seated herself behind E.B., who was seated between K.K. and F.F.
When the students completed the STAR test, Respondent dismissed them back to the assembly. Respondent ran a test record report for each of the three students to see whether their scores on the test administered in her classroom that afternoon were higher than their scores from the test given in the computer lab that morning. A test record report shows the date on which each STAR test was taken, as well as the corresponding scores. Respondent was indeed pleased to see that each of the three students' scores had increased. Respondent then ran a new growth report for her entire class and found that these three students' scores from the morning administration of the test had been replaced with the scores from the afternoon test. Respondent testified that she did not expect the growth score for these students to be replaced by the second score and it was not her intent to substitute the scores. She maintained that her intent was to see how well these three students could do when they were taking the test seriously and trying harder. Respondent's testimony was sincere and is accepted as credible. Having seen the growth report, Respondent knew "I screwed up."9/ She was asked why. "[T]esting in Florida is everything," she responded. So true. Respondent panicked. By her own admission, Respondent lied.

The Cover-Up

Respondent immediately prepared the following e-mail message and sent it to Ms. Komara and the principal, Donnie Brown:

From: Stone, Beth
Sent: Wednesday, May 01, 2013 2:57 PM
To: Brown, Donnie; Komara, Valerie
Subject: puzzled

I was looking at my students' STAR test record and there is an extra with today's date for 3 students . . . []. While I certainly like those scores, they are very different from their scores this morning.

Respondent admits this e-mail was deceitful. Ms. Brown was at a District meeting off-site and did not respond to the e-mail. Ms. Komara received the e-mail after 3:00 p.m. on May 1, 2013, and went to Respondent's classroom to speak with her. Respondent lied to Ms. Komara and told her that Respondent had misplaced the growth report Ms. Komara had given Respondent following her students' testing in the computer lab that morning. Respondent used this lie to explain why she had run the second growth report, which allegedly "revealed" the second set of scores for these three students.10/ Ms. Komara was upset because she had signed the Agreement for the morning test and knew that she was responsible, along with Respondent, for ensuring that test protocols were followed. Ms. Komara told Respondent she was going to inform Virginia George, the school's Testing Coordinator. Respondent told Ms. Komara not to worry about it. But, Ms. Komara was very worried. Ms. Komara left Respondent's office to find Ms. George, who was not in her office. Ms. Komara next tried to find Ms. Brown, who was likewise unavailable. Ms. Komara returned to her room at approximately 4:20 p.m. and ran a growth report for Respondent's students. She circled on the report the scores from the three students' last test. Then, Ms. Komara ran a test record on Respondent's students, which showed that two separate tests were given that day to each of the three students. Ms. Komara then realized that the system reported a difference in reading growth for these three students. The following day, May 2, 2013, Respondent went to Ms. George's office before school started. Respondent informed Ms. George that the test record for three of her students showed two STAR tests from May 1, 2013. Respondent asked Ms.
George if she could delete the second set of scores. Ms. George expressed concern over the second set of scores. While Respondent was still in her office, Ms. George began looking at the scores from other classes, trying to determine if a second set of test results were reported for students in other classes. Ms. George was concerned about a flaw in the testing program, a database error, or other system-wide glitch. Respondent informed Ms. George that she had brought the matter to the attention of Ms. Komara as well. Ms. George asked Respondent to stop by the computer lab on the way to her classroom and let Ms. Komara know that Ms. George was going to handle the matter. Respondent left Ms. Komara the following note on her computer: "Val – Virginia is checking into the 2nd tests. She said not to worry. She'll get it taken care of." Respondent hoped that Ms. George had the authority to delete the second set of test scores and that deletion would put the issue to rest. Respondent was wrong.

Ms. George spent the remainder of the school day investigating the origins of the second set of scores and potential system errors. Ms. Komara contacted Jennifer Budden, who handles the STAR database for the school. Ms. Budden contacted Matt Biggs, a District employee involved in testing, to assist in finding out exactly when the second set of test scores was posted. Ms. Budden also contacted the software company directly. During her inquiry, Ms. George discovered that the STAR testing program was available in the classrooms. She had not previously been aware of this. She ran through possible scenarios in her head -- did the three students accidentally log into the program when they returned to class after testing? Did another student log on using their passwords? Ms. George decided that she would have to interview the three students to get to the bottom of the issue.

At the end of the school day on May 2, 2013, Respondent came to see Ms. George again and inquired whether she had been able to delete the second set of scores for the three students. Ms. George explained the investigation she had undertaken that day, the various scenarios she was imagining, and her decision that she must interview the three students the following day. Respondent immediately offered to interview the students herself. Ms. George declined, explaining that it was important that she find out what had happened. After Ms. George made clear that she was going to interview the students, Respondent stated, "I did it." Respondent then explained that she had tested the students again in the afternoon of the previous day because she knew they could have done better. Ms. George then told Respondent they would have to bring Ms. Brown, the principal, into the issue. Ms. George asked Respondent whether Respondent wanted to talk with Ms. Brown herself or if Ms. George should contact her. Respondent indicated she would like to speak to Ms. Brown personally. That evening Ms. George called Ms. Brown, explained the investigation she had undertaken that day and her concern that the system was flawed. Ms. George reported that the matter had been cleared up late in the day by Respondent, who would be coming to see her the following morning. On May 3, 2013, prior to the start of school, Respondent saw Ms. Brown and confessed that she had retested the students, which explained the second set of scores.

Respondent's Intent in Administering the Second Test

The District maintains that Respondent intended to change the students'
scores in the STAR system and that she was motivated by the need to achieve a satisfactory performance on the Student Learning Growth portion of her 2012-2013 evaluation. The District relies upon the following alleged facts: Respondent disagreed with the District rating of "unsatisfactory" on the Student Learning Growth portion of her 2011-2012 evaluation; although she had received a "highly effective" rating on the Professional Standards portion of her 2012-2013 evaluation, an "unsatisfactory" rating on the District portion could result in an overall rating of "needs improvement" rather than "effective" on her 2012-2013 evaluation; Respondent had expressed concern to a fellow third-grade teacher that her class would not achieve 80 percent growth; she retested the three students in her classroom secretively and told the students not to tell anyone; she had to have known that the second set of scores would replace the first ones on the growth report; and, of course, that her series of deceitful acts following the second test were designed to conceal the act of retesting, which she knew to be wrong. These allegations are discussed in turn.

It is true that Respondent could have received an overall "needs improvement" rating on her 2012-2013 evaluation. The same was true for the 2011-2012 evaluation, but Respondent received the "effective" rating. Administration was highly supportive of Respondent's teaching methods and strategies and clearly considered her an asset to the school and her students. Respondent's testimony that she was not in fear of receiving a lower overall rating is accepted as credible. Moreover, Petitioner did not prove that increasing the STAR scores for these three students to the "growth" threshold would have impacted her evaluation at all. Petitioner did not introduce the third-grade mean STAR score, which is the key to determining the percentage of Respondent's students who attained growth. Without that key evidence, the undersigned is left with the conclusion that only 23 percent of Respondent's students achieved growth. Adding three more students to the growth column would result in 46 percent growth -- far short of the 80 percent needed to achieve a "satisfactory" rating from the District. The undersigned finds that Respondent was not motivated by an unattainable goal of 80 percent growth.

Respondent's colleague, Jasmine Welter, testified that Respondent had expressed to her on three different occasions that she was concerned her class would not make the 80 percent goal. However, Ms. Welter also testified that the conversations took place on or near the final testing date and that such conversations among teachers were not unusual as the testing dates approached. This evidence does not demonstrate that Respondent was any more concerned about her students' upcoming STAR performance than any other third-grade teacher. Respondent did retest her students in the classroom, rather than the computer lab, and without the stringent conditions under which the first and last tests are administered. As previously discussed, there was nothing inherently wrong in testing students in the classroom, a fact which was confirmed by the principal, Ms. Brown. The District failed to prove that Respondent told the students not to tell anyone about the retest. Of the three students, one testified that Respondent told them not to tell anyone about the test.
Another testified that Respondent told them not to tell anyone that she gave them lollipops for taking the test and doing better. The third student did not testify concerning the matter at all. Respondent likely did know that the retest scores would replace the morning's STAR test scores on a growth report. However, her testimony that she was not thinking about the growth report at the time is accepted as credible. Respondent's focus was on her students and the potential to increase their performance. This is reflected in the fact that Respondent first ran a test record report, not a growth report, immediately after testing them. Respondent was focused on the individual students' achievement, rather than the overall growth percentage of her class. It was only when she ran the growth report that she realized the morning's test scores had been replaced with the retest scores. Once she realized that, Respondent immediately took steps, however clumsy and surreptitious, to remove the second set of scores and reestablish the morning's growth calculation as the final one to be reported to the District. The preponderance of the evidence does not support a finding that Respondent intended to replace these three students' STAR test results from the morning test with the results from the afternoon test.

Other Issues

The troubling issue with the retesting is the inescapable conclusion that Respondent assisted at least one of the students with the test. E.B. is an exceptional education student under a 504 plan with a special accommodation for testing. E.B. did not take the STAR test on the morning of May 1, 2013, with the rest of Respondent's class, but was given the test in a different setting. Both K.K. and F.F. testified that Respondent helped E.B. with the test that afternoon. K.K. testified Respondent was seated directly behind E.B. and mumbled words to E.B., although she could not make out the words. F.F. testified that Respondent helped E.B. with words on the test, although she could not hear specifically what Respondent was saying. E.B.'s highest STAR score was a 3.8, received on one of the interim tests. E.B.'s score on the retest was an unprecedented 6.1 -- a full 2.3 points higher than her previous highest score. The evidence supports a finding that Respondent assisted E.B. with the test. The evidence raises a question as to whether Respondent also assisted K.K. with the test. K.K. scored a 4.5 on the afternoon test, six-tenths of a point higher than her previous high score of 3.9 on one of the interim tests. Both F.F. and K.K. testified that Respondent did not assist them with the test. E.B. testified that Respondent helped K.K. and F.F. with the test, but only by telling them to re-read the questions they were having difficulty with. The evidence is insufficient to support a finding that Respondent assisted either K.K. or F.F. with the test. The District argues that by assisting students with the test, Respondent violated testing protocols and the Agreement she executed on the morning of May 1, 2013. That argument is not well-taken. Respondent cannot be said to have violated protocols for a test which was not administered for the purpose of official scores. Finally, Respondent's deceitful attempt to have the second set of scores deleted was a clumsy, panicked effort to undo the mess she had made. It cannot be overlooked that it was an attempt to correct her error, not perpetuate inaccurate test results for her own professional gain.
It was wrong to lie, and it was wrong to involve so many professional colleagues in her attempt to have the scores deleted. Respondent should have known that, given the importance of the test results and the protocols surrounding the testing, the matter would not be cleared up by a simple deletion of test scores. While Respondent is to be commended for bringing the ruse to an end before the students were hauled in for questioning, the gesture was too little, too late.

Recommendation

Based on the foregoing Findings of Fact and Conclusions of Law, it is RECOMMENDED: That the Citrus County School Board enter a final order finding Beth Stone guilty of misconduct in office, suspend her employment without pay for a period of 180 school days retroactive to May 24, 2013, and place her on probation for a period of one year.

DONE AND ENTERED this 22nd day of January, 2014, in Tallahassee, Leon County, Florida.

S
SUZANNE VAN WYK
Administrative Law Judge
Division of Administrative Hearings
The DeSoto Building
1230 Apalachee Parkway
Tallahassee, Florida 32399-3060
(850) 488-9675
Fax Filing (850) 921-6847
www.doah.state.fl.us

Filed with the Clerk of the Division of Administrative Hearings this 22nd day of January, 2014.

Florida Laws (9): 1001.02, 1012.22, 1012.33, 120.536, 120.54, 120.569, 120.57, 120.65, 120.68
NATURE'S WAY NURSERY OF MIAMI, INC. vs FLORIDA DEPARTMENT OF HEALTH, AN EXECUTIVE BRANCH AGENCY OF THE STATE OF FLORIDA, 17-005801RE (2017)
Division of Administrative Hearings, Florida Filed: Tallahassee, Florida Oct. 19, 2017 Number: 17-005801RE Latest Update: Apr. 23, 2019

The Issue The issues to be decided are (i) whether Emergency Rule 64ER17-7(1)(b)-(d) constitutes an invalid exercise of delegated legislative authority, and (ii) whether Respondent's scoring methodology, which comprises several policies and procedures for determining the aggregate scores of the nurseries that applied for Dispensing Organization licenses in 2015, constitutes an unadopted rule.

Findings Of Fact BACKGROUND AND PARTIES Respondent Florida Department of Health (the "Department" or "DOH") is the agency responsible for administering and enforcing laws that relate to the general health of the people of the state. The Department's jurisdiction includes the state's medical marijuana program, which the Department oversees. Art. X, § 29, Fla. Const.; § 381.986, Fla. Stat. Enacted in 2014, section 381.986, Florida Statutes (2015) (the "Noneuphoric Cannabis Law"), legalized the use of low-THC cannabis by qualified patients having specified illnesses, such as cancer and debilitating conditions that produce severe and persistent seizures and muscle spasms. The Noneuphoric Cannabis Law directed the Department to select one dispensing organization ("DO") for each of five geographic areas referred to as the northwest, northeast, central, southwest, and southeast regions of Florida. Once licensed, a regional DO would be authorized to cultivate, process, and sell medical marijuana, statewide, to qualified patients. Section 381.986(5)(b), Florida Statutes (2015), prescribed various conditions that an applicant would need to meet to be licensed as a DO, and it required the Department to "develop an application form and impose an initial application and biennial renewal fee." DOH was, further, granted authority to "adopt rules necessary to implement" the Noneuphoric Cannabis Law. § 381.986(5)(d), Fla. Stat. (2015). Accordingly, the Department's Office of Compassionate Use ("OCU"), which is now known as the Office of Medical Marijuana Use, adopted rules under which a nursery could apply for a DO license. Incorporated by reference in these rules is a form of an Application for Low-THC Cannabis Dispensing Organization Approval ("Application"). See Fla. Admin. Code R. 64-4.002 (incorporating Form DH9008-OCU-2/2015). To apply for one of the initial DO licenses, a nursery needed to submit a completed Application, including the $60,063.00 application fee, no later than July 8, 2015.1/ See Fla. Admin. Code R. 64-4.002(5). Petitioner Nature's Way of Miami, Inc. ("Nature's Way"), is a nursery located in Miami, Florida, which grows and sells tropical plants to big box retailers throughout the nation. Nature's Way timely applied to the Department in 2015 for licensure as a DO in the southeast region. THE 2015 DO APPLICATION CYCLE These rule challenges arise from the Department's intended denial of Nature's Way's October 19, 2017, application for registration as a medical marijuana treatment center ("MMTC"), which is the name by which DOs are now known. Nature's Way asserts that it qualifies for licensure as an MMTC because it meets the newly created "One Point Condition," which can be satisfied only by a nursery, such as Nature's Way, whose 2015 application for licensure as a DO was evaluated, scored, and not approved as of the enactment, in 2017, of legislation that substantially overhauled the Noneuphoric Cannabis Law. See Ch. 2017-232, Laws of Fla. The current iteration of section 381.986, in effect as of this writing, will be called the "Medical Marijuana Law." The One Point Condition operates retroactively in that it establishes a previously nonexistent basis for licensure that depends upon pre-enactment events. This is analogous to the legislative creation of a new cause of action, involving as it does the imposition of a new duty (to issue licenses) on the Department and the bestowal of a new right (to become licensed) on former applicants based on their past actions. 
Facts surrounding the inaugural competition under the Noneuphoric Cannabis Law for regional DO licenses are material, therefore, to the determination not only of whether an applicant for licensure as an MMTC under the Medical Marijuana Law meets the One Point Condition, but also of the (in)validity of the emergency rule at issue, and the (il)legality of the agency statements alleged to be rules by definition, upon which the Department relies in applying the One Point Condition. To understand the issues at hand, it is essential first to become familiar with the evaluation and scoring of, and the agency actions with respect to, the applications submitted during the 2015 DO application cycle. The Competitive, Comparative Evaluation As stated in the Application, OCU viewed its duty to select five regional DOs as requiring OCU to choose "the most dependable, most qualified" applicant in each region "that can consistently deliver high-quality" medical marijuana. For ease of reference, such an applicant will be referred to as the "Best" applicant for short. Conversely, an applicant not chosen by OCU as "the most dependable, most qualified" applicant in a given region will be called, simply, "Not Best." Given the limited number of available DO licenses under the Noneuphoric Cannabis Law, the 2015 application process necessarily entailed a competition. As the Application explained, applicants were not required to meet any "mandatory minimum criteria set by the OCU," but would be evaluated comparatively in relation to the "other Applicants" for the same regional license, using criteria "drawn directly from the Statute." Clearly, the comparative evaluation would require the item-by-item comparison of competing applicants, where the "items" being compared would be identifiable factors drawn from the statute and established in advance. Contrary to the Department's current litigating position, however, it is not an intrinsic characteristic of a comparative evaluation that observations made in the course thereof must be recorded using only comparative or superlative adjectives (e.g., least qualified, qualified, more qualified, most qualified).2/ Moreover, nothing in the Noneuphoric Cannabis Law, the Application, or Florida Administrative Code Rule 64-4.002 stated expressly, or necessarily implied, that in conducting the comparative evaluation, OCU would not quantify (express numerically an amount denoting) the perceived margins of difference between competing applications. Quite the opposite is true, in fact, because, as will be seen, rule 64-4.002 necessarily implied, if it did not explicitly require, that the applicants would receive scores which expressed their relative merit in interpretable intervals. Specifically, the Department was required to "substantively review, evaluate, and score" all timely submitted and complete applications. Fla. Admin. Code R. 64-4.002(5)(a). This evaluation was to be conducted by a three-person committee (the "Reviewers"), each member of which had the duty to independently review and score each application. See Fla. Admin. Code R. 64-4.002(5)(b). The applicant with the "highest aggregate score" in each region would be selected as the Department's intended licensee for that region. A "score" is commonly understood to be "a number that expresses accomplishment (as in a game or test) or excellence (as in quality) either absolutely in points gained or by comparison to a standard." See "Score," Merriam-Webster.com, http://www.merriam-webster.com (last visited May 30, 2018). 
Scores are expressed in cardinal numbers, which show quantity, e.g., how many or how much. When used as a verb in this context, the word "score" plainly means "to determine the merit of," or to "grade," id., so that the assigned score should be a cardinal number that tells how much quality the graded application has as compared to the competing applications. The language of the rule leaves little or no doubt that the Reviewers were supposed to score the applicants in a way that quantified the differences between them, rather than with superlatives such as "more qualified" and "most qualified" (or numbers that merely represented superlative adjectives). By rule, the Department had identified the specific items that the Reviewers would consider during the evaluation. These items were organized around five subjects, which the undersigned will refer to as Topics. The five Topics were Cultivation, Processing, Dispensing, Medical Director, and Financials. Under the Topics of Cultivation, Processing, and Dispensing were four Subtopics (the undersigned's term): Technical Ability; Infrastructure; Premises, Resources, Personnel; and Accountability. In the event, the 12 Topic-Subtopic combinations (e.g., Cultivation-Technical Ability, Cultivation-Infrastructure), together with the two undivided Topics (i.e., Medical Director and Financials), operated as 14 separate evaluation categories. The undersigned refers to these 14 categories as Domains. The Department assigned a weight (by rule) to each Topic, denoting the relative importance of each in assessing an applicant's overall merit. The Subtopics, in turn, were worth 25% of their respective Topics' scores, so that a Topic's raw or unadjusted score would be the average of its four Subtopics' scores, if it had them. The 14 Domains and their associated weights are shown in the following table:

CULTIVATION (30%)
1. Cultivation – Technical Ability: 25% out of 30%
2. Cultivation – Infrastructure: 25% out of 30%
3. Cultivation – Premises, Resources, Personnel: 25% out of 30%
4. Cultivation – Accountability: 25% out of 30%
PROCESSING (30%)
5. Processing – Technical Ability: 25% out of 30%
6. Processing – Infrastructure: 25% out of 30%
7. Processing – Premises, Resources, Personnel: 25% out of 30%
8. Processing – Accountability: 25% out of 30%
DISPENSING (15%)
9. Dispensing – Technical Ability: 25% out of 15%
10. Dispensing – Infrastructure: 25% out of 15%
11. Dispensing – Premises, Resources, Personnel: 25% out of 15%
12. Dispensing – Accountability: 25% out of 15%
13. MEDICAL DIRECTOR: 5%
14. FINANCIALS: 20%

If there were any ambiguity in the meaning of the word "score" as used in rule 64-4.002(5)(b), the fact of the weighting scheme removes all uncertainty, because in order to take a meaningful percentage (or fraction) of a number, the number must signify a divisible quantity, or else the reduction of the number, x, to say, 20% of x, will not be interpretable. Some additional explanation here might be helpful. If the number 5 is used to express how much of something we have, e.g., 5 pounds of flour, we can comprehend the meaning of 20% of that value (1 pound of flour). On the other hand, if we have coded the rank of "first place" with the number 5 (rather than, e.g., the letter A, which would be equally functional as a symbol), the meaning of 20% of that value is incomprehensible (no different, in fact, than the meaning of 20% of A).
To be sure, we could multiply the number 5 by 0.20 and get 1, but the product of this operation, despite being mathematically correct (i.e., true in the abstract, as a computational result), would have no contextual meaning. This is because 20% of first place makes no sense. Coding the rank of first place with the misleading symbol of "5 points" would not help, either, because the underlying referent——still a position, not a quantity——is indivisible no matter what symbol it is given.3/ We can take this analysis further. The weighting scheme clearly required that the points awarded to an applicant for each Topic must contribute a prescribed proportionate share both to the applicant's final score per Reviewer, as well as to its aggregate score. For example, an applicant's score for Financials had to be 20% of its final Reviewer scores and 20% of its aggregate score, fixing the ratio of unweighted Financials points to final points (both Reviewer and aggregate) at 5:1. For this to work, a point scale having fixed boundaries had to be used, and the maximum number of points available for the final scores needed to be equal to the maximum number of points available for the raw (unweighted) scores at the Topic level. In other words, to preserve proportionality, if the applicants were scored on a 100-point scale, the maximum final score had to be 100, and the maximum raw score for each of the five Topics needed to be 100, too. The reasons for this are as follows. If there were no limit to the number of points an applicant could earn at the Topic level (like a baseball game), the proportionality of the weighting scheme could not be maintained; an applicant might run up huge scores in lower-weighted Topics, for example, making them proportionately more important to its final score than higher-weighted Topics. Similarly, if the maximum number of points available at the Topic level differed from the maximum number of points available as a final score, the proportionality of the weighting scheme (the prescribed ratios) would be upset, obviously, because, needless to say, 30% of, e.g., 75 points is not equal to 30% of 100 points. If a point scale is required to preserve proportionality, and it is, then so, too, must the intervals between points be the same, for all scores, in all categories, or else the proportionality of the weighting scheme will fail. For a scale to be uniform and meaningful, which is necessary to maintain the required proportionality, the points in it must be equidistant from each other; that is, the interval between 4 and 5, for example, needs to be the same as the interval between 2 and 3, and the distance between 85 and 95 (if the scale goes that high) has to equal that between 25 and 35.4/ When the distances between values are known, the numbers are said to express interval data.5/ Unless the distances between points are certain and identical, the prescribed proportions of the weighting scheme established in rule 64-4.002 will be without meaning. Simply stated, there can be no sense of proportion without interpretable intervals. We cannot say that a 5:1 relationship exists between two point totals (scores) if we have no idea what the distance is between 5 points and 1 point. 
The weighting system thus necessarily implied that the "scores" assigned by the Reviewers during the comparative evaluation would be numerical values (points) that (i) expressed quantity; (ii) bore some rational relationship to the amount of quality the Reviewer perceived in an applicant in relation to the other applicants; and (iii) constituted interval data. In other words, the rule unambiguously required that relative quality be counted (quantified), not merely coded. The Scoring Methodology: Interval Coding In performing the comparative evaluation of the initial applications filed in 2015, the Reviewers were required to use Form DH8007-OCU-2/2015, "Scorecard for Low-THC Cannabis Dispensing Organization Selection" (the "Scorecard"), which is incorporated by reference in rule 64-4.002(5)(a). There are no instructions on the Scorecard. The Department's rules are silent as to how the Reviewers were supposed to score applications using the Scorecard, and they provide no process for generating aggregate scores from Reviewer scores. To fill these gaps, the Department devised several policies that governed its free-form decision-making in the run-up to taking preliminary agency action on the applications. Regarding raw scores, the Department decided that the Reviewers would sort the applications by region and then rank the applications, from best to worst, on a per-Domain basis, so that each Reviewer would rank each applicant 14 times (the "Ranking Policy"). An applicant's raw Domanial score would be its position in the ranking, from 1 to x, where x was both (i) equal to the number of applicants within the region under review and (ii) the number assigned to the rank of first place (or Best). In other words, the Reviewer's judgments as to the descending order of suitability of the competing applicants, per Domain, were symbolized or coded with numbers that the Department called "rank scores," and which were thereafter used as the applicants' raw Domanial scores. To be more specific, in a five-applicant field such as the southeast region, the evaluative judgments of the Reviewers were coded as follows:

Evaluative Judgment / Symbol ("Rank Score")
- Best qualified applicant ("Best"): 5 points
- Less qualified than the best qualified applicant, but better qualified than all other applicants ("Second Best"): 4 points
- Less qualified than two better qualified applicants, but better qualified than all other applicants ("Third Best"): 3 points
- Less qualified than three better qualified applicants, but better qualified than all other applicants ("Fourth Best"): 2 points
- Less qualified than four better qualified applicants ("Fifth Best"): 1 point

The Department's unfortunate decision to code the Reviewers' qualitative judgments regarding positions in rank orders with symbols that look like quantitative judgments regarding amounts of quality led inexorably to extremely misleading results. The so-called "rank scores" give the false impression of interval data, tricking the consumer (and evidently the Department, too) into believing that the distance between scores is certain and the same; that, in other words, an applicant with a "rank score" of 4 is 2 points better than an applicant with a "rank score" of 2. If this deception had been intentional (and, to be clear, there is no evidence it was), we could fairly call it fraud. Even without bad intent, the decision to code positions in ranked series with "scores" expressed as "points" was a colossal blunder that turned the scoring process into a dumpster fire.
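The Rank Scores Policies amount to a mechanical procedure that can be sketched briefly. The function name and applicant labels below are hypothetical; the sketch simply illustrates, under the description above, how a single Reviewer's ordering within one Domain becomes "rank scores."

```python
# Minimal sketch of the Ranking and Interval Coding Policies described above.
# In a field of x applicants, the Best applicant is coded x "points",
# the Second Best x-1, and so on down to 1 for the last-ranked applicant.

def rank_scores(ordering_best_first):
    """ordering_best_first: applicant names in one Reviewer's order of suitability."""
    x = len(ordering_best_first)
    # position 0 (Best) -> x points, position 1 (Second Best) -> x-1 points, etc.
    return {name: x - i for i, name in enumerate(ordering_best_first)}

# A hypothetical five-applicant region, as ordered by one Reviewer for one Domain:
print(rank_scores(["A", "B", "C", "D", "E"]))
# {'A': 5, 'B': 4, 'C': 3, 'D': 2, 'E': 1}
# The numbers only encode positions (ordinal data); nothing in the procedure
# measures how far apart the applicants actually are.
```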
Before proceeding, it must be made clear that an applicant's being ranked Best in a Domain meant only that, as the highest-ranked applicant, it was deemed more suitable, by some unknown margin, than all the others within the group. By the same token, to be named Second Best meant only that this applicant was less good, in some unknown degree, than the Best applicant, and better, in some unknown degree, than the Third Best and remaining, lower-ranked applicants. The degree of difference in suitability between any two applicants in any Domanial ranking might have been a tiny sliver or a wide gap, even if they occupied adjacent positions, e.g., Second Best and Third Best. The Reviewers made no findings with respect to degrees of difference. Moreover, it cannot truthfully be claimed that the interval between, say, Second Best and Third Best is the same as that between Third Best and Fourth Best, for there exists no basis in fact for such a claim. In sum, the Department's Domanial "rank scores" merely symbolized the applicants' positions in sets of ordered applications. Numbers which designate the respective places (ranks) occupied by items in an ordered list are called ordinal numbers. The type of non-metric data that the "rank scores" symbolize is known as ordinal data, meaning that although the information can be arranged in a meaningful order, there is no unit or meter by which the intervals between places in the ranking can be measured. Because it is grossly misleading to refer to positions in a ranking as "scores" counted in "points," the so-called "rank scores" will hereafter be referred to as "Ordinals"——a constant reminder that we are working with ordinal data. This is important to keep in mind because, as will be seen, there are limits on the kinds of mathematical manipulation that can appropriately be carried out with ordinal data. The Department's policy of coding positions in a rank order with "rank scores" expressed as "points" will be called the "Interval Coding Policy." In conducting the evaluation, the Reviewers followed the Ranking Policy and Interval Coding Policy (collectively, the "Rank Scores Policies"). The Computational Methodology: Interval Statements and More Once the Reviewers finished evaluating and coding the applications, the evaluative phase of the Department's free-form process was concluded. The Reviewers had produced a dataset of Domanial Ordinals——42 Domanial Ordinals for each applicant to be exact——that collectively comprised a compilation of information, stored in the scorecards. This universe of Domanial Ordinals will be called herein the "Evaluation Data." The Department would use the Evaluation Data in the next phase of its free-form process as grounds for computing the applicants' aggregate scores. Rule 64-4.002(5)(b) provides that "scorecards from each reviewer will be combined to generate an aggregate score for each application. The Applicant with the highest aggregate score in each dispensing region shall be selected as the region's Dispensing Organization." Notice that the rule here switches to the passive voice. The tasks of (i) "combin[ing]" scorecards to "generate" aggregate scores and of (ii) "select[ing]" regional DOs were not assigned to the Reviewers, whose work was done upon submission of the scorecards. As mentioned previously, the rule does not specify how the Evaluation Data will be used to generate aggregate scores. 
The Department formulated extralegal policies6/ for this purpose, which can be stated as follows: (i) the Ordinals, which in actuality are numeric code for uncountable information content, shall be deemed real (counted) points, i.e., equidistant units of measurement on a 5-point interval scale (the "Deemed Points Policy"); (ii) in determining aggregate scores, the three Reviewer scores will be averaged instead of added together, so that "aggregate score" means "average Reviewer score" (the "Aggregate Definition"); and (iii) the results of mathematical computations used to determine weighted scores at the Reviewer level and, ultimately, the aggregate scores themselves will be carried out to the fourth decimal place (the "Four Decimal Policy"). Collectively, these three policies will be referred to as the "Generation Policies." The Department's "Scoring Methodology" comprises the Rank Scores Policies and the Generation Policies. The Department's computational process for generating aggregate scores operated like this. For each applicant, a Reviewer final score was derived from each Reviewer, using that Reviewer's 14 Domanial Ordinals for the applicant. For each of the subdivided Topics (Cultivation, Processing, and Dispensing), the mean of the Reviewer's four Domanial Ordinals for the applicant (one Domanial Ordinal for each Subtopic) was determined by adding the four numbers (which, remember, were whole numbers as discussed above) and dividing the sum by 4. The results of these mathematical operations were reported to the second decimal place. (The Reviewer raw score for each of the subdivided Topics was, in other words, the Reviewer's average Subtopic Domanial Ordinal.) For the undivided Topics of Medical Director and Financials, the Reviewer raw score was simply the Domanial Ordinal, as there was only one Domanial Ordinal per undivided Topic. The five Reviewer raw Topic scores (per Reviewer) were then adjusted to account for the applicable weight factor. So, the Reviewer raw scores for Cultivation and Processing were each multiplied by 0.30; raw scores for Dispensing were multiplied by 0.15; raw scores (Domanial Ordinals) for Medical Director were multiplied by 0.05; and raw scores (Domanial Ordinals) for Financials were multiplied by 0.20. These operations produced five Reviewer weighted-Topic scores (per Reviewer), carried out (eventually) to the fourth decimal place. The Reviewer final score was computed by adding the five Reviewer weighted-Topic scores. Thus, each applicant wound up with three Reviewer final scores, each reported to the fourth decimal place pursuant to the Four Decimal Policy. The computations by which the Department determined the three Reviewer final scores are reflected (but not shown) in a "Master Spreadsheet"7/ that the Department prepared. Comprising three pages (one for each Reviewer), the Master Spreadsheet shows all of the Evaluation Data, plus the 15 Reviewer raw Topic scores per applicant, and the three Reviewer final scores for each applicant. Therein, the Reviewer final scores of Reviewer 2 and Reviewer 3 were not reported as numbers having five significant digits, but were rounded to the nearest hundredth. To generate an applicant's aggregate score, the Department, following the Aggregate Definition, computed the average Reviewer final score by adding the three Reviewer final scores and dividing the sum by 3. The result, under the Four Decimal Policy, was carried out to the ten-thousandths decimal place.
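Restated as a procedure, the computation described above can be sketched as follows. The sketch is illustrative only: the data structures and names are hypothetical, it treats the Ordinals as points per the Deemed Points Policy, and it omits the intermediate step of reporting Subtopic means to two decimal places.

```python
# Minimal sketch of the aggregate-score computation described above, under the
# Deemed Points Policy, Aggregate Definition, and Four Decimal Policy.

TOPIC_WEIGHTS = {
    "Cultivation": 0.30, "Processing": 0.30, "Dispensing": 0.15,
    "Medical Director": 0.05, "Financials": 0.20,
}

def reviewer_final_score(domanial_ordinals):
    """domanial_ordinals: {topic: [ordinals]} for one Reviewer and one applicant.
    Subdivided Topics carry four Subtopic ordinals; undivided Topics carry one."""
    final = 0.0
    for topic, ordinals in domanial_ordinals.items():
        raw = sum(ordinals) / len(ordinals)      # mean of the Subtopic ordinals
        final += raw * TOPIC_WEIGHTS[topic]      # weighted Topic score
    return final

def aggregate_score(per_reviewer_ordinals):
    """per_reviewer_ordinals: list of three {topic: [ordinals]} dicts (one per Reviewer)."""
    finals = [reviewer_final_score(d) for d in per_reviewer_ordinals]
    return round(sum(finals) / len(finals), 4)   # average, to four decimal places

# One hypothetical applicant in a five-applicant region, with identical scorecards
# from all three Reviewers purely for simplicity:
reviewer = {
    "Cultivation": [5, 4, 4, 5], "Processing": [3, 4, 3, 3],
    "Dispensing": [4, 4, 5, 4], "Medical Director": [2], "Financials": [5],
}
print(aggregate_score([reviewer, reviewer, reviewer]))  # 4.0625
```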
The Department referred to the aggregate score as the "final rank" in its internal worksheets. The Department further assigned a "regional rank" to each applicant, which ordered the applicants, from best to worst, based on their aggregate scores. Put another way, the regional rank was an applicant's Ultimate Ordinal. The Reviewer final scores and the "final ranks" (all carried out to the fourth decimal place), together with the "regional ranks," are set forth in a table the Department has labeled its November 2015 Aggregated Score Card (the "Score Card"). The Score Card does not contain the Evaluation Data. Preliminary Agency Actions Once the aggregate scores had been computed, the Department was ready to take preliminary agency action on the applications. As to each application, the Department made a binary decision: Best or Not Best. The intended action on the applications of the five Best applicants (one per region), which were identified by their aggregate scores (highest per region), would be to grant them. Each of the Not Best applicants, so deemed due to their not having been among the highest scored applicants, would be notified that the Department intended to deny its application. The ultimate factual determination that the Department made for each application was whether the applicant was, or was not, the most dependable, most qualified nursery as compared to the alternatives available in a particular region. Clear Points of Entry Letters dated November 23, 2015, were sent to the applicants informing them either that "your application received the highest score" and thus is granted, or that because "[you were] not the highest scored applicant in [your] region, your application . . . is denied," whichever was the case. The letters contained a clear point of entry, which concluded with the usual warning that the "[f]ailure to file a petition within 21 days shall constitute a waiver of the right to a hearing on this agency action." 8/ (Emphasis added). Nature's Way decided not to request a hearing in 2015, and therefore it is undisputed that the Department's proposed action, i.e., the denial of Nature's Way's application because the applicant was not deemed to be the most dependable, most qualified nursery for purposes of selecting a DO for the southeast region, became final agency action without a formal hearing, the right to which Nature's Way elected to waive. The Department argues that Nature's Way thereby waived, forever and for all purposes, the right to a hearing on the question of whether its aggregate score of 2.8833 and Costa's aggregate score of 4.4000 (highest in the southeast region)——which the Department generated using the Scoring Methodology——are, in fact, true as interval statements of quantity. (Note that if these scores are false as interval data, as Nature's Way contends, then the statement that Costa's score exceeds Nature's Way's score by 1.5167 points is false, also, because it is impossible to calculate a true, interpretable difference (interval) between two values unless those values are expressions of quantified data. Simply put, you cannot subtract Fourth Best from Best.) The Department's waiver argument, properly understood, asserts that Nature's Way is barred by administrative finality from "relitigating" matters, such as the truth of the aggregate scores as quantifiable facts, which were supposedly decided conclusively in the final agency action on its DO application in 2015. 
To successfully check Nature's Way with the affirmative defense of administrative finality, the Department needed to prove that the truth of the aggregate scores, as measurable quantities, was actually adjudicated (or at least judicable) in 2015, so that the numbers 2.8833 and 4.4000 are now incontestably true interval data, such that one figure can meaningfully be subtracted from the other for purposes of applying the One Point Condition. The Department's affirmative defense of collateral estoppel/issue preclusion was rejected in the related disputed-fact proceeding, which is the companion to this litigation, based on the undersigned's determination that the truth of the aggregate scores as statements of fact expressing interval data had never been previously adjudicated as between the Department and Nature's Way. See Nature's Way Nursery of Miami, Inc. v. Dep't of Health, Case No. 18-0721 (Fla. DOAH June 15, 2018). The Ambiguity of the Aggregate Scores There is a strong tendency to look at a number such as 2.8833 and assume that it is unambiguous——and, indeed, the Department is unquestionably attempting to capitalize on that tendency. But numbers can be ambiguous.9/ The aggregate scores are, clearly, open to interpretation. To begin, however, it must be stated up front that there is no dispute about the existence of the aggregate scores. It is an undisputed historical fact, for example, that Nature's Way had a final ranking (aggregate score) of 2.8833 as computed by the Department in November 2015. There is likewise no dispute that Costa's Department-computed aggregate score was 4.4000. In this sense, the scores are historical facts——relevant ones, too, since an applicant needed to have had an aggregate score in 2015 to take advantage of the One Point Condition enacted in 2017. The existence of the scores, however, is a separate property from their meaning. Clearly, the aggregate scores that exist from history purport to convey information about the applicants; in effect, they are statements. The ambiguity arises from the fact that each score could be interpreted as having either of two different meanings. On the one hand, an aggregate score could be understood as a numerically coded non-quantity, namely a rank. In other words, the aggregate scores could be interpreted reasonably as ordinal data. On the other hand, an aggregate score could be understood as a quantified measurement taken in units of equal value, i.e., interval data. In 2015, the Department insisted (when it suited its purposes) that the aggregate scores were numeric shorthand for its discretionary value judgments about which applicants were best suited, by region, to be DOs, reflecting where the applicants, by region, stood in relation to the best suited applicants and to each other. The Department took this position because it wanted to limit the scope of the formal hearings requested by disappointed applicants to reviewing its decisions for abuse of discretion. Yet, even then, the Department wanted the aggregate scores to be seen as something more rigorously determined than a discretionary ranking. Scores such as 2.8833 and 3.2125 plainly connote a much greater degree of precision than "these applicants are less qualified than others."
Indeed, in one formal hearing, the Department strongly implied that the aggregate scores expressed interval data, arguing that they showed "the [Department's position regarding the] order of magnitude" of the differences in "qualitative value" between the applicants, so that a Fourth Best applicant having a score of 2.6458 was asserted to be "far behind" the highest-scored applicant whose final ranking was 4.1042.10/ A ranking, of course, expresses order but not magnitude; interval data, in contrast, expresses both order and magnitude, and it is factual in nature, capable of being true or false. In short, as far as the meaning of the aggregate scores is concerned, the Department has wanted to have it both ways. Currently, the Department is all-in on the notion that the aggregate scores constitute precise interval data, i.e., quantified facts. In its Proposed Recommended Order in Case No. 18-0721,11/ on page 11, the Department argues that "Nature's Way does not meet the within-one-point requirement" because "Nature's Way's Final Rank [aggregate score of 2.8833] is 1.5167 points less than the highest Final Rank [Costa's aggregate score, 4.4000] in its region." This is a straight-up statement of fact, not a value judgment or policy preference. Moreover, it is a statement of fact which is true only if the two aggregate scores being compared (2.8833 and 4.4000), themselves, are true statements of quantifiable fact about the respective applicants. The Department now even goes so far as to claim that the aggregate score is the precise and true number (quantity) of points that an applicant earned as a matter of fact. On page 6 of its Proposed Final Order, the Department states that Costa "earned a Final Rank of 4.4000" and that Nature's Way had an "earned Final Rank of 2.8833." In this view, the scores tell us not that, in the Department's discretionary assignment of value, Costa was better suited to be the DO for the southeast region, but rather that (in a contest, it is insinuated, the Department merely refereed) Costa outscored Nature's Way by exactly 1.5167 points——and that the points have meaning as equidistant units of measurement. The Department is plainly using the aggregate scores, today, as interval statements of quantifiable fact, claiming that Nature's Way "earned" exactly 2.8833 points on a 5-point scale where each point represents a standard unit of measurement, while Costa "earned" 4.4000 points; this, again, is the only way it would be correct to say that Costa was 1.5167 points better than Nature's Way. Indeed, Emergency Rule 64ER17-7 (the "Emergency Rule") purports to codify this interpretation of the aggregate scores——and to declare that the 2015 aggregate scores are true as interval data. ENACTMENT OF THE MEDICAL MARIJUANA LAW Effective January 3, 2017, Article X of the Florida Constitution was amended to include a new section 29, which addresses medical marijuana production, possession, dispensing, and use. Generally speaking, section 29 expands access to medical marijuana beyond the framework created by the Florida Legislature in 2014. To implement the newly adopted constitutional provisions and "create a unified regulatory structure," the legislature enacted the Medical Marijuana Law, which substantially revised section 381.986 during the 2017 Special Session. Ch. 2017-232, § 1, Laws of Fla. Among other things, the Medical Marijuana Law establishes a licensing protocol for ten new MMTCs.
The relevant language of the new statute states: (8) MEDICAL MARIJUANA TREATMENT CENTERS.— (a) The department shall license medical marijuana treatment centers to ensure reasonable statewide accessibility and availability as necessary for qualified patients registered in the medical marijuana use registry and who are issued a physician certification under this section. * * * The department shall license as medical marijuana treatment centers 10 applicants that meet the requirements of this section, under the following parameters: As soon as practicable, but no later than August 1, 2017, the department shall license any applicant whose application was reviewed, evaluated, and scored by the department and which was denied a dispensing organization license by the department under former s. 381.986, Florida Statutes 2014; which had one or more administrative or judicial challenges pending as of January 1, 2017, or had a final ranking within one point of the highest final ranking in its region under former s. 381.986, Florida Statutes 2014; which meets the requirements of this section; and which provides documentation to the department that it has the existing infrastructure and technical and technological ability to begin cultivating marijuana within 30 days after registration as a medical marijuana treatment center. § 381.986, Fla. Stat. (Emphasis added: The underscored provision is the One Point Condition). The legislature granted the Department rulemaking authority, as needed, to implement the provisions of section 381.986(8). § 381.986(8)(k), Fla. Stat. In addition, the legislature authorized the Department to adopt emergency rules pursuant to section 120.54(4), as necessary to implement section 381.986, without having to find an actual emergency, as otherwise required by section 120.54(4)(a). Ch. 2017-232, § 14, at 45, Laws of Fla. IMPLEMENTATION OF THE ONE POINT CONDITION AND ADOPTION OF THE EMERGENCY RULE The One Point Condition went into effect on June 23, 2017. Ch. 2017-232, § 20, Laws of Fla. Thereafter, the Department issued a license to Sun Bulb Nursery (a 2015 DO applicant in the southwest region), because the Department concluded that Sun Bulb's final ranking was within one point of the highest final ranking in the southwest region.12/ Keith St. Germain Nursery Farms ("KSG"), like Nature's Way a 2015 DO applicant for the southeast region, requested MMTC registration pursuant to the One Point Condition in June 2017. In its request for registration, KSG asserted that the One Point Condition is ambiguous and proposed that the Department either calculate the one-point difference based on the regional ranks set forth in the Score Card (KSG was the regional Second Best, coded as Ultimate Ordinal 4) or round off the spurious decimal points in the aggregate scores when determining the one-point difference. The Department preliminarily denied KSG's request for MMTC registration in August 2017. In its notice of intent, the Department stated in part: The highest-scoring entity in the Southeast Region, Costa Nursery Farms, LLC, received a final aggregate score of 4.4000. KSG received a final aggregate score of 3.2125. Therefore, KSG was not within one point of Costa Farms. KSG requested a disputed-fact hearing on this proposed agency action and also filed with DOAH a Petition for Formal Administrative Hearing and Administrative Determination Concerning Unadopted Rules, initiating Keith St. Germain Nursery Farms v. Florida Department of Health, DOAH Case No. 
17-5011RU ("KSG's Section 120.56(4) Proceeding"). KSG's Section 120.56(4) Proceeding, which Nature's Way joined as a party by intervention, challenged the legality of the Department's alleged unadopted rules for determining which of the 2015 DO applicants were qualified for licensure pursuant to the One Point Condition. Faced with the KSG litigation, the Department adopted Emergency Rule 64ER17-3, which stated in relevant part: For the purposes of implementing s. 381.986(8)(a)2.a., F.S., the following words and phrases shall have the meanings indicated: Application – an application to be a dispensing organization under former s. 381.986, F.S. (2014), that was timely submitted in accordance with Rule 64- 4.002(5) of the Florida Administrative Code (2015). Final Ranking – an applicant's aggregate score for a given region as provided in the column titled "Final Rank" within the November 2015 Aggregated Score Card, incorporated by reference and available at [hyperlink omitted], as the final rank existed on November 23, 2015. Highest Final Ranking – the final rank with the highest point value for a given region, consisting of an applicant's aggregate score as provided in the column titled "Final Rank" within the November 2015 Aggregated Score Card, as the final rank existed on November 23, 2015. Within One Point – one integer (i.e., whole, non-rounded number) carried out to four decimal points (i.e., 1.0000) by subtracting an applicant's final ranking from the highest final ranking in the region for which the applicant applied. Qualified 2015 Applicant – an individual or entity whose application was reviewed, evaluated, and scored by the department and that was denied a dispensing organization license under former s. 381.986, F.S. (2014) and either: (1) had one or more administrative or judicial challenges pending as of January 1, 2017; or had a final ranking within one point of the highest final ranking in the region for which it applied, in accordance with Rule 64-4.002(5) of the Florida Administrative Code (2015). The Department admits that not much analysis or thought was given to the development of this rule, which reflected the Department's knee-jerk conclusion that the One Point Condition's use of the term "final ranking" clearly and unambiguously incorporated the applicants' "aggregate scores" (i.e., "final rank" positions), as stated in the Score Card, into the statute. In any event, the rule's transparent purpose was to adjudicate the pending licensing dispute with KSG and shore up the Department's ongoing refusal (in Department of Health Case No. 2017-0232) to grant KSG a formal disputed-fact hearing on the proposed denial of its application. Naturally, the Department took the position that rule 64ER17-3 had settled all possible disputes of material fact, once and for all, as a matter of law. In a surprising about-face, however, on October 26, 2017, the Department entered into a settlement agreement with KSG pursuant to which the Department agreed to register KSG as an MMTC. The Department issued a Final Order Adopting Settlement Agreement with KSG on October 30, 2017. That same day (and in order to effectuate the settlement with KSG), the Department issued the Emergency Rule. The Emergency Rule amends former rule 64ER17-3 to expand the pool of Qualified 2015 Applicants by exactly one, adding KSG——not by name, of course, but by deeming all the regional Second Best applicants to be Within One Point. 
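For illustration only, the arithmetic behind the two alternative tests just described (the subtraction test of rule 64ER17-3 and the regional-rank alternative later added by the Emergency Rule) can be sketched as follows. The figures are those recited in this Order; the function and variable names are assumptions for the sketch, not anything adopted by the Department.

```python
# Illustrative sketch, not the Department's process: the two alternative
# "within one point" tests, using southeast-region figures recited above.

def meets_one_point_condition(aggregate_score, highest_aggregate_in_region,
                              is_regional_second_best):
    # Original test (rule 64ER17-3): the subtraction must not exceed 1.0000.
    score_test = (highest_aggregate_in_region - aggregate_score) <= 1.0000
    # Alternative added by the Emergency Rule: the applicant was the regional Second Best.
    rank_test = is_regional_second_best
    return score_test or rank_test

# KSG: 4.4000 - 3.2125 = 1.1875, so it fails the subtraction test but, as the
# regional Second Best, satisfies the Emergency Rule's added alternative.
print(meets_one_point_condition(3.2125, 4.4000, is_regional_second_best=True))   # True
# Nature's Way: 4.4000 - 2.8833 = 1.5167, and it was not the regional Second Best.
print(meets_one_point_condition(2.8833, 4.4000, is_regional_second_best=False))  # False
```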
Because KSG was the only 2015 applicant ranked Second Best in its region that did not have an aggregate score within one point of its region's Best applicant in accordance with rule 64ER17-3, KSG was the only nursery that could take advantage of the newly adopted provisions. As relevant, the Emergency Rule provides as follows: This emergency rule supersedes the emergency rule 64ER17-3 which was filed and effective on September 28, 2017. For the purposes of implementing s. 381.986(8)(a)2.a., F.S., the following words and phrases shall have the meanings indicated: Application – an application to be a dispensing organization under former s. 381.986, F.S. (2014), that was timely submitted in accordance with Rule 64- 4.002(5) of the Florida Administrative Code (2015). Final Ranking – an applicant's aggregate score for a given region as provided in the column titled "Final Rank" or the applicant's regional rank as provided in the column titled "Regional Rank" within the November 2015 Aggregated Score Card, incorporated by reference and available at [hyperlink omitted], as the final rank existed on November 23, 2015. Highest Final Ranking – the final rank with the highest point value for a given region, consisting of an applicant's aggregate score as provided in the column titled "Final Rank" or the applicant's regional rank as provided in the column titled "Regional Rank" within the November 2015 Aggregated Score Card, as the final rank existed on November 23, 2015. Within One Point – for the aggregate score under the column "Final Rank" one integer (i.e., whole, non-rounded number) carried out to four decimal points (i.e., 1.0000) or for the regional rank under the column "Regional Rank" one whole number difference, by subtracting an applicant's final ranking from the highest final ranking in the region for which the applicant applied. Qualified 2015 Applicant – an individual or entity whose application was reviewed, evaluated, and scored by the department and that was denied a dispensing organization license under former s. 381.986, F.S. (2014) and either: (1) had one or more administrative or judicial challenges pending as of January 1, 2017; or had a final ranking within one point of the highest final ranking in the region for which it applied, in accordance with Rule 64-4.002(5) of the Florida Administrative Code (2015). (Emphasis added). In a nutshell, the Emergency Rule provides that an applicant meets the One Point Condition if either (i) the difference between its aggregate score and the highest regional aggregate score, as those scores were determined by the Department effective November 23, 2015, is less than or equal to 1.0000; or (ii) its regional rank, as determined by the Department effective November 23, 2015, is Second Best. A number of applicants satisfy both criteria, e.g., 3 Boys, McCrory's, Chestnut Hill, and Alpha (northwest region). Some, in contrast, meet only one or the other. Sun Bulb, Treadwell, and Loop's, for example, meet (i) but not (ii). KSG, alone, meets (ii) but not (i). The Department has been unable to come up with a credible, legally cohesive explanation for the amendments that distinguish the Emergency Rule from its predecessor. On the one hand, Christian Bax testified that KSG had persuaded the Department that "within one point" meant, for purposes of the One Point Condition, Second Best (or "second place"), and that this reading represented a reasonable interpretation of a "poorly crafted sentence" using an "unartfully crafted term," i.e., "final ranking." 
On the other hand, the Department argues in its Proposed Final Order (on page 17) that the One Point Condition's "plain language reflects the legislature's intent that the 'second-best' applicant in each region (if otherwise qualified) be licensed as an MMTC." (Emphasis added). Logically, of course, the One Point Condition cannot be both "poorly crafted" (i.e., ambiguous) and written in "plain language" (i.e., unambiguous); legally, it must be one or the other. Put another way, the One Point Condition either must be construed, which entails a legal analysis known as statutory interpretation that is governed by well-known canons of construction and results in a legal ruling declaring the meaning of the ambiguous terms, or it must be applied according to its plain language, if (as a matter of law) it is found to be unambiguous. Obviously, as well, the One Point Condition, whether straightforward or ambiguous, cannot mean both within one point and within one place, since these are completely different statuses. If the statute is clear and unambiguous, only one of the alternatives can be correct; if ambiguous, either might be permissible, but not both simultaneously. By adopting the Emergency Rule, the Department took a position in direct conflict with the notion that the One Point Condition is clear and unambiguous; its reinterpretation of the statute is consistent only with the notion that the statute is ambiguous, and its present attempt to disown that necessarily implicit conclusion is rejected. The irony is that the Department surrendered the high ground of statutory unambiguity, which it initially occupied and stoutly defended, to take up an indefensible position, where, instead of choosing between two arguably permissible, but mutually exclusive, interpretations, as required, it would adopt both interpretations. The only reasonable inference the undersigned can draw from the Department's bizarre maneuver is that the Emergency Rule is not the product of high-minded policy making but rather a litigation tactic, which the Department employed as a necessary step to resolve the multiple disputes then pending between it and KSG. The Emergency Rule was adopted to adjudicate the KSG disputes in KSG's favor, supplanting the original rule that was adopted to adjudicate the same disputes in the Department's favor. THE IRRATIONALITY OF THE SCORING METHODOLOGY The Department committed a gross conceptual error when it decided to treat ordinal data as interval data under its Interval Coding and Deemed Points Policies. Sadly, there is no way to fix this problem retroactively; no formula exists for converting or translating non-metric data such as rankings (which, for the most part, cannot meaningfully be manipulated mathematically) into quantitative data. Further, the defect in the Department's "scoring" process has deprived us of essential information, namely, actual measurements. A Second Look at the Department's Scoring Methodology The Department's Scoring Methodology was described above. Nevertheless, for purposes of explicating just how arbitrary and capricious were the results of this process, and to shed more light on the issues of fact which the Department hopes the Emergency Rule has resolved before they can ever become grounds for a disputed-fact hearing, the undersigned proposes that the way the Department arrived at its aggregate scores be reexamined. It will be recalled that each applicant received 14 Ordinals from each reviewer, i.e., one Ordinal per Domain. 
These will be referred to as Domanial Ordinals. Thus, each applicant received, collectively, 12 Domanial Ordinals apiece for the Main Topics of Cultivation, Processing, and Dispensing; and three Domanial Ordinals apiece for the Main Topics of Medical Director and Financials, for a total of 42 Domanial Ordinals. These five sets of Domanial Ordinals will be referred to generally as Arrays, and specifically as the Cultivation Array, the Processing Array, the Dispensing Array, the MD Array, and the Financials Array. Domanial Ordinals that have been sorted by Array will be referred to, hereafter, as Topical Ordinals. So, for example, the Cultivation Array comprises 12 Topical Ordinals per applicant. A table showing the Arrays of the southeast region applicants is attached as Appendix A. Keeping our attention on the Cultivation Array, observe that if we divide the sum of the 12 Topical Ordinals therein by 12, we will have calculated the mean (or average) of these Topical Ordinals. This value will be referred to as the Mean Topical Ordinal or "MTO." For each applicant, we can find five MTOs, one apiece for the five Main Topics. So, each applicant has a Cultivation MTO, a Processing MTO, and so forth. As discussed, each Main Topic was assigned a weight, e.g., 30% for Cultivation, 20% for Financials. These five weights will be referred to generally as Topical Weights, and specifically as the Cultivation Topical Weight, the Processing Topical Weight, etc. If we reduce, say, the Cultivation MTO to its associated Cultivation Topical Weight (in other words, take 30% of the Cultivation MTO), we will have produced the weighted MTO for the Main Topic of Cultivation. For each applicant, we can find five weighted MTOs ("WMTO"), which will be called specifically the Cultivation WMTO, the Processing WMTO, etc. The sum of each applicant's five WMTOs equals what the Department calls the applicant's aggregate score or final rank. In other words, in the Department's scoring methodology, an MTO is functionally a "Topical raw score" and a WMTO is an "adjusted Topical score" or, more simply, a "Topical subtotal." Thus, we can say, alternatively, that the sum of an applicant's five Topical subtotals equals its DOH-assigned aggregate score. For those in a hurry, an applicant's WMTOs (or Topical subtotals) can be computed quickly by dividing the sum of the Topical Ordinals in each Array by the respective divisors shown in the following table:

Dividend                                               Divisor   Quotient
Sum of the Topical Ordinals in the CULTIVATION Array   ÷ 40      = Cultivation WMTO
Sum of the Topical Ordinals in the PROCESSING Array    ÷ 40      = Processing WMTO
Sum of the Topical Ordinals in the DISPENSING Array    ÷ 80      = Dispensing WMTO
Sum of the Topical Ordinals in the MD Array            ÷ 60      = MD WMTO
Sum of the Topical Ordinals in the FINANCIALS Array    ÷ 15      = Financials WMTO

To advance the discussion, it is necessary to introduce some additional concepts. We have become familiar with the Ordinal, i.e., a number that the Department assigned to code a particular rank (5, 4, 3, 2, or 1).13/ From now on, the symbol ω will be used to represent the value of an Ordinal as a variable. There is another value, which we can imagine as a concept, namely the actual measurement or observation, which, as a variable, we will call x.
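The weighted-average arithmetic just described (MTO, Topical Weight, WMTO, aggregate score) can be illustrated with a short sketch. The Arrays shown below are hypothetical, because Appendix A is not reproduced in this excerpt; only the weights, the Array sizes, and the divisor shortcut come from the Order.

```python
# Minimal sketch of the Department's weighted-average arithmetic described above.
# The Arrays below are hypothetical placeholders; the real southeast-region
# Arrays (Appendix A) are not reproduced in this excerpt.

WEIGHTS = {                      # Topical Weights
    "Cultivation": 0.30,
    "Processing": 0.30,
    "Dispensing": 0.15,
    "Medical Director": 0.05,
    "Financials": 0.20,
}

def aggregate_score(arrays):
    """Sum of the five WMTOs: weight * (sum of Topical Ordinals / count)."""
    total = 0.0
    for topic, ordinals in arrays.items():
        mto = sum(ordinals) / len(ordinals)   # Mean Topical Ordinal
        total += WEIGHTS[topic] * mto         # Weighted MTO (Topical subtotal)
    return round(total, 4)

# Hypothetical applicant: 12 Ordinals for each of the first three Topics, 3 for the last two.
example = {
    "Cultivation":      [4, 3, 4, 3, 3, 4, 2, 3, 3, 4, 2, 3],
    "Processing":       [3, 3, 2, 3, 2, 3, 3, 2, 3, 2, 3, 2],
    "Dispensing":       [4, 3, 4, 4, 3, 4, 4, 3, 4, 4, 3, 4],
    "Medical Director": [5, 3, 2],
    "Financials":       [2, 2, 3],
}
print(aggregate_score(example))
# Equivalent shortcut from the table above: divide each Array's sum by 40, 40, 80, 60, 15.
```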
For our purposes, x is the value that a Reviewer would have reported if he or she had been asked to quantify (to the fourth decimal place) the amount of an applicant's suitability vis-à-vis the attribute in view on a scale of 1.0000 to 5.0000, with 5.0000 being "ideal" and 1.0000 meaning, roughly, "serviceable." This value, x, is a theoretical construct only because no Reviewer actually made any such measurements; such measurements, however, could have been made, had the Reviewers been required to do so. Indeed, some vague idea, at least, of x must have been in each Reviewer's mind every time he or she ranked the applicants, or else there would have been no grounds for the rankings. Simply put, a particular value x can be supposed to stand behind every Topical Ordinal because every Topical Ordinal is a function of x. Unfortunately, we do not know x for any Topical Ordinal. Next, there is the true value of x, to which we will give the symbol µ. This is a purely theoretical notion because it represents the value that would be obtained by a perfect measurement, and there is no perfect measurement of anything, certainly not of relative suitability to serve as an MMTC.14/ Finally, measurements are subject to uncertainty, which can be expressed in absolute or relative terms. The absolute uncertainty expresses the size of the range of values in which the true value is highly likely to lie. A measurement given as 150 ± 0.5 pounds tells us that the absolute uncertainty is 0.5 pounds, and that the true value is probably between 149.5 and 150.5 pounds (150 – 0.5 and 150 + 0.5). This uncertainty can be expressed as a percentage of the measured value, i.e., 150 pounds ± .33%, because 0.5 is .33% of 150. With that background out of the way, let's return to the concept of the mean. The arithmetic mean is probably the most commonly used operation for determining the central tendency (i.e., the average or typical value) of a dataset. No doubt everyone reading this Order, on many occasions, has found the average of, say, four numbers by adding them together and dividing by 4. When dealing with interval data, the mean is interpretable because the interval is interpretable. Where the distance between 4 and 5, for example, is the same as that between 5 and 6, everyone understands that 4.5 is halfway between 4 and 5. As long as we know that 4.5 is exactly halfway between 4 and 5, the arithmetic mean of 4 and 5 (i.e., 4.5) is interpretable. The mean of a set of measurement results gives an estimate of the true value of the measurement, assuming there is no systematic error in the data. The greater the number of measurements, the better the estimate. Therefore, if, for example, we had in this case an Array of xs, then the mean of that dataset (x¯) would approximate µ, especially for the Cultivation, Processing, and Dispensing Arrays, which have 12 observations apiece. If the Department had used x¯ as the Topical raw score instead of the MTO, then its scoring methodology would have been free of systematic error. But the Department did not use x¯ as the Topical raw score. In the event, it had only Arrays of ωs to work with, so when the Department calculated the mean of an Array, it got the average of a set of Ordinals (ω¯), not x¯. Using the mean as a measure of the central tendency of ordinal data is highly problematic, if not impermissible, because the information is not quantifiable.
In this case, the Department coded the rankings with numbers, but the numbers (i.e., the Ordinals), not being units of measurement, were just shorthand for content that must be expressed verbally, not quantifiably. The Ordinals, that is, translate meaningfully only as words, not as numbers, as can be seen in the table at paragraph 27, supra. Because these numbers merely signify order, the distances between them have no meaning; the interval, it follows, is not interpretable. In such a situation, 4.5 does not signify a halfway point between 4 and 5. Put another way, the average of Best and Second Best is not "Second-Best-and-a-half," for the obvious reason that the notion is nonsensical. To give a real-life example, the three Topical Ordinals in Nature's Way's MD Array are 5, 3, and 2. The average of Best, Third Best, and Fourth Best is plainly not "Third-Best-and-a-third," any more than the average of Friday, Wednesday, and Tuesday is Wednesday-and-a-third. For these reasons, statisticians and scientists ordinarily use the median or the mode to measure the central tendency of ordinal data, generally regarding the mean of such data to be invalid or uninterpretable. The median is the middle number, which is determined by arranging the data points from lowest to highest, and identifying the one having the same number of data points on either side (if the dataset contains an odd number of data points) or taking the average of the two data points in the middle (if the dataset contains an even number of data points). The mode is the most frequently occurring number. (If no number repeats, then there is no mode, and if two or more numbers recur with the same frequency, then there are multiple modes.) We can easily compute the medians, modes, and means of the Topical Ordinals in each of the applicants' Arrays. They are set forth in the following table.

Applicant / Main Topic       Median   Mode      Mean
Bill's
  Cultivation (30%)          1        1         1.8333
  Processing (30%)           2        2         1.7500
  Dispensing (15%)           1        1         1.1667
  Medical Director (5%)      2        NA        2.0000
  Financials (20%)           1        1         1.0000
Costa
  Cultivation (30%)          5        5         4.6667
  Processing (30%)           4.5      5         4.1667
  Dispensing (15%)           4        4         4.0000
  Medical Director (5%)      4        4         4.3333
  Financials (20%)           5        5         4.6667
Keith St. Germain
  Cultivation (30%)          4        4         3.4167
  Processing (30%)           4        4         3.2500
  Dispensing (15%)           2        2         2.4167
  Medical Director (5%)      4        NA        3.6667
  Financials (20%)           3        3         3.3333
Nature's Way
  Cultivation (30%)          3        4         3.0833
  Processing (30%)           3        3         2.5833
  Dispensing (15%)           3.5      3         3.6667
  Medical Director (5%)      3        NA        3.3333
  Financials (20%)           2        2         2.3333
Redland
  Cultivation (30%)          2        2         2.2500
  Processing (30%)           3.5      3, 4, 5   3.4167
  Dispensing (15%)           5        5         4.1667
  Medical Director (5%)      2        NA        2.3333
  Financials (20%)           4        NA        3.6667

It so happens that the associated medians, modes, and means here are remarkably similar——and sometimes the same. The point that must be understood, however, is that the respective means, despite their appearance of exactitude when drawn out to four decimal places, tell us nothing more (if, indeed, they tell us anything) than the medians and the modes, namely whether an applicant was typically ranked Best, Second Best, etc. The median and mode of Costa's Cultivation Ordinals, for example, are both 5, the number which signifies "Best." This supports the conclusion that "Best" was Costa's average ranking under Cultivation. The mean of these same Ordinals, 4.6667, appears to say something more exact about Costa, but, in fact, it does not.
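The central-tendency figures in the table can be reproduced with standard library functions. The short sketch below uses Nature's Way's MD Array (5, 3, 2), the one Array whose individual Ordinals are recited in this Order; everything else about the sketch is generic.

```python
# Sketch of the central-tendency calculations tabulated above, using Nature's Way's
# MD Array (5, 3, 2): Best, Third Best, Fourth Best.
from statistics import mean, median, multimode

md_array = [5, 3, 2]
print(median(md_array))             # 3 (the middle rank)
print(multimode(md_array))          # [5, 3, 2] -> no single mode ("NA" in the table)
print(round(mean(md_array), 4))     # 3.3333, but "Third-Best-and-a-third" is not a rank
```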
At most, the mean of 4.6667 tells us only that Costa was typically rated "Best" in Cultivation. (Because there is no cognizable position of rank associated with the fraction 0.6667, the number 4.6667 must be rounded if it is to be interpreted.) To say that 4.6667 means that Costa outscored KSG by 1.2500 "points" in Cultivation, therefore, or that Costa was 37% more suitable than KSG, would be a serious and indefensible error, for these are, respectively, interval and ratio statements, which are never permissible to make when discussing ordinal data. As should by now be clear, ω¯ is a value having limited usefulness, if any, which cannot ever be understood, properly, as an estimate of µ. The Department, regrettably, treated ω¯ as if it were the same as x¯ and, thus, a reasonable approximation of µ, making the grievous conceptual mistakes of using ordinal data to make interval-driven decisions, e.g., whom to select for licensure when the "difference" between applicants was as infinitesimal as 0.0041 "points," as well as interval representations about the differences between applicants, such as, "Costa's aggregate score is 1.5167 points greater than Nature's Way's aggregate score." Due to this flagrant defect in the Department's analytical process, the aggregate scores which the Department generated are hopelessly infected with systematic error, even though the mathematical calculations behind the flawed scores are computationally correct. Dr. Cornew's Solution Any attempt to translate the Ordinals into a reasonable approximation of interval data is bound to involve a tremendous amount of inherent uncertainty. If we want to ascertain the x behind a particular ω, all we can say for sure is that: [(ω – n) + 0.000n] ≤ x ≤ [(ω + a) – 0.000a], where n represents the number of places in rank below ω, and a symbolizes the number of places in rank above ω. The Ordinals of 1 and 5 are partial exceptions, because 1 ≤ x ≤ 5. Thus, when ω = 5, we can say [(ω – n) + 0.000n] ≤ x ≤ 5, and when ω = 1, we can say 1 ≤ x ≤ [(ω + a) – 0.000a]. The table below should make this easier to see.

Lowest Possible Value of x    Ordinal ω    Highest Possible Value of x
1.0004                        5            5.0000
1.0003                        4            4.9999
1.0002                        3            4.9998
1.0001                        2            4.9997
1.0000                        1            4.9996

As will be immediately apparent, all this tells us is that x could be, effectively, any score from 1 to 5——which ultimately tells us nothing. Accordingly, to make any use of the Ordinals in determining an applicant's satisfaction of the One Point Condition, we must make some assumptions, to narrow the uncertainty. Nature's Way's expert witness, Dr. Ronald W. Cornew,15/ offers a solution that the undersigned finds to be credible. Dr. Cornew proposes (and the undersigned agrees) that, for purposes of extrapolating the scores (values of x) for a given applicant, we can assume that the Ordinals for every other applicant are true values (µ) of x, in other words, perfectly measured scores expressing interval data——a heroic assumption in the Department's favor. Under this assumption, if the subject applicant's Ordinal is the ranking of, say, 3, we shall assume that the adjacent Ordinals of the other applicants, 2 and 4, are true quantitative values. This, in turn, implies that the true value of the subject applicant's Ordinal, as a quantified score, is anywhere between 2 and 4, since all we know about the subject applicant is that the Reviewer considered it to be, in terms of relative suitability, somewhere between the applicants ranked Fourth Best (2) and Second Best (4).
If we make the foregoing Department-friendly assumption that the other applicants' Ordinals are µ, then the following is true for the unseen x behind each of the subject applicant's ωs: [(ω – 1) + 0.0001] ≤ x ≤ [(ω + 1) – 0.0001]. The Ordinals of 1 and 5 are, again, partial exceptions. Thus, when ω = 5, we can say 4.0001 ≤ x ≤ 5, and when ω = 1, we can say 1 ≤ x ≤ 1.9999. Dr. Cornew sensibly rounds off the insignificant ten-thousandths of points, simplifying what would otherwise be tedious mathematical calculations, so that:

Lowest Possible Value of x    Ordinal ω    Highest Possible Value of x
4                             5            5
3                             4            5
2                             3            4
1                             2            3
1                             1            2

We have now substantially, albeit artificially, reduced the uncertainty involved in translating ωs to xs. Our assumption allows us to say that x = ω ± 1 except where only negative uncertainty exists (because x cannot exceed 5) and where only positive uncertainty exists (because x cannot be less than 1). It is important to keep in mind, however, that (even with the very generous, pro-Department assumption about other applicants' "scores") the best we can do is identify the range of values within which x likely falls, meaning that the highest values and lowest values are not alternatives; rather, the extrapolated score comprises those two values and all values in between, at once. In other words, if the narrowest statement we can reasonably make is that an applicant's score could be any value between l and h inclusive, where l and h represent the low and high endpoints of the range, then what we are actually saying is that the score is all values between l and h inclusive, because none of those values can be excluded. Thus, in consequence of the large uncertainty about the true values of x that arises from the low-information content of the data available for review, Ordinal 3, for example, translates, from ordinal data to interval data, not to a single point or value, but to a score-set, ranging from 2 to 4 inclusive. Thus, to calculate Nature's Way's aggregate score-set using Dr. Cornew's method, as an example, it is necessary to determine both the applicant's highest possible aggregate score and its lowest possible aggregate score, for these are the endpoints of the range that constitutes the score-set. Finding the high endpoint is accomplished by adding 1 to each Topical Ordinal other than 5, and then computing the aggregate score-set using the mathematical operations described in paragraphs 74 and 75. The following WMTOs (Topical subtotals) are obtained thereby: Cultivation, 1.2250; Processing, 1.0500; Dispensing, 0.6625; MD, 0.2000; and Financials, 0.6667. The high endpoint of Nature's Way's aggregate score-set is the sum of these numbers, or 3.8042.16/ Finding the low endpoint is accomplished roughly in reverse, by subtracting 1 from each Topical Ordinal other than 1, and then computing the aggregate score-set using the mathematical operations described in paragraphs 74 and 75. The low endpoint for Nature's Way works out to 1.9834. Nature's Way's aggregate score-set, thus, is 1.9834-3.8042.17/ This could be written, alternatively, as 2.8938 ± 0.9104 points, or as 2.8938 ± 31.46%. The low and high endpoints of Costa's aggregate score-set are found the same way, and they are, respectively, 3.4000 and 4.8375.18/ Costa's aggregate score-set is 3.4000-4.8375, which could also be written as 4.1188 ± 0.7187 points or 4.1188 ± 17.45%.
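The endpoint arithmetic just described can be sketched generically as follows. The Arrays shown are hypothetical, because Appendix A is not reproduced in this excerpt; the endpoints recited above (Nature's Way 1.9834-3.8042, Costa 3.4000-4.8375) come from the actual Score Card data.

```python
# Generic sketch of Dr. Cornew's endpoint arithmetic: shift every Topical Ordinal
# down (floored at 1) or up (capped at 5) by one place, then recompute the weighted
# average. The Arrays below are hypothetical placeholders.

WEIGHTS = {"Cultivation": 0.30, "Processing": 0.30, "Dispensing": 0.15,
           "Medical Director": 0.05, "Financials": 0.20}

def weighted_average(arrays):
    return sum(w * sum(arrays[t]) / len(arrays[t]) for t, w in WEIGHTS.items())

def score_set(arrays):
    """Return (low endpoint, high endpoint) of the aggregate score-set."""
    low = {t: [max(1, o - 1) for o in a] for t, a in arrays.items()}
    high = {t: [min(5, o + 1) for o in a] for t, a in arrays.items()}
    return round(weighted_average(low), 4), round(weighted_average(high), 4)

hypothetical = {
    "Cultivation":      [4, 3, 4, 3, 3, 4, 2, 3, 3, 4, 2, 3],
    "Processing":       [3, 3, 2, 3, 2, 3, 3, 2, 3, 2, 3, 2],
    "Dispensing":       [4, 3, 4, 4, 3, 4, 4, 3, 4, 4, 3, 4],
    "Medical Director": [5, 3, 2],
    "Financials":       [2, 2, 3],
}
print(score_set(hypothetical))   # (low, high) bracketing the unshifted aggregate score
```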
We can now observe that a score of 2.4000 or more is necessary to satisfy the One Point Condition, and that any score between 2.4000 and 3.8375, inclusive, is both necessary and sufficient to satisfy the One Point Condition. We will call this range (2.4000-3.8375) the Proximity Box. A score outside the Proximity Box on the high end, i.e., a score greater than 3.8375, meets the One Point Condition, of course; however, a score that high, being more than sufficient, is not necessary. Rounding Off the Spurious Digits Remember that the Ordinal 5 does not mean 5 of something that has been counted but the position of 5 in a list of five applicants that have been put in order——nothing more. Recall, too, that there is no interpretable interval between places in a ranking because the difference between 5 and 4 is not the same as that between 4 and 3, etc., and that there is no "second best-and-a-half," which means that taking the average of such numbers is a questionable operation that could easily be misleading if not properly explained. Therefore, as discussed earlier, if the mean of ordinal data is taken, the result must be reported using only as many significant figures as are consistent with the least accurate number, which in this case is one significant figure (whose meaning is only Best, Second Best, Third Best, and so forth). The Department egregiously violated the rule against reliance upon spurious digits, i.e., numbers that lack credible meaning and impart a false sense of accuracy. The Department took advantage of meaningless fractions obtained not by measurement but by mathematical operations, thereby compounding its original error of treating ordinal data as interval data. When the Department says that Nature's Way's aggregate score is 2.8833, it is reporting a number with five significant figures. This number implies that all five figures make sense as increments of a measurement; it implies that the Department's uncertainty about the value is around 0.0001 points——an astonishing degree of accuracy. The trouble is that the aggregate scores, as reported without explanation, are false and deceptive. There is no other way to put it. The Department's reported aggregate scores cannot be rationalized or defended, either, as matters of policy or opinion. This point would be obvious if the Department were saying something more transparent, e.g., that 1 + 1 + 1 + 0 + 0 = 2.8833, for everyone would see the mistake and understand immediately that no policy can change the reality that the sum of three 1s is 3. The falsity at issue is hidden, however, because, to generate each applicant's "aggregate score," the Department started with 42 whole numbers (of ordinal data), each of which is a value from 1 to 5. It then ran the applicant's 42 single-digit, whole number "scores" through a labyrinth of mathematical operations (addition, division, multiplication), none of which improved the accuracy or information content of the original 42 numbers, to produce "aggregate scores" such as 2.8833. This process lent itself nicely to the creation of spreadsheets and tables chock-full of seemingly precise numbers guaranteed to impress.19/ Lacking detailed knowledge (which few people have) about how the numbers were generated, a reasonable person seeing "scores" like 2.8833 points naturally regards them as having substantive value at the microscopic level of ten-thousandths of a point——that's what numbers like that naturally say.
He likely believes that these seemingly carefully calibrated measurements are very accurate; after all, results as finely-tuned as 2.8833 are powerful and persuasive when reported with authority. But he has been fooled. The only "measurement" the Department ever took of any applicant was to rank it Best, Second Best, etc.——a "measurement" that was not, and could not have been, fractional. The reported aggregate scores are nothing but weighted averages of ordinal data, dressed up to appear to be something they are not. Remember, the smallest division on the Reviewers' "scale" (using that word loosely here) was 1 rank. No Reviewer used decimal places to evaluate any portion of any application. The aggregate scores implying precision to the ten-thousandth place were all derived from calculations using whole numbers that were code for a value judgment (Best, Second Best, etc.), not quantifiable information. Therefore, in the reported "aggregate scores," none of the digits to the right of the first (tenths place) decimal digit has any meaning whatsoever; they are nothing but spurious digits introduced by calculations carried out to greater precision than the original data. The first decimal digit, moreover, being immediately to the right of the one (and only) significant figure in the aggregate score, is meaningful (assuming that the arithmetic mean of ordinal data even has interpretable meaning, which is controversial) only as an approximation of 1 (whole) rank. Because there is no meaningful fractional rank, the first decimal must be rounded off to avoid a misrepresentation of the data. Ultimately, the only meaning that can be gleaned from the "aggregate score" of 2.8833 is that Nature's Way's typical (or mean) weighted ranking is 2.8833. Because there is no ranking equivalent to 2.8833, this number, if sense is to be made of it, must be rounded to the nearest ranking, which is 3 (because 2.8 ≈ 3), or Third Best. To report this number as if it means something more than that is to mislead. To make decisions based on the premise that 0.8833 means something other than "approximately one whole place in the ranking" is, literally, irrational——indeed, the Department's insistence that its aggregate scores represent true and meaningful quantities of interval data is equivalent, as a statement of logic, to proclaiming that 1 + 1 = 3, the only difference being that the latter statement is immediately recognizable as a delusion. An applicant could only be ranked 1, 2, 3, 4, or 5——not 2.8833 or 4.4000. Likewise, the only meaning that can be taken from the "aggregate score" of 4.4000 is that Costa's average weighted ranking is 4.4000, a number which, for reasons discussed, to be properly understood, must be rounded to the nearest ranking, i.e., 4. The fraction, four-tenths, representing less than half of a position in the ranking, cannot be counted as approximately one whole (additional) place (because 4.4 ≉ 5). And to treat 0.4000 as meaning four-tenths of a place better than Second Best is absurd. There is no mathematical operation in existence that can turn a number which signifies where in order something is, into one that counts how much of that thing we have. To eliminate the false precision, the spurious digits must be rounded off, which is the established mathematical approach to dealing with numbers that contain uncertainty, as Dr. Cornew credibly confirmed. Rounding to the nearest integer value removes the meaningless figures and eliminates the overprecision manifested by those digits.
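The rounding step described above is simple to state in code. The sketch below is illustrative only; the rank labels follow the 5-to-1 coding of the rankings described earlier, and the function name is an assumption.

```python
# Sketch of the rounding step described above: strip the spurious digits by rounding
# an "aggregate score" to the nearest whole rank before interpreting it.

RANK_LABELS = {5: "Best", 4: "Second Best", 3: "Third Best",
               2: "Fourth Best", 1: "Fifth Best"}  # labels assume the 5-to-1 coding

def nearest_rank(aggregate_score):
    return min(5, max(1, round(aggregate_score)))

for score in (2.8833, 4.4000):
    rank = nearest_rank(score)
    print(score, "->", rank, "(" + RANK_LABELS[rank] + ")")
# 2.8833 -> 3 (Third Best); 4.4000 -> 4 (Second Best)
```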

Florida Laws (9): 120.52, 120.536, 120.54, 120.56, 120.569, 120.57, 120.595, 120.68, 381.986
# 7
FAMILY ARCADE ALLIANCE vs DEPARTMENT OF REVENUE, 91-005338RP (1991)
Division of Administrative Hearings, Florida Filed: Tallahassee, Florida Aug. 23, 1991 Number: 91-005338RP Latest Update: Mar. 17, 1992

The Issue The issues are whether proposed rules 12-18.008, 12A-15.001 and 12A-1.044, Florida Administrative Code, are valid exercises of delegated legislative authority.

Findings Of Fact The Parties The Family Arcade Alliance (Alliance) is a group composed primarily of businesses that operate amusement game machines in the State of Florida which are activated either by token or coin. The parties agree that the Alliance is a substantially affected person as that term is defined in Section 120.54(4)(a), Florida Statutes (1991), and has standing to maintain these proceedings. The Department of Revenue (Department) is the entity of state government charged with the administration of the revenue laws. The Tax and the Implementing Rules Except for the period the services tax was in force, no sales tax had been imposed on charges made for the use of coin-operated amusement machines before the enactment of Chapter 91-112, Laws of Florida, which became effective on July 1, 1991. The Act imposed a 6 percent sales tax on each taxable transaction. Coin-operated amusement machines found in Florida are typical of those machines throughout the United States. The charges for consumer use of the machines are multiples of twenty-five-cent coins, i.e., 25 cents, 50 cents, 75 cents, and one dollar. The sales tax is most often added to the sale price of goods, but it is not practicable for the sellers of all products or services to separately state and collect sales tax from consumers. For example, there is no convenient way separately to collect and account for the sales tax on items purchased from vending machines such as snacks or beverages, or from newspaper racks. For these types of items, a seller reduces the price of the object or service sold, so that the tax is included in the receipts in the vending machine, newspaper rack or, here, the coin-operated amusement machine. There are subtleties in the administration of the sales tax which are rarely noticed. The sales tax due on the purchase of goods or services is calculated at the rate of 6 percent only where the purchase price is a round dollar amount. For that portion of the sales price which is less than a dollar, the statute imposes not a 6 percent tax, but rather a tax computed according to a specific statutory schedule:

Amount above or below a whole dollar amount (cents)    Sales tax statutorily imposed (cents)
1-9                                                    0
10-16                                                  1
17-33                                                  2
34-50                                                  3
51-66                                                  4
67-83                                                  5
84-100                                                 6

Section 212.12(9)(a) through (h), Florida Statutes (1991). In most transactions the effect of the schedule is negligible and the consumer never realizes that the tax rate is greater than 6 percent for the portion of the sales price that is not a round dollar amount. Where a very large percentage of sales comes from transactions of less than a dollar, the statutory schedule for the imposition of the sales tax takes on a greater significance. For transactions above 9 cents and below a dollar, the schedule's effective tax rate is never below the nominal tax rate of 6 percent, and may be as high as 11.76 percent. For example, the 1 cent sales tax on a 10 cent transaction yields an effective tax rate of 10 percent, not 6 percent. Where it is impracticable for businesses in an industry to separately state the tax for each sale, the statutes permit sellers (who are called "dealers" in the language of the statute) to file their tax returns on a gross receipts basis. Rather than add the amount of the tax to each transaction, taxes are presumed to be included in all the transactions and the dealer calculates the tax based on his gross receipts by using the effective tax rate promulgated by the Department in a rule. See Section 212.07(2), Florida Statutes (1991).
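The bracket arithmetic that the findings describe (6 percent on the whole-dollar portion of a sale, plus the scheduled bracket amount on the remaining cents) can be sketched as follows. The function names are illustrative, not statutory.

```python
# Sketch of the bracket computation described above: 6 percent on whole dollars,
# plus the s. 212.12(9) bracket amount on the sub-dollar remainder.

BRACKETS = [            # (upper bound of the remainder in cents, tax in cents)
    (9, 0), (16, 1), (33, 2), (50, 3), (66, 4), (83, 5), (100, 6),
]

def bracket_tax(cents):
    """Tax, in cents, on the sub-dollar portion of a sales price."""
    for upper, tax in BRACKETS:
        if cents <= upper:
            return tax
    raise ValueError("cents must be between 0 and 100")

def sales_tax(price_cents):
    """Total tax in cents: 6% of whole dollars plus the bracket tax on the remainder."""
    dollars, cents = divmod(price_cents, 100)
    return dollars * 6 + bracket_tax(cents)

print(sales_tax(10))    # 1 cent on a 10-cent sale, an effective rate of 10 percent
print(sales_tax(100))   # 6 cents on a $1.00 sale, an effective rate of 6 percent
```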
Businesses also have the option to prove to the Department that in their specific situation the tax due is actually lower than a rule's effective tax rate for the industry, but those businesses must demonstrate the accuracy of their contentions that a lower tax is due. Applying the statutory tax schedule to sales prices which are typical in the amusement game machine industry (which are sometimes referred to as "price points"), the following effective tax rates are generated at each price point:

Total Sales Price    Presumed Selling Price    Presumed Sales Tax    Effective Tax Rate
25 cents             23 cents                  2 cents               8.7%
50 cents             47 cents                  3 cents               6.38%
75 cents             70 cents                  5 cents               7.14%
$1.00                94 cents                  6 cents               6.38%

The determination of an effective tax rate for an industry as a whole also requires the identification of industry gross receipts from each of the price points. Once that effective tax rate is adopted as a rule, the Department treats dealers who pay tax using the effective tax rate as if they had remitted tax on each individual transaction. Proposed Rule 12A-1.044 establishes an industry-wide effective tax rate of 7.81 percent for monies inserted into coin-operated amusement machines or token dispensing machines. For counties with a one-half or one percent surtax, the effective tax rates are 8.38 percent and 8.46 percent respectively. These rates include allowances for multiple plays, i.e., where the consumer deposits multiple coins to activate the machine. Proposed Rule 12A-1.044(1)(b) defines coin-operated amusement machines as: Any machine operated by coin, slug, token, coupon or similar device for the purpose of entertainment or amusement. Amusement machines include, but are not limited to, coin-operated radio and televisions, telescopes, pinball machines, music machines, juke boxes, mechanical games, video games, arcade games, billiard tables, moving picture viewers, shooting galleries, mechanical rides and all similar amusement devices. Proposed Rule 12-18.008 contained a definition of "coin-operated amusement machines" when the rule was first published which was essentially similar, but that rule's nonexclusive list of amusement machines did not include radios, televisions or telescopes. The Department has prepared a notice to be filed with the Joint Administrative Procedures Committee conforming the definitions so they will be identical. The current differences found in the nonexclusive descriptive lists are so slight as to be inconsequential. The Petitioners have failed to prove any confusion or ambiguity resulting from the differences that would impede evenhanded enforcement of the rule. Proposed Rule 12A-15.001 did not contain a separate definition of coin-operated amusement machines. Owners of amusement machines do not always own locations on which to place them. Machine owners may go to landowners and lease the right to place their machines on the landowner's property. The transaction becomes a lease of real property or a license to use real property. Sometimes owners of locations suitable for the placement of amusement machines lease machines from machine owners. Those transactions become leases of tangible personal property. Both transactions are subject to sales tax after July 1, 1991. Proposed rules 12A-1.044(9)(c), (d) and (10)(a), (c) prescribe which party to the leases of real estate or personal property will be responsible to collect, report and remit the tax.
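The effective rates in the price-point table above, and the kind of industry-wide blending that produces a single rate such as 7.81 percent, can be sketched as follows. The revenue shares in the sketch are hypothetical placeholders, not the Department's figures, so the blended number shown is illustrative only.

```python
# Sketch of the per-price-point effective rates tabulated above, plus an
# industry-wide rate weighted by revenue shares. The shares are hypothetical.

price_points = {          # total price (cents) -> (presumed selling price, presumed tax), in cents
    25:  (23, 2),
    50:  (47, 3),
    75:  (70, 5),
    100: (94, 6),
}

for total, (selling, tax) in price_points.items():
    print(total, "cents:", round(100 * tax / selling, 2), "% effective rate")
# 8.7%, 6.38%, 7.14%, 6.38%, matching the table above

hypothetical_revenue_share = {25: 0.60, 50: 0.15, 75: 0.10, 100: 0.15}
blended = sum(share * price_points[p][1] / price_points[p][0]
              for p, share in hypothetical_revenue_share.items())
print(round(100 * blended, 2), "% (illustrative blended industry rate)")
```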
Under subsection 9(d) of proposed rule 12A-1.044, sales tax will not be due on any payment made to an owner of an amusement machine by the owner of the location where that machine is placed if: a) the lease of tangible personalty is written, b) the lease was executed prior to July 1, 1991, and c) the machine involved was purchased by the lessor prior to July 1, 1991. The tax will be effective only upon the expiration or renewal of the written lease. Similarly, proposed 12A-1.044(10)(d) provides that sales tax will not be due on written agreements for the lease of locations to owners of amusement machines if: a) the agreement to rent the space to the machine owner is in writing, and b) was entered into before July 1, 1991. At the termination of the lease agreement, the transaction becomes taxable. Changes to the proposed rules The Department published changes to the proposed rule 12A-1.044(3)(e) on October 18, 1991, which prescribed additional bookkeeping requirements on any amusement machine operators who wished to avoid the effective tax rate established in the proposed rule, and demonstrate instead a lower effective tax rate for their machines. The significant portions of the amendments read: In order to substantiate a lower effective tax rate, an operator is required to maintain books and records which contain the following information: * * * b. For an amusement machine operator, a list identifying each machine by name and serial number, the cost per play on each machine, the total receipts from each machine and the date the receipts are removed from each machine. If an operator establishes a lower effective tax rate on a per vending or amusement machine basis, the operator must also establish an effective tax rate for any machine which produces a higher rate than that prescribed in this rule. Operators using an effective rate other than the applicable tax rate prescribed within this rule must recompute the rate on a monthly basis. (Exhibit 6, pg. 4-5) There was also a change noticed to subsection (e) of the proposed rule 12A-1.044, which reads: (e) For the purposes of this rule, possession of an amusement or vending machine means either actual or constructive possession and control. To determine if a person has constructive possession and control, the following indicia shall be considered: right of access to the machine; duty to repair; title to the machine; risk of loss from damages to the machine; and the party possessing the keys to the money box. If, based on the indicia set out above, the owner of the machine has constructive possession and control, but the location owner has physical possession of the machine, then the operator shall be determined by who has the key to the money box and is responsible for removing the receipts. If both the owner of the machine and the location owner have keys to the money box and are responsible for removing the receipts, then they shall designate in writing who shall be considered the operator. Absent such designation, the owner of the machine shall be deemed to be the operator. (Exhibit 6, pg. 1-2) The Amusement Game Machine Industry All operators must be aware of how much money an amusement machine produces in order to determine whether it should be replaced or rotated to another location when that is possible, for if games are not changed over time, patrons become bored and go elsewhere to play games on machines which are new to them. The sophistication with which operators track machine production varies. 
It is in the economic self-interest of all operators to keep track of the revenues produced by each machine in some way. In general, amusement game machine businesses fall into one of three categories: freestanding independent operators, route vendors, and mall operators. Freestanding independent operators have game arcades located in detached buildings, and offer patrons the use of amusement machines much in the same way that bowling alleys are usually freestanding amusement businesses. Like bowling alleys, they are designed to be destinations to which patrons travel with the specific purpose of recreation or amusement. They are usually independent businesses, not franchises or chains. Route operators place machines individually or in small numbers at other businesses, such as bars or convenience stores. People who use the machines are usually at the location for some other purpose. Those games are maintained on a regular basis by an operator who travels a route from game location to game location. The route operator or the location owner may empty the machine's money box. Mall operators tend to be parts of large chains of amusement game operators who rent store space in regional shopping malls. The mall is the patron's destination, and the game parlor is just one of the stores in the mall. Amusement machines are operated either by coin or by token. About 75 percent of independent amusement game operators use coin-operated machines. About 75 percent of the large chain operators found in malls use tokens. The cost of converting a coin-activated amusement machine to a token-activated amusement machine is about thirty dollars per machine. The mechanism costs $10 to $12; the rest of the cost comes from labor. Token operators must buy an original supply of tokens and periodically replenish that supply. The use of tokens enhances security because it gives the operator better control over its cash and permits the operator to run "promotions," for example, offering 5 rather than 4 tokens for a dollar for a specific period in an attempt to increase traffic in the store. Depending on the number purchased, tokens cost operators between 5 and 10 cents each. Token-activated machines accept only tokens. Coin-operated machines only accept a single denomination of coin. Change machines generally accept quarters and one, five and ten dollar bills. A change machine may be used either to provide players with quarters, which can be used to activate coin-operated machines, or it can be filled with tokens rather than quarters and become a token dispenser. In a token-operated amusement location, the only machines which contain money are the change machines used to dispense tokens. The game machines will contain only tokens. Token machines record the insertion of each coin and bill by an internal meter as each denomination of coin or currency is inserted. Token dispensing machines record their receivables as follows: when one quarter is inserted, the machine records one transaction. When a fifty-cent piece is inserted, the machine records one transaction. When three quarters are inserted, the machine records three transactions. When a dollar bill is inserted, the machine records one transaction. When a five dollar bill is inserted, the machine records one transaction. When a ten dollar bill is inserted, the machine records one transaction. Token machine meters record separately, for each denomination, the total number of times coins or currency of that denomination are deposited in the machine.
The internal meters of token dispensing machines do not distinguish between insertion of several coins or bills by one person and the insertion of singular coins or bills by several persons. Token dispensing machines cannot distinguish the insertion of four quarters by one person on a single occasion from the insertion of one quarter by each of four persons at four different times. Similarly, the internal meters of amusement machines activated by coin rather than by token do not distinguish between insertion of several coins or bills by one person and the insertion of single coins or bills by several persons. Machines which are coin-activated also do not distinguish between the insertion of four quarters by one person at one time or the insertion of one quarter by each of four persons at different times. Coin-operation has certain cost advantages. The operator avoids the cost of switching the machine from coin to token operation, for machines are manufactured to use coins, and avoids the cost of purchasing and replenishing a supply of tokens. The operator does not risk activation of his machine by tokens purchased at another arcade, which have no value to him, and can better take advantage of impulse spending. Coin-operated machines do not have a separate device for collecting tax, and it is not possible for an operator to fit games with machinery to collect an additional two cents on a transaction initiated by depositing a quarter in a machine. There are alternative methods available to operators of amusement game machines to recapture the amount of the new sales tax they may otherwise absorb.1 One is to raise the price of games. This can be done either by setting the machines to produce a shorter play time, or by requiring more quarters or tokens to activate the machines. Raising the prices will not necessarily increase an operator's revenues, because customers of coin-operated amusement businesses usually have a set amount of money budgeted to spend and will stop playing when they have spent that money. In economic terms, consumer demand for amusement play is inelastic. Amusement businesses could also sell tokens over-the-counter, and collect sales tax as an additional charge, much as they would if they sold small food items over the counter, such as candy bars. Over-the-counter sales systems significantly increase labor costs. An amusement business open for 90 hours per week might well incur an additional $30,000 a year in operating costs by switching to an over-the-counter token sales system. In a small coin-operated business, the operator often removes the receipts by emptying the contents of each machine into a larger cup or container, without counting the receipts from each machine separately because it is too time-consuming to do so. But see Finding 17 above. With a token-operated business, the operator can determine the percentage of revenue derived from twenty-five-cent transactions, as distinct from token sales initiated by the insertion of one, five or ten dollar bills into token dispensing machines. The proposed rule has the effect (although it is unintended) of placing the coin-operated amusement operators at a relative disadvantage in computing sales tax when compared to the token-operated businesses. Token operators can establish that they are responsible for paying a tax rate lower than the 7.81 percent effective rate set in the rule because many of their sales are for one dollar, five dollars or ten dollars.
The smaller businesses using coin-operated machines do not have the technological capacity to demonstrate that customers are spending dollars rather than single quarters. Consequently, coin operators will have an incentive to shift to token sales rather than pay the proposed rule's higher effective tax rate if a large percentage of their patrons spend dollars rather than single quarters. For example, Mr. Scott Neslund is an owner of a small business which has 80 amusement machines at a freestanding token-operated location. He is atypical of small amusement game operators because 75 percent of them use coin-operated machines rather than token-operated machines. Mr. Neslund can demonstrate that 92 percent of his sales are for one dollar or more. By applying the tax rate of six percent to those transactions, he pays substantially less than the proposed rule's effective tax rate of 7.81 percent. This is very significant to Mr. Neslund because over the nine years from 1982 to 1990, his average profit margin was 7.77 percent. A flat 6 percent tax would have consumed 73 percent of that profit margin; if his business had been operated on a coin basis, he would have been required to pay the proposed rule's 7.81 percent effective tax rate, which would have consumed 93 percent of his profit margin, leaving him with a very thin profit margin of one-half of one percent. The difference between a one-half of one percent profit margin and a 2 percent profit margin is, on a percentage basis, a four hundred percent difference. Mr. Neslund's average profit annually had been $24,000. The effective tax rate of 7.81 percent would take $22,700 of that amount, leaving an average annual profit of only $1,700. It is impossible to extrapolate from this single example and have confidence in the accuracy of the extrapolation, however. The Department's Effective Tax Rate Study There is no data for the amusement game industry specific to Florida concerning the number of transactions occurring at specified price points, but there is national data available which the Department considered. There is no reason to believe that the Florida amusement game industry is significantly different from the national industry. Nationally, approximately 80 percent of all plays and 60 percent of all revenues come from single quarter (twenty-five-cent) plays. The Department's study used the typical sale prices charged in the industry and the categories of coin-operated amusement games reported in the national survey. Using them, the Department derived an estimate of revenues by type of game for Florida. The effective tax rate the Department derived is the Department's best estimate of the price mix of transactions which occur through amusement machines. It is not itself an issue in this proceeding. Petitioners' counsel specifically agreed that they were not contesting the setting of the effective tax rate at 7.81 percent and presented no evidence that any other effective tax rate should have been set. The Department's Economic Impact Statement Dr. Brian McGavin of the Department prepared in July 1991 paragraphs 2, 3 and 5 of the economic impact statement for the proposed rules (Exhibits 14, 15 and 16), which concluded that proposed rules 12A-15.001, 12-18.008 and 12A-1.044 would have no effect on small businesses. The economic impact statements for all three proposed rules contain identical information and involve the same issues concerning economic impact. Before drafting the economic impact statement published with these rules, Dr.
McGavin had completed one other economic impact statement, had used a small manual which gave a general description of the process for developing economic impact statements, and had discussed the process with another economist, Al Friesen, and his supervisor, Dr. James Francis, the Department's director of tax research. Dr. Francis prepares or reviews more than a dozen economic impact statements annually, and is well aware of the definition of small businesses found in Section 288.703(1), Florida Statutes. Dr. Francis reviewed Dr. McGavin's work and agreed with Dr. McGavin's conclusions. Paragraphs 2, 3 and 5 of the economic impact statements for these rules state: 2. Estimated cost or economic benefits to persons directly affected by the proposed rule. The rule establishes effective tax rates for two categories of machines - 1) amusement machines, 2) vending machines. Amusement machines were not previously taxable (except during the Services tax period). * * * The costs of this rule are primarily compliance costs. The rules establish several compliance provisions: quarterly sales and use tax reports; submission of supporting information for these reports on electronic media; affixation of registration certificates to machines; presentation of certificates by operators to wholesale dealers. The filing requirement is obviously an integral and necessary part of the sales tax collection process . . . . The costs of complying will be borne by operators. If the operators have previously computerized their records, the marginal compliance costs will be negligible. For a small operator who has not computerized his operations, the costs of minimally configured PC systems - including software and training - would be roughly $2,000. This could be a major expense for a small operator . . . . We do not have data which will permit us to estimate the proportion of non-computerized operators in this industry. 3. Effect of the proposed action on competition and on the open market for employment. * * * Given the low labor-intensity of this industry, the overall effect should be very small. * * * 5. Impact of the proposed action on small business firms. Small business firms are not affected by the proposed action. (Exhibits 14, 15 and 16) The Petitioners demonstrated that before Dr. McGavin prepared the economic impact statement he did not read section 120.54 on rulemaking and he did not conduct any industry research or refer to any sources of information on the amusement game industry in Florida or nationally. He did not use any data to calculate or measure economic impact, consult textbooks, or refer to any outside sources or statistical information, nor did he talk with any industry experts or representatives. He did not obtain any information about the industry by distributing questionnaires to those in the industry, nor did he know whether there were differences in day-to-day operations between large and small amusement businesses or the different types of accounting and bookkeeping systems used by small businesses. He had not read Section 288.703, Florida Statutes, which defines a small business. He did not know the impact the 7.81 percent effective tax rate established by the rule would have on small business, and he did not analyze the cost difference businesses experienced between the sale of tokens by machine and the sale of tokens over-the-counter by an employee. To the extent it even entered into Dr. McGavin's thought process, Dr.
McGavin made the general assumption that token sales would either be made over the counter, in which case the sales tax could be separately collected, or possibly by selling fewer tokens per unit of currency. When the Legislature enacted Chapter 91-112, Laws of Florida, and imposed the tax on the use of coin operated amusement machines, it did not provide for any phasing in of the tax, nor for any tiering of the tax based on the size of the taxpayers. Nothing in the language of the statute imposing the tax indicates that the Legislature believed that there was a distinction to be made in the taxation of larger and smaller businesses which provide the same service, viz, use of amusement machines. The Department does permit certain accommodations to businesses which have a small volume of sales. A business may report quarterly rather than monthly if its tax liability is less than $100 for the preceding quarter, and if the tax liability is less than $200 for the previous six months, a dealer may request semiannual reporting periods. Regardless of size, a business with more than one location in a county may file one return. Both of these provisions may lessen the burden of complying with the tax imposed on the use of coin-operated amusement machines. The Economic Impact Analysis Performed For The Challengers By Dr. Elton Scott Dr. Elton Scott is an economist and a professor at the Florida State University. The Petitioners engaged him to evaluate the economic impact statement the Department had prepared when these proposed rules were published. After conducting his own analysis, Dr. Scott wrote a report in which he determined that the Department's economic impact statement was deficient. According to Dr. Scott, one must understand an industry to determine whether an economic impact flows from a regulation and to determine the magnitude of any impact or the differential impact the regulation may have on large and small businesses. To prepare his own economic impact analysis, Dr. Scott first obtained information about the operational characteristics of the industry by speaking directly with a handful of industry members. He developed a questionnaire that tested the experience and background of operators so that he could evaluate the reliability or accuracy of information he received from them. He then asked additional questions about the operators' individual businesses and questions about differences between large and small operators within the industry. Dr. Scott's testimony outlines the factors which should be used to make an economic impact statement as useful as possible, but his testimony does not, and cannot, establish minimum standards for what an economic impact analysis should contain. Those factors are controlled by the Legislature, and no doubt the requirements imposed on agencies could be more onerous, and if faithfully followed could produce more useful economic impact statements. The economic impact small businesses will bear is caused by the statute, not by the implementing rule, with the possible exception of the electronic filing requirement, which has not been challenged in any of the three proceedings consolidated here. Large businesses have several advantages over smaller ones. 
Large businesses have sophisticated accounting systems, whether they use token or coin-operated machines, which allow tracking not only of gross receipts but also of kinds of plays, which enhances the operator's ability to establish that the tax due is lower than the effective tax rate, while the less sophisticated systems of metering receipts in coin-operated small businesses require reliance on the effective tax rates. (Exhibit 9 pg. 4) Large businesses may extend the useful life of a game machine by rotating the machine from one location to another, and may deal directly with manufacturers when purchasing a larger number of games or machines, thereby obtaining more favorable discounts. Small businesses cannot rotate games if they have only one location, and they purchase at higher prices from manufacturers. In general, smaller businesses have lower profit margins than larger businesses. All of these advantages exist independently of any rule implementing the sales tax statute.
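The profit-margin figures recited in these findings can be reconciled, as a purely illustrative aside and not as anything found in the record, if one assumes the tax is treated as embedded in gross receipts, so that a nominal rate r consumes roughly r/(1 + r) percentage points of an operator's margin. A minimal sketch of that arithmetic, using only the figures stated above and that assumption, follows.

    # Illustrative reconstruction only; the margin, rates and profit come from
    # the findings, and the tax-inclusive treatment of the rate is an assumption.
    margin = 7.77            # average profit margin, percent
    flat_rate = 6.00         # flat statutory rate, percent
    effective_rate = 7.81    # proposed rule's effective rate, percent
    average_profit = 24_000  # average annual profit, dollars

    def share_of_margin_consumed(rate_pct, margin_pct):
        # A rate embedded in gross receipts removes rate / (1 + rate) points of margin.
        points = rate_pct / (1 + rate_pct / 100)
        return points / margin_pct

    print(round(share_of_margin_consumed(flat_rate, margin) * 100))       # ~73 percent
    print(round(share_of_margin_consumed(effective_rate, margin) * 100))  # ~93 percent
    print(round(average_profit * share_of_margin_consumed(effective_rate, margin)))
    # ~22376 dollars of tax, in line with the rounded $22,300 / $1,700 split above

Under the same assumption, the residual margin of 7.77 minus roughly 7.24 percentage points is about 1/2 of 1 percent, which matches the figure recited in the findings.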

Florida Laws (10) 120.52, 120.54, 120.68, 212.02, 212.031, 212.05, 212.07, 212.12, 288.703, 689.01 Florida Administrative Code (5) 12-18.008, 12A-1.004, 12A-1.044, 12A-15.001, 12A-15.011
# 8
KPMG CONSULTING, INC. vs DEPARTMENT OF REVENUE, 02-001719BID (2002)
Division of Administrative Hearings, Florida Filed:Tallahassee, Florida May 01, 2002 Number: 02-001719BID Latest Update: Oct. 15, 2002

The Issue The issue to be resolved in this proceeding concerns whether the Department of Revenue (Department, DOR) acted clearly erroneously, contrary to competition, arbitrarily or capriciously when it evaluated the Petitioner's submittal in response to an Invitation to Negotiate (ITN) for a child support enforcement automated management system-compliance enforcement (CAMS CE) in which it awarded the Petitioner a score of 140 points out of a possible 230 points and disqualified the Petitioner from further consideration in the invitation to negotiate process.

Findings Of Fact Procurement Background: The Respondent, the DOR, is a state agency charged with the responsibility of administering the Child Support Enforcement Program (CSE) for the State of Florida, in accordance with Section 20.21(h), Florida Statutes. The DOR issued an ITN for the CAMS Compliance Enforcement implementation on February 1, 2002. This procurement is designed to give the Department a "state of the art system" that will meet all Federal and State Regulations and Policies for Child Support Enforcement, improve the effectiveness of collections of child support and automate enforcement to the greatest extent possible. It will automate data processing and other decision-support functions and allow rapid implementation of changes in regulatory requirements resulting from revised Federal and State Regulation Policies and Florida initiatives, including statutory initiatives. CSE services suffer from dependence on an inadequate computer system known as the "FLORIDA System" which was not originally designed for CSE and is housed and administered in another agency. The current FLORIDA System cannot meet the Respondent's needs for automation, does not meet its management and reporting requirements, and does not give it the more flexible system it needs. The DOR needs a system that will ensure the integrity of its data, will allow the Respondent to consolidate some of the "stand-alone" systems it currently has in place to remedy certain deficiencies of the FLORIDA System and which will help the Child Support Enforcement system and program secure needed improvements. The CSE is also governed by Federal Policy, Rules and Reporting requirements concerning performance. In order to improve its effectiveness in responding to its business partners in the court system, the Department of Children and Family Services, the Sheriff's Departments, employers, financial institutions and workforce development boards, as well as to the Federal requirements, it has become apparent that the CSE agency and system needs a new computer system with the flexibility to respond to the complete requirements of the CSE system. In order to accomplish its goal of acquiring a new computer system, the CSE began the procurement process. The Department hired a team from the Northrup Grumman Corporation, headed by Dr. Edward Addy, to lead the procurement development process. Dr. Addy began a process of defining CSE needs and then developing an ITN which reflected those needs. The process included many individuals in CSE who would be the daily users of the new system. These individuals included Andrew Michael Ellis, Revenue Program Administrator III for Child Support Enforcement Compliance Enforcement; Frank Doolittle, Process Manager for Child Support Enforcement Compliance Enforcement; and Harold Bankirer, Deputy Program Director for the Child Support Enforcement Program. There are two alternative strategies for implementing a large computer system such as CAMS CE: a customized system developed especially for CSE or a Commercial Off The Shelf, Enterprise Resource Plan (COTS/ERP) system. A COTS/ERP system is a pre-packaged software program, which is implemented as a system-wide solution. Because there is no existing COTS/ERP for child support programs, the team recognized that customization would be required to make the product fit its intended use.
The team recognized that other system attributes were also important, such as the ability to convert "legacy data" and to address such factors as data base complexity and data base size. The Evaluation Process: The CAMS CE ITN put forth a tiered process for selecting vendors for negotiation. The first tier involved an evaluation of key proposal topics. The key topics were the vendors past corporate experience (past projects) and its key staff. A vendor was required to score 150 out of a possible 230 points to enable it to continue to the next stage or tier of consideration in the procurement process. The evaluation team wanted to remove vendors who did not have a serious chance of becoming the selected vendor at an early stage. This would prevent an unnecessary expenditure of time and resources by both the CSE and the vendor. The ITN required that the vendors provide three corporate references showing their past corporate experience for evaluation. In other words, the references involved past jobs they had done for other entities which showed relevant experience in relation to the ITN specifications. The Department provided forms to the vendors who in turn provided them to their corporate references that they themselves selected. The vendors also included a summary of their corporate experience in their proposal drafted by the vendors themselves. Table 8.2 of the ITN provided positive and negative criteria by which the corporate references would be evaluated. The list in Table 8.2 is not meant to be exhaustive and is in the nature of an "included but not limited to" standard. The vendors had the freedom to select references whose projects the vendors' believed best fit the criteria upon which each proposal was to be evaluated. For the key staff evaluation standard, the vendors provided summary sheets as well as résumés for each person filling a lead role as key staff members on their proposed project team. Having a competent project team was deemed by the Department to be critical to the success of the procurement and implementation of a large project such as the CAMS CE. Table 8.2 of the ITN provided the criteria by which the key staff would be evaluated. The Evaluation Team: The CSE selected an evaluation team which included Dr. Addy, Mr. Ellis, Mr. Bankirer, Mr. Doolittle and Mr. Esser. Although Dr. Addy had not previously performed the role of an evaluator, he has responded to several procurements for Florida government agencies. He is familiar with Florida's procurement process and has a doctorate in Computer Science as well as seventeen years of experience in information technology. Dr. Addy was the leader of the Northrup Grumman team which primarily developed the ITN with the assistance of personnel from the CSE program itself. Mr. Ellis, Mr. Bankirer and Mr. Doolittle participated in the development of the ITN as well. Mr. Bankirer and Mr. Doolittle had previously been evaluators in other procurements for Federal and State agencies prior to joining the CSE program. Mr. Esser is the Chief of the Bureau of Information Technology at the Department of Highway Safety and Motor Vehicles and has experience in similar, large computer system procurements at that agency. The evaluation team selected by the Department thus has extensive experience in computer technology, as well as knowledge of the requirements of the subject system. 
The Department provided training regarding the evaluation process to the evaluators as well as a copy of the ITN, the Source Selection Plan and the Source Selection Team Reference Guide. Section 6 of the Source Selection Team Reference Guide entitled "Scoring Concepts" provided guidance to the evaluators for scoring proposals. Section 6.1 entitled "Proposal Evaluation Specification in ITN Section 8" states: Section 8 of the ITN describes the method by which proposals will be evaluated and scored. SST evaluators should be consistent with the method described in the ITN, and the source selection process documented in the Reference Guide and the SST tools are designed to implement this method. All topics that are assigned to an SST evaluator should receive at the proper time an integer score between 0 and 10 (inclusive). Each topic is also assigned a weight factor that is multiplied by the given score in order to place a greater or lesser emphasis on specific topics. (The PES workbook is already set to perform this multiplication upon entry of the score.) Tables 8-2 through 8-6 in the ITN Section 8 list the topics by which the proposals will be scored along with the ITN reference and evaluation and scoring criteria for each topic. The ITN reference points to the primary ITN section that describes the topic. The evaluation and scoring criteria list characteristics that should be used to affect the score negatively or positively. While these characteristics should be used by each SST evaluator, each evaluator is free to emphasize each characteristic more or less than any other characteristic. In addition, the characteristics are not meant to be inclusive, and evaluators may consider other characteristics that are not listed . . . (Emphasis supplied). The preponderant evidence demonstrates that all the evaluators followed these instructions in conducting their evaluations and none used a criterion that was not contained in the ITN, either expressly or implicitly. Scoring Method: The ITN used a 0 to 10 scoring system. The Source Selection Team Guide required that the evaluators use whole integer scores. They were not required to start at "7," which was the average score necessary to achieve a passing 150 points, and then to score up or down from 7. The Department also did not provide guidance to the evaluators regarding a relative value of any score, i.e., what is a "5" as opposed to a "6" or a "7." There is no provision in the ITN which establishes a baseline score or starting point from which the evaluators were required to adjust their scores. The procurement development team had decided to give very little structure to the evaluators as they wanted to have each evaluator score based upon his or her understanding of what was in the proposal. Within the ITN the development team could not sufficiently characterize every potential requirement, in the form that it might be submitted, and provide the consistency of scoring that one would want in a competitive environment. This open-ended approach is a customary method of scoring, particularly in more complex procurements in which generally less guidance is given to evaluators. 
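As a purely illustrative sketch of the mechanics described above, each evaluator's 0 to 10 integer score is multiplied by a topic weight and the products are summed, with 150 of the 230 available points needed to advance. The topic names and weight factors below are hypothetical placeholders; the actual topics and weights appear in Tables 8-2 through 8-6 of the ITN.

    # Hypothetical sketch of the weighted scoring; topic names and weights are
    # placeholders, not the actual ITN values.
    weights = {"corporate_references": 15, "key_staff": 8}   # 10 * (15 + 8) = 230 possible points

    def weighted_total(scores, weights):
        # Multiply each 0-10 integer score by its topic weight and sum the products.
        return sum(weights[topic] * score for topic, score in scores.items())

    scores = {"corporate_references": 6, "key_staff": 6}      # hypothetical integer scores
    total = weighted_total(scores, weights)
    print(total, total >= 150)   # 138 False -> would not advance past the threshold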
Providing precise guidance regarding the relative value of any score, regarding the imposition of a baseline score or starting point, from which evaluators were required to adjust their scores, instruction as to weighing of scores and other indicia of precise structure to the evaluators would be more appropriate where the evaluators themselves were not sophisticated, trained and experienced in the type of computer system desired and in the field of information technology and data retrieval generally. The evaluation team, however, was shown to be experienced and trained in information technology and data retrieval and experienced in complex computer system procurement. Mr. Barker is the former Bureau Chief of Procurement for the Department of Management Services. He has 34 years of procurement experience and has participated in many procurements for technology systems similar to CAMS CE. He established that the scoring system used by the Department at this initial stage of the procurement process is a common method. It is customary to leave the numerical value of scores to the discretion of the evaluators based upon each evaluator's experience and review of the relevant documents. According wider discretion to evaluators in such a complex procurement process tends to produce more objective scores. The evaluators scored past corporate experience (references) and key staff according to the criteria in Table 8.2 of the ITN. The evaluators then used different scoring strategies within the discretion accorded to them by the 0 to 10 point scale. Mr. Bankirer established a midrange of 4 to 6 and added or subtracted points based upon how well the proposal addressed the CAMS CE requirements. Evaluator Ellis used 6 as his baseline and added or subtracted points from there. Dr. Addy evaluated the proposals as a composite without a starting point. Mr. Doolittle started with 5 as an average score and then added or subtracted points. Mr. Esser gave points for each attribute in Table 8.2, for key staff, and added the points for the score. For the corporate reference criterion, he subtracted a point for each attribute the reference lacked. As each of the evaluators used the same methodology for the evaluation of each separate vendor's proposal, each vendor was treated the same and thus no specific prejudice to KPMG was demonstrated. Corporate Reference Evaluation: KPMG submitted three corporate references: Duke University Health System (Duke), SSM Health Care (SSM), and Armstrong World Industries (Armstrong). Mr. Bankirer gave the Duke reference a score of 6, the SSM reference a score of 5 and the Armstrong reference a score of 7. Michael Strange, the KPMG Business Development Manager, believed that 6 was a low score. He contended that an average score of 7 was required to make the 150-point threshold for passage to the next level of the ITN consideration. Therefore, a score of 7 would represent minimum compliance, according to Mr. Strange. However, neither the ITN nor the Source Selection Team Guide identified 7 as a minimally compliant score. Mr. Strange's designation of 7 as a minimally compliant score is not provided for in the specifications or the scoring instructions. Mr. James Focht, Senior Manager for KPMG testified that 6 was a low score, based upon the quality of the reference that KPMG had provided. However, Mr. 
Bankirer found that the Duke reference was actually a small-sized project, with little system development attributes, and that it did not include information regarding a number of records, the data base size involved, the estimated and actual costs and attributes of data base conversion. Mr. Bankirer determined that the Duke reference had little similarity to the CAMS CE procurement requirements and did not provide training or data conversion as attributes for the Duke procurement which are attributes necessary to the CAMS CE procurement. Mr. Strange and Mr. Focht admitted that the Duke reference did not specifically contain the element of data conversion and that under the Table 8.2, omission of this information would negatively affect the score. Mr. Focht admitted that there was no information in the Duke Health reference regarding the number of records and the data base size, all of which factors diminish the quality of Duke as a reference and thus the score accorded to it. Mr. Strange opined that Mr. Bankirer had erred in determining that the Duke project was a significantly small sized project since it only had 1,500 users. Mr. Focht believed that the only size criterion in Table 8.2 was the five million dollar cost threshold, and, because KPMG indicated that the project cost was greater than five million dollars, that KPMG had met the size criterion. Mr. Focht believed that evaluators had difficulty in evaluating the size of the projects in the references due to a lack of training. Mr. Focht was of the view that the evaluator should have been instructed to make "binary choices" on issues such as size. He conceded, however, that evaluators may have looked at other criteria in Table 8.2 to determine the size of the project, such as database size and number of users. However, the corporate references were composite scores by the evaluators, as the ITN did not require separate scores for each factor in Table 8.2. Therefore, Mr. Focht's focus on binary scoring for size, to the exclusion of other criteria, mis-stated the objective of the scoring process. The score given to the corporate references was a composite of all of the factors in Table 8.2, and not merely monetary value size. Although KPMG apparently contends that size, in terms of dollar value, is the critical factor in determining the score for a corporate reference, the vendor questions and answers provided at the pre-proposal conference addressed the issue of relevant criteria. Question 40 of the vendor questions and answers, Volume II, did not single out "project greater than five million dollars" as the only size factor or criterion. QUESTION: Does the state require that each reference provided by the bidder have a contract value greater than $5 million; and serve a large number of users; and include data conversion from a legacy system; and include training development? ANSWER: To get a maximum score for past corporate experience, each reference must meet these criteria. If the criteria are not fully met, the reference will be evaluated, but will be assigned a lower score depending upon the degree to which the referenced project falls short of these required characteristics. Therefore, the cost of the project is shown to be only one component of a composite score. Mr. Strange opined that Mr. Bankirer's comment regarding the Duke reference, "little development, mostly SAP implementation" was irrelevant. Mr. 
Strange's view was that the CAMS CE was not a development project and Table 8.2 did not specifically list development as a factor on which proposals would be evaluated. Mr. Focht stated that in his belief Mr. Bankirer's comment suggested that Mr. Bankirer did not understand the link between the qualifications in the reference and the nature of KPMG's proposal. Both Strange and Focht believe that the ITN called for a COTS/ERP solution. Mr. Focht stated that the ITN references a COTS/ERP approach numerous times. Although many of the references to COTS/ERP in the ITN also refer to development, Mr. Strange also admitted that the ITN was open to a number of approaches. Furthermore, both the ITN and the Source Selection Team Guide stated that the items in Table 8.2 are not all inclusive and that the evaluators may look to other factors in the ITN. Mr. Bankirer noted that there is no current CSE COTS/ERP product on the market. Therefore, some development will be required to adapt an off-the-shelf product to its intended use as a child support case management system. Mr. Bankirer testified that the Duke project was a small-size project with little development. Duke has three sites while CSE has over 150 sites. Therefore, the Duke project is smaller than CAMS. There was no information provided in the KPMG submittal regarding data base size and number of records with regard to the Duke project. Mr. Bankirer did not receive the information he needed to infer a larger sized-project from the Duke reference. Mr. Esser also gave the Duke reference a score of 6. The reference did not provide the data base information required, which was the number of records in the data base and the number of "gigabytes" of disc storage to store the data, and there was no element of legacy conversion. Dr. Addy gave the Duke reference a score of 5. He accepted the dollar value as greater than five million dollars. He thought that the Duke Project may have included some data conversion, but it was not explicitly stated. The Duke customer evaluated training so he presumed training was provided with the Duke project. The customer ratings for Duke were high as he expected they would be, but similarity to the CAMS CE system was not well explained. He looked at size in terms of numbers of users, number of records and database size. The numbers that were listed were for a relatively small-sized project. There was not much description of the methodology used and so he gave it an overall score of 5. Mr. Doolittle gave the Duke reference a score of 6. He felt that it was an average response. He listed the number of users, the number of locations, that it was on time and on budget, but found that there was no mention of data conversion, database size or number of records. (Consistent with the other evaluators). A review of the evaluators comments makes it apparent that KPMG scores are more a product of a paucity of information provided by KPMG corporate references instead of a lack of evaluator knowledge of the material being evaluated. Mr. Ellis gave a score of 6 for the Duke reference. He used 6 as his baseline. He found the required elements but nothing more justifying in his mind raising the score above 6. Mr. Focht and Mr. Strange expressed the same concerns regarding Bankirer's comment, regarding little development, for the SSM Healthcare reference as they had for the Duke Health reference. However, both Mr. Strange and Mr. Focht admitted that the reference provided no information regarding training. Mr. 
Strange admitted that the reference had no information regarding data conversion. Training and data conversion are criteria contained in Table 8.2. Mr. Strange also admitted that KPMG had access to Table 8.2 before the proposal was submitted and could have included the information in the proposal. Mr. Bankirer gave the SSM reference a score of 5. He commented that the SAP implementation was not relevant to what the Department was attempting to do with the CAMS CE system. CAMS CE does not have any materials management or procurement components, which was the function of the SAP components and the SSM reference procurement or project. Additionally, there was no training indicated in the SSM reference. Mr. Esser gave the SSM reference a score of 3. His comments were "no training provided, no legacy data conversion, project evaluation was primarily for SAP not KPMG". However, it was KPMG's responsibility in responding to the ITN to provide project information concerning a corporate reference in a clear manner rather than requiring that an evaluator infer compliance with the specifications. Mr. Focht believed that legacy data conversion could be inferred from the reference's description of the project. Mr. Strange opined that Mr. Esser's comment was inaccurate as KPMG installed SAP and made the software work. Mr. Esser gave the SSM reference a score of 3 because the reference described SAP's role, but not KPMG's role in the installation of the software. When providing information in the reference SSM gave answers relating to SAP to the questions regarding system capability, system usability, system reliability but did not state KPMG's role in the installation. SAP is a large enterprise software package. This answer created an impression of little KPMG involvement in the project. Dr. Addy gave the SSM reference a score of 6. Dr. Addy found that the size was over five million dollars and customer ratings were high except for a 7 for usability with reference to a "long learning curve" for users. Data conversion was implied. There was no strong explanation of similarity to CAMS CE. It was generally a small-sized project. He could reason some similarity into it, even though it was not well described in the submittal. Mr. Doolittle gave the SSM reference a score of 6. Mr. Doolittle noted, as positive factors, that the total cost of the project was greater than five million dollars, that it supported 24 sites and 1,500 users as well "migration from a mainframe." However, there were negative factors such as training not being mentioned and a long learning curve for its users. Mr. Ellis gave a score of 6 for SSM, feeling that KPMG met all of the requirements but did not offer more than the basic requirements. Mr. Strange opined that Mr. Bankirer, Dr. Addy and Mr. Ellis (evaluators 1, 5 and 4) were inconsistent with each other in their evaluation of the SSM reference. He stated that this inconsistency showed a flaw in the evaluation process in that the evaluators did not have enough training to uniformly evaluate past corporate experience, thereby, in his view, creating an arbitrary evaluation process. Mr. Bankirer gave the SSM reference a score of 5, Ellis a score of 6, and Addy a score of 6. Even though the scores were similar, Mr. Strange contended that they gave conflicting comments regarding the size of the project. Mr. 
Ellis stated that the size of the project was hard to determine as the cost was listed as greater than five million dollars and the database size given, but the number of records was not given. Mr. Bankirer found that the project was low in cost and Dr. Addy stated that over five million dollars was a positive factor in his consideration. However, the evaluators looked at all of the factors in Table 8.2 in scoring each reference. Other factors that detracted from KPMG's score for the SSM reference were: similarity to the CAMS system not being explained, according to Dr. Addy; no indication of training (all of the evaluators); the number of records not being provided (evaluator Ellis); little development shown (Bankirer) and usability problems (Dr. Addy). Mr. Strange admitted that the evaluators may have been looking at other factors besides the dollar value size in order to score the SSM reference. Mr. Esser gave the Armstrong reference a score of 6. He felt that the reference did not contain any database information or cost data and that there was no legacy conversion shown. Dr. Addy also gave Armstrong a score of 6. He inferred that this reference had data conversion as well as training and the high dollar volume which were all positive factors. He could not tell, however, from the project description, what role KPMG actually had in the project. Mr. Ellis gave a score of 7 for the Armstrong reference stating that the Armstrong reference offered more information regarding the nature of the project than had the SSM and Duke references. Mr. Bankirer gave KPMG a score of 7 for the Armstrong reference. He found that the positive factors were that the reference had more site locations and offered training but, on the negative side, was not specific regarding KPMG's role in the project. Mr. Focht opined that the evaluators did not understand the nature of the product and services the Department was seeking to obtain as the Department's training did not cover the nature of the procurement and the products and services DOR was seeking. However, when he made this statement he admitted he did not know the evaluators' backgrounds. In fact, Bankirer, Ellis, Addy and Doolittle were part of a group that developed the ITN and clearly knew what CSE was seeking to procure. Further, Mr. Esser stated that he was familiar with COTS and described it as a commercial off-the-shelf software package. Mr. Esser explained that an ERP solution or Enterprise Resource Plan is a package that is designed to do a series of tasks, such as produce standard reports and perform standard operations. He did not believe that he needed further training in COTS/ERP to evaluate the proposals. Mr. Doolittle was also familiar with COTS/ERP and believed, based on the amount of funding, that it was a likely response to the ITN. Dr. Addy's doctoral dissertation research was in the area of software re-use. COTS is one of the components that comprise a development activity and re-use. He became aware during his research of how COTS packages are used in software engineering. He has also been exposed to ERP packages. ERP is only one form of a COTS package. In regard to the development of the ITN and the expectations of the development team, Dr. Addy stated that they were amenable to any solution that met the requirements of the ITN. They fully expected the compliance solutions were going to be comprised of mostly COTS and ERP packages. Furthermore, the ITN in Section 1.1, on page 1-2 states, ". . . 
FDOR will consider an applicable Enterprise Resource Planning (ERP) or Commercial Off the Shelf (COTS) based solution in addition to custom development." Clearly, this ITN was an open procurement and to train evaluators on only one of the alternative solutions would have biased the evaluation process. Mr. Doolittle gave each of the KPMG corporate references a score of 6. Mr. Strange and Mr. Focht questioned the appropriateness of these scores as the corporate references themselves gave KPMG average ratings of 8.3, 8.2 and 8.0. However, Mr. Focht admitted that Mr. Doolittle's comments regarding the corporate references were a mixture of positive and negative comments. Mr. Focht believed, however, that as the reference corporations considered the same factors for providing ratings on the reference forms, that it was inconsistent for Mr. Doolittle to separately evaluate the same factors that the corporations had already rated. However, there is no evidence in the record that KPMG provided Table 8.2 to the companies completing the reference forms and that the companies consulted the table when completing their reference forms. Therefore, KPMG did not prove that it had taken all measures available to it to improve its scores. Moreover, Mr. Focht's criticism would impose a requirement on Mr. Doolittle's evaluation which was not supported by the ITN. Mr. Focht admitted that there was no criteria in the ITN which limited the evaluator's discretion in scoring to the ratings given to the corporate references by those corporate reference customers. All of the evaluators used Table 8.2 as their guide for scoring the corporate references. As part of his evaluation, Dr. Addy looked at the methodology used by the proposers in each of the corporate references to implement the solution for that reference company. He was looking at methodology to determine its degree of similarity to CAMS CE. While not specifically listed in Table 8.2 as a similarity to CAMS, Table 8.2 states that the list is not all inclusive. Clearly, methodology is a measure of similarity and therefore is not an arbitrary criterion. Moreover, as Dr. Addy used the same process and criteria in evaluating all of the proposals there was no prejudice to KPMG by use of this criterion since all vendors were subjected to it. Mr. Strange stated that KPMG appeared to receive lower scores for SAP applications than other vendors. For example, evaluator 1 gave a score of 7 to Deloitte's reference for Suntax. Suntax is an SAP implementation. It is difficult to draw comparisons across vendors, yet the evaluators consistently found that KPMG references lacked key elements such as data conversion, information on starting and ending costs, and information on database size. All of these missing elements contributed to a reduction in KPMG's scores. Nevertheless, KPMG received average scores of 5.5 for Duke, 5.7 for SSM and 6.3 for Armstrong, compared with the score of 7 received by Deloitte for Suntax. There is only a gap of 1.5 to .7 points between Deloitte and KPMG's scores for SAP implementations, despite the deficient information within KPMG's corporate references. Key Staff Criterion: The proposals contain a summary of the experience of key staff and attached résumés. KPMG's proposed key staff person for Testing Lead was Frank Traglia. Mr. Traglia's summary showed that he had 25-years' experience respectively, in the areas of child support enforcement, information technology, project management and testing. 
Strange and Focht admitted that Traglia's résumé did not specifically list any testing experience. Mr. Focht further admitted that it was not unreasonable for evaluators to give the Testing Lead a lower score due to the lack of specific testing information in Traglia's résumé. Mr. Strange explained that the résumé was from a database of résumés. The summary sheet, however, was prepared by those KPMG employees who prepared the proposal. All of the evaluators resolved the conflicting information between the summary sheet and the résumé by crediting the résumé as more accurate. Each evaluator thought that the résumé was more specific and expected to see specific information regarding testing experience on the résumé for someone proposed as the Testing Lead person. Evaluators Addy and Ellis gave scores to the Testing Lead criterion of 4 and 5. Mr. Ron Vandenberg (evaluator 8) gave the Testing Lead a score of 9. Mr. Vandenberg was the only evaluator to give the Testing Lead a high score. The other evaluators gave the Testing Lead an average score of 4.2. The Vandenberg score thus appears anomalous. All of the evaluators gave the Testing Lead a lower score as it did not specifically list testing experience. Dr. Addy found that the summary sheet listed 25-years of experience in child support enforcement, information technology, and project management and system testing. As he did not believe this person had 100 years of experience, he assumed those experience categories ran concurrently. A strong candidate for Testing Lead should demonstrate a combination of testing experience, education and certification, according to Dr. Addy. Mr. Doolittle also expected to see testing experience mentioned in the résumé. When evaluating the Testing Lead, Mr. Bankirer first looked at the team skills matrix and found it interesting that testing was not one of the categories of skills listed for the Testing Lead. He then looked at the summary sheet and résumé from Mr. Traglia. He gave a lower score to Traglia as he thought that KPMG should have put forward someone with demonstrable testing experience. The evaluators gave a composite score to key staff based on the criteria in Table 8.2. In order to derive the composite score that he gave each staff person, Mr. Esser created a scoring system wherein he awarded points for each attribute in Table 8.2 and then added the points together to arrive at a composite score. Among the criteria he rated, Mr. Esser awarded points for CSE experience. Mr. Focht and Mr. Strange contended that since the term CSE experience is not actually listed in Table 8.2 that Mr. Esser was incorrect in awarding points for CSE experience in his evaluation. Table 8.2 does refer to relevant experience. There is no specific definition provided in Table 8.2 for relevant experience. Mr. Focht stated that relevant experience is limited to COTS/ERP experience, system development, life cycle and project management methodologies. However, these factors are also not listed in Table 8.2. Mr. Strange limited relevance to experience in the specific role for which the key staff person was proposed. This is a limitation that also is not imposed by Table 8.2. CSE experience is no more or less relevant than the factors posited by KPMG as relevant experience. Moreover, KPMG included a column in its own descriptive table of key staffs for CSE experience. KPMG must have seen this information as relevant if it included it in its proposal as well. 
Inclusion of this information in its proposal demonstrated that KPMG must have believed CSE experience was relevant at the time its submitted its proposal. Mr. Strange held the view that, in the bidders conference in a reply to a vendor question, the Department representative stated that CSE experience was not required. Therefore, Mr. Esser could not use such experience to evaluate key staff. Question 47 of the Vendor Questions and Answers, Volume 2 stated: QUESTION: In scoring the Past Corporate Experience section, Child Support experience is not mentioned as a criterion. Would the State be willing to modify the criteria to include at least three Child Support implementations as a requirement? ANSWER: No. However, a child support implementation that also meets the other characteristics (contract value greater than $5 million, serves a large number of users, includes data conversion from a legacy system and includes training development) would be considered "similar to CAMS CE." The Department's statement involved the scoring of corporate experience not key staff. It was inapplicable to Mr. Esser's scoring system. Mr. Esser gave the Training Lead a score of 1. According to Esser, the Training Lead did not have a ten-year résumé, for which he deducted one point. The Training Lead had no specialty certification or extensive experience and had no child support experience and received no points. Mr. Esser added one point for the minimum of four years of specific experience and one point for the relevance of his education. Mr. Esser gave the Project Manager a score of 5. The Project Manager had a ten-year résumé and required references and received a point for each. He gave two points for exceeding the minimum required informational technology experience. The Project Manager had twelve years of project management experience for a score of one point, but lacked certification, a relevant education and child support enforcement experience for which he was accorded no points. Mr. Esser gave the Project Liaison person a score of According to Mr. Focht, the Project Liaison should have received a higher score since she has a professional history of having worked for the state technology office. Mr. Esser, however, stated that she did not have four years of specific experience and did not have extensive experience in the field, although she had a relevant education. Mr. Esser gave the Software Lead person a score of 4. The Software Lead, according to Mr. Focht, had a long set of experiences with implementing SAP solutions for a wide variety of different clients and should have received a higher score. Mr. Esser gave a point each for having a ten-year résumé, four years of specific experience in software, extensive experience in this area and relevant education. According to Mr. Focht the Database Lead had experience with database pools including the Florida Retirement System and should have received more points. Mr. Strange concurred with Mr. Focht in stating that Esser had given low scores to key staff and stated that the staff had good experience, which should have generated more points. Mr. Strange believed that Mr. Esser's scoring was inconsistent but provided no basis for that conclusion. Other evaluators also gave key staff positions scores of less than 7. Dr. Addy gave the Software Lead person a score of 5. The Software Lead had 16 years of experience and SAP development experience as positive factors but had no development lead experience. 
He had a Bachelor of Science and a Master of Science in Mechanical Engineering and a Master's in Business Administration, which were not good matches in education for the role of a Software Lead person. Dr. Addy gave the Training Lead person a score of 5. The Training Lead had six years of consulting experience, a background in SAP consulting and some training experience but did not have certification or education in training. His educational background also was electrical engineering, which is not a strong background for a training person. Dr. Addy gave the subcontractor managers a score of 5. Two of the subcontractors did not list managers at all, which detracted from the score. Mr. Doolittle gave the Training Lead person a score that he believed, based on his experience and training, reflected an average response. Table 8.2 contained an item in which a proposer could have points detracted from a score if the key staff person's references were not excellent. The Department did not check references at this stage in the evaluation process. As a result, the evaluators simply did not consider that item when scoring. No proposer's score was adversely affected thereby. KPMG contends that checking references would have given the evaluators greater insight into the work done by those individuals and their relevance and capabilities in the project team. Mr. Focht admitted, however, that any claimed effect on KPMG's score is conjectural. Mr. Strange stated that without reference checks, information in the proposals could not be validated, but he provided no basis for his opinion that reference checking was necessary at this preliminary stage of the evaluation process. Dr. Addy stated that the process called for checking references during the timeframe of oral presentations. They did not expect the references to change any scores at this point in the process. KPMG asserted that references should be checked to ascertain the veracity of the information in the proposals. However, even if the information in some other proposal was inaccurate, it would not change the outcome for KPMG. KPMG would still not have the required number of points to advance to the next evaluation tier. Divergency in Scores The Source Selection Plan established a process for resolving divergent scores. Any item receiving scores with a range of 5 or more was determined to be divergent. The plan provided that the Coordinator identify divergent scores and then report to the evaluators that there were divergent scores for that item. The Coordinator was precluded from telling an evaluator whether his score was the divergent one, i.e., the highest or lowest score. Evaluators would then review that item, but were not required to change their scores. The purpose of the divergent score process was to have evaluators review their scores to see if there were any misperceptions or errors that skewed the scores. The team wished to avoid having any influence on the evaluators' scores. Mr. Strange testified that the Department did not follow the divergent score process in the Source Selection Plan as the coordinator did not tell the evaluators why the scores were divergent. Mr. Strange stated that the evaluator should have been informed which scores were divergent. The Source Selection Plan merely instructed the coordinator to inform the evaluators of the reason why the scores were divergent. Inherently, scores were divergent if there was a five-point score spread. The reason for the divergence was self-explanatory.
The evaluators stated that they scored the proposals, submitted the scores and each received an e-mail from Debbie Stephens informing him that there were divergent scores and that they should consider re-scoring. None of the evaluators ultimately changed their scores. Mr. Esser's scores were the lowest of the divergent scores, but he did not re-score his proposals as he had spent a great deal of time on the initial scoring and felt his scores to be valid. Neither of KPMG's witnesses, Focht or Strange, provided more than speculation regarding the effect of the divergent scores on KPMG's ultimate score and any role the divergent scoring process may have had in KPMG not attaining the 150-point passing score. Deloitte - Suntax Reference: Susan Wilson, a Child Support Enforcement employee connected with the CAMS project, signed a reference for Deloitte Consulting regarding the Suntax System. Mr. Focht was concerned that the evaluators were influenced by her signature on the reference form. Mr. Strange further stated that having someone who is heavily involved in the project sign a reference did not appear to be fair. He was not able to state any positive or negative effect on KPMG by Wilson's reference for Deloitte, however. Evaluator Esser has met Susan Wilson but has had no significant professional interaction with her. He could not recall anything that he knew about Ms. Wilson that would favorably influence him in scoring the Deloitte reference. Dr. Addy also was not influenced by Wilson. Mr. Doolittle has only worked with Wilson for a very short time and did not know her well. He has also evaluated other proposals where department employees were a reference and was not influenced by that either. Mr. Ellis has only known Wilson for two to four months. Her signature on the reference form did not influence him either positively or negatively. Mr. Bankirer had not known Wilson for a long time when he evaluated the Suntax reference. He took the reference at face value and was not influenced by Wilson's signature. It is not unusual for someone within an organization to create a reference for a company that is competing for work to be done for the organization.

Recommendation Having considered the foregoing Findings of Fact, Conclusions of Law, the evidence of record and the pleadings and arguments of the parties, it is, therefore, RECOMMENDED that a final order be entered by the State of Florida Department of Revenue upholding the proposed agency action which disqualified KPMG from further participation in the evaluation process regarding the subject CAMS CE Invitation to Negotiate. DONE AND ENTERED this 26th day of September, 2002, in Tallahassee, Leon County, Florida. P. MICHAEL RUFF Administrative Law Judge Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-3060 (850) 488-9675 SUNCOM 278-9675 Fax Filing (850) 921-6847 www.doah.state.fl.us Filed with Clerk of the Division of Administrative Hearings this 26th day of September, 2002. COPIES FURNISHED: Cindy Horne, Esquire Earl Black, Esquire Department of Revenue Post Office Box 6668 Tallahassee, Florida 32399-0100 Robert S. Cohen, Esquire D. Andrew Byrne, Esquire Cooper, Byrne, Blue & Schwartz, LLC 1358 Thomaswood Drive Tallahassee, Florida 32308 Seann M. Frazier, Esquire Greenburg, Traurig, P.A. 101 East College Avenue Tallahassee, Florida 32302 Bruce Hoffmann, General Counsel Department of Revenue 204 Carlton Building Tallahassee, Florida 32399-0100 James Zingale, Executive Director Department of Revenue 104 Carlton Building Tallahassee, Florida 32399-0100

Florida Laws (3) 120.569, 120.57, 20.21
# 9
KETURA BOUIE | K. B. vs DEPARTMENT OF HEALTH AND REHABILITATIVE SERVICES, 96-004200 (1996)
Division of Administrative Hearings, Florida Filed:Tallahassee, Florida Sep. 04, 1996 Number: 96-004200 Latest Update: Jun. 09, 1997

The Issue Whether Ketura Bouie suffers from “retardation”, as that term is defined by Section 393.063(43), Florida Statutes, and therefore qualifies for developmental services offered by the Respondent agency under Chapter 393, Florida Statutes.

Findings Of Fact Ketura Bouie is 15 years old. She currently resides in Tallahassee, Florida. She is enrolled in a new school after transferring from Chatahoochee. Ketura has had several “social” promotions from grade to grade over the years. Her application for developmental services has been denied by the Respondent agency. Wallace Kennedy, Ph.D., is a Board-certified and Florida-licensed clinical psychologist. He was accepted as an expert in clinical psychology and the testing of children. He conducted a psychological evaluation of Ketura on April 12, 1995, for which he has provided a written narrative dated April 13, 1995. His narrative was admitted in evidence. Ketura was 13 years old at the time of Dr. Kennedy’s evaluation. He administered three standardized tests which are recognized and accepted for determining applicants’ eligibility for developmental services. These tests were: a wide range achievement test, Wechsler Intelligence Scale for Children— Revised (WISC-R), and Vineland Adaptive Behavior Scale. (Vineland) The wide range achievement test generally measures literacy. Ketura recognized only half of the upper-case letters of the alphabet and only a few three-letter kindergarten words. Her results indicated that she has the achievement level expected of a five and a half year old kindergarten student, even though she was then placed in the seventh grade. In Dr. Kennedy's view, there is "no chance Ketura will become functionally literate". The WISC-R measures intellectual functioning and academic aptitude without penalizing the child for handicaps. The mean score on this test is 100. To score two or more deviations from this mean, a subject must score 70 or below. All of Ketura’s WISC-R scores on the test administered by Dr. Kennedy in April 1995 were well below 70. They consisted of a verbal score of 46, a performance score of 46, and a full scale score of 40. Ketura’s full scale IQ of 40 is in the lowest tenth of the first percentile and represents a low moderate level of mental retardation. Ketura’s full scale score of 40 is the lowest result that WISC-R can measure. The Vineland measures communication, daily living skills, and socialization. Ketura’s composite score for Dr. Kennedy on the Vineland was 42. In conducting the Vineland test, Dr. Kennedy relied on information obtained through his own observation of Ketura and information obtained from Ketura’s mother. It is typical in the field of clinical psychology to rely on information supplied by parents and caregivers, provided they are determined to be reliable observers. Dr. Kennedy assessed Ketura’s mother to be a reliable observer. Dr. Kennedy’s Vineland test revealed that Ketura has a social maturity level of about six years of age. Her verbal and written communication skills are poor. Ketura has poor judgment regarding her personal safety. She cannot consistently remember to use a seatbelt and cannot safely use a knife. She has poor domestic skills. She has no concept of money or of dates. She does not help with the laundry or any other household task. She cannot use the phone. Ketura’s socialization skills are also poor. She does not have basic social manners. Her table manners and social interactive skills are poor. She has no friends, and at the time of Dr. Kennedy’s evaluation, she was unhappy due to classmates making fun of her for being unable to recite the alphabet. Dr. Kennedy rendered an ultimate diagnosis of moderate mental retardation and opined that Ketura's retardation is permanent. Although Dr. 
Kennedy observed that Ketura was experiencing low levels of depression and anxiety during his April 1995 tests and interview, he did not make a clinical psychological diagnosis to that effect. He attributed these emotional components to Ketura’s lack of confidence in being able to perform the tasks required during testing. In his opinion, Ketura did not have any behavioral or emotional problems which interfered with the reliability of the tests he administered. Also, there were no other conditions surrounding his evaluation which interfered with the validity or reliability of the test scores, his evaluation, or his determination that Ketura suffers from a degree of retardation which would qualify her for developmental services. In Dr. Kennedy’s expert opinion, even if all of Ketura's depression and anxiety were eliminated during testing, her WISC-R scores would not have placed her above the retarded range in April 1995. The retardation range for qualifying for developmental services is 68 or below. Ketura’s I.Q. was tested several times between 1990 and April 1995 with resulting full scale scores ranging from 40 to All or some of these tests and/or reports on the 1990 - 1995 tests were submitted to the agency with Ketura’s application for developmental services. Also included with Ketura’s application to the agency were mental health reports documenting depression, a recognized mental disorder. The most recent of these was one done as recently as May of 1996. However, none of these reports were offered or admitted in evidence at formal hearing. Respondent’s sole witness and agency representative, was Ms. JoAnne Braun. She is an agency Human Service Counselor III. Ms. Braun is not a Florida-licensed psychologist and she was not tendered as an expert witness in any field. As part of the application process, she visited with Ketura and her mother in their home. She also reviewed Petitioner’s application and mental health records described above. She reviewed the fluctuating psychological test scores beginning in 1990, one of which placed Ketura at 70 and another of which placed her at 74 on a scale of 100. Ms. Braun also reviewed a March 1995 psychological testing series that showed Ketura had a verbal 50, performance 60, and full scale 62 on the WISC-R test, one month before Dr. Kennedy’s April 1995 evaluation described above. However, none of these items which she reviewed was offered or admitted in evidence. The agency has guidelines for assessing eligibility for developmental services. The guidelines were not offered or admitted in evidence. Ms. Braun interpreted the agency's guidelines as requiring her to eliminate the mental health aspect if she felt it could depress Ketura's standard test scores. Because Ms. Braun "could not be sure that the mental health situation did not depress her scores," and because the fluctuation of Ketura’s test scores over the years caused Ms. Braun to think that Ketura’s retardation might not “reasonably be expected to continue indefinitely”, as required by the controlling statute, she opined that Ketura was not eligible for developmental services. Dr. 
Kennedy's assessment and expert psychological opinion was that if Ketura's scores were once higher and she now tests with lower scores, it might be the result of better testing today; it might be due to what had been required and observed of her during prior school testing situations; it might even be because she was in a particularly good mood on the one day she scored 70 or 74, but his current testing clearly shows she will never again do significantly better on standard tests than she did in April 1995. In his education, training, and experience, it is usual for test scores to deteriorate due to a retarded person's difficulties in learning as that person matures. I do not consider Ms. Braun’s opinion, though in evidence, as sufficient to rebut the expert opinion of Dr. Kennedy. This is particularly so since the items she relied upon are not in evidence and are not the sort of hearsay which may be relied upon for making findings of fact pursuant to Section 120.58(1)(a), Florida Statutes. See, Bellsouth Advertising & Publishing Corp. v. Unemployment Appeals Commission and Robert Stack, 654 So.2d 292 (Fla. 5th DCA 1995); and Tenbroeck v. Castor, 640 So.2d 164, (Fla. 1st DCA 1994). Particularly, there is no evidence that the "guidelines" (also not in evidence) she relied upon have any statutory or rule basis. Therefore, the only test scores and psychological evaluation upon which the undersigned can rely in this de novo proceeding are those of Dr. Kennedy. However, I do accept as binding on the agency Ms. Braun’s credible testimony that the agency does not find that the presence of a mental disorder in and of itself precludes an applicant, such as Ketura, from qualifying to receive developmental services; that Ketura is qualified to receive agency services under another program for alcohol, drug, and mental health problems which Ketura also may have; and that Ketura’s eligibility under that program and under the developmental services program, if she qualifies for both, are not mutually exclusive.

Recommendation Upon the foregoing findings of fact and conclusions of law, it is RECOMMENDED that the Department of Children and Families issue a Final Order awarding Ketura Bouie appropriate developmental services for so long as she qualifies under the statute.RECOMMENDED this 24th day of February, 1997, at Tallahassee, Florida. ELLA JANE P. DAVIS Administrative Law Judge Division of Administrative Hearings The DeSoto Building 1230 Apalachee Parkway Tallahassee, Florida 32399-1550 (904) 488-9675 SUNCOM 278-9675 Fax FILING (904) 921-6847 Filed with the Clerk of the Division of Administrative Hearings this 24th day of February, 1997. COPIES FURNISHED: Gregory D. Venz, Agency Clerk Department of Children and Families Building 2, Room 204 1317 Winewood Blvd. Tallahassee, FL 32399-0700 Richard A. Doran General Counsel Building 2, Room 204 1317 Winewood Blvd. Tallahassee, FL 32399-0700 Marla Ruth Butler Qualified Representative Children's Advocacy Center Florida State University Tallahassee, FL 32302-0287 Marian Alves, Esquire Department of Health and Rehabilitative Services 2639 North Monroe Street Suite 100A Tallahassee, FL 32399-2949

Florida Laws (2) 120.57393.063
