
KPMG CONSULTING, INC. vs DEPARTMENT OF REVENUE, 02-001719BID (2002)

Court: Division of Administrative Hearings, Florida    Number: 02-001719BID
Petitioner: KPMG CONSULTING, INC.
Respondent: DEPARTMENT OF REVENUE
Judges: P. MICHAEL RUFF
Agency: Department of Revenue
Locations: Tallahassee, Florida
Filed: May 01, 2002
Status: Closed
Recommended Order on Thursday, September 26, 2002.

Latest Update: Oct. 15, 2002
02-1719

STATE OF FLORIDA

DIVISION OF ADMINISTRATIVE HEARINGS


KPMG CONSULTING, INC.,

     Petitioner,

vs.                                          Case No. 02-1719BID

DEPARTMENT OF REVENUE,

     Respondent,

and

DELOITTE CONSULTING, INC.,

     Intervenor.

RECOMMENDED ORDER


Pursuant to notice, this cause came on for formal proceeding and hearing before P. Michael Ruff, duly-designated Administrative Law Judge of the Division of Administrative Hearings. The hearing was conducted on June 24 and 26, 2002, in Tallahassee, Florida. The appearances were as follows:

APPEARANCES


For Petitioner:  Robert S. Cohen, Esquire
                 D. Andrew Byrne, Esquire
                 Cooper, Byrne, Blue & Schwartz, LLC
                 1358 Thomaswood Drive
                 Tallahassee, Florida 32308

For Respondent:  Cindy Horne, Esquire
                 Earl Black, Esquire
                 Department of Revenue
                 Post Office Box 6668
                 Tallahassee, Florida 32399-0100

For Intervenor:  Seann M. Frazier, Esquire
                 Greenberg Traurig, P.A.
                 101 East College Avenue
                 Tallahassee, Florida 32302


STATEMENT OF THE ISSUE


The issue to be resolved in this proceeding concerns whether the Department of Revenue (Department, DOR) acted clearly erroneously, contrary to competition, arbitrarily or capriciously when it evaluated the Petitioner's submittal in response to an Invitation to Negotiate (ITN) for a child support enforcement automated management system-compliance enforcement (CAMS CE) in which it awarded the Petitioner a score of 140 points out of a possible 230 points and disqualified the Petitioner from further consideration in the invitation to negotiate process.

PRELIMINARY STATEMENT


On April 22, 2002, the Petitioner, KPMG Consulting, Inc. (KPMG), filed a timely, formal written protest of its disqualification from further consideration by the Respondent in the CAMS CE procurement. The Respondent transmitted the Petition to the Division of Administrative Hearings for further proceedings, and the matter was set for hearing on May 13, 2002. Pursuant to a joint request from all parties, the hearing was continued until June 24 and 26, 2002. Deloitte Consulting, Inc. (Deloitte) filed a Petition for Intervention, which was granted without objection, and the formal hearing was conducted as noticed, on the above dates.

The Petitioner presented the testimony of two witnesses by deposition at hearing: James Focht, Senior Manager for KPMG, and Michael Strange, Business Development Manager for KPMG, as well as the depositions of the evaluators. The Petitioner presented nineteen exhibits, all of which were admitted into evidence. The Respondent presented the testimony of seven witnesses: Lillie Bogan, Child Support Enforcement Program Director; Randolph A. Esser, Information Systems Director for the Department of Highway Safety and Motor Vehicles; Edward Addy, Ph.D., Program Director for Northrup Grumman Information Technology; Frank Doolittle, Process Manager for Child Support Enforcement Compliance Enforcement; Andrew Michael Ellis, Revenue Program Administrator III for Child Support Enforcement Compliance Enforcement; H. P. Barker, Jr., Procurement Consultant; and Harold Bankirer, Deputy Program Director for the Child Support Enforcement Program. The Respondent presented one exhibit, which was admitted into evidence. No witnesses were presented by the Intervenor.

Upon conclusion of the hearing a transcript was requested, and the parties availed themselves of the opportunity to submit Proposed Recommended Orders. The Proposed Recommended Orders were considered in the rendition of this Recommended Order.

FINDINGS OF FACT


Procurement Background:


  1. The Respondent, the Department of Revenue (DOR), is a state agency charged with the responsibility of administering the Child Support Enforcement Program (CSE) for the State of Florida, in accordance with Section 20.21(h), Florida Statutes. The DOR issued an ITN for the CAMS Compliance Enforcement implementation on February 1, 2002. This procurement is designed to give the Department a "state of the art system" that will meet all Federal and State Regulations and Policies for Child Support Enforcement, improve the effectiveness of collections of child support, and automate enforcement to the greatest extent possible. It will automate data processing and other decision-support functions and allow rapid implementation of changes in regulatory requirements resulting from revised Federal and State Regulation Policies and Florida initiatives, including statutory initiatives.

    CSE services suffer from dependence on an inadequate computer system known as the "FLORIDA System," which was not originally designed for CSE and is housed and administered in another agency. The current FLORIDA System cannot meet the Respondent's needs for automation, does not meet its management and reporting requirements, and does not provide the flexibility the Respondent needs. The DOR needs a system that will ensure the integrity of its data, will allow the Respondent to consolidate some of the "stand-alone" systems it currently has in place to remedy certain deficiencies of the FLORIDA System, and will help the Child Support Enforcement system and program secure needed improvements.

  2. The CSE program is also governed by Federal Policy, Rules and Reporting requirements concerning performance. In order to improve its effectiveness in responding to its business partners (the court system, the Department of Children and Family Services, the Sheriffs' Departments, employers, financial institutions, and workforce development boards), as well as to the Federal requirements, it has become apparent that the CSE agency needs a new computer system with the flexibility to respond to the complete requirements of the CSE program.

  3. In order to accomplish its goal of acquiring a new computer system, the CSE began the procurement process. The Department hired a team from the Northrup Grumman Corporation, headed by Dr. Edward Addy, to lead the procurement development process. Dr. Addy began a process of defining CSE needs and then developing an ITN which reflected those needs. The process included many individuals in CSE who would be the daily users of the new system. These individuals included Andrew Michael Ellis, Revenue Program Administrator III for Child Support Enforcement Compliance Enforcement; Frank Doolittle, Process Manager for Child Support Enforcement Compliance Enforcement; and Harold Bankirer, Deputy Program Director for the Child Support Enforcement Program.

  4. There are two alternative strategies for implementing a large computer system such as CAMS CE: a customized system developed especially for CSE or a Commercial Off The Shelf/Enterprise Resource Planning (COTS/ERP) system. A COTS/ERP system is a pre-packaged software program which is implemented as a system-wide solution. Because there is no existing COTS/ERP for child support programs, the team recognized that customization would be required to make the product fit its intended use. The team recognized that other system attributes were also important, such as the ability to convert "legacy data" and to address such factors as data base complexity and data base size.

    The Evaluation Process:


  5. The CAMS CE ITN put forth a tiered process for selecting vendors for negotiation. The first tier involved an evaluation of key proposal topics. The key topics were each vendor's past corporate experience (past projects) and its key staff. A vendor was required to score 150 out of a possible 230 points to enable it to continue to the next stage or tier of consideration in the procurement process. The evaluation team wanted to remove, at an early stage, vendors who did not have a serious chance of becoming the selected vendor. This would prevent an unnecessary expenditure of time and resources by both the CSE and the vendor. The ITN required that the vendors provide three corporate references showing their past corporate experience for evaluation. In other words, the references involved past jobs they had done for other entities which showed relevant experience in relation to the ITN specifications. The Department provided forms to the vendors, who in turn provided them to the corporate references that they themselves selected. The vendors also included in their proposals a summary of their corporate experience, drafted by the vendors themselves. Table 8.2 of the ITN provided positive and negative criteria by which the corporate references would be evaluated. The list in Table 8.2 is not meant to be exhaustive and is in the nature of an "included but not limited to" standard. The vendors had the freedom to select references whose projects the vendors believed best fit the criteria upon which each proposal was to be evaluated.

  6. For the key staff evaluation standard, the vendors provided summary sheets as well as résumés for each person filling a lead role as a key staff member on their proposed project team. Having a competent project team was deemed by the Department to be critical to the success of the procurement and implementation of a large project such as the CAMS CE. Table 8.2 of the ITN provided the criteria by which the key staff would be evaluated.

    The Evaluation Team:


  7. The CSE selected an evaluation team which included Dr. Addy, Mr. Ellis, Mr. Bankirer, Mr. Doolittle and Mr. Esser. Although Dr. Addy had not previously performed the role of an evaluator, he has responded to several procurements for Florida government agencies. He is familiar with Florida's procurement process and has a doctorate in Computer Science as well as seventeen years of experience in information technology.

    Dr. Addy was the leader of the Northrup Grumman team which primarily developed the ITN with the assistance of personnel from the CSE program itself. Mr. Ellis, Mr. Bankirer and Mr. Doolittle participated in the development of the ITN as well. Mr. Bankirer and Mr. Doolittle had previously been

    evaluators in other procurements for Federal and State agencies prior to joining the CSE program. Mr. Esser is the Chief of the Bureau of Information Technology at the Department of Highway Safety and Motor Vehicles and has experience in similar, large computer system procurements at that agency. The evaluation team selected by the Department thus has extensive experience in computer technology, as well as knowledge of the requirements of the subject system.

  8. The Department provided training regarding the evaluation process to the evaluators as well as a copy of the ITN, the Source Selection Plan and the Source Selection Team Reference Guide. Section 6 of the Source Selection Team Reference Guide entitled "Scoring Concepts" provided guidance to the evaluators for scoring proposals. Section 6.1 entitled "Proposal Evaluation Specification in ITN Section 8" states:

    Section 8 of the ITN describes the method by which proposals will be evaluated and scored. SST evaluators should be consistent with the method described in the ITN, and the source selection process documented in the Reference Guide and the SST tools are designed to implement this method.


    All topics that are assigned to an SST evaluator should receive at the proper time an integer score between 0 and 10 (inclusive). Each topic is also assigned a weight factor that is multiplied by the given score in order to place a greater or lesser emphasis on specific topics. (The PES workbook is already set to perform this multiplication upon entry of the score.)


    Tables 8-2 through 8-6 in the ITN Section 8 list the topics by which the proposals will be scored along with the ITN reference and evaluation and scoring criteria for each topic. The ITN reference points to the primary ITN section that describes the topic. The evaluation and scoring criteria list characteristics that should be used to affect the score negatively or positively. While these characteristics should be used by each SST evaluator, each evaluator is free to emphasize each characteristic more or less than any other characteristic. In addition, the characteristics are not meant to be inclusive, and evaluators may consider other characteristics that are not listed . . . . (Emphasis supplied).


    The preponderant evidence demonstrates that all the evaluators followed these instructions in conducting their evaluations and none used a criterion that was not contained in the ITN, either expressly or implicitly.
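    For illustration only, the weighting mechanism described in the Reference Guide can be restated arithmetically; the weight and score used below are hypothetical and are not drawn from the ITN or from any evaluator's actual scoring:

\[
\text{weighted points} = \text{score} \times \text{weight}, \qquad \text{e.g., } 6 \times 3 = 18 \text{ out of a possible } 10 \times 3 = 30 .
\]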

    Scoring Method:


  9. The ITN used a 0 to 10 scoring system. The Source Selection Team Guide required that the evaluators use whole integer scores. They were not required to start at "7," which was the average score necessary to achieve a passing 150 points, and then to score up or down from 7. The Department also did not provide guidance to the evaluators regarding a relative value of any score, i.e., what is a "5" as opposed to a "6" or a "7." There is no provision in the ITN which establishes a baseline score or starting point from which the evaluators were required to adjust their scores.

  10. The procurement development team had decided to give very little structure to the evaluators as they wanted to have each evaluator score based upon his or her understanding of what was in the proposal. Within the ITN the development team could not sufficiently characterize every potential requirement, in the form that it might be submitted, and provide the consistency

    of scoring that one would want in a competitive environment. This open-ended approach is a customary method of scoring, particularly in more complex procurements, in which generally less guidance is given to evaluators. Precise guidance regarding the relative value of any score, the imposition of a baseline score or starting point from which evaluators were required to adjust their scores, instruction as to the weighting of scores, and other indicia of precise structure would be more appropriate where the evaluators themselves were not sophisticated, trained and experienced in the type of computer system desired and in the field of information technology and data retrieval generally. The evaluation team, however, was shown to be experienced and trained in information technology and data retrieval and experienced in complex computer system procurement.

  11. Mr. Barker is the former Bureau Chief of Procurement for the Department of Management Services. He has 34 years of procurement experience and has participated in many procurements for technology systems similar to CAMS CE. He established that the scoring system used by the Department at this initial stage of the procurement process is a common method. It is customary to leave the numerical value of scores to the discretion of the evaluators based upon each evaluator's experience and review of the relevant documents. According wider discretion to

    evaluators in such a complex procurement process tends to produce more objective scores.

  12. The evaluators scored past corporate experience (references) and key staff according to the criteria in Table 8.2 of the ITN. The evaluators then used different scoring strategies within the discretion accorded to them by the 0 to 10 point scale. Mr. Bankirer established a midrange of 4 to 6 and added or subtracted points based upon how well the proposal addressed the CAMS CE requirements. Evaluator Ellis used 6 as his baseline and added or subtracted points from there.

    Dr. Addy evaluated the proposals as a composite without a starting point. Mr. Doolittle started with 5 as an average score and then added or subtracted points. Mr. Esser gave points for each attribute in Table 8.2, for key staff, and added the points for the score. For the corporate reference criterion, he subtracted a point for each attribute the reference lacked. As each of the evaluators used the same methodology for the evaluation of each separate vendor's proposal, each vendor was treated the same and thus no specific prejudice to KPMG was demonstrated.

    Corporate Reference Evaluation:


  13. KPMG submitted three corporate references: Duke University Health System (Duke), SSM Health Care (SSM), and Armstrong World Industries (Armstrong). Mr. Bankirer gave the Duke reference a score of 6, the SSM reference a score of 5, and the Armstrong reference a score of 7. Michael Strange, the KPMG Business Development Manager, believed that 6 was a low score.

    He contended that an average score of 7 was required to make the 150-point threshold for passage to the next level of the ITN consideration. Therefore, a score of 7 would represent minimum compliance, according to Mr. Strange. However, neither the ITN nor the Source Selection Team Guide identified 7 as a minimally compliant score. Mr. Strange's designation of 7 as a minimally compliant score is not provided for in the specifications or the scoring instructions.

  14. Mr. James Focht, Senior Manager for KPMG, testified that 6 was a low score, based upon the quality of the reference that KPMG had provided. However, Mr. Bankirer found that the Duke reference was actually a small-sized project, with few system development attributes, and that it did not include information regarding the number of records, the data base size involved, the estimated and actual costs, or the attributes of data base conversion. Mr. Bankirer determined that the Duke reference had little similarity to the CAMS CE procurement requirements and did not show training or data conversion as attributes of the Duke project, both of which are attributes necessary to the CAMS CE procurement. Mr. Strange and Mr. Focht admitted that the Duke reference did not specifically contain the element of data conversion and that, under Table 8.2, omission of this information would negatively affect the score. Mr. Focht admitted that there was no information in the Duke Health reference regarding the number of records and the data base size, all of which factors diminish the quality of Duke as a reference and thus the score accorded to it.

  15. Mr. Strange opined that Mr. Bankirer had erred in determining that the Duke project was a significantly small sized project since it only had 1,500 users. Mr. Focht believed that the only size criterion in Table 8.2 was the five million dollar cost threshold, and, because KPMG indicated that the project cost was greater than five million dollars, that KPMG had met the size criterion. Mr. Focht believed that evaluators had difficulty in evaluating the size of the projects in the references due to a lack of training. Mr. Focht was of the view that the evaluator should have been instructed to make "binary choices" on issues such as size. He conceded, however, that evaluators may have looked at other criteria in Table 8.2 to determine the size of the project, such as database size and number of users. However, the corporate references were composite scores by the evaluators, as the ITN did not require separate scores for each factor in Table 8.2. Therefore,

    Mr. Focht's focus on binary scoring for size, to the exclusion of other criteria, misstated the objective of the scoring process.

  16. The score given to the corporate references was a composite of all of the factors in Table 8.2, and not merely monetary value size. Although KPMG apparently contends that size, in terms of dollar value, is the critical factor in determining the score for a corporate reference, the vendor questions and answers provided at the pre-proposal conference addressed the issue of relevant criteria. Question 40 of the vendor questions and answers, Volume II, did not single out "project greater than five million dollars" as the only size factor or criterion.

    QUESTION: Does the state require that each reference provided by the bidder have a contract value greater than $5 million; and serve a large number of users; and include data conversion from a legacy system; and include training development?


    ANSWER: To get a maximum score for past corporate experience, each reference must meet these criteria. If the criteria are not fully met, the reference will be evaluated, but will be assigned a lower score depending upon the degree to which the referenced project falls short of these required characteristics.


    Therefore, the cost of the project is shown to be only one component of a composite score.

  17. Mr. Strange opined that Mr. Bankirer's comment regarding the Duke reference, "little development, mostly SAP implementation," was irrelevant. Mr. Strange's view was that the CAMS CE was not a development project and that Table 8.2 did not specifically list development as a factor on which proposals would be evaluated. Mr. Focht stated his belief that Mr. Bankirer's comment suggested that Mr. Bankirer did not understand the link between the qualifications in the reference and the nature of KPMG's proposal.

  18. Both Strange and Focht believe that the ITN called for a COTS/ERP solution. Mr. Focht stated that the ITN references a COTS/ERP approach numerous times. Although many of the references to COTS/ERP in the ITN also refer to development, Mr. Strange also admitted that the ITN was open to a number of approaches. Furthermore, both the ITN and the Source Selection Team Guide stated that the items in Table 8.2 are not all inclusive and that the evaluators may look to other factors in the ITN. Mr. Bankirer noted that there is no current CSE COTS/ERP product on the market. Therefore, some development will be required to adapt an off-the-shelf product to its intended use as a child support case management system.

  19. Mr. Bankirer testified that the Duke project was a small-sized project with little development. Duke has three sites while CSE has over 150 sites. Therefore, the Duke project is smaller than CAMS. There was no information provided in the KPMG submittal regarding data base size and number of records with regard to the Duke project. Mr. Bankirer did not receive the information he needed to infer a larger-sized project from the Duke reference.

  20. Mr. Esser also gave the Duke reference a score of 6.


    The reference did not provide the data base information required, which was the number of records in the data base and the number of "gigabytes" of disc storage to store the data, and there was no element of legacy conversion.

  21. Dr. Addy gave the Duke reference a score of 5. He accepted the dollar value as greater than five million dollars. He thought that the Duke Project may have included some data conversion, but it was not explicitly stated. The Duke customer evaluated training so he presumed training was provided with the Duke project. The customer ratings for Duke were high as he expected they would be, but similarity to the CAMS CE system was not well explained. He looked at size in terms of numbers of users, number of records and database size. The numbers that were listed were for a relatively small-sized project. There was not much description of the methodology used and so he gave it an overall score of 5.

  22. Mr. Doolittle gave the Duke reference a score of 6.


    He felt that it was an average response. He listed the number of users, the number of locations, and that it was on time and on budget, but found that there was no mention of data conversion, database size, or number of records (consistent with the other evaluators). A review of the evaluators' comments makes it apparent that KPMG's scores are more a product of a paucity of information provided by KPMG's corporate references than of a lack of evaluator knowledge of the material being evaluated.

  23. Mr. Ellis gave a score of 6 for the Duke reference.


    He used 6 as his baseline. He found the required elements but nothing more that, in his mind, justified raising the score above 6.

  24. Mr. Focht and Mr. Strange expressed the same concerns regarding Mr. Bankirer's "little development" comment for the SSM Health Care reference as they had expressed for the Duke reference. However, both Mr. Strange and Mr. Focht admitted that the reference provided no information regarding training. Mr. Strange admitted that the reference had no information regarding data conversion. Training and data conversion are criteria contained in Table 8.2. Mr. Strange also admitted that KPMG had access to Table 8.2 before the proposal was submitted and could have included the information in the proposal.

  25. Mr. Bankirer gave the SSM reference a score of 5. He commented that the SAP implementation was not relevant to what the Department was attempting to do with the CAMS CE system. CAMS CE does not have any materials management or procurement components, which were the functions served by the SAP components in the SSM reference project. Additionally, there was no training indicated in the SSM reference.

  26. Mr. Esser gave the SSM reference a score of 3. His comments were "no training provided, no legacy data conversion, project evaluation was primarily for SAP not KPMG". However, it was KPMG's responsibility in responding to the ITN to provide project information concerning a corporate reference in a clear manner rather than requiring that an evaluator infer compliance with the specifications. Mr. Focht believed that legacy data conversion could be inferred from the reference's description of the project. Mr. Strange opined that Mr. Esser's comment was inaccurate as KPMG installed SAP and made the software work. Mr. Esser gave the SSM reference a score of 3 because the reference described SAP's role, but not KPMG's role in the installation of the software. When providing information in the reference SSM gave answers relating to SAP to the questions regarding system capability, system usability, system reliability but did not state KPMG's role in the installation. SAP is a large enterprise software package. This answer created an impression of little KPMG involvement in the project.

  27. Dr. Addy gave the SSM reference a score of 6.


    Dr. Addy found that the size was over five million dollars and customer ratings were high except for a 7 for usability with reference to a "long learning curve" for users. Data conversion

    was implied. There was no strong explanation of similarity to CAMS CE. It was generally a small-sized project. He could reason some similarity into it, even though it was not well described in the submittal.

  28. Mr. Doolittle gave the SSM reference a score of 6.


    Mr. Doolittle noted, as positive factors, that the total cost of the project was greater than five million dollars and that it supported 24 sites and 1,500 users, as well as "migration from a mainframe." However, there were negative factors, such as training not being mentioned and a long learning curve for its users. Mr. Ellis gave a score of 6 for SSM, feeling that KPMG met all of the requirements but did not offer more than the basic requirements.

  29. Mr. Strange opined that Mr. Bankirer, Dr. Addy and Mr. Ellis (evaluators 1, 5 and 4) were inconsistent with each other in their evaluation of the SSM reference. He stated that this inconsistency showed a flaw in the evaluation process in that the evaluators did not have enough training to uniformly evaluate past corporate experience, thereby, in his view, creating an arbitrary evaluation process.

  30. Mr. Bankirer gave the SSM reference a score of 5, Ellis a score of 6, and Addy a score of 6. Even though the scores were similar, Mr. Strange contended that they gave conflicting comments regarding the size of the project.

    Mr. Ellis stated that the size of the project was hard to determine as the cost was listed as greater than five million dollars and the database size given, but the number of records was not given. Mr. Bankirer found that the project was low in cost and Dr. Addy stated that over five million dollars was a positive factor in his consideration. However, the evaluators looked at all of the factors in Table 8.2 in scoring each reference. Other factors that detracted from KPMG's score for the SSM reference were: similarity to the CAMS system not being explained, according to Dr. Addy; no indication of training (all of the evaluators); the number of records not being provided (evaluator Ellis); little development shown (Bankirer) and usability problems (Dr. Addy). Mr. Strange admitted that the evaluators may have been looking at other factors besides the dollar value size in order to score the SSM reference.

  31. Mr. Esser gave the Armstrong reference a score of 6.


    He felt that the reference did not contain any database information or cost data and that there was no legacy conversion shown. Dr. Addy also gave Armstrong a score of 6. He inferred that this reference had data conversion as well as training and the high dollar volume which were all positive factors. He could not tell, however, from the project description, what role KPMG actually had in the project. Mr. Ellis gave a score of 7 for the Armstrong reference stating that the Armstrong reference

    offered more information regarding the nature of the project than had the SSM and Duke references. Mr. Bankirer gave KPMG a score of 7 for the Armstrong reference. He found that the positive factors were that the reference had more site locations and offered training but, on the negative side, was not specific regarding KPMG's role in the project.

  32. Mr. Focht opined that the evaluators did not understand the nature of the product and services the Department was seeking to obtain as the Department's training did not cover the nature of the procurement and the products and services DOR was seeking. However, when he made this statement he admitted he did not know the evaluators' backgrounds. In fact, Bankirer, Ellis, Addy and Doolittle were part of a group that developed the ITN and clearly knew what CSE was seeking to procure.

  33. Further, Mr. Esser stated that he was familiar with COTS and described it as a commercial off-the-shelf software package. Mr. Esser explained that an ERP solution or Enterprise Resource Plan is a package that is designed to do a series of tasks, such as produce standard reports and perform standard operations. He did not believe that he needed further training in COTS/ERP to evaluate the proposals. Mr. Doolittle was also familiar with COTS/ERP and believed, based on the amount of funding, that it was a likely response to the ITN.

  34. Dr. Addy's doctoral dissertation research was in the area of software re-use. COTS is one of the components that comprise a development activity and re-use. He became aware during his research of how COTS packages are used in software engineering. He has also been exposed to ERP packages. ERP is only one form of a COTS package.

  35. In regard to the development of the ITN and the expectations of the development team, Dr. Addy stated that they were amenable to any solution that met the requirements of the ITN. They fully expected the compliance solutions were going to be comprised of mostly COTS and ERP packages. Furthermore, the ITN in Section 1.1, on page 1-2 states, ". . . FDOR will consider an applicable Enterprise Resource Planning (ERP) or Commercial Off the Shelf (COTS) based solution in addition to custom development." Clearly, this ITN was an open procurement and to train evaluators on only one of the alternative solutions would have biased the evaluation process.

  36. Mr. Doolittle gave each of the KPMG corporate references a score of 6. Mr. Strange and Mr. Focht questioned the appropriateness of these scores as the corporate references themselves gave KPMG average ratings of 8.3, 8.2 and 8.0. However, Mr. Focht admitted that Mr. Doolittle's comments regarding the corporate references were a mixture of positive and negative comments. Mr. Focht believed, however, that as the

    reference corporations considered the same factors in providing ratings on the reference forms, it was inconsistent for Mr. Doolittle to separately evaluate the same factors that the corporations had already rated. However, there is no evidence in the record that KPMG provided Table 8.2 to the companies completing the reference forms or that the companies consulted the table when completing their reference forms. Therefore, KPMG did not prove that it had taken all measures available to it to improve its scores. Moreover, Mr. Focht's criticism would impose a requirement on Mr. Doolittle's evaluation which was not supported by the ITN. Mr. Focht admitted that there were no criteria in the ITN which limited the evaluators' discretion in scoring to the ratings given to the corporate references by those corporate reference customers.

  37. All of the evaluators used Table 8.2 as their guide for scoring the corporate references. As part of his evaluation, Dr. Addy looked at the methodology used by the proposers in each of the corporate references to implement the solution for that reference company. He was looking at methodology to determine its degree of similarity to CAMS CE. While methodology is not specifically listed in Table 8.2 as a measure of similarity to CAMS, Table 8.2 states that its list is not all-inclusive. Clearly, methodology is a measure of similarity and therefore is not an arbitrary criterion. Moreover, as Dr. Addy used the same

    process and criteria in evaluating all of the proposals there was no prejudice to KPMG by use of this criterion since all vendors were subjected to it.

  38. Mr. Strange stated that KPMG appeared to receive lower scores for SAP applications than other vendors. For example, evaluator 1 gave a score of 7 to Deloitte's reference for Suntax. Suntax is an SAP implementation. It is difficult to draw comparisons across vendors, yet the evaluators consistently found that KPMG's references lacked key elements such as data conversion, information on starting and ending costs, and information on database size. All of these missing elements contributed to a reduction in KPMG's scores. Nevertheless, KPMG received average scores of 5.5 for Duke, 5.7 for SSM and 6.3 for Armstrong, compared with the score of 7 received by Deloitte for Suntax. There is a gap of only 0.7 to 1.5 points between Deloitte's and KPMG's scores for SAP implementations, despite the deficient information within KPMG's corporate references.

    Key Staff Criterion:


  39. The proposals contain a summary of the experience of key staff and attached résumés. KPMG's proposed key staff person for Testing Lead was Frank Traglia. Mr. Traglia's summary showed that he had 25 years' experience in each of the areas of child support enforcement, information technology, project management and testing. Strange and Focht admitted that

    Traglia's résumé did not specifically list any testing experience. Mr. Focht further admitted that it was not unreasonable for evaluators to give the Testing Lead a lower score due to the lack of specific testing information in Traglia's résumé. Mr. Strange explained that the résumé was from a database of résumés. The summary sheet, however, was prepared by those KPMG employees who prepared the proposal. All of the evaluators resolved the conflicting information between the summary sheet and the résumé by crediting the résumé as more accurate. Each evaluator thought that the résumé was more specific and expected to see specific information regarding testing experience on the résumé for someone proposed as the Testing Lead person.

  40. Evaluators Addy and Ellis gave the Testing Lead criterion scores of 4 and 5. Mr. Ron Vandenberg (evaluator 8) gave the Testing Lead a score of 9. Mr. Vandenberg was the only evaluator to give the Testing Lead a high score. The other evaluators gave the Testing Lead an average score of 4.2. The Vandenberg score thus appears anomalous.

  41. All of the evaluators gave the Testing Lead a lower score because the résumé did not specifically list testing experience.

    Dr. Addy found that the summary sheet listed 25 years of experience in child support enforcement, information technology, project management, and system testing. As he did not

    believe this person had 100 years of experience, he assumed those experience categories ran concurrently. A strong candidate for Testing Lead should demonstrate a combination of testing experience, education and certification, according to Dr. Addy. Mr. Doolittle also expected to see testing experience mentioned in the résumé. When evaluating the Testing Lead,

    Mr. Bankirer first looked at the team skills matrix and found it interesting that testing was not one of the categories of skills listed for the Testing Lead. He then looked at the summary sheet and résumé from Mr. Traglia. He gave a lower score to Traglia as he thought that KPMG should have put forward someone with demonstrable testing experience.

  42. The evaluators gave a composite score to key staff based on the criteria in Table 8.2. In order to derive the composite score that he gave each staff person, Mr. Esser created a scoring system wherein he awarded points for each attribute in Table 8.2 and then added the points together to arrive at a composite score. Among the criteria he rated, Mr. Esser awarded points for CSE experience. Mr. Focht and

    Mr. Strange contended that, since the term CSE experience is not actually listed in Table 8.2, Mr. Esser was incorrect in awarding points for CSE experience in his evaluation.

  43. Table 8.2 does refer to relevant experience. There is no specific definition provided in Table 8.2 for relevant

    experience. Mr. Focht stated that relevant experience is limited to COTS/ERP experience, system development, life cycle and project management methodologies. However, these factors are also not listed in Table 8.2. Mr. Strange limited relevance to experience in the specific role for which the key staff person was proposed. This is a limitation that also is not imposed by Table 8.2. CSE experience is no more or less relevant than the factors posited by KPMG as relevant experience. Moreover, KPMG included a column for CSE experience in its own descriptive table of key staff. Inclusion of this information in its proposal demonstrates that KPMG itself must have believed CSE experience was relevant at the time it submitted its proposal.

  44. Mr. Strange held the view that, at the bidders' conference, in a reply to a vendor question, the Department representative stated that CSE experience was not required; therefore, in his view, Mr. Esser could not use such experience to evaluate key staff. Question 47 of the Vendor Questions and Answers, Volume 2, stated:

    QUESTION: In scoring the Past Corporate Experience section, Child Support experience is not mentioned as a criterion. Would the State be willing to modify the criteria to include at least three Child Support implementations as a requirement?

    ANSWER: No. However, a child support implementation that also meets the other characteristics (contract value greater than

    $5 million, serves a large number of users, includes data conversion from a legacy system and includes training development) would be considered "similar to CAMS CE."


    The Department's statement involved the scoring of corporate experience, not key staff. It was inapplicable to Mr. Esser's scoring system.

  45. Mr. Esser gave the Training Lead a score of 1.


    According to Esser, the Training Lead did not have a ten-year résumé, for which he deducted one point. The Training Lead had no specialty certification or extensive experience and had no child support experience and received no points. Mr. Esser added one point for the minimum of four years of specific experience and one point for the relevance of his education.

  46. Mr. Esser gave the Project Manager a score of 5. The Project Manager had a ten-year résumé and required references and received a point for each. He gave two points for exceeding the minimum required informational technology experience. The Project Manager had twelve years of project management experience for a score of one point, but lacked certification, a relevant education and child support enforcement experience for which he was accorded no points.
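    Restated for illustration, Mr. Esser's additive tally for the Project Manager, using only the point values he described, is:

\[
\underbrace{1}_{\text{ten-year r\'esum\'e}} + \underbrace{1}_{\text{references}} + \underbrace{2}_{\text{IT experience}} + \underbrace{1}_{\text{PM experience}} + \underbrace{0}_{\text{certification, education, CSE}} = 5 .
\]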

  47. Mr. Esser gave the Project Liaison person a score of 1. According to Mr. Focht, the Project Liaison should have received a higher score since she has a professional history of having worked for the state technology office. Mr. Esser, however, stated that she did not have four years of specific experience and did not have extensive experience in the field, although she had a relevant education.

  48. Mr. Esser gave the Software Lead person a score of 4.


      The Software Lead, according to Mr. Focht, had a long set of experiences with implementing SAP solutions for a wide variety of different clients and should have received a higher score. Mr. Esser gave a point each for having a ten-year résumé, four years of specific experience in software, extensive experience in this area and relevant education.

  49. According to Mr. Focht, the Database Lead had experience with database pools, including the Florida Retirement System, and should have received more points. Mr. Strange concurred with Mr. Focht in stating that Esser had given low scores to key staff and stated that the staff had good experience, which should have generated more points.

      Mr. Strange believed that Mr. Esser's scoring was inconsistent but provided no basis for that conclusion.

  50. Other evaluators also gave key staff positions scores of less than 7. Dr. Addy gave the Software Lead person a score of 5. The Software Lead had 16 years of experience and SAP development experience as positive factors but had no

      development lead experience. He had a Bachelor of Science and a Master of Science in Mechanical Engineering and a Master's in Business Administration, which were not good matches in education for the role of a Software Lead person.

  51. Dr. Addy gave the Training Lead person a score of 5. The Training Lead had six years of consulting experience, a background in SAP consulting and some training experience, but did not have certification or education in training. His educational background also was electrical engineering, which is not a strong background for a training person. Dr. Addy gave the subcontractor managers a score of 5. Two of the subcontractors did not list managers at all, which detracted from the score. Mr. Doolittle likewise scored the Training Lead; he believed that, based on his experience and training, it was an average response.

  52. Table 8.2 contained an item under which a proposer could have points deducted from a score if the key staff person's references were not excellent. The Department did not check references at this stage in the evaluation process. As a result, the evaluators simply did not consider that item when scoring. No proposer's score was adversely affected thereby.

  53. KPMG contends that checking references would have given the evaluators greater insight into the work done by those individuals and their relevance and capabilities in the project

      team. Mr. Focht admitted, however, that any claimed effect on KPMG's score is conjectural. Mr. Strange stated that without reference checks information in the proposals could not be validated but he provided no basis for his opinion that reference checking was necessary at this preliminary stage of the evaluation process. Dr. Addy stated that the process called for checking references during the timeframe of oral presentations. They did not expect the references to change any scores at this point in the process. KPMG asserted that references should be checked to ascertain the veracity of the information in the proposals. However, even if the information in some other proposal was inaccurate it would not change the outcome for KPMG. KPMG would still not have the required number of points to advance to the next evaluation tier.

    Divergency in Scores:


  54. The Source Selection Plan established a process for resolving divergent scores. Any item receiving scores with a range of 5 or more was determined to be divergent. The plan provided that the Coordinator identify divergent scores and then report to the evaluators that there were divergent scores for that item. The Coordinator was precluded from telling an evaluator whether his score was the divergent one, i.e., the highest or lowest score. Evaluators would then review that item, but were not required to change their scores. The purpose

      of the divergent score process was to have evaluators review their scores to see if there were any misperceptions or errors that skewed the scores. The team wished to avoid having any influence on the evaluators' scores.
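    As an illustration of the five-point divergence rule (the scores below are hypothetical and are not taken from the evaluation record):

\[
\text{scores} = \{9,\ 6,\ 5,\ 4,\ 3\}, \qquad \max - \min = 9 - 3 = 6 \ge 5 ,
\]

    so the item would be flagged as divergent and returned to the evaluators for review.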

  55. Mr. Strange testified that the Department did not follow the divergent score process in the Source Selection Plan because the coordinator did not tell the evaluators why the scores were divergent. Mr. Strange stated that the evaluator should have been informed which scores were divergent. The Source Selection Plan merely instructed the coordinator to inform the evaluators of the reason why the scores were divergent. Inherently, scores were divergent if there was a five-point score spread; the reason for the divergence was self-explanatory.

  56. The evaluators stated that they scored the proposals, submitted the scores, and each received an e-mail from Debbie Stephens informing him that there were divergent scores and that they should consider re-scoring. None of the evaluators ultimately changed their scores. Mr. Esser's scores were the lowest of the divergent scores, but he did not re-score his proposals as he had spent a great deal of time on the initial scoring and felt his scores to be valid. Neither Mr. Focht nor Mr. Strange, KPMG's witnesses, provided more than speculation regarding the effect of the divergent scores on KPMG's ultimate

      score and any role the divergent scoring process may have had in KPMG not attaining the 150 point passage score.

      Deloitte - Suntax Reference:


  57. Susan Wilson, a Child Support Enforcement employee connected with the CAMS project, signed a reference for Deloitte Consulting regarding the Suntax System. Mr. Focht was concerned that the evaluators were influenced by her signature on the reference form. Mr. Strange further stated that having someone who is heavily involved in the project sign a reference did not appear to be fair. He was not able to state any positive or negative effect on KPMG by Wilson's reference for Deloitte, however.

  58. Evaluator Esser has met Susan Wilson but has had no significant professional interaction with her. He could not recall anything that he knew about Ms. Wilson that would favorably influence him in scoring the Deloitte reference.

      Dr. Addy also was not influenced by Wilson. Mr. Doolittle has only worked with Wilson for a very short time and did not know her well. He has also evaluated other proposals where department employees were a reference and was not influenced by that either. Mr. Ellis has only known Wilson from two to four months. Her signature on the reference form did not influence him either positively or negatively. Mr. Bankirer had not known Wilson for a long time when he evaluated the Suntax reference.

      He took the reference at face value and was not influenced by Wilson's signature. It is not unusual for someone within an organization to create a reference for a company who is competing for work to be done for the organization.

      CONCLUSIONS OF LAW


    59. The Division of Administrative Hearings has jurisdiction of the subject matter of, and the parties to, this proceeding. Sections 120.569 and 120.57(1)(3), Florida Statutes (2001). A bid protest proceeding is designed to:

      [D]etermine whether the agency's proposed action is contrary to the agency's governing statutes, the agency's rules or policies, or the bid or proposal specifications. The standard of proof for such proceeding shall be whether the proposed agency action was clearly erroneous, contrary to competition, arbitrary, or capricious.


      Section 120.57(3)(f), Florida Statutes (2001).


    60. While Section 120.57(3)(f), Florida Statutes, describes these proceedings as de novo, the courts have defined "de novo" for the purposes of a protest to a competitive procurement as a "form of intra-agency review. The Administrative Law Judge may receive evidence as with any formal hearing under Section 120.57(1), Florida Statutes, but the object of the proceeding is to evaluate the action taken by the agency." State Contracting & Engineering Corp. v. Dep't of Transportation, 709 So. 2d 607, 609 (Fla. 1st DCA 1998), citing Intercontinental Properties, Inc. v. State Dep't of Health and Rehabilitative Svcs., 606 So. 2d 308 (Fla. 3d DCA 1992).

    61. The party initiating a competitive procurement protest bears the burden of proof. Section 120.57(3)(f), Florida Statutes (2001). Findings of Fact must be based upon a preponderance of the evidence. Section 120.57(1)(j), Florida Statutes (2001).

    62. The standard of proof in a proceeding such as this concerns whether the proposed agency action was clearly erroneous, contrary to competition, arbitrary or capricious. Section 120.57(3)(f), Florida Statutes.

    63. A capricious action is one taken without thought or reason, or irrationally. An arbitrary decision is one not supported by facts or logic. Agrico Chemical Co. v. Dep't of Environmental Regulation, 365 So. 2d 759, 763 (Fla. 1st DCA 1978). A decision is clearly erroneous when it is unsupported by substantial evidence, contrary to the clear weight of the evidence, or induced by an erroneous view of the law. Black's Law Dictionary, Rev. 4th Ed. (1968).


    64. An act is contrary to competition when it offends the purpose of competitive bidding. That purpose has been articulated as follows:

      [T]o protect the public against collusive contracts; to secure fair competition upon equal terms to all bidders; to remove not

      only collusion but temptation for collusion and opportunity for gain at public expense; to close all avenues to favoritism and fraud in its various forms; to secure the best values for the [public] at the lowest possible expense; and to afford an equal advantage to all desiring to do business with the [government], by affording an opportunity for an exact comparison of bids.


      Wester v. Belote, 103 Fla. 976, 138 So. 721, 723-24 (Fla. 1931); Harry Pepper & Assoc., Inc. v. City of Cape Coral, 352 So. 2d 1190, 1192 (Fla. 2d DCA 1977).

    65. The CAMS CE ITN has a two-tier evaluation process.


      After a finding of initial responsiveness, the evaluators scored the key proposal topics for the remaining proposals, including KPMG. The key proposal topics were past corporate experience, for which the proposers submitted references from prior projects, and key staff, for which the proposers submitted staff résumés. To advance to the next evaluation tier, a proposer must score at least 150 of 230 points. KPMG scored 140 points and was eliminated from the next round of evaluation and protested that elimination.

    66. The Petitioner objected to the scoring system used in the evaluation as arbitrary and capricious. It had two objections to the scoring system: that evaluators were given too much discretion and that there was no established baseline score. The ITN established a 0-10 scoring system, with 0 being poor and 10 being excellent. Within the parameters of that

      scale the evaluators were not given any further guidance regarding the meaning of a "5" versus a "7." Nor were the evaluators given a scoring methodology, such as to start from 0 and add points or start from 5 and add or subtract points.

      Hence, each evaluator developed his own scoring methodology. The Petitioner argued that the lack of a consistent scoring methodology made the scoring process arbitrary.

      However, each evaluator applied his own scoring methodology consistently in evaluating every proposal. Therefore, all the proposals were evaluated by the same criteria. The scoring methodology was not arbitrarily or capriciously applied against only the Petitioner, and the criteria used were explicitly provided for or implicitly allowed by the specifications of the ITN.

    67. In the evaluation of complex procurements the established and better practice is to employ a scoring system that permits the evaluators to use their own knowledge and experience to evaluate the proposals. The evaluators chosen by the Department had knowledge and skills in computer systems and knowledge of necessary functionalities for a CSE Case Management System. As this is a complex procurement which generated complex and highly technical responses, it would be difficult and counter-productive to set out every possible anticipated computer system and devise a specific scoring system to cover all of these solutions in the ITN. Such an approach is the

      direct opposite of the team's intent in developing the ITN. The development team was trying to have an open-ended approach to solutions in order to generate innovative responses. It is impossible to cover the full range of acceptable responses in specific scoring standards without inadvertently leaving out relevant criteria.

      Therefore, the more cogent, rational approach is to trust the discretion of the evaluators, who have relevant skill and experience, to determine the merits of highly complex proposals such as the CAMS CE system, rather than to impose highly detailed, inflexible standards on them.

    68. Table 8.2 established the basic components that a complete proposal should contain. The Petitioner asserted that the evaluators should have received more training regarding appropriate scoring of the criteria. However, given the nature of the CAMS CE ITN, in which the nature of the computer system solution was left open to the proposer, and the complexity of the ITN and proposals, it would be difficult to provide more specific training without creating bias for one solution over another or inadvertently leaving out or precluding another solution.

    69. KPMG did not protest the specifications of the ITN within the 72-hour period provided by Section 120.57(3)(f), Florida Statutes. Even though KPMG asserted that the instant case does not involve a protest of the specifications, clearly

      if, as KPMG contends, the Department established scoring standards through training of evaluators, the scoring standards would be de facto specifications. To the extent that KPMG is challenging a purported lack of sufficiently specific scoring standards, or of training of the evaluators in scoring methodology, it would appear that KPMG is making an indirect attack on the specifications, or the purported lack thereof, the time for which has passed and has been waived.

    70. KPMG asserted that the evaluators should have started at a baseline of 7 and scored proposals up or down from there. It asserted that an average of 7 is a minimally compliant score. It based this assertion on the passage score of 150 points out of 230 points for a vendor to continue in the evaluation process. In order to score 150 points, a proposer must score an average of 7 on the key proposal topics. KPMG's underlying assumption was that the Department was seeking to have all average proposals advance to the next evaluation level; however, it provided no evidence that the ITN, the Source Selection Plan or the Source Selection Training Guide showed that the Department had this goal. To the contrary, as discussed above, those plans and guides as well as the ITN purposely avoided such scoring restrictions. The evidence demonstrated that CSE was seeking an optimal solution rather than an average solution; otherwise the

      cut-off line for key proposal topics would have been the halfway mark or 115 points instead of 150 points.

    20. KPMG asserted that it was unfairly evaluated due to the evaluators' ignorance of the solutions the ITN was designed to procure. Specifically, it posited a COTS/ERP solution, which is a commercial software package, rather than a system built for CSE compliance enforcement. The ITN stated that either solution was acceptable. KPMG ignored the stated intent of the ITN to accept either solution and instead extrapolated an intent by the Department to seek a COTS/ERP solution from the ITN's references to such a solution, although Mr. Focht admitted that many of those references also referred to custom development. Mr. Addy stated that the ITN development team expected to see solutions which were a mixture of COTS/ERP products and customization. Other than speculation, KPMG did not offer any preponderant evidence that the ITN sought a purely COTS/ERP solution.

    21. KPMG based its premise that the evaluators did not understand a COTS/ERP solution on the evaluators' comments that the KPMG references failed to show any development. As there is no commercial product currently on the market which is a CSE Case Management System, some custom development will be required. Indeed, if such a system existed, there would be no need for the instant ITN, as the Department would simply purchase that product through a sole-source procurement. Mr. Strange stated that, in his opinion, the evaluators' use of the term "development," rather than the term "customization," demonstrated that they did not understand COTS/ERP. The Department, however, offered unrefuted evidence of the evaluators' knowledge of COTS/ERP products.

    22. The backgrounds of the evaluators belied the assertion of their lack of understanding of the ITN or proposed solutions. Dr. Addy has a Doctorate in Information Technology. His dissertation topic was re-usable software, of which COTS/ERP is a sub-set. Dr. Addy wrote the ITN and understood the types of solutions that vendors were likely to posit in response to the ITN. Mr. Doolittle, Mr. Ellis and Mr. Bankirer knew the functions they were seeking from a Case Management System. All testified that they were familiar with COTS/ERP solutions. Mr. Esser, as the Head of Information Technology at the Department of Highway Safety and Motor Vehicles, has participated in many information technology procurements and was also familiar with COTS/ERP.

    23. KPMG asserts that the evaluators needed more training in COTS/ERP solutions. This position is not borne out by the evidence, particularly when it is considered that KPMG's own allegations contradict it: KPMG complained that other, similar SAP applications, which were also COTS/ERP, had received higher scores. KPMG did not assert that any customized systems received higher scores. The evaluators thus appeared to understand COTS/ERP; they simply believed that KPMG's references were deficient.

    24. KPMG asserted that the evaluators' scores should have been aligned with the ratings given to KPMG by its corporate references. It stated that, because the references had already rated the project, the evaluators could not assign a different score to it. The Petitioner, however, offered no basis for this assertion. The evaluators noted the ratings given by the references but expected high ratings, because the corporate references had been chosen by the proposing vendor, KPMG, itself, and a vendor would presumably choose references that would rate it highly. There is no requirement in the ITN that instructed evaluators to conform their evaluation of a reference to the ratings provided by that reference company.

    25. KPMG asserted that the evaluators did not properly score its proposal with regard to the size of the project. Table 8.2 stated that any project of less than five million dollars would negatively affect the score of that proposer. KPMG asserted that the scoring for this factor was binary: since KPMG's references said the project was greater than five million dollars, KPMG should, in its view, have received the full allotment of points. It further asserted that cost is the only measure of size according to Table 8.2.

    26. KPMG's assertions ignore the fact that the score was for the entire reference and not just for cost. Even if the evaluators gave KPMG full credit for having a project that cost over five million dollars, the reference score was negatively affected by other factors, such as legacy database conversion, the lack of information regarding the size of the database, and training.

    27. Several evaluators commented that the Duke and SSM references were small-sized projects. KPMG stated that these comments were inaccurate because cost is the only measure of size and the Duke and SSM references met the cost criterion. However, the evaluators were also looking at project size in relation to CAMS CE. Therefore, the evaluators looked at database size, the number of records and the number of sites. The evaluators found that some of this information was missing and that the information in the references reflected a project that was smaller and less complex than CAMS. These factors negatively affected KPMG's scores. KPMG also disputed the scoring of the Testing Lead person. All of the evaluators but one gave the Testing Lead a low score. The summary sheet for the Testing Lead listed 25 years of experience in testing, but the résumé listed no specific testing experience. Several years of specific testing experience would be expected to appear on the résumé of the person selected as a Testing Lead for a project as large as CAMS CE. The evaluators gave the résumé more credence than the summary sheet: the résumé provided the specific description of the person's experience and was prepared by that person, whereas the experience summary sheet was prepared by the KPMG employees who were working on the project and who prepared the proposal.

    28. KPMG challenged Esser's scoring of key staff. It stated that Esser gave lower scores to key staff than the other evaluators did and that Esser gave points to key staff for CSE experience. KPMG stated that CSE experience was not a criterion for evaluation according to Table 8.2 of the ITN. However, Table 8.2 stated that the listed criteria were not exhaustive. Table 8.2 asked that the evaluators consider relevant experience, but did not specifically define relevant experience. KPMG's complaint appears to be another indirect challenge to the specifications, i.e., that the specifications in Table 8.2 did not provide specific instruction to evaluators regarding the definition of relevant experience. KPMG waived the right to such a challenge by not protesting the specifications within the appropriate 72-hour period after their issuance.

    29. CAMS CE is a CSE project; logically, CSE experience is relevant. KPMG's own proposal listed CSE experience in its team skills matrix. Clearly, the KPMG employees who prepared the proposal believed that it was relevant experience. When asked to define relevant experience from their own understanding, KPMG's witnesses also listed criteria that were not in Table 8.2. It is apparent that Table 8.2 gave the evaluators discretion in scoring relevant experience.

    30. KPMG further claimed that, in its Vendor Questions and Answers, the Department stated that CSE experience was not required; it was therefore unfair, according to KPMG, to allow Esser to use this criterion. However, that specific Vendor Question and Answer referred to references for past corporate experience, not to key staff résumés, and did not apply to Esser's evaluation of key staff. Esser employed the same criterion when evaluating all proposals. The criterion was not arbitrarily or capriciously applied.

    31. KPMG disputed Esser's scores because his scores were uniformly lower than those of the other evaluators. However, KPMG did not provide any evidence of bias against KPMG on Esser's part. Esser provided a description of the rationale he employed in determining each score. That rationale was uniformly applied and had no inherent bias against KPMG or its proposal. KPMG has not met its burden of proof to sustain overturning Esser's scores.

    32. KPMG stated that the entire scoring process for key staff was flawed because Table 8.2 had a criterion of "references not excellent," which would negatively affect the scores. The Department did not check references at this stage in the evaluation process. Because a less-than-excellent reference detracted from a score, KPMG would not have gained points from the reference checks. KPMG witnesses speculated that, if the Department had checked references, some information missing from résumés could have been filled in and the key staff persons would have received higher scores. However, KPMG failed to offer any concrete examples. The evidence also demonstrated that references were not checked for any proposals at this stage in the evaluation process. There is no evidence that the Department's failure to check references at this stage was arbitrary or capricious or created any competitive disadvantage for KPMG.

    33. KPMG challenged the Department's execution of the divergent score process as not following the procedure laid out in the Source Selection Plan. The Plan stated that the coordinator must inform the evaluator of the reason for the score divergence. KPMG interpreted that requirement to mean that all evaluators must be informed which scores were divergent. The team developing the ITN wished to avoid disseminating information about which scores were divergent. The purpose of the divergent scoring process was to trigger a review of the scoring of certain items in order to discover errors, not to influence evaluators to change their scores to conform to a norm.

    34. Intervenor Deloitte submitted a reference for the Suntax project, which was signed by Susan Wilson, a CSE employee. KPMG contended that Wilson's signature on this reference created an unfair advantage for Deloitte. Other than speculation, however, it offered no evidence that any undue influence had been exerted by Wilson. To the contrary, the evaluators had little or no acquaintance with Wilson and were not affected by her involvement in the Suntax project.

RECOMMENDATION


Having considered the foregoing Findings of Fact, Conclusions of Law, the evidence of record and the pleadings and arguments of the parties, it is, therefore,

RECOMMENDED that a final order be entered by the State of Florida Department of Revenue upholding the proposed agency action which disqualified KPMG from further participation in the evaluation process regarding the subject CAMS CE Invitation to Negotiate.

DONE AND ENTERED this 26th day of September, 2002, in Tallahassee, Leon County, Florida.


P. MICHAEL RUFF
Administrative Law Judge
Division of Administrative Hearings
The DeSoto Building
1230 Apalachee Parkway
Tallahassee, Florida 32399-3060
(850) 488-9675   SUNCOM 278-9675
Fax Filing (850) 921-6847
www.doah.state.fl.us


Filed with the Clerk of the Division of Administrative Hearings this 26th day of September, 2002.


COPIES FURNISHED:


Cindy Horne, Esquire
Earl Black, Esquire
Department of Revenue
Post Office Box 6668
Tallahassee, Florida 32399-0100


Robert S. Cohen, Esquire
D. Andrew Byrne, Esquire
Cooper, Byrne, Blue & Schwartz, LLC
1358 Thomaswood Drive
Tallahassee, Florida 32308


Seann M. Frazier, Esquire
Greenburg, Traurig, P.A.
101 East College Avenue
Tallahassee, Florida 32302


Bruce Hoffmann, General Counsel
Department of Revenue
204 Carlton Building
Tallahassee, Florida 32399-0100

James Zingale, Executive Director
Department of Revenue
104 Carlton Building
Tallahassee, Florida 32399-0100


NOTICE OF RIGHT TO SUBMIT EXCEPTIONS


All parties have the right to submit written exceptions within 10 days from the date of this Recommended Order. Any exceptions to this Recommended Order should be filed with the agency that will issue the Final Order in this case.


Docket for Case No: 02-001719BID
Issue Date Proceedings
Oct. 15, 2002 Final Order filed.
Sep. 26, 2002 Recommended Order issued (hearing held June 24 and 26, 2002) CASE CLOSED.
Sep. 26, 2002 Recommended Order cover letter identifying hearing record referred to the Agency sent out.
Aug. 02, 2002 (Proposed) Proposed Recommended Order (filed by Petitioner via facsimile).
Aug. 02, 2002 Deloitte Consulting, L.P.'s Proposed Recommended Order filed.
Aug. 01, 2002 Respondent`s Proposed Recommended Order filed.
Jul. 15, 2002 Transcript filed.
Jun. 26, 2002 CASE STATUS: Hearing Held; see case file for applicable time frames.
Jun. 20, 2002 (Joint) Prehearing Stipulation (filed via facsimile).
Jun. 13, 2002 Petition to Intervene by Deloitte Consulting, Inc. (filed via facsimile).
May 22, 2002 Notice of Taking Deposition Duces Tecum, M. Strange, J. Focht(2) filed.
May 10, 2002 Order Granting Continuance and Re-scheduling Hearing issued (hearing set for June 24 and 26, 2002; 10:00 a.m.; Tallahassee, FL).
May 07, 2002 Amended Notice of Proceedings filed by Respondent.
May 06, 2002 Motion for Continuance (filed by Respondent via facsimile).
May 03, 2002 Order of Pre-hearing Instructions issued.
May 03, 2002 Notice of Hearing issued (hearing set for May 13, 2002; 10:00 a.m.; Tallahassee, FL).
May 01, 2002 Notice of Proceedings filed.
May 01, 2002 Formal Written Protest filed.
May 01, 2002 Agency referral filed.

Orders for Case No: 02-001719BID
Issue Date Document Summary
Oct. 11, 2002 Agency Final Order
Sep. 26, 2002 Recommended Order Agency established that flexible standards for the evaluation committee's scoring of ITN proposals were appropriate in the initial stage of ITN review, in a complex computer system procurement, where evaluators were trained and sophisticated as to the Agency's needs.
Source:  Florida - Division of Administrative Hearings
