Welcome to Gatekeeper’s fifth article on RFPs and the third in the “RFP Guide” series.
Our previous articles covered:
- How to prepare a clear set of requirements for your CMS using a CMS Requirements Template
- How to turn your Contract Management System requirements specification into an RFP
- RFP Guide 1 - How to set up an RFP scoring system and release the RFP
- RFP Guide 2 - How to conduct a preliminary review of supplier proposals
In this article, we'll outline the key concepts and activities involved in the next stage of the process - evaluating and scoring each proposal received from suppliers in response to the RFP.
How to evaluate RFPs and score each proposal
Evaluating RFPs can be an arduous and lengthy undertaking, depending on:
- The number of scorable elements in and associated with the RFP
- The number of proposals to be scored
- The length and complexity of the contracts involved
- The comparability of the pricing elements submitted in each proposal
Some time-savings can be achieved because each member of the RFP evaluation team has been assigned specific areas to evaluate based on their background and expertise, allowing much of the scoring to be conducted in parallel streams. Knowing the evaluation criteria early on can also save time.
Conversely, some time may be lost waiting for suppliers to clarify or provide further information about aspects of their proposals flagged during the scoring process.
An effective RFP evaluation and scoring process requires close attention to the following:
1. Rules of engagement
- No member of the evaluation team should:
- Have a personal interest in any participating supplier
- Have a financial interest in any supplier recommended for award
- Place themselves in any other type of conflict of interest situation
- Discuss any aspect of a proposal, evaluation outcomes or the organisation's intentions with respect to the RFP with anybody other than other evaluation team members and appropriate organisational personnel
- Have any direct or indirect contact with any representative of a participating supplier prior to, during or after the course of the RFP process, unless authorised by and in the presence of the Designated Contact
- Act alone or in collusion with any other evaluation team members for the benefit or to the detriment of any participating suppliers
- Accept gifts of any kind from participating suppliers, whether monetary or non-monetary, that are not in compliance with law or organisational policy
- Copy, disseminate or use for personal benefit of any kind, any information, whether marked as confidential or not, obtained by any method as an evaluation team member
- Reveal the identities of any other evaluation team members
- Bypass any security settings in a scoresheet and make unauthorised changes to the assigned requirement weights or group percentages
- Each member of the evaluation team must:
- Independently, fairly and impartially review and score each proposal
- Comply with their obligations regarding confidentiality
- Act professionally at all times and in accordance with the organisation's policies and ethical / integrity standards
- Advise team management immediately on becoming aware of any actual or potential conflict of interest, or the appearance of it, concerning the evaluator
- Promptly advise team management on becoming aware of any issues affecting the evaluator's ability to meet scoring deadlines
- Evaluate proposals received in response to the RFP based solely on the merits of the information presented
- Exercise prudence, judgement and common sense when providing any commentary in a scoresheet or issues log. Controversial remarks must be avoided at all times, especially where local legislation may require evidence of how a preferred proposal was selected, and of the fairness of the evaluation, to be made public, and may allow challenges to the RFP outcome as a consequence
- Register any gifts received from participating suppliers.
2. Evaluation team readiness
Scoring RFP responses requires each evaluation team member to have a thorough understanding of their role and the tools available to help them.
A scoring process guideline and/or training, plus early review of each proposal prior to the official start of scoring should cover:
- Reinforcement of the timelines and the potential consequences of delay
- The rules of engagement
- Each team member's overall scoring responsibilities, as assigned in stage 4 of the RFP process
- Each item a team member is responsible for scoring, whether a requirement, a cost element, the quality of a presentation or something else
- What the supplier has proposed with respect to each item, where applicable
- What needs to be considered in allocating a score for any particular item, such as how well a proposal satisfies a requirement
- The scoring scales to be used
- The scoring context that gives meaning to the scores assigned
- The scoresheets used to record the assigned scores and issue logs used to capture any comments or questions
- Other tools used to facilitate various activities during the RFP process.
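Requirement weights and group percentages (referenced in the rules of engagement above) drive how individual scores roll up into a proposal's total. As an illustration only, assuming a hypothetical scoresheet structure where weighted items sit under groups whose percentages sum to 100%, the roll-up might look like this:

```python
# Hypothetical weighted scoresheet roll-up. Group names, weights and scores
# are illustrative assumptions, not taken from any real RFP scoresheet.

def weighted_total(groups):
    """Compute a proposal's total score from per-group weighted averages.

    groups: {group_name: {"pct": share of total (0-1),
                          "items": [(weight, score), ...]}}
    Scores are assumed to be on a common scale, e.g. 0-5.
    """
    total = 0.0
    for g in groups.values():
        weight_sum = sum(w for w, _ in g["items"])
        group_avg = sum(w * s for w, s in g["items"]) / weight_sum
        total += g["pct"] * group_avg
    return total

proposal = {
    "Functional": {"pct": 0.5, "items": [(3, 4), (2, 5), (1, 2)]},
    "Commercial": {"pct": 0.3, "items": [(1, 3), (1, 4)]},
    "Support":    {"pct": 0.2, "items": [(2, 5), (1, 3)]},
}
print(round(weighted_total(proposal), 2))
```

This also shows why the rules of engagement prohibit tampering with weights or group percentages: changing either silently shifts every total.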
3. Supplier team responsiveness
Evaluating and scoring RFPs typically requires a large amount of interaction with the participating suppliers.
Following release of the RFP to the invited suppliers, this interaction mainly involves the suppliers raising questions and comments for the team to deal with about the requirements, the timetable and so on.
This situation is mostly reversed on receipt of supplier proposals, with the evaluation team raising questions and comments about the proposals submitted for the suppliers to deal with.
A supplier team's response time to questions can vary due to workload priorities. When a supplier is dealing with multiple RFPs simultaneously, the more complicated and higher-valued RFPs are much more likely to receive the team's attention, especially if the bid team is relatively small.
The knock-on effects of low supplier responsiveness for the evaluation team are frustration and potential schedule overruns. For the slow-responding supplier, it can affect the scores received.
It's important that efforts are made not only to prevent one side from affecting the other side's ability to respond to questions, but also to effectively deal with the occurrence of any response delay. This can be achieved by both sides:
- Working together to establish an issue prioritisation scheme with defined response times and escalation paths applicable to both sides
- Tracking the performance of each side in meeting those response times and triggering any needed escalation
- Clearly expressing questions or concerns, with a focus on eliminating ambiguity and minimising opportunities for misinterpretation
- Submitting questions and concerns as they are discovered, rather than waiting until a certain number has been collected, or submitting them only at a set time each day or on certain days
- Promptly responding to requests for clarification of any questions or comments received
- Arranging conference calls managed by the Designated Contact and attended by the relevant questioners, whenever and as soon as the use of email for Q&A becomes unproductive.
4. Consistent score allocation
Some aspects of a proposal can be simple to score, but others may require various levels of judgement and foresight. Maintaining a consistent approach to allocating scores is required both within and across proposals.
Inconsistency or drift can be experienced with the first proposals evaluated, while the evaluator settles into the rhythm of scoring. It can also occur with the later proposals, as evaluation fatigue sets in.
There are two ways to deal with drift:
- Review each proposal's scores the day after scoring has been completed to determine if the assigned scores still seem valid, and make any adjustments necessary
- After all proposals have been individually reviewed, do a side-by-side check of the scores assigned in the same areas of each proposal, where the supplier responses are very similar. If some scores seem unjustifiably different given the similarity of responses, make any adjustments necessary.
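The side-by-side check described above lends itself to simple automation. The sketch below assumes a hypothetical layout where scores are keyed by proposal and item, and where the evaluator has already marked which items received very similar supplier responses; the tolerance is an illustrative choice:

```python
# Illustrative drift check: flag items where similar supplier responses
# received noticeably different scores across proposals. The judgement that
# responses are "very similar" is assumed to have been made by the evaluator;
# this sketch only compares the scores for items so marked.

def flag_score_drift(scores_by_proposal, similar_items, tolerance=1):
    """Return items whose score spread across proposals exceeds tolerance.

    scores_by_proposal: {proposal: {item: score}}
    similar_items: items where supplier responses were judged very similar
    """
    flagged = {}
    for item in similar_items:
        item_scores = [p[item] for p in scores_by_proposal.values() if item in p]
        if item_scores and max(item_scores) - min(item_scores) > tolerance:
            flagged[item] = item_scores
    return flagged

scores = {
    "Supplier A": {"REQ-01": 4, "REQ-02": 3},
    "Supplier B": {"REQ-01": 1, "REQ-02": 3},
}
print(flag_score_drift(scores, ["REQ-01", "REQ-02"]))
```

A flagged item is a prompt for review, not an automatic adjustment: the evaluator still decides whether the difference in scores is justified.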
5. Price normalisation
Pricing is commonly a difficult area to evaluate. To compare supplier prices on a like-for-like basis, discrete pricing for each cost element needs to be provided.
Where insufficient detail is provided to allow some cost elements to be directly compared across suppliers, despite a request for more information, a normalisation process is required to provide estimates of what the comparable costs might be.
The normalisation process must be configured to meet the exact circumstances. As an example, consider the case where one supplier provides a bundled price for a software licence covering (i) the licence, (ii) upgrades, and (iii) maintenance and support, where other suppliers have provided individual pricing for each of these elements.
The normalisation process might proceed as follows when a recalcitrant supplier declines to provide the requested level of detail:
- For each supplier who provided the three required element prices:
- Calculate the bundled price as the sum of the three element prices
- Calculate the percentage of the bundled price that each element's price represents
- For each element, calculate the average percentage of the bundled price across all suppliers who provided the required details
- For the supplier who provided only a bundled price, apply each element's average percentage (calculated above) to that bundled price, producing estimated element prices that can be compared across suppliers.
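The steps above can be sketched in code. The supplier names, element names and prices below are illustrative only:

```python
# Sketch of the bundled-price normalisation described above.

def normalise_bundle(detailed, bundled_price):
    """Estimate element prices for a supplier who gave only a bundled price.

    detailed: {supplier: {element: price}} for suppliers who itemised.
    bundled_price: the single figure from the non-itemising supplier.
    """
    elements = next(iter(detailed.values())).keys()
    # Average share of the bundle that each element represents,
    # across all suppliers who provided the detail.
    shares = {}
    for el in elements:
        ratios = []
        for prices in detailed.values():
            bundle = sum(prices.values())
            ratios.append(prices[el] / bundle)
        shares[el] = sum(ratios) / len(ratios)
    # Apply each average share to the bundled-only supplier's price.
    return {el: round(bundled_price * share, 2) for el, share in shares.items()}

itemised = {
    "Supplier A": {"licence": 60_000, "upgrades": 20_000, "support": 20_000},
    "Supplier B": {"licence": 50_000, "upgrades": 25_000, "support": 25_000},
}
print(normalise_bundle(itemised, 90_000))
```

The estimates are only as good as the assumption that the non-itemising supplier's cost structure resembles the market average, so they should be recorded as estimates in the scoresheet.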
6. Scoring focus
We've already discussed how each element of a proposal requires its own scoring scale and scoring context, and how this information will be provided for guidance in each scoresheet. The focus of scoring can vary for different areas of the RFP, as follows:
- Best fit. Used where the scoring scale describes certain fixed outcomes (say, high cost and low risk, high cost and high risk, etc) that require the evaluator to use judgement to estimate supplier risk levels
- Level of acceptability. This is another judgement call on the part of the evaluator. It is commonly used for acceptability assessment of:
- a supplier, in terms of financial viability, quality of proposal / presentation, reference checks and so on
- a contract clause, where the evaluator typically needs to state any objection and propose alternative wording for the clause
- Level of fit. Typically required where a very specific requirement has been stated (say, widgets must come in bronze, silver and gold colours), and the score reflects how well a proposal satisfies that requirement
- Relative ranking. Required for items like pricing or delivery guarantees, where direct comparison across suppliers' offerings is required. The highest score is assigned to the supplier whose offering is most attractive, whether that means a higher or lower figure. Scores for other suppliers should decrease in line with the comparative attractiveness of their offerings.
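Relative ranking can be illustrated with a minimal sketch. The suppliers, prices and top score below are assumptions for the example, and a real scoresheet would also need a rule for tied offerings, which this sketch ignores:

```python
# Illustrative relative-ranking scorer: the most attractive offering gets the
# top score, with scores decreasing by rank. lower_is_better=True models
# pricing; set it to False for items where a higher figure is better,
# such as delivery guarantees. Ties are not handled.

def rank_scores(offerings, top_score=5, lower_is_better=True):
    """Map each supplier to a score based on the relative rank of its offering."""
    ordered = sorted(offerings, key=offerings.get, reverse=not lower_is_better)
    return {supplier: top_score - rank for rank, supplier in enumerate(ordered)}

prices = {"Supplier A": 120_000, "Supplier B": 95_000, "Supplier C": 140_000}
print(rank_scores(prices))
```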
7. Scoring analysis
Depending on the legislative environment that the RFP is subject to, greater or lesser care may be needed to minimise opportunities for challenges to the RFP outcomes.
Irrespective of this, there is an overwhelming need to ensure that the best possible outcome for the organisation is obtained from the RFP.
The following checks need to be conducted on the scores received to discover if there is any cause for concern:
- On receipt of each evaluation team member's individual scoresheets:
- Delete any scores assigned outside the specific elements the evaluator is required to score
- Check if any requirement weights or group percentages differ from the master scoresheet templates and reset as necessary
- Following aggregation and averaging of all individual scoresheets to show consolidated total scores, discussion with the relevant evaluators and instigation of certain actions may be necessary if analysis shows that:
- Identical scores have been assigned to the same proposal elements by different evaluators, a potential indicator of collusion
- The scores of any evaluators for any specific proposal diverge consistently and significantly from the mean scores across all evaluators for that proposal, a possible indicator of bias for or against that proposal.
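Both checks can be automated against the consolidated scoresheets. The sketch below assumes a hypothetical structure of per-evaluator score lists for one proposal; the divergence threshold is illustrative and would need to be agreed by the team:

```python
# Sketch of the two consolidation checks described above: identical score
# patterns between evaluators (possible collusion) and large divergence from
# the mean (possible bias). Evaluator names, scores and the divergence
# threshold are illustrative assumptions.

from statistics import mean

def analysis_flags(scores, divergence=1.5):
    """scores: {evaluator: [score per element]} for one proposal."""
    flags = []
    evaluators = list(scores)
    # Identical score vectors assigned by different evaluators.
    for i, a in enumerate(evaluators):
        for b in evaluators[i + 1:]:
            if scores[a] == scores[b]:
                flags.append(("identical", a, b))
    # Evaluators whose average diverges significantly from the overall mean.
    overall = mean(s for v in scores.values() for s in v)
    for e, v in scores.items():
        if abs(mean(v) - overall) > divergence:
            flags.append(("divergent", e))
    return flags

proposal_scores = {
    "Eval 1": [4, 4, 3, 5],
    "Eval 2": [4, 4, 3, 5],   # identical to Eval 1
    "Eval 3": [1, 1, 1, 2],   # far below the overall mean
}
print(analysis_flags(proposal_scores))
```

As with the drift check, a flag is grounds for discussion with the evaluators concerned, not proof of collusion or bias in itself.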
This stage of the RFP process deals with evaluation and scoring of supplier proposals, covering:
- How the evaluation team members should prepare and conduct themselves in relation to the scoring process
- The need for both sides to work together to minimise friction and delay
- Approaches for achieving consistency in scoring and normalising pricing models to allow like-for-like comparison
- Clarification of scoring contexts and methods for detecting potentially suspicious scoring.
You should now be able to plan and prepare for, and conduct when ready, the RFP stage activities outlined in this article.
Our next article in this RFP Guide series will outline the key concepts and activities involved in identifying any proposals warranting further consideration, obtaining updated proposals and negotiating final offers, negotiating and executing a contract, then finalising the RFP.