large pool of vendors, for sole-source contracts
or where a vendor subsequently was part of a
merger or acquisition. Some government stakeholders with experience of a VPM regime noted their hesitancy to record negative evaluations because those evaluations could be accessed under Access to Information or similar federal rules.
Stakeholders agreed that communication would be important to the success of any VPM policy, with vendors informed early in the contracting process about key performance indicators (KPIs). They also agreed that the policy should set out regular check-ins and communication timelines to avoid misunderstandings and potential delays.
The evaluation process should be flexible enough to allow vendors to improve their performance and, with it, their performance score, with the results of evaluations disseminated across government departments to share lessons learned and clarify any misunderstandings.
Building a VPM Policy
Participants in the consultations were asked to
create their own VPM Policy, incorporating key
parts of the VPM Process.
Three-quarters of participants felt the policy should apply to all government contracts across all goods and services groups, but they were evenly divided between applying it only to contracts in excess of $100,000 (including higher thresholds such as $1 million and above) and applying it to contracts of any amount. A minority of participants proposed applying VPM to contracts under the $100,000 threshold. Many participants who felt it should apply to all contracts supported a simpler VPM process for smaller-value projects. One-quarter believed the regime should apply only to government contracts for certain goods or services groups, contingent on characteristics such as risk or location.
Participants differed on when vendors should be
provided with interim evaluation results. The
largest number favoured reports every six
months or, for contracts shorter than six months,
at the mid-way point of the contract and at close-
out. Others preferred reports every 12 months
(or only at contract close-out), at other intervals
such as contract milestones, or at the discretion
of the CA. Some said the timing of reports should depend on the nature of the goods or services provided. Whatever interval was
chosen, it would be important to consider the
impact frequent evaluations would have on the
TA’s workload.
The most frequently proposed method for calculating a vendor’s performance rating at contract close-out was to use a weighted average of all the vendor’s final and interim scores. Some
participants preferred using the vendor’s final
scores and most recent interim scores, or just
the vendor’s final scores. The CA should have
primary responsibility for initiating and
communicating the results of evaluations, while
the TA would conduct the evaluations in most
cases. Participants underscored the importance
of appropriate technical knowledge in the
execution of performance evaluations.
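As a rough illustration of the weighted-average option described above, the Python sketch below combines hypothetical interim scores with a final score to produce a close-out rating. The 1-to-5 scale, the 40/60 split between interim and final scores, and the example numbers are illustrative assumptions only; the consultations did not settle on specific values.

```python
# Minimal sketch of a weighted-average close-out rating.
# Assumptions (not fixed in the consultations): a 1-5 scoring scale
# and a 40/60 weighting between interim and final scores.

def close_out_rating(interim_scores, final_score,
                     interim_weight=0.4, final_weight=0.6):
    """Blend the average of the interim scores with the final score."""
    if not interim_scores:          # no interim evaluations recorded
        return final_score
    interim_avg = sum(interim_scores) / len(interim_scores)
    return interim_weight * interim_avg + final_weight * final_score

# Example: two interim reviews (3.5 and 4.0) and a final review of 4.5
# give 0.4 * 3.75 + 0.6 * 4.5 = 4.2
print(round(close_out_rating([3.5, 4.0], 4.5), 2))
```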
Most participants agreed that vendor
performance ratings should be used as a
weighting in future contract evaluations, along
with price and, where applicable, technical
compliance. Many felt that the fairest way for
new vendors and existing bidders to compete
would be to assign any bidder without a valid
performance rating a default score of “3.” A
similar number of participants suggested that newer vendors could instead be assigned the average of the scores recorded in the database, or that the points could be reallocated proportionally to the other evaluation criteria (financial and non-financial).
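To make the weighting and reallocation ideas concrete, the hypothetical sketch below scores a bid on price, technical compliance, and vendor performance, and shows both the default score of “3” and the proportional reallocation of performance points when a bidder has no valid rating. The point split (50/40/10), the 1-to-5 rating scale, and the function and parameter names are assumptions for illustration, not figures from the consultations.

```python
# Hypothetical bid-scoring sketch: a performance rating as a weighted
# criterion alongside price and technical compliance.
# Assumed point split: price 50, technical 40, performance 10 (out of 100).

POINTS = {"price": 50, "technical": 40, "performance": 10}

def bid_score(price_pct, technical_pct, performance_rating=None,
              default_rating=3, use_default=True):
    """price_pct / technical_pct are fractions (0-1) of available points;
    performance_rating is on an assumed 1-5 scale, or None for new bidders."""
    if performance_rating is None:
        if use_default:
            performance_rating = default_rating   # default score of "3"
        else:
            # Reallocate the performance points to the other criteria
            # in proportion to their original weights (50:40).
            other = POINTS["price"] + POINTS["technical"]
            scale = 1 + POINTS["performance"] / other
            return (price_pct * POINTS["price"] * scale
                    + technical_pct * POINTS["technical"] * scale)
    return (price_pct * POINTS["price"]
            + technical_pct * POINTS["technical"]
            + (performance_rating / 5) * POINTS["performance"])

# New bidder, default "3": 0.8*50 + 0.9*40 + 0.6*10 = 82.0
print(bid_score(0.8, 0.9))
# Same bidder, performance points reallocated instead: roughly 84.4
print(round(bid_score(0.8, 0.9, use_default=False), 1))
```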
Discussions about the most appropriate appeals
process revealed considerable differences
among participants, with opinion roughly evenly split between an independent appeals organization
and a combination of an independent appeals
organization, an executive of the CA
organization and/or a senior management
committee of PSPC. Whatever process was
selected, participants emphasized it should be at
arm’s length from PSPC, particularly if vendor