Can We Figure Out a Way to Make Price Evaluations Easier?

While changes in RFP instructions and solicitations may improve the evaluation process, we must also consider ways to improve price evaluations. From one federal agency to another, there is no consistent approach or practice for evaluating the tradeoff between a technical score and price. In our experience, Department of Defense agencies will sometimes pay 1-3% more for a higher technically rated proposal. For Civilian agencies, we have seen an average premium of 5-10% for higher technically rated proposals. But this is not a hard and fast rule. We have seen government agencies pay 30% more for proposals with higher technical ratings, and we have also seen proposals priced 10% below the winning cost removed for being unrealistically low.

In Part 2 of this article series, I offer a recommendation to address one of the biggest challenges in government source selections: the evaluation of price and the subjective tradeoff decisions that follow. To reduce the subjectivity in tradeoff decisions, we need to look at options that qualitatively score the price factor. Yes, that’s right, I recommend assigning an adjectival rating to price!

Using Adjectival Ratings in Price Evaluations

Each federal agency has its own adjectival rating system for evaluating proposals. In FAR Part 15 RFPs, we often see an adjectival rating scale of Outstanding, Good, Acceptable, Marginal, and Unacceptable. While these rating systems vary among agencies, my recommendation is to use those very adjectival ratings to differentiate between prices. To do so, evaluators need to define each adjective from a price perspective: what is considered Outstanding, and what is considered Unacceptable? First, the Government would need to do some homework – I recommend that it determine the initial parameters of price reasonableness and realism. This establishes what pricing is considered too high and what pricing is considered too low. The exercise would draw on several of the seven price analysis techniques by which the Government can make a fair and reasonable price determination per FAR 15.404-1(b)(2). Through research of previous Government contracts, historical prices paid, parametric estimating methods, published price lists, internal estimates, or market research for the same or similar items, the Government can develop an initial baseline estimate of what constitutes a reasonable and realistic price range.
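As a minimal sketch of what that homework might produce, assume the agency has gathered historical prices paid for the same or similar items. The Python function below is entirely hypothetical, including the baseline_price_range name, the use of the median, and the 15% band; it simply illustrates turning historical prices into a realism floor and a reasonableness ceiling. An agency would set its own parameters.

```python
# Hypothetical sketch: derive a baseline price range from historical prices
# paid for the same or similar items. The median and the +/-15% band are
# illustrative assumptions, not FAR-prescribed figures.
from statistics import median

def baseline_price_range(historical_prices, band=0.15):
    """Return (floor, baseline, ceiling) around the median historical price."""
    if not historical_prices:
        raise ValueError("need at least one historical price")
    baseline = median(historical_prices)
    return baseline * (1 - band), baseline, baseline * (1 + band)

# Example: prices paid on five previous contracts for similar services
floor, base, ceiling = baseline_price_range(
    [950_000, 980_000, 1_000_000, 1_020_000, 1_050_000]
)
print(f"Realism floor: ${floor:,.0f}")   # pricing below this may be unrealistically low
print(f"Baseline:      ${base:,.0f}")
print(f"Ceiling:       ${ceiling:,.0f}") # pricing above this may be unreasonable
```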

Once that baseline price range is established internally, the Government can then assign an adjectival rating to specific variances, or percentage differences, between the price proposals received. Understandably, these thresholds could differ widely from agency to agency and requirement to requirement; they would depend on the type of procurement and on the services, products, or solutions being procured. The price variance that results in each rating is left to the agency’s discretion. The remaining adjectives are assigned based on the difference between the offeror’s price and the lowest price received. Below is a hypothetical table in which each 3.75% variance in pricing results in the next adjectival rating (a code sketch of this mapping follows the table).

Price Variance from Lowest Price    Adjectival Rating
0.00% – 3.75%                       Outstanding
3.76% – 7.50%                       Good
7.51% – 11.25%                      Acceptable
11.26% – 15.00%                     Marginal
Above 15.00%                        Unacceptable
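To make the mapping concrete, here is a short Python sketch that implements the hypothetical 3.75% bands from the table above. The thresholds, the rating labels, and the price_rating helper are all illustrative assumptions; an agency would substitute its own scale and variance increments.

```python
# Hypothetical sketch: assign an adjectival rating to each offeror's price
# based on its percentage variance above the lowest price received.
# The 3.75% bands mirror the hypothetical table above; real thresholds
# would be set at the agency's discretion.
RATING_BANDS = [
    (3.75, "Outstanding"),
    (7.50, "Good"),
    (11.25, "Acceptable"),
    (15.00, "Marginal"),
]

def price_rating(offer_price, lowest_price):
    variance = (offer_price - lowest_price) / lowest_price * 100
    for ceiling, rating in RATING_BANDS:
        if variance <= ceiling:
            return rating
    return "Unacceptable"  # more than 15% above the lowest price

# Example: four offers measured against a $1,000,000 lowest price
for price in (1_000_000, 1_050_000, 1_120_000, 1_200_000):
    print(f"${price:,}: {price_rating(price, 1_000_000)}")
```

Note that under this scheme the lowest-priced offeror always rates Outstanding, and the bands only differentiate how much more expensive the remaining offers are.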

By comparing these pricing “scores” against the other technical evaluation criteria, you will have a fairer and more objective way to compare price and non-price factors. Agencies will compare all evaluation scores simply by their adjectival ratings instead of trying to trade off specific dollar amounts between differently priced proposals. Traditionally, the Government uses price to guide its decisions when technical proposals are equal. This can be frustrating for industry in best value tradeoff competitions when the Government simply assigns all offerors an “Acceptable” technical rating and defaults to price as the deciding factor. My suggested approach puts a stronger onus and focus on the technical proposal rather than simply defaulting to the lowest priced proposal. While this may not work for all federal solicitations, I hope to see more agencies institute objective practices and reduce poorly defined or highly subjective evaluation decisions.