
Bid Leveling: The Apples-to-Apples Challenge in Construction Bidding

Bid leveling is one of the most operationally complex tasks GCs perform during the sub bidding process, and it's where many of a project's eventual problems get either prevented or set in motion. The challenge is conceptually simple but practically difficult: take 5 sub bids for the same scope and figure out which one is actually the lowest. The conceptual simplicity hides the operational complexity. The 5 bids are never identical. Sub A includes specific items in their base price; Sub B excludes those items as alternates. Sub C assumes a different schedule than Sub D. Sub E uses different waste assumptions than Sub A. The bottom-line numbers don't compare directly because the bids aren't covering the same ground.


The contractor who treats this as a simple "pick the lowest number" exercise either gets lucky (the bids happened to be similar enough that the numbers were comparable) or makes mistakes that surface during construction. The contractor who runs structured bid leveling identifies the differences, normalizes the scope across bids, and produces a comparison that reflects what each sub is actually offering at what total cost. This is what separates GCs who consistently award the right subs from GCs whose sub awards regularly produce surprises during execution.


This article covers the bid leveling process, the specific operational realities, and the software that supports leveling. Deeper coverage of the broader sub bidding workflow lives in our guide, Subcontractor Bidding Process, which also addresses subcontractor management more broadly.

Why Sub Bids Are Never Identical


The starting point for understanding leveling is recognizing why bids that nominally cover the same scope produce different totals. Several factors drive the variation.


Scope Inclusion Differences

The most common variation. Sub A includes specific items in their base price; Sub B excludes those items, treating them as alternates or owner-furnished; Sub C addresses them differently. 


Common examples:

  • Cleanup responsibilities (some subs include, some assume the GC handles)

  • Specific materials (sub may include or exclude based on their typical practice)

  • Coordination work (some subs include, some assume the GC manages)

  • Specialty items (some included as base, some as alternates)

  • Permits and inspections (some included, some excluded)

  • Mobilization and demobilization (often varies by sub)

Without normalizing for these differences, comparing bottom-line numbers compares different scopes.


Schedule Assumption Differences

Subs base their pricing on assumed schedules: when work begins, how long it takes, when it completes, how it sequences with other work. When schedules differ between bids, the underlying productivity and cost assumptions differ.


Specific patterns:

  • Sub assumes weekend work that wasn't in the GC's plan

  • Sub assumes shifts that produce different labor costs

  • Sub assumes sequence that differs from the project schedule

  • Sub assumes weather conditions different from the actual season

Quality Tier Differences

Same nominal scope can be priced at different quality tiers. Sub A bids the project with mid-grade materials; Sub B with premium; Sub C with builder-grade. The bids are technically responsive to the spec but reflect different quality assumptions.


This is particularly common when specifications are vague or use ranges ("approved equal" language, "standard practice" references).


Productivity and Crew Assumptions

Different subs have different actual productivity rates and use different crew structures. Their bids reflect their actual costs at their actual productivity, which produces different totals even for the same scope.


This isn't necessarily a problem; it's actually one of the things competition is supposed to surface. But it means the bottom-line numbers reflect different operational realities, not just different markup.


Risk and Contingency Differences

Subs assess risk differently and price contingency accordingly. Sub A may include 5% contingency for risk; Sub B may include 10%; Sub C may include none and plan to negotiate change orders if conditions warrant.


The bid totals reflect these different risk approaches, with implications for whether the apparent "low bid" remains low after change orders during construction.


Qualification and Exclusion Differences

Subs include qualifications and exclusions in their bids that affect what's actually being bid. 


Common patterns:

  • Specific exclusions (e.g., "excludes work below grade")

  • Specific qualifications (e.g., "based on continuous power available")

  • Conditions on the bid (e.g., "valid if work begins by [date]")

  • Assumptions about other parties (e.g., "assumes other trades coordinate as scheduled")

These qualifications can dramatically affect what the sub is actually committing to. A bid with significant exclusions isn't actually competitive with a bid that's all-inclusive at the same price.

Pro Tip: Before starting bid leveling, list the specific items most likely to vary across bids in your project type. For commercial drywall, this might include cleanup, ceiling work coordination, finishing detail level, and specific accessories. For commercial electrical, it might include fixture grade, low-voltage scope, controls coordination, and specific testing requirements. The pre-defined list ensures you systematically check each variable across bids rather than only catching variations that happen to stand out. Operations that develop project-type-specific leveling checklists produce more thorough comparison than operations that approach each project from scratch.

How Bid Leveling Actually Works


The leveling process has structured steps that produce normalized bid comparison.


Step 1: Identify Scope Items Each Bid Covers

The first step is producing a master scope list that captures every item any bid addresses. This list combines:

  • Items in the original scope of the ITB

  • Items specifically included by various bids

  • Items specifically excluded by various bids

  • Alternates offered by some bids

The master list typically has 30-100 line items for a meaningful commercial scope. Operations that try to level without a structured master list often miss items that one bid included and others didn't.
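Mechanically, the master list is just the union of every item any source mentions. A minimal Python sketch of the idea, where all scope item names and bid structures are hypothetical examples, not items from any actual project:

```python
# Build a master scope list as the union of the ITB scope and every item
# any bid includes, excludes, or offers as an alternate.
# All item names below are illustrative.
itb_scope = {"hang drywall", "tape and finish", "cleanup"}

bids = {
    "Sub A": {"includes": {"hang drywall", "tape and finish", "cleanup"},
              "excludes": set(), "alternates": {"level 5 finish"}},
    "Sub B": {"includes": {"hang drywall", "tape and finish"},
              "excludes": {"cleanup"}, "alternates": set()},
}

master_list = set(itb_scope)
for bid in bids.values():
    master_list |= bid["includes"] | bid["excludes"] | bid["alternates"]

# "level 5 finish" lands on the master list even though the ITB never
# mentioned it -- exactly the kind of item ad hoc leveling misses.
print(sorted(master_list))
```

The point of working from the union rather than the ITB alone is that items only one bid mentions still make it onto the list, so the comparison matrix in the next step has a row for them.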


Step 2: Mark Each Bid's Coverage of Each Item

For each line item in the master list, mark whether each bid includes the item, excludes it, or treats it as an alternate. The result is a matrix showing each bid's coverage profile.


This is where the differences become visible. Bid A includes 95 of 100 items; Bid B includes 87; Bid C includes 92. The bottom-line numbers can't be compared directly because they reflect different coverage profiles.


Step 3: Quantify Excluded Items

For items that some bids include and others exclude, quantify the cost of the excluded items. The excluded items will need to be addressed somehow: pricing them separately, having another sub handle them, having the GC self-perform.


The cost of addressing excluded items gets added to the bid that excluded them, producing an equalized comparison. If Bid B excluded $15,000 worth of items that Bid A included, Bid B's effective total is its bid plus $15,000 to cover those excluded items.
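The equalization itself is simple arithmetic once the coverage matrix and plug costs exist: each bid's effective total is its base price plus the cost of covering whatever it excluded. A minimal sketch, with hypothetical bid amounts and plug prices chosen so the excluded items total $15,000 as in the example above:

```python
# Equalize bids by adding a "plug" cost for each item a bid excludes.
# Bid amounts and plug prices are hypothetical.
plug_costs = {"cleanup": 9_000, "mobilization": 6_000}  # cost to cover item elsewhere

bids = {
    "Sub A": {"base": 250_000, "excluded": []},
    "Sub B": {"base": 242_000, "excluded": ["cleanup", "mobilization"]},
}

for name, bid in bids.items():
    adjustment = sum(plug_costs[item] for item in bid["excluded"])
    bid["equalized"] = bid["base"] + adjustment
    print(f"{name}: base {bid['base']:,} + plugs {adjustment:,} = {bid['equalized']:,}")
```

Here Sub B's apparent $8,000 advantage inverts after equalization: 242,000 + 15,000 = 257,000 against Sub A's 250,000, so the nominal low bid is not the real low bid.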


Step 4: Account for Schedule and Quality Differences

If bids reflect different schedule or quality assumptions, the bid totals need adjustment to reflect what each bid would cost at the project's actual schedule and quality requirements. This is harder to quantify than scope inclusion differences but matters.


Step 5: Account for Risk and Qualifications

Bid qualifications and exclusions affect what each sub is actually committing to. A bid with significant qualifications carries risk that an all-inclusive bid doesn't. The leveling needs to account for this either through risk-adjusted comparison or through clarification with the bidder.


Step 6: Produce Normalized Comparison

The final output is a comparison showing each bid's effective total at equivalent scope. The normalized totals reflect what each sub would actually cost to do the project's full scope with consistent quality, schedule, and risk allocation.


The "real low bid" is the bid with the lowest normalized total, not necessarily the lowest bottom-line number. Sometimes the apparent low bid is actually mid-pack after normalization. Sometimes the apparent high bid is actually the most competitive after accounting for what it includes.


Step 7: Identify Scope Gaps

After leveling, items that no bid included become visible as scope gaps. These gaps need to be addressed before award:

  • Request supplementary pricing from one or more bidders

  • Adjust the project scope structure

  • Plan for the gap items as additional cost

  • Self-perform the gap items

  • Solicit specialty subs for the gap items

Operations that identify scope gaps before award can address them deliberately. Operations that miss gaps until construction face change order disputes or absorbed costs.
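With the same master-list structure, gap detection reduces to a set difference: any master-list item that appears in no bid's base inclusions is a gap. A sketch under the same assumptions as before (item names are hypothetical):

```python
# Scope gaps: master-list items that no bid includes in its base price.
master_list = {"hang drywall", "tape and finish", "cleanup", "access panels"}
inclusions = {
    "Sub A": {"hang drywall", "tape and finish", "cleanup"},
    "Sub B": {"hang drywall", "tape and finish"},
}

covered = set().union(*inclusions.values())
gaps = master_list - covered

# Each gap item needs supplementary pricing, self-perform,
# or a specialty sub before award.
print(sorted(gaps))
```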

Case Study: A 50-person commercial GC ran bid leveling in spreadsheets through 2023, with results that varied by which estimator handled each project. Some projects had thorough leveling; others had cursory comparison. The cumulative impact surfaced when the operations director ran a year-end review of completed projects and found that lowest-bid awards correlated only weakly with lowest actual costs at project closeout. The data showed that "lowest bid" sub awards on 35% of projects had higher actual costs than mid-pack bids would have produced, primarily because the lowest bids had included exclusions or assumptions that produced change orders or quality issues during construction. They implemented Procore's bid leveling features in early 2024 with structured leveling workflow enforced on every project. By month 12, the correlation between leveled-low-bid awards and lowest-actual-cost outcomes had improved meaningfully, and the operational consistency across projects had improved as well. The lesson was that bid leveling quality has direct, measurable impact on project outcomes, and structured workflow produces consistency that ad hoc spreadsheet leveling can't match.

Software That Supports Bid Leveling


Several platforms include bid leveling capability with varying focus and depth.


Bid Management Platforms With Leveling Features

The major bid management platforms include leveling capability:


BuildingConnected (Autodesk): Strong bid leveling features with side-by-side comparison, scope normalization, and qualification tracking. Particularly capable for operations with established workflow on the platform.


Procore: Leveling features integrated with broader bid management and PM workflow. Strong for operations using Procore for project execution.


ConstructConnect: Leveling capability primarily as part of the broader bid intelligence platform.


SmartBid: Focused leveling and bid comparison platform used by some commercial GCs.

These platforms typically treat leveling as one component among many in the bid management workflow. The leveling capability is meaningful but isn't the primary differentiator between platforms.


What Strong Leveling Software Does

The capabilities below distinguish strong leveling features:


Side-by-Side Bid Display: Visual comparison of bids with line items aligned vertically, making coverage differences immediately visible.


Scope Item Tracking: Master scope list with inclusion/exclusion marked per bid, supporting systematic coverage comparison.


Cost Adjustment Capability: Tools for entering adjustments to normalize bids (excluded items priced, schedule adjustments, quality adjustments), with adjusted totals calculated automatically.


Qualification Capture: Structured capture of bid qualifications and exclusions, with visibility during comparison.


Award Decision Documentation: Tools for documenting the leveling analysis and award decision, supporting both audit defense and lessons learned.


Integration With Award Workflow: Once a sub is selected, the leveling outputs flow into contract execution and project management.


Manual Leveling Approaches

For smaller GCs or operations with simpler scope, manual leveling through spreadsheets can work. The spreadsheet approach typically involves:

  • Master scope list as rows

  • Each bid as a column

  • Inclusion/exclusion captured in cells

  • Formulas calculating adjusted totals

  • Notes capturing qualifications and assumptions

This works for moderate complexity but accumulates limitations as scope grows. Operations doing significant commercial work typically benefit from dedicated platforms.


Hybrid Approaches

Some operations use bid management platforms for the broader workflow but conduct leveling in spreadsheets that they import into the platform after analysis. This produces flexibility for the leveling work while preserving documentation in the platform.


When Leveling Software Doesn't Earn Out

For very simple scope (single trade, limited line items), the overhead of structured leveling tools may exceed the value. A 4-bid comparison for a simple drywall scope on a small project doesn't really need software-driven leveling.


The threshold where leveling software earns its cost typically arrives when:

  • Scope complexity exceeds 30-50 line items

  • Multiple trades are being compared simultaneously

  • Bids per scope routinely number 5 or more

  • Project values exceed thresholds where bid quality affects outcomes meaningfully

Below these thresholds, manual leveling can be adequate. Above them, dedicated tools typically pay back through better leveling quality and reduced administrative time.

Pro Tip: Document the leveling analysis even if it seems obvious at the time. The documentation matters for two reasons: when disputes arise during construction about what scope was actually awarded, the leveling documentation supports the GC's position; when looking back at sub performance over time, the leveling documentation supports analysis of which bids actually produced the best outcomes versus which apparent low bids produced surprises. Operations that document leveling analysis systematically have better dispute defense and better data for ongoing improvement than operations that perform leveling without documentation.

Leveling Quality Determines Award Quality


Bid leveling is where many of a project's eventual outcomes get set. Strong leveling produces awards that match what the project actually needs at competitive cost. Weak leveling produces awards based on apparent low bids that turn out to be incomplete scope, wrong quality, or unrealistic assumptions, with consequences that surface during construction.


The investment in structured leveling isn't dramatic in any single dimension. Software platforms with leveling capability are typically reasonable in cost relative to the protection they provide. The procedural discipline (master scope lists, systematic coverage tracking, qualification capture, adjustment documentation) doesn't require new headcount but does require structured workflow that ad hoc approaches don't provide. The returns show up project after project through awards that produce expected outcomes rather than constant surprises.


Coverage of how leveled bids flow into project execution can be found in our guide to bidding software and PM software integration. For coverage of the broader bid management framework, see our full guide: What is Bid Management Software?

Frequently Asked Questions 

How long should bid leveling take?

It depends on scope complexity and bid count. For a typical commercial scope with 4-6 bids, structured leveling typically runs 2-6 hours per scope. Simpler scopes may take less time; complex scopes (mechanical systems, complex specialties) may take longer. Operations that consistently spend less than 1 hour per scope on leveling are typically not doing thorough leveling; operations that spend more than 8 hours per scope often have inefficient processes that could be streamlined. The right time depends on what produces accurate normalized comparison rather than on hitting specific time targets.


What's the most common bid leveling mistake?

Treating apparent low bid as actual low bid without normalizing for coverage differences. The pattern: GC awards based on bottom-line numbers without thoroughly checking what each bid actually includes versus excludes. The award produces surprises during construction when the "low bid" sub claims work as out-of-scope that the GC assumed was included, or when missing items surface that no sub actually priced. The fix is structured leveling that catches these issues before award rather than during construction.


Should I let subs see other subs' bids during leveling?

Generally no. Sharing competitive bid information with other bidders is typically considered unethical and may violate competitive bid principles. The GC may seek clarifications from individual bidders about specific items in their bids without sharing competitive information. The exception is design-build or negotiated work where the relationship dynamics are different from competitive bidding. For competitive bid environments, maintain the confidentiality of each bidder's pricing throughout the leveling and award process.


How do I handle a bid with significant qualifications I can't accept?

Several options. The clearest is to clarify with the bidder whether they can remove the qualification at no cost adjustment, accept the qualification with appropriate cost adjustment, or stand by the qualification (in which case the bid effectively isn't responsive). The decision depends on the qualification's significance and the bidder's flexibility. Don't ignore qualifications hoping they won't matter during construction; they typically do. The leveling stage is the right time to address qualifications, not the construction stage.
