The DCMA 14 Point Assessment

Some schedules model credible tasks with sound logic and can be used to track progress; others are art projects that may be pleasing to the eye but are largely worthless.  Tired of dealing with vendors and programs trying to manage complex and expensive efforts using art projects, the US Defense Contract Management Agency (DCMA) developed and distributed objective criteria in 2005 for evaluating schedules both quantitatively and qualitatively.

The metrics were first adopted by the US Department of Defense (DOD) and are now imposed on subcontractors by some major defense contractors.  While not hard and fast rules, schedules that comply with the guidelines tend to be more credible and manageable.  Schedules that do not comport with the guidelines are unlikely to be useful for understanding or tracking changes in a credible way.

Chrono™ provides a mechanism to assess compliance with the DCMA 14 Point Assessment.  This chapter explains the standard and how Chrono™ users can review the assessment of their schedule.

  1. Logic – It’s generally considered best practice for every task in a schedule, excluding summaries and milestones, to have at least one predecessor and at least one successor.  The actual test checks that tasks not marked 100% complete have at least one predecessor and one successor.  The threshold for compliance is that no more than 5% of incomplete tasks should be missing a predecessor or a successor.  A minimal sketch of this check appears after this list.
  2. Leads – This metric measures the percentage of task relationships that contain a lead (a negative lag).  An example would be task B scheduled to start 3 days before the end of task A.  Although most scheduling tools allow this, using leads confounds efforts to calculate project float (or slack) and the critical path.  The standard is that no relationships should have leads.  Usually, when someone suggests a lead is needed, the tasks can be further decomposed so that traditional Finish to Start relationships capture and better represent the task logic.
  3. Lags – The Lag metric is the percentage of task relationships that contain a positive lag.  A lag is a positive delay on a dependency: if we say task B can start three days after task A finishes, we have defined a 3-day lag between A and B.  The threshold for this metric is that no more than 5% of all task relationships should contain lags.  A better practice is to have a named task in the dependency chain to explain what is being waited for; for example, perhaps Task A was to paint the table and Task B was to set the table.  Inserting a 3-day task for “Dry the Paint” between the two would eliminate the lag and better document the task logic.  Although a lag between finishing task A and starting task B is not advised, starting task B “x” days after task A starts (a Start to Start relationship) is more acceptable, provided “x” is not greater than task A’s total duration.
  4. Relationship Types – This metric is the percentage of dependency relationships in the schedule that are Finish to Start.  Most schedule logic can be represented with Finish to Start dependencies, but in some circumstances Start to Start and Finish to Finish dependencies may be appropriate; for example, a quality assurance task may not start until the work being assessed has begun.  The threshold for this metric is 90%: no fewer than 90% of the relationships in the schedule should be Finish to Start.
  5. Hard Constraints – A hard constraint is a specified fixed date on which a task must begin or end (Must Start On, Must Finish On).  Hard constraints can mask progress/performance issues and thwart schedule analysis because they stop a schedule from responding to delays in predecessors.  The metric is the percentage of unfinished tasks with hard constraints.  The threshold is that no more than 5% of incomplete activities in the schedule may use hard constraints.
  6. High Float – This metric measures the percentage of unfinished tasks with total float greater than 44 working days.  While high float might indicate genuine slack in a schedule, it can also indicate that task logic is missing; the underlying assumption is that a task can rarely slip more than 2 months without affecting the end date.  The threshold for this metric is that no more than 5% of the unfinished tasks in a schedule should have high float.
  7. Negative Float – Negative Float occurs when the schedule predicts that a critical or contractual milestone will be missed or a slipping task collides with a hard constraint.  Essentially, negative float suggests that the schedule will not achieve its objectives and is usually a sign that intervention is necessary.  The threshold for negative float is zero.  Any task with a negative float will fail this test.
  8. High Duration – This metric counts the number of unfinished tasks with a duration greater than 44 workdays (2 months).  High-duration tasks are problematic in many cases because their progress is challenging to monitor.  The remedy is often to decompose the task further into smaller, well-defined tasks with shorter durations.  The threshold for this metric is that no more than 5% of the unfinished tasks in the schedule should have a duration greater than 44 workdays.
  9. Invalid Dates – The metric for invalid dates examines both forecast and actual task start and finish dates.  Tasks forecasted to finish in the past (earlier than the project status date) or reported as having started in the future (later than the current status date) are deemed invalid.  The threshold for the Invalid Dates metric is that zero tasks in the schedule should reflect an invalid date because this undermines the credibility of the entire schedule.
  10. Resources – This optional metric represents the percentage of unfinished tasks that have resources associated with them.  Organizations that wish to use this metric can enforce that 100% of the unfinished tasks identify a resource.
  11. Missed Tasks – The missed task metric tracks the number of baselined tasks that were scheduled to finish on or before the status date but have not been marked complete.  It does not include tasks forecasted to be late after the status date; it is purely retrospective.  The threshold for this metric is that no more than 5% of the tasks in the schedule should reflect missed dates.
  12. Critical Path Test – This is a pass/fail metric that evaluates the integrity of the task logic in the schedule.  The first step is to identify the critical path in the task network; then a large slip is introduced into the first task(s) in the network, and the project end date should slip by the same amount.  If the slip inserted at the beginning equals the slip observed at the end, the test passes; otherwise it fails.  This test identifies flawed task logic or hard constraints that make the schedule unresponsive.  A sketch of this test appears after this list.
  13. Critical Path Length Index (CPLI) – This is a measure of the efficiency required to achieve a schedule milestone at the assigned time.  It is defined as the remaining project duration in workdays along the critical path plus total float (the difference between the baseline and forecast finish dates of the finish milestone), divided by the remaining project duration.  CPLI = 1.0 indicates that the project must execute exactly as planned to complete on time.  CPLI > 1.0 suggests that some schedule margin remains.  CPLI < 1.0 indicates the project is not on track to achieve its goal.  The threshold for this metric is 0.95; a CPLI below 0.95 indicates the project does not appear to be on track to achieve its schedule goals.
  14. Baseline Execution Index (BEI) – This metric evaluates the project team’s schedule performance against the baseline plan.  It is calculated by dividing the total number of tasks completed by the total number of tasks baselined to have been completed by the project status date.  BEI = 1.0 indicates the project team appears to be executing according to plan.  BEI > 1.0 suggests that the project team is performing ahead of plan.  BEI < 1.0 indicates that the project team is behind schedule.  The passing threshold for this metric is a BEI of not less than 0.95.  A short worked example of both CPLI and BEI appears after this list.
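
To make the threshold checks concrete, here is a minimal sketch of how the Logic check (point 1) could be computed.  The task list, field names, and sample data are illustrative assumptions for the example, not Chrono™’s actual data model or API.

```python
# Hypothetical sketch of the "Logic" check: the percentage of incomplete,
# non-summary, non-milestone tasks missing a predecessor or a successor.
# Field names and sample data are assumptions made for this example.

def missing_logic_percentage(tasks):
    candidates = [
        t for t in tasks
        if t["percent_complete"] < 100
        and not t["is_summary"]
        and not t["is_milestone"]
    ]
    if not candidates:
        return 0.0
    missing = [
        t for t in candidates
        if not t["predecessors"] or not t["successors"]
    ]
    return 100.0 * len(missing) / len(candidates)

tasks = [
    {"name": "A", "percent_complete": 100, "is_summary": False,
     "is_milestone": False, "predecessors": [], "successors": ["B"]},
    {"name": "B", "percent_complete": 50, "is_summary": False,
     "is_milestone": False, "predecessors": ["A"], "successors": []},
    {"name": "C", "percent_complete": 0, "is_summary": False,
     "is_milestone": False, "predecessors": ["B"], "successors": ["D"]},
]

# Task B is incomplete and has no successor, so 1 of 2 incomplete tasks
# (50%) fails the check, well above the 5% threshold.
print(f"Missing logic: {missing_logic_percentage(tasks):.1f}%")
```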
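
The Critical Path Test (point 12) can be illustrated on a toy Finish to Start network.  This is only a sketch under simplified assumptions: durations in workdays, no calendars, lags, or constraints, and made-up task names and slip size.

```python
# Sketch of the Critical Path Test on a toy Finish-to-Start-only network.
# Durations are in workdays; calendars, lags, and constraints are ignored.

def project_finish(durations, predecessors):
    """Forward pass: return the latest early finish in the network."""
    finish = {}

    def early_finish(task):
        if task not in finish:
            start = max((early_finish(p) for p in predecessors[task]), default=0)
            finish[task] = start + durations[task]
        return finish[task]

    return max(early_finish(t) for t in durations)

durations = {"A": 10, "B": 20, "C": 5}
predecessors = {"A": [], "B": ["A"], "C": ["A"]}

baseline_finish = project_finish(durations, predecessors)

# Introduce a large slip into the first task and recompute the end date.
slip = 400
durations["A"] += slip
slipped_finish = project_finish(durations, predecessors)

# The test passes only if the end date slips by the full amount introduced.
print("Critical Path Test:",
      "passed" if slipped_finish - baseline_finish == slip else "failed")
```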
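
The CPLI and BEI formulas (points 13 and 14) reduce to simple arithmetic once the inputs are known.  The sketch below assumes those inputs have already been extracted from the schedule; the numbers are illustrative, not from a real project.

```python
# CPLI and BEI as described above; the example inputs are made up.

def cpli(remaining_critical_path_length, total_float):
    """(Remaining critical path length + total float) / remaining critical
    path length, all measured in workdays."""
    return (remaining_critical_path_length + total_float) / remaining_critical_path_length

def bei(tasks_completed, tasks_baselined_to_be_complete):
    """Tasks actually completed divided by tasks baselined to have been
    completed by the status date."""
    return tasks_completed / tasks_baselined_to_be_complete

# 120 workdays remain on the critical path; the finish milestone is forecast
# 6 workdays ahead of its baseline date (total float = +6).
print(f"CPLI = {cpli(120, 6):.2f}")   # 1.05 -> some schedule margin remains

# 47 tasks completed out of 50 baselined to finish by the status date.
print(f"BEI  = {bei(47, 50):.2f}")    # 0.94 -> below the 0.95 threshold
```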

Although the DCMA assessment is not a formally recognized industry standard, it generally represents good scheduling practice.  Whether or not your organization is required to use it, you may find that this analysis helps to identify issues with schedule logic and performance that should be investigated.  Satisfying these targets doesn’t mean that a schedule is credible or correct, but failing to meet these goals indicates that a thorough schedule review, to understand why the standards weren’t met, might be in order.
