Troubleshooting "Job failed due to excessive parallelism"

Problem:

When a CircleCI job specifies a parallelism value that exceeds the configured concurrency limit, the job fails immediately rather than queuing. This can be confusing, as users may expect jobs to queue when concurrency is maxed out.

Why It Happens:

CircleCI enforces concurrency limits based on the account type and plan.

Additionally, container runners have a default limit of 20 concurrent tasks.

A job whose parallelism exceeds the account-level or container runner maximum concurrency is expected to fail with the "Job failed due to excessive parallelism" error.

Solutions:

  • Solution 1: Check Your Concurrency Limit

    Confirm the concurrency limit for your specific account. This information can often be found in your CircleCI plan details or by contacting support.

    For container runners, the default limit is 20 concurrent tasks. This can be raised with the agent.maxConcurrentTasks parameter if your runner concurrency allows a higher value (a sample values file is shown after these solutions).

  • Solution 2: Adjust Parallelism

    Reduce the parallelism value in your job configuration to stay within the concurrency limit. For example, setting parallelism: 20 instead of parallelism: 21 can prevent job failures.
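As a reference, here is a minimal sketch of a container runner Helm values file that raises this limit. The file layout and the example value of 30 are assumptions; only the agent.maxConcurrentTasks key comes from the container runner configuration, and your plan's concurrency limit must still allow the higher value.

# values.yaml for the container runner Helm chart (sketch; merge into your existing values)
agent:
  # Default is 20 concurrent tasks; 30 here is only an example value.
  maxConcurrentTasks: 30

Apply the change with your usual helm upgrade/install of the container runner chart, then confirm that jobs with higher parallelism are accepted.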

Example Scenarios:

Failing Scenario:

job-a:
  parallelism: 21

This job fails because its parallelism (21) exceeds the concurrency limit of 20.

Queueing Scenario:

job-b:
  parallelism: 20

job-c:
  parallelism: 20

Both jobs can run in sequence if the concurrency limit is 20. The subsequent job, job-c, will queue until job-b has completed.
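For illustration, here is a minimal .circleci/config.yml sketch of the queueing scenario. The job names match the example above, while the cimg/base image, steps, and workflow name are placeholder assumptions:

version: 2.1

jobs:
  job-b:
    docker:
      - image: cimg/base:stable   # placeholder executor image
    parallelism: 20               # at the limit, not above it
    steps:
      - run: echo "job-b work"
  job-c:
    docker:
      - image: cimg/base:stable
    parallelism: 20
    steps:
      - run: echo "job-c work"

workflows:
  example:
    jobs:
      - job-b
      - job-c

With a concurrency limit of 20, neither job fails; job-c's tasks simply wait in the queue until job-b's tasks finish and free capacity.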

Outcome:

By verifying your concurrency limit and adjusting job parallelism accordingly, you can avoid job failures and ensure your workflows run as expected.
