Optimize Odoo and PostgreSQL: Performance Tuning Guide

Introduction

You’ve got Odoo running on your server and now it’s time to squeeze out every drop of performance. This guide shows you how to fine-tune both Odoo and PostgreSQL so that Odoo runs faster, handles more users, and stays smooth even when things get busy, whether you manage the server yourself or through Cloudpepper.

Before you start

Keep your server’s CPU and RAM usage below 80%.

When tuning Odoo and PostgreSQL, monitor resource usage closely. Brief spikes over 80% are usually harmless if recovery is quick, but consistently staying below this level reduces the risk of bottlenecks and helps your Odoo instances run smoothly.

With Cloudpepper, you can track CPU and RAM usage in the Monitoring tab of your server dashboard. For example, the server below showed occasional spikes towards 100%. Reducing the number of workers immediately stabilized usage back under 80%.

CPU and RAM should not go beyond 80%

Step 1. Choose the right number of workers

What is an Odoo worker?

An Odoo instance spawns several types of workers – dedicated processes that each load the full Python environment and handle different tasks:

  • HTTP workers (workers): handle the bulk of user activity, such as loading pages, creating invoices, or processing sales.
  • Cron workers (max_cron_threads): execute scheduled background jobs like reminders, auto-confirmations, or data updates.
  • Long-polling worker: manages real-time features such as chat and live notifications.

Think of workers like cashiers. Too few and queues build up; too many and they sit idle wasting resources.

For performance tuning, the focus is almost always on HTTP workers (we’ll just call these “workers”), since they drive user interactions. Cron and long-polling workers usually stay at one each. Only consider increasing the cron workers (max_cron_threads) to 2 or 3 if you have many background tasks running in parallel.

How many workers do I need?

As a general rule of thumb, 1 worker can support ~6 concurrent users or ~5,000 website visitors per day. However, actual capacity per worker varies widely with usage patterns. For example, one Odoo deployment might handle 50–100 users on a single worker if they perform light operations, whereas another with heavy transactions might see one worker struggle with 20 users.

Example case: “How many workers do I need for 20 Odoo users?”

If all 20 users are working concurrently (actively clicking, saving, or loading screens at the same moment), your Odoo instance will need 3 to 4 workers (20 active users ÷ 6 concurrent users/worker = 3.33 workers). If most users are idle and only a few work at the same time, 1 worker is enough, but setting 2 gives you some buffer for spikes.
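The rule of thumb can be sketched as a quick calculation (workers_needed is just an illustrative helper, not part of Odoo):

```python
import math

# Rule of thumb: ~6 concurrent users per HTTP worker.
def workers_needed(concurrent_users: int, users_per_worker: int = 6) -> int:
    """Round up, since a fractional worker means users start queueing."""
    return max(1, math.ceil(concurrent_users / users_per_worker))

print(workers_needed(20))  # 20 / 6 = 3.33 -> 4 workers
```

Remember the 2-worker minimum for PDF generation described below, even when this math suggests 1.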

Minimum of 2 workers required for PDF generation

Even if your math suggests just 1 worker, Odoo requires at least 2 to generate PDF reports: one to process the request and another to serve static assets (CSS, images). With only 1 worker, the request blocks itself and the report fails.

What about zero workers?

When you set workers = 0, Odoo runs in “threaded” mode. This is fine for development, demos, or very light loads where you want to minimize memory usage. If you choose this mode, make sure to also set threads = 2, as PDF generation still requires two concurrent threads.
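A minimal threaded-mode configuration might look like this (illustrative fragment; the threads value follows the PDF-generation note above):

```ini
; odoo.conf — threaded mode (development / very light loads only)
workers = 0
threads = 2   ; per the note above, PDF generation needs two concurrent threads
```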

However, for production environments, multiprocess mode (workers > 0) is strongly recommended. It offers better CPU utilization, crash isolation, and enables Odoo’s built-in worker recycling for long-running or memory-heavy processes.

Where to set Odoo workers?

You can set the workers parameter in the odoo.conf configuration file of your Odoo instance. You will need to restart the Odoo service for the changes to take effect.
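For example, on a self-managed server the relevant fragment of odoo.conf might look like this (the path and values are typical examples, not universal defaults):

```ini
; /etc/odoo/odoo.conf — location varies by install
workers = 4
max_cron_threads = 1
```

Then restart the service, e.g. with `sudo systemctl restart odoo` on a typical systemd setup (the service name may differ on your server).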

On Cloudpepper, go to the Config tab of your instance and click Save. Your Odoo instance will automatically restart with the new configuration.

How to change Odoo workers in Cloudpepper dashboard

How many workers can my CPU handle?

If you size by hardware rather than user counts, start from your CPU core count. A common upper-limit guideline is:

Total workers (HTTP + cron) = CPU cores × 2 + 1

This includes all HTTP and cron workers across all Odoo instances on the server. The long-polling worker isn’t part of this formula since it barely uses CPU, but it does load the full Odoo environment, so make sure to budget memory (RAM) for it as an extra process.

Example: on a 4-core server

4 cores × 2 + 1 = 9 total workers (HTTP workers + cron workers)
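The CPU-based formula can be expressed as a small sketch (total_workers is an illustrative helper, not an Odoo API):

```python
# Upper-limit guideline: total workers (HTTP + cron) = CPU cores * 2 + 1,
# across all Odoo instances on the server.
def total_workers(cpu_cores: int) -> int:
    return cpu_cores * 2 + 1

cores = 4
total = total_workers(cores)
http_workers = total - 1      # reserve 1 for cron (max_cron_threads = 1)
print(total, http_workers)    # 9 total -> workers = 8, max_cron_threads = 1
```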

Understanding the cron worker

The formula accounts for the worker Odoo uses for background tasks.

This cron worker is configured separately using the max_cron_threads parameter. In the example above with 9 total workers, your Odoo config could, for example, look like this:

workers = 8
max_cron_threads = 1

or, if you run many background tasks simultaneously, it can be split like this:
workers = 7
max_cron_threads = 2

Keep in mind: every worker uses RAM

Each Odoo worker loads the full environment and keeps using memory (RAM) even when idle. Tasks like report generation or background jobs can push memory usage much higher.

Typical RAM usage per worker:

  • Idle: ~80–200 MB (sometimes more)
  • Light usage: ~200-300 MB
  • Average usage: ~300-400 MB
  • Heavy usage (large reports, etc.): 1GB or more

Example: 2-core server with 4 GB RAM

Using the CPU-based rule:
2 × 2 + 1 = 4 HTTP workers + 1 cron worker
+ 1 long-polling worker
= 6 total workers

6 workers × ~300 MB = ~1.8 GB total RAM

That’s about 44% of the server’s 4GB memory used just by Odoo workers, not counting PostgreSQL, the OS, and other services.
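The arithmetic behind this budget, as a sketch (the numbers are the estimates from above):

```python
# Estimate worker RAM usage as a share of total server memory.
workers = 6                 # 4 HTTP + 1 cron + 1 long-polling
avg_mb_per_worker = 300     # "average usage" estimate from the table above
total_ram_mb = 4 * 1024     # 4 GB server

used_mb = workers * avg_mb_per_worker
share = used_mb / total_ram_mb * 100
print(f"{used_mb} MB used, {share:.0f}% of RAM")  # 1800 MB, ~44% of RAM
```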

How many Odoo workers in total are active on my server?

If you’re using Cloudpepper, you can view the total number of active Odoo workers across all your instances directly on the Dashboard of your server.

Number of Odoo workers on the Cloudpepper dashboard

Optimizing worker count for heavy loads (less is more)

When running intensive tasks like large data imports or big reports, Odoo and PostgreSQL compete heavily for CPU. In these cases, it’s best to start conservatively:

Odoo HTTP Workers = number of CPU cores

This approach ensures:

  • Each worker has enough CPU time
  • Less CPU context-switching overhead
  • PostgreSQL keeps the resources it needs

Scaling up safely

If the system feels sluggish or cron jobs start queuing, and your server’s CPU usage stays under 70 percent with enough free RAM, you can add another worker. This lets you increase performance from a stable baseline without risking system stability.

Recommended worker setup summary

Based on users:

workers = number of active concurrent users ÷ 6
max_cron_threads = 1

Based on CPU (recommended start and in case of heavy load):

workers = number of CPU cores (minimum 2)
max_cron_threads = 1

Based on CPU (upper limit rule):

workers = 2x number of CPU cores (minimum 2)
max_cron_threads = 1

Important Odoo settings beyond workers

Before jumping into PostgreSQL tuning, let’s cover a few more essential Odoo settings that can affect performance, stability, and resource usage.

• max_cron_threads
  Default: 1
  Recommended: 1 for most instances; 2 to 3 if you have many concurrent or long-running cron jobs.
  Notes: Defines how many workers can run scheduled (cron) jobs simultaneously. Each cron thread counts as an extra worker, so be sure the server has sufficient CPU/RAM; you may need to reduce regular HTTP workers accordingly.

• limit_time_cpu
  Default: 60 s
  Recommended: Keep the default. If legitimate jobs (mass imports, huge PDF reports) exceed 60 s of CPU, you might temporarily raise the limit or, better, optimize/batch those jobs.
  Notes: Maximum CPU time a worker may spend on a single request (including cron). If exceeded, Odoo force-kills the process, protecting against infinite loops or runaway code. Aim to keep all tasks within 60 s.

• limit_time_real
  Default: 120 s
  Recommended: Roughly 2 × limit_time_cpu (≈ 120 s when the CPU limit is 60 s).
  Notes: Maximum “wall-clock” time a request can take, including DB waits and external calls. Keeps a buffer for I/O or network delays while still capping excessively slow requests.

• limit_memory_soft
  Default: 2048 MB
  Recommended: 2048 MB (2 GB) for most cases. Keep all workers (HTTP + cron + 1 long-polling process) × limit_memory_soft below the machine’s physical RAM (preferably ≤ 70-80% to leave room for PostgreSQL and the OS). On small-RAM servers, either reduce the worker count or lower the limit (e.g. limit_memory_soft = 1024 MB) so the product still fits safely.
  Notes: Soft worker memory cap. If exceeded, the worker finishes its current job and then restarts gracefully, curbing leaks and bloated processes without interrupting active users.

• limit_memory_hard
  Default: 2560 MB
  Recommended: Slightly above limit_memory_soft, e.g. 2560 MB (2.5 GB).
  Notes: Hard memory ceiling. When hit, the worker is killed immediately to prevent system overload: a last-resort safety net.

• db_maxconn
  Default: 64
  Recommended: In most cases Odoo does not use more than 1-2 connections; 32 is a safe ceiling, and only very special cases need 64.
  Notes: Controls how many concurrent DB connections each Odoo worker may open. Too low and Odoo waits for free connections; too high and PostgreSQL exhausts memory. This value also feeds into the max_connections sizing in Step 2.
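Putting these settings together, a config for a mid-sized server might look like this (illustrative values, not a drop-in; note that Odoo's memory limits are expressed in bytes in odoo.conf):

```ini
; odoo.conf — illustrative example, adjust to your server
workers = 7
max_cron_threads = 2
limit_time_cpu = 60
limit_time_real = 120
limit_memory_soft = 2147483648   ; 2 GB, in bytes
limit_memory_hard = 2684354560   ; 2.5 GB, in bytes
db_maxconn = 32
```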

Step 2. Tuning PostgreSQL for Odoo

2.1 Parameter recommendations

These recommendations assume PostgreSQL and Odoo are running on the same server.

2.1.1 Memory for caching & sorts

PostgreSQL ships with defaults for tiny 1 GB servers. For Odoo, those defaults are far too conservative. By adjusting four key memory parameters, you can unlock significant performance gains.

• shared_buffers (DB page cache)
  Recommended: 15-20% of total RAM. Can be adjusted upwards on a dedicated server with very large RAM (e.g. 25-40%), but shouldn’t surpass 6-8 GB in most cases.

• effective_cache_size (hint for the planner: Postgres + OS cache)
  Recommended: 50-70% of total RAM excluding Odoo, roughly 3 × shared_buffers.

• work_mem (RAM per sort/hash operation)
  Recommended: 8 MB (4 GB box) → 32 MB (16 GB box). Increasing further might benefit large analytical queries in Odoo if the server has plenty of RAM, but do it gradually and with monitoring (check PostgreSQL stats to ensure it isn’t causing memory swapping).

• maintenance_work_mem (VACUUM & index builds)
  Recommended: ~1/16 of total RAM, e.g. 4 GB RAM → 256 MB.
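As a rough calculator, the four settings can be derived from total RAM like this (the ratios are assumed midpoints of the ranges above, not official PostgreSQL defaults):

```python
# Derive the four memory settings from total RAM using assumed midpoint
# ratios: ~20% shared_buffers, 3x that for effective_cache_size,
# RAM/512 for work_mem, RAM/16 for maintenance_work_mem.
def pg_memory_settings(total_ram_mb: int) -> dict:
    shared = total_ram_mb * 20 // 100
    return {
        "shared_buffers_mb": shared,
        "effective_cache_size_mb": shared * 3,
        "work_mem_mb": max(8, total_ram_mb // 512),   # 4 GB -> 8 MB, 16 GB -> 32 MB
        "maintenance_work_mem_mb": total_ram_mb // 16,
    }

print(pg_memory_settings(4 * 1024))
```

Treat the output as a starting point and compare it against the sample configs in section 2.2.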

2.1.2 WAL & checkpoint smoothing

Fewer, smoother checkpoints reduce stalls.

• checkpoint_completion_target (spreads writes, avoids I/O spikes)
  Recommended: 0.9.

• checkpoint_timeout (lowers checkpoint frequency and I/O spikes)
  Recommended: 15-30 min. The default of 5 min is usually too short.

• min_wal_size / max_wal_size (how much WAL accumulates before a checkpoint is forced)
  Recommended: 1 GB / 2 GB (small Odoo instance) → 2 GB / 4 GB (large Odoo instance). If you have enough disk space, you can increase it even further (e.g. 1-2 GB min, 5-10 GB max) to further reduce checkpoint frequency and I/O spikes.

Fewer checkpoints = fewer fsync storms and faster commits under load.

2.1.3 Connections, parallelism & SSD tweaks

Keep max_connections sane, align parallel workers with cores, and use SSD-friendly planner hints.

• max_connections (each backend uses ~10 MB of RAM)
  Recommended: in practice, start with 50 and monitor from your dashboard. Theoretical ceiling: max_connections ≥ (HTTP workers + max_cron_threads + 1 long-polling + extra job-runners) × db_maxconn, summed over all Odoo instances. Can be lowered to 20-50 when using pgbouncer.

• max_worker_processes (parallel infrastructure)
  Recommended: = CPU cores (2-core → 2, 4-core → 4, 8-core → 8).

• max_parallel_workers (pool of parallel workers used by all parallel queries)
  Recommended: = CPU cores. Always ensure max_parallel_workers ≤ max_worker_processes.

• max_parallel_workers_per_gather (cores per query)
  Recommended: ≈ half the cores, e.g. 1-2 for 2-4 vCPU or 3-4 for 8 cores. For larger CPUs (16+ vCPU), start with 4-6.

• random_page_cost (planner: SSD random read speed)
  Recommended: 1.1 on SSD/NVMe.

• effective_io_concurrency (Linux readahead / concurrent I/O)
  Recommended: 64 (SATA SSD) → 200 (NVMe).

• jit (JIT speeds up big queries but slows Odoo’s many small ones, wasting CPU)
  Recommended: off.
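The theoretical max_connections ceiling can be computed like this (connection_ceiling is an illustrative helper; the numbers are examples):

```python
# Theoretical ceiling from the formula above, summed over all Odoo
# instances on the server.
def connection_ceiling(instances, db_maxconn=32):
    """instances: list of (http_workers, cron_threads) per Odoo instance."""
    procs = sum(http + cron + 1 for http, cron in instances)  # +1 long-polling each
    return procs * db_maxconn

# One instance with 4 HTTP workers and 1 cron worker:
print(connection_ceiling([(4, 1)]))  # (4 + 1 + 1) * 32 = 192
```

In practice, start far lower (e.g. 50) and monitor; the ceiling only guarantees Odoo can never starve for connections.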

2.1.4 Autovacuum keeps you fast

Autovacuum prevents table bloat. Defaults are fine, just be sure it is enabled. On very large tables set a per‑table lower autovacuum_vacuum_scale_factor (e.g. 0.05) so cleanup runs sooner.
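A per-table override can be applied with a statement like the following (account_move_line is just an example of a typically large Odoo table; pick the tables that are actually large in your database):

```sql
-- Make autovacuum trigger sooner on a large table (5% dead rows instead
-- of the default 20%).
ALTER TABLE account_move_line
  SET (autovacuum_vacuum_scale_factor = 0.05);
```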

2.2 Sample PostgreSQL configs

2.2.1 Small – 2 vCPU / 4 GB RAM (NVMe SSD)

shared_buffers = 768MB
effective_cache_size = 2.5GB
work_mem = 8MB
maintenance_work_mem = 256MB
checkpoint_timeout = 15min
checkpoint_completion_target = 0.9
min_wal_size = 1GB
max_wal_size = 2GB
max_connections = 50
max_worker_processes = 2
max_parallel_workers = 2
max_parallel_workers_per_gather = 1
random_page_cost = 1.1
effective_io_concurrency = 200
jit = off

2.2.2 Medium – 4 vCPU / 8 GB RAM (NVMe SSD)

shared_buffers = 1.5GB
effective_cache_size = 5GB
work_mem = 16MB
maintenance_work_mem = 512MB
checkpoint_timeout = 20min
checkpoint_completion_target = 0.9
min_wal_size = 1GB
max_wal_size = 4GB
max_connections = 100
max_worker_processes = 4
max_parallel_workers = 4
max_parallel_workers_per_gather = 2
random_page_cost = 1.1
effective_io_concurrency = 200
jit = off

2.2.3 Large – 8 vCPU / 16 GB RAM (NVMe SSD)

shared_buffers = 3GB
effective_cache_size = 12GB
work_mem = 32MB
maintenance_work_mem = 1GB
checkpoint_timeout = 30min
checkpoint_completion_target = 0.9
min_wal_size = 2GB
max_wal_size = 8GB
max_connections = 100
max_worker_processes = 8
max_parallel_workers = 8
max_parallel_workers_per_gather = 4
random_page_cost = 1.1
effective_io_concurrency = 200
jit = off