Optimize Odoo and PostgreSQL: Performance Tuning Guide

Introduction

A fast Odoo server isn’t just about having powerful hardware. It’s also about how well Odoo and PostgreSQL are tuned to the server’s capacity. Good tuning speeds up loading times, improves scalability, and keeps your server responsive—even during busy periods.

This guide gives practical tuning tips for a range of server setups, whether you manage the server yourself or let Cloudpepper manage it for you.

Balancing CPU and RAM between Odoo and PostgreSQL

When Odoo and PostgreSQL run on the same server, they share the same pool of CPU and RAM. The CPU handles all processing tasks, while RAM provides fast access to data needed by both systems.

Proper tuning ensures neither Odoo nor PostgreSQL runs short on resources, keeping your system smooth and responsive.

Aim to remain below 80% of CPU and RAM usage at all times

During tuning, monitor your server’s resource usage and ensure CPU and RAM usage remain below 80% capacity at all times. This prevents your Odoo instance(s) from being bottlenecked by the server.

If you’re using Cloudpepper, you can monitor your CPU and RAM under the Monitoring tab in your server dashboard. Below is an example of a Cloudpepper-managed server where CPU usage occasionally spiked above 80%. Reducing the number of workers immediately stabilized the system back under 80%.

CPU and RAM should not go beyond 80%

Step 1. Choose the right number of workers

An Odoo instance spawns several types of workers – dedicated processes that each load the full Python environment and handle different types of tasks:

  • HTTP workers (workers): handle the bulk of user activity, such as loading pages, creating invoices, or processing sales.
  • Cron workers (max_cron_threads): execute scheduled background jobs like reminders, auto-confirmations, or data updates.
  • Long-polling worker: manages real-time features such as chat and live notifications.

Think of Odoo workers like cashiers at a supermarket checkout. Too few cashiers and customers line up (requests back up); too many and they sit idle, wasting resources.

When tuning performance, the primary focus is on HTTP workers (workers), since these are responsible for user interactions. For cron and long-polling workers, you typically only need one each. In cases where many background tasks run concurrently, you may want to increase cron workers (max_cron_threads) to 2 or 3.

“I have 4 active users—how many workers (HTTP workers) do I need?”

As a general rule of thumb:

  • 1 HTTP worker supports ~6 concurrent users (~25 active users)
  • Or ~5,000 website visitors per day

“So I just need 1 HTTP worker for 4 users?”
Not quite. PDF report generation in Odoo requires at least 2 HTTP workers: one to handle the user’s request and another to serve static assets like CSS and images. If only one is available, the request will block itself, and the report will fail to generate.

So with 4 active users, your Odoo system requires 2 HTTP workers. As one worker can handle 6 users, this setup provides enough headroom to support up to 12 active users on a regular Odoo installation.
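
As a quick illustration, a minimal odoo.conf for this 4-user example could look like the sketch below (values follow the rule of thumb above; adjust them to your own workload):
workers = 2
max_cron_threads = 1

With two HTTP workers available, PDF reports can still be generated: one worker handles the request while the other serves the assets.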

Keep in mind the active user estimation may vary depending on your custom modules and how intensively the system is used.

“What if I set my Odoo instance to use 0 workers?”

When you set workers = 0, Odoo runs in “threaded” mode. This is fine for development, demos, or very light loads where you want to minimize memory usage. If you choose this mode, make sure to also set threads = 2, as PDF generation still requires two concurrent threads.

However, for production environments, multiprocess mode (workers > 0) is strongly recommended. It offers better CPU utilization and crash isolation, and it enables Odoo’s built-in worker recycling for long-running or memory-heavy processes.

Where can I set my Odoo HTTP workers?

You can change the number of HTTP workers by editing the odoo.conf configuration file of your Odoo instance and setting the workers parameter to the desired value. Restart the Odoo service for the change to take effect.

When using Cloudpepper, you can change the workers parameter under the Config tab of your instance and click Save. Your Odoo instance will automatically restart with the new number of workers.

How to change Odoo workers in Cloudpepper dashboard

How many workers can my CPU handle?

If you size by hardware rather than user counts, start from your CPU core count.

A common upper-limit guideline is:

Total workers (HTTP + cron) = CPU cores × 2 + 1

This includes all HTTP and cron workers across all Odoo instances on the server. The long-polling worker is not included in this formula, as it uses minimal CPU. However, it still loads the full Odoo environment, so be sure to account for its memory usage as one additional process.

Example: on a 4-core server

4 cores × 2 + 1 = 9 total workers (HTTP workers + cron workers)

Understanding the cron worker

The formula already includes the worker(s) Odoo uses for background (cron) tasks.

These cron workers are configured separately using the max_cron_threads parameter. In the example above with 9 total workers, your Odoo config could, for example, look like this:
workers = 8
max_cron_threads = 1

or can be split like this:
workers = 7
max_cron_threads = 2

Keep in mind: each extra worker uses RAM

Each Odoo worker preloads the full environment and continuously consumes memory—even when idle. Resource-intensive tasks like report generation or background jobs can significantly raise memory demands.

Typical RAM usage per worker:

  • Idle: ~80–200 MB (or sometimes more)
  • Light usage: ~200–300 MB
  • Average usage: ~300–400 MB
  • Heavy usage (e.g., large reports): 1 GB or more

Example: on a 2-core server with 4 GB RAM

Following the CPU-based rule:
2 cores × 2 + 1 = 5 workers → 4 HTTP workers + 1 cron worker
+ 1 long-polling worker (not counted in the formula)
= 6 worker processes in total

6 workers × ~300 MB = ~1.8 GB total RAM

That’s roughly 44% of the total RAM of a 2 vCPU / 4 GB RAM server, excluding the memory needed for PostgreSQL, the operating system, and other services.

How many Odoo workers are active on my server?

If you’re using Cloudpepper, you can view the total number of active Odoo workers across all your instances directly from the Dashboard of your server.

Number of Odoo workers on the Cloudpepper dashboard

Optimizing worker count for heavy loads (“Less is more”)

Under intensive workloads (e.g., heavy data imports, large reports), Odoo and PostgreSQL compete significantly for CPU. In such scenarios, start conservatively:

Odoo HTTP workers = number of CPU cores

This ensures:

  • Workers have adequate CPU time
  • Reduced CPU context-switching
  • PostgreSQL maintains sufficient CPU availability

Scaling up workers safely

If your system feels sluggish or cron jobs start queuing up—and your server’s CPU usage stays consistently below 70% with enough free RAM—you can confidently add another worker. This lets you scale performance upward from a balanced baseline without risking stability.

Recommended worker setup summary

General recommendation:

workers = 2 × number of CPU cores (min. 2)
max_cron_threads = 1

Balanced baseline under heavy load:

workers = number of CPU cores (min. 2)
max_cron_threads = 1

Other Odoo configuration parameters

Before jumping into PostgreSQL tuning, let’s cover a few more essential Odoo settings that can affect performance, stability, and resource usage.

For each parameter below, the default value (where applicable), the recommended setting, and explanation & tips are listed.

  • max_cron_threads (default: 1; recommended: 2–3 if you have many concurrent cron jobs or long-running cron tasks): defines how many workers can run scheduled (cron) jobs simultaneously. Each cron thread counts as an extra worker, so be sure the server has sufficient CPU / RAM; you may need to reduce regular HTTP workers accordingly.
  • limit_time_cpu (default: 60 s; only raise it for heavy reports or unusually long operations): maximum CPU time a worker may spend on a single request (including cron). If exceeded, Odoo force-kills the process, protecting against infinite loops or runaway code. Aim to keep all tasks within 60 s.
  • limit_time_real (default: 120 s; recommended: roughly 2 × limit_time_cpu, ≈ 120 s when the CPU limit is 60 s): maximum “wall-clock” time a request can take—including DB waits and external calls. Keeps a buffer for I/O or network delays while still capping excessively slow requests.
  • limit_memory_soft (recommended: 1024 MB / 1 GB for most cases): soft worker memory cap. If exceeded, the worker finishes its current job and then restarts gracefully, curbing leaks and bloated processes without interrupting active users.
  • limit_memory_hard (recommended: slightly above limit_memory_soft, e.g. 2048 MB): hard memory ceiling. When hit, the worker is killed immediately to prevent system overload—a last-resort safety net.
  • db_maxconn (default: 64): controls how many concurrent DB connections each Odoo process may open. Keep (HTTP workers + cron workers + 1 long-polling worker + job queue threads) × db_maxconn ≤ PostgreSQL max_connections. Too low → Odoo waits on free connections; too high → PostgreSQL exhausts memory. Use the formula to size it safely.
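
Putting the worker settings from Step 1 and the parameters above together, a hedged example odoo.conf for a 4-core / 8 GB server could look like the sketch below (memory limits are expressed in bytes, as odoo.conf expects; treat every value as a starting point to adapt to your own workload):
workers = 8
max_cron_threads = 1
limit_time_cpu = 60
limit_time_real = 120
limit_memory_soft = 1073741824
limit_memory_hard = 2147483648
db_maxconn = 64

Here 1073741824 bytes equals 1 GB and 2147483648 bytes equals 2 GB, matching the recommendations above.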

Step 2. Tuning PostgreSQL for Odoo

2.1 Memory for caching & sorts

The default PostgreSQL config is built for a 1 GB micro‑server. Raising just four parameters often triples throughput:

  • shared_buffers (DB page cache): 25 % of RAM (20 % if Odoo runs on the same box)
  • effective_cache_size (hint for the planner, Postgres + OS cache): ≈ 3 × shared_buffers
  • work_mem (RAM per sort/hash operation): 8 MB (4 GB box) → 32 MB (16 GB box)
  • maintenance_work_mem (VACUUM & index builds): 256 MB → 1 GB depending on RAM

2.2 WAL & checkpoint smoothing

  • checkpoint_completion_target (spreads checkpoint writes to avoid I/O spikes): 0.9
  • min_wal_size / max_wal_size (how much WAL before forcing a checkpoint): 1 GB / 2 GB (small) → 2 GB / 4 GB (large)
  • wal_buffers (buffer before WAL hits disk): 16 MB

Fewer checkpoints = fewer fsync storms and faster commits under load.

2.3 Connections, parallelism & SSD tweaks

  • max_connections (each backend uses ~10 MB of RAM): 20–50 (single Odoo instance)
  • max_worker_processes (parallel query infrastructure): = CPU cores
  • max_parallel_workers_per_gather (cores per query): 1 (2‑core) → 4 (8‑core)
  • random_page_cost (tells the planner how fast random reads are on SSD): 1.1 on SSD/NVMe
  • effective_io_concurrency (Linux read-ahead depth): 64 (SATA SSD) → 200 (NVMe)

2.4 Autovacuum keeps you fast

Autovacuum prevents table bloat. The defaults are fine; just be sure it is enabled. On very large tables, set a lower per-table autovacuum_vacuum_scale_factor (e.g. 0.05) so cleanup runs sooner.
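
For example, a lower per-table threshold can be set with a single SQL statement like the sketch below (account_move_line is only an illustrative table name; apply it to whichever tables actually grow large in your database):
ALTER TABLE account_move_line SET (autovacuum_vacuum_scale_factor = 0.05);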

2.5 Sample PostgreSQL configs

2.5.1 Small – 2 vCPU / 4 GB RAM (NVMe SSD)

shared_buffers = 1GB
effective_cache_size = 3GB
work_mem = 8MB
maintenance_work_mem = 256MB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
min_wal_size = 1GB
max_wal_size = 2GB
max_connections = 20
max_worker_processes = 2
max_parallel_workers_per_gather = 1
random_page_cost = 1.1
effective_io_concurrency = 200

2.5.2 Medium – 4 vCPU / 8 GB RAM (NVMe SSD)

shared_buffers = 2GB
effective_cache_size = 6GB
work_mem = 16MB
maintenance_work_mem = 512MB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
min_wal_size = 1GB
max_wal_size = 2GB
max_connections = 30
max_worker_processes = 4
max_parallel_workers_per_gather = 2
random_page_cost = 1.1
effective_io_concurrency = 200

2.5.3 Large – 8 vCPU / 16 GB RAM (NVMe SSD)

shared_buffers = 4GB
effective_cache_size = 12GB
work_mem = 32MB
maintenance_work_mem = 1GB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
min_wal_size = 2GB
max_wal_size = 4GB
max_connections = 50
max_worker_processes = 8
max_parallel_workers_per_gather = 4
random_page_cost = 1.1
effective_io_concurrency = 200
