

Optimizing PostgreSQL Queries: Techniques and Examples

PostgreSQL is a powerful and versatile relational database system, but inefficient queries can drag down its performance significantly. Here, we’ll explore several optimization techniques, complete with examples, to help you streamline your PostgreSQL queries.

1. Indexing: The Key to Fast Retrieval

Indexes are essential for speeding up data retrieval. They work like a table of contents for your database tables. Let’s consider a simple example:

Suppose you have a table employees:

CREATE TABLE employees (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100),
    department VARCHAR(100),
    salary NUMERIC
);

If you frequently query employees by their department, create an index:

CREATE INDEX idx_department ON employees(department);

Now, a query like this:

SELECT * FROM employees WHERE department = 'Sales';

will run much faster because PostgreSQL can quickly locate the relevant rows using the index.

Query Performance Comparison:

Without an index, PostgreSQL performs a sequential scan on the entire employees table: it checks each row one by one until it finds all rows that match the condition. This can be slow, especially with a large dataset.

Index Impact: Before and after indexing, use EXPLAIN:

  • Before: High cost due to Seq Scan.

  • After: Reduced cost with Index Scan.
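For illustration, here is roughly the shape of the two plans (the cost and row figures below are invented for illustration; actual numbers depend on your data):

EXPLAIN SELECT * FROM employees WHERE department = 'Sales';

-- Before indexing (illustrative):
--   Seq Scan on employees  (cost=0.00..431.00 rows=210 width=72)
--     Filter: ((department)::text = 'Sales'::text)

-- After CREATE INDEX idx_department (illustrative):
--   Index Scan using idx_department on employees  (cost=0.29..112.40 rows=210 width=72)
--     Index Cond: ((department)::text = 'Sales'::text)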


2. Avoiding SELECT *

While it may be tempting to retrieve all columns, doing so adds unnecessary overhead. Specify only the columns you need. Instead of:

SELECT * FROM employees WHERE salary > 50000;

use:

SELECT name, salary FROM employees WHERE salary > 50000;

This reduces the amount of data transferred and processed, improving performance.

3. Use of Joins vs. Subqueries

Joins can often be more efficient than subqueries. Consider the following example:

If you want to find employees who earn more than the average salary in their department, you might write a subquery:

SELECT name
FROM employees e
WHERE salary > (
    SELECT AVG(salary)
    FROM employees
    WHERE department = e.department
);

Instead, use a join:

SELECT e.name
FROM employees e
JOIN (
    SELECT department, AVG(salary) AS avg_salary
    FROM employees
    GROUP BY department
) avg_dept ON e.department = avg_dept.department
WHERE e.salary > avg_dept.avg_salary;

Query Performance Comparison:

Joins can be faster, especially with proper indexing, as they allow the database to use efficient algorithms to combine datasets.

  • Readability: For straightforward relationships, joins can make queries easier to read.

  • Complexity: Multiple joins can make queries complex and harder to maintain.

Subqueries have their own trade-offs:

  • Modularity: Subqueries can break complex logic into manageable parts.

  • Clarity: They can clarify intent, especially when the logic is inherently hierarchical.

  • Performance: Subqueries may be less efficient, particularly correlated subqueries, which run once for each row of the outer query.

  • Overhead: Depending on how PostgreSQL optimizes them, they can add unnecessary complexity to execution plans.
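To check which form wins on your data, EXPLAIN both versions above and compare the estimated costs (a quick sanity check; real timings come from EXPLAIN ANALYZE, covered in the next section):

EXPLAIN SELECT name FROM employees e
WHERE salary > (SELECT AVG(salary) FROM employees WHERE department = e.department);

EXPLAIN SELECT e.name FROM employees e
JOIN (SELECT department, AVG(salary) AS avg_salary FROM employees GROUP BY department) avg_dept
ON e.department = avg_dept.department
WHERE e.salary > avg_dept.avg_salary;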


4. Query Execution Plans with EXPLAIN

Understanding how PostgreSQL executes your queries is crucial for optimization. Use the EXPLAIN command to analyze execution plans.

For example:

EXPLAIN SELECT * FROM employees WHERE department = 'Marketing';

Look for:

  • Seq Scan: Indicates a sequential scan; consider adding an index if this appears on large tables.

  • Index Scan: Shows that an index is being used, which is typically what you want.

EXPLAIN helps identify issues such as:

  • Inaccurate statistics leading to poor join/scan choices

  • Maintenance activity (VACUUM and ANALYZE) not being aggressive enough

  • Corrupted indexes requiring a REINDEX

  • Index definition vs. query mismatch

  • work_mem being set too low, preventing in-memory sorts and joins (see the sketch below)

  • Poor join order in the written query

  • Improper ORM configuration

EXPLAIN is one of the most invaluable tools in PostgreSQL, and it can save you a great deal of time.
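To illustrate the work_mem point above, here is a minimal sketch of checking and raising the setting for the current session only (the right value depends on your workload and available memory):

-- Show the per-operation memory budget for sorts and hash tables
SHOW work_mem;

-- Raise it for this session, then re-run the query under
-- EXPLAIN ANALYZE to see whether a sort moves from disk to memory
SET work_mem = '64MB';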


Note About Statistics in PostgreSQL Query Optimization

The query optimizer calculates costs based on statistics stored in pg_statistic. That data is not kept in a human-readable format, so it isn’t useful for direct examination. For better visibility into table and row statistics, look at the pg_stats view instead.
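For example, to see what the planner knows about the employees table (pg_stats is a standard system view; only a few of its columns are selected here):

SELECT attname, n_distinct, null_frac, most_common_vals
FROM pg_stats
WHERE tablename = 'employees';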

If any of these internal statistics are off (e.g., a bloated table, or a query with so many joins that the genetic query optimizer kicks in), a sub-optimal plan may be selected, leading to poor query performance. Bad statistics aren’t necessarily a bug: they aren’t always updated in real time, and much depends on PostgreSQL’s internal maintenance. This is why database maintenance must be conducted regularly, which means frequent VACUUM-ing and ANALYZE-ing.

Without good statistics, you could end up with something like this:


EXPLAIN SELECT * FROM hospitalization_discharge WHERE visit_times < 2;
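Illustratively, the output has roughly this shape (the cost and row figures are invented; with stale statistics, the rows estimate can be far from reality, which is exactly the problem):

Seq Scan on hospitalization_discharge  (cost=0.00..5205.00 rows=149000 width=120)
  Filter: (visit_times < 2)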


When an EXPLAIN is prepended to a query, the query plan gets printed, but the query is not run. As a result, we can’t ascertain whether the statistics stored in the database are accurate, nor can we determine if certain operations necessitated costly I/O instead of executing entirely in memory. When used with ANALYZE, the query is run, and the query plan and some under-the-hood activity are printed out.

If we look at the first query above and run EXPLAIN ANALYZE instead of EXPLAIN, we get:


EXPLAIN ANALYZE SELECT * FROM hospitalization_discharge WHERE visit_times < 2;
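Again with invented figures, the output now includes actual measurements alongside the estimates:

Seq Scan on hospitalization_discharge  (cost=0.00..5205.00 rows=149000 width=120) (actual time=0.019..38.520 rows=112584 loops=1)
  Filter: (visit_times < 2)
  Rows Removed by Filter: 36416
Planning Time: 0.102 ms
Execution Time: 44.713 ms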


There’s more information: actual time and rows, as well as planning and execution times.


5. Data Type Optimization

Choosing the right data type is vital for performance. For instance, if you have a column that will only store small integers, use SMALLINT:

CREATE TABLE products (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100),
    stock SMALLINT
);

This minimizes storage space and can improve cache performance.
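You can check value sizes directly with the built-in pg_column_size function; SMALLINT occupies 2 bytes versus 4 for INTEGER and 8 for BIGINT, though column order and alignment padding mean row-level savings can be smaller:

SELECT pg_column_size(1::SMALLINT) AS smallint_bytes,  -- 2
       pg_column_size(1::INTEGER)  AS integer_bytes,   -- 4
       pg_column_size(1::BIGINT)   AS bigint_bytes;    -- 8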


6. Limit Locks with Transaction Management

Locks can slow down your queries, particularly in high-concurrency environments. To minimize locking issues:

  • Keep transactions short. For instance, if you’re updating many rows, commit promptly, and consider batching large updates (see the sketch below):

BEGIN;
UPDATE employees SET salary = salary * 1.1 WHERE department = 'Engineering';
COMMIT;

By keeping transactions concise, you reduce the time locks are held.
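If a single UPDATE touches millions of rows, it holds its locks until the transaction commits. One common remedy is to batch by primary-key range, sketched below (it reuses the id column from the employees table above; the range bounds are illustrative):

BEGIN;
UPDATE employees
SET salary = salary * 1.1
WHERE department = 'Engineering'
AND id BETWEEN 1 AND 10000;
COMMIT;

-- Repeat with the next id range (10001 to 20000, and so on),
-- so each transaction stays short and its locks are released quickly.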


7. Regular Maintenance

Regular maintenance is crucial for optimal performance. Use the following commands:

  • VACUUM: Reclaims storage and helps optimize performance.

VACUUM employees;

  • ANALYZE: Updates statistics for the query planner.

ANALYZE employees;
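You can also combine the two in a single pass:

VACUUM ANALYZE employees;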


8. Leveraging Materialized Views

For complex queries that don’t require real-time data, consider using materialized views. They store the result of a query physically, which can improve performance.

For example:

CREATE MATERIALIZED VIEW department_avg_salary AS
SELECT department, AVG(salary) AS avg_salary
FROM employees
GROUP BY department;

You can refresh this view as needed, allowing for quick access to average salaries by department without recalculating every time.
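Refreshing uses standard syntax; note that the CONCURRENTLY option, which lets readers keep querying the view during the refresh, requires a unique index on the materialized view:

REFRESH MATERIALIZED VIEW department_avg_salary;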


A Quick Review of Scan Types and Joins for Performance Tuning:

Every join type and scan type has its time and place. Some people see the term “sequential scan” and panic, without considering whether it’s worthwhile to access the data another way. Take, for example, a table with two rows: it would make no sense for the query optimizer to scan the index and then go back and retrieve data from the disk when it could simply scan the table and pull the data out without touching the index. In this case, and for most small tables, a sequential scan is more efficient.


To review the join and scan types that PostgreSQL works with:

Sequential scan:

  • A brute-force retrieval from disk

  • Scans the whole table

  • Fast for small tables

Index scan:

  • Scans all/some rows in an index; looks up rows in heap

  • Causes random seek, which can be costly for old-school spindle-based disks

  • Faster than a sequential scan when extracting a small number of rows from a large table

Index only scan:

  • Scans all/some rows in the index

  • No need to look up rows in the table because the values we want are already stored in the index itself

Bitmap heap scan:

  • Scans index, building a bitmap of pages to visit

  • Looks up only relevant pages in the table for desired rows
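To experiment with scan choices on your own tables, you can steer the planner per session; enable_seqscan is a standard planner setting, and turning it off heavily penalizes sequential scans rather than forbidding them outright:

-- Discourage sequential scans, then compare the resulting plan
SET enable_seqscan = off;
EXPLAIN SELECT * FROM employees WHERE department = 'Sales';

-- Restore the default afterwards
SET enable_seqscan = on;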


Join Types

Nested loops:

  • Scans for matching rows in the inner table for each row in the outer table

  • Fast to start, best for small tables

Merge join:

  • Zipper operation on sorted data sets

  • Good for large tables

  • High startup cost if an additional sort is required

Hash join:

  • Builds hash of inner table values, scans outer table for matches

  • Only usable for equality conditions

  • High startup cost but fast execution
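The same session-level toggles exist for join strategies; enable_nestloop, enable_mergejoin, and enable_hashjoin are standard planner settings, so you can discourage one strategy and see what the planner falls back to (the query below reuses the materialized view from section 8):

-- Discourage hash joins, then inspect the plan of a join query
SET enable_hashjoin = off;
EXPLAIN SELECT e.name
FROM employees e
JOIN department_avg_salary d ON e.department = d.department;

-- Restore the default
SET enable_hashjoin = on;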

Again, every scan type and join type has its place. It’s important that the query optimizer has good statistics to work with.

Conclusion

Optimizing PostgreSQL queries involves a combination of smart indexing, efficient SQL writing, understanding execution plans, and regular maintenance. By applying these techniques and examples, you can significantly enhance the performance of your PostgreSQL database, leading to faster query responses and more efficient data handling. Happy optimizing!

