Postgres data partitioning and Django

Nowadays, Big Data is not only a trend, but also a new paradigm leading to radical changes in software development methodologies. Because CPU clock speeds are barely increasing, multi-core processors are becoming the standard in physical infrastructure. Scaling up (with high-end machines) is much more expensive than scaling out with a cluster of distributed mid-end machines. From a data perspective, distributed machines raise new challenges when it comes to data access, data integrity or data availability.

Partitioning, also known as sharding, is often a good solution for faster data access: different partitions/shards are placed on different machines inside a cluster. Data access benefits from the data being distributed across multiple disks and the query being distributed across multiple processors.

Partitioning and sharding

Although partitioning and sharding are often used interchangeably, in Postgres the two terms mean different things.
In Postgres, partitioning refers to splitting a table into smaller tables on the same machine, while sharding means splitting it into smaller tables on different machines.

Sharding can improve performance by executing a query in parallel on multiple machines/CPUs rather than on a single machine/CPU, while partitioning divides data locally, based on a criterion, in order to help the query planner decide which partitions will be included in the query execution. Using less data for the same end result improves query speed.

Starting with Postgres 9.6, a query can also end up being executed in parallel using multiple CPUs [1].

Partitioning is available natively starting with Postgres 10 [2], while sharding gained more support starting with Postgres 9.6 through improvements to the Foreign Data Wrapper, but it is not natively supported [3].

The reason behind Postgres partitioning

Partitioning divides data based on certain criteria, allowing a query to execute faster when accessing large segments of a single partition by benefiting from sequential scans inside the partition rather than random access [4].

Inserts become faster. Having the table divided into smaller tables also means having smaller indexes to recalculate. Conditional selects and joins execute faster because they operate on less data. If the planner detects that a partition does not satisfy the condition, it will not use it when executing the query. The criterion on which partitioning is done is crucial to the performance improvement: only queries that use this criterion will benefit.

Consider partitioning a table based on its ID field. ID is a primary key on a table of 20,000,000 records, with each partition holding 1,000,000 records. When inserting, the new ID will end up in the last partition. Because that partition holds at most 1,000,000 records rather than 20,000,000, reindexing will execute much faster, resulting in a faster insert. When executing a query with the ID field in its where clause, the planner will exclude the partitions that do not match the clause. If the condition is ID BETWEEN 18500000 AND 19500000, only the last two partitions will be used by the query, reading at most 2,000,000 records instead of 20,000,000.
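The arithmetic behind this example can be sketched in a few lines of Python (illustrative only, not Partman code; the function name is mine):

```python
# Compute the half-open [start, end) ID range of the partition a record lands
# in, mirroring the range checks the planner uses to exclude partitions.
def partition_bounds(record_id, partition_size=1_000_000):
    start = (record_id // partition_size) * partition_size
    return start, start + partition_size

# IDs 18,500,000 and 19,500,000 fall into two adjacent partitions, so the
# query above touches only those two child tables.
```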

Prior to version 10, Postgres did not support partitioning natively, but it provided a few mechanisms on top of which tools for partitioning tables could be built.

Before deciding to implement partitioning, searching for alternatives is a good approach. This may lead to discovering that the database design is no longer aligned with the new specifications, or that a non-relational database fits the context better.

Postgres partitioning implementation

The fundamental Postgres feature at the very core of partitioning is table inheritance. Postgres allows a table to inherit from another, and by default a query on the parent spans the whole inheritance tree [5].

In order to partition a table, several child tables (partitions) have to be created. Each child table will have a table constraint (based on the partitioning criterion) which defines its partition membership condition. If the partitioning is based on the ID primary key, the constraint will take a form similar to: CHECK ( ID BETWEEN 0 AND 1000000 ) [4].
With the table constraints defined, the planner will be able to decide which child tables will be included in the query execution.
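As a sketch of this layout (table and column names are illustrative, following the pattern from the Postgres documentation):

```sql
-- Parent table holds no data; children inherit its columns.
CREATE TABLE measurements (id integer PRIMARY KEY, payload text);

-- One child per ID range; the CHECK constraint is what the planner
-- uses to exclude partitions that cannot match a query.
CREATE TABLE measurements_p0 (
    CHECK ( id BETWEEN 0 AND 999999 )
) INHERITS (measurements);

CREATE TABLE measurements_p1 (
    CHECK ( id BETWEEN 1000000 AND 1999999 )
) INHERITS (measurements);
```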

By default, when executing an insert, data will be added to the parent table. In order to redirect data to the correct partition, triggers have to be applied to the parent table. Thus, the parent table becomes a dispatcher for queries, storing no data [4].
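A hand-written dispatch trigger would look roughly like this (a simplified sketch using the illustrative table names above; the trigger Partman generates is more elaborate):

```sql
CREATE OR REPLACE FUNCTION measurements_insert_trigger() RETURNS trigger AS $$
BEGIN
    -- Route the new row to the child table whose range contains its ID.
    IF NEW.id BETWEEN 0 AND 999999 THEN
        INSERT INTO measurements_p0 VALUES (NEW.*);
    ELSIF NEW.id BETWEEN 1000000 AND 1999999 THEN
        INSERT INTO measurements_p1 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'id out of range for existing partitions';
    END IF;
    RETURN NULL;  -- keep the row out of the parent table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurements_part_trig
    BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE measurements_insert_trigger();
```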

Since Postgres has no built-in mechanism to automate the creation and maintenance of partitions, several tools offering these services have been implemented as Postgres extensions. One such tool is PG Partman [6], which I will use in the following to implement partitioning.

Partitioning with Partman

Partman automates all the details behind partitioning and is pretty straightforward to use.
Once Partman is installed, the partman schema and extension have to be created on the database, and the schema has to be added to the Postgres search path.
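The setup can be sketched as follows (the database name is illustrative; the schema name matches the 'partman' schema used below):

```sql
-- One-time setup: create the schema and the extension inside it,
-- then add the schema to the database's search path.
CREATE SCHEMA partman;
CREATE EXTENSION pg_partman SCHEMA partman;
ALTER DATABASE solution1 SET search_path = "$user", public, partman;
```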

Partman provides a few utility functions/scripts to set up the partitions automatically based on the provided parameters. The code below

SELECT partman.create_parent('public.sample_sample1', 'id', 'partman', '1000', p_trigger_return_null:=true);

will create partitioning for table sample_sample1, with partitions based on ID and of size 1000. The argument 'partman' specifies the type of partitioning (using inheritance and triggers), which can also be 'native' [7]. I will come back to the last argument a bit later.

At the moment, only the partitioning structure has been created (the child tables and the trigger for dispatching queries); no data has been moved yet. In order to move data from the parent table into the correct partitions, Partman's accompanying Python script has to be executed [8].

python /usr/lib/postgresql/9.5/bin/ -c 'host= user=postgres password=postgres dbname=solution1' -p public.sample_sample1 -t id -i 100

Once the data has been moved, the new data structure is ready to be put into use.
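A quick sanity check: if the move succeeded, the parent table itself should hold no rows, which can be verified with ONLY:

```sql
-- ONLY restricts the query to the parent table, excluding its children.
SELECT COUNT(*) FROM ONLY public.sample_sample1;
```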

Note: Partition management (creating new partitions) is done by partman.run_maintenance(). This can be configured to run on a schedule, which has to keep up with the data ingestion rate.

The last argument, p_trigger_return_null:=true (which is also the default value), specifies that the trigger Partman creates should not return a value. Because the trigger is a before trigger, if it returned a value, that value would also be inserted into the parent table, leaving duplicate data in our partitioning scheme [9]. Thus, Partman specifies p_trigger_return_null:=true by default to avoid this scenario.

While this solution works for partitioning in general, many ORMs rely on the database's ability to return the ID of an inserted row. Because the trigger is configured to return NULL, Postgres's insert with returning clause will not perform as expected: instead of the ID, the client will receive NULL.

In order to see this in practice, I have chosen Django ORM, which also suffers from this problem, and I will provide a few solutions which aim to solve the problem.

Integrating Postgres partitioning with Django ORM

Using an ORM helps a lot when it comes to trivial data manipulation and also increases the readability of the code. In Django, the objects that represent database tables are called models. Models have all the methods required to interact with the underlying database. Thus, the save method (create) translates into an insert query.

When a new object is created (a new row in the database), the primary key is not available on the model from the beginning. Because Postgres supports insert with a returning clause, the Django ORM tries to use it to populate the object with the primary key for further usage, such as returning it in a response for an endpoint or using it as a foreign key for another model object (table). If the insert returns NULL while the Django ORM expects a value, an exception will be thrown: TypeError: 'NoneType' object is not subscriptable.
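Roughly, the insert Django issues against Postgres looks like this (column values are illustrative; the exact quoting differs):

```sql
-- Django appends RETURNING to populate the model's pk after save().
INSERT INTO sample_sample1 (name, description)
VALUES ('a name', 'a description')
RETURNING id;
```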

To avoid the problem I will present a few solutions, each with pros and cons, performing better or worse depending on the context. The code for the solutions is hosted in an accompanying repository. It is a Django 1.9 project on Python 3.6.3, with one application (Sample), one model (Sample1) and a management command for populating the database. The Postgres version is 9.5 and Partman 3.1.0 is used.

Note: Before trying out the solutions, Postgres has to be set up with Partman. The best way would be to create the Partman schema and extension on the default database template to avoid creating them on each solution database.

Solution 1

A more general solution, which is not dependent on the ORM framework, is to write extra code on the database side.

With Partman returning a value from the before insert trigger, an after insert trigger has to be added to remove the duplicated value from the parent table [10].

After setting up the Django project, populate the database using the management command:

./ create_data 10000

To partition the sample1 table, the partman function create_parent has to be invoked on the database side:

SELECT partman.create_parent('public.sample_sample1', 'id', 'partman', '1000', p_trigger_return_null:=false);

This will set up partitioning for table sample1, by ID, with a partition size of at most 1000 records, using a before insert trigger which will return a value upon insert.

After the partitioning is set up, the after insert trigger can be created, on the database side.

CREATE OR REPLACE FUNCTION trg_func_after_insert() RETURNS trigger AS $func$
DECLARE
    v_trigger_exists bool;
    v_trig_name text;
    v_partition_interval int;
BEGIN
    SELECT partition_interval INTO v_partition_interval
        FROM partman.part_config
        WHERE parent_table = 'public.sample_sample1';
    IF v_partition_interval > 0 THEN
        v_trig_name := partman.check_name_length(p_object_name := 'sample_sample1', p_suffix := '_part_trig');
        v_trigger_exists := EXISTS (SELECT 1 FROM pg_trigger WHERE NOT tgisinternal AND tgname = v_trig_name);
        IF v_trigger_exists THEN
            -- The before trigger returned the row, so a copy also landed in
            -- the parent table; remove it (ONLY leaves the children intact).
            DELETE FROM ONLY public.sample_sample1 WHERE id = NEW.id;
        END IF;
    END IF;
    RETURN NULL;
END $func$
LANGUAGE plpgsql;

CREATE TRIGGER trg_after_insert AFTER INSERT ON sample_sample1 FOR EACH ROW EXECUTE PROCEDURE trg_func_after_insert();

With the after insert trigger in place, Django will not throw an exception when saving models.
(The SQL code is included in the project's migrations for easy deployment. There is no problem if the migration is applied before partitioning is done.)

Pros:

  • Portable to frameworks other than Django.
  • Affects only one table.

Cons:

  • Overhead introduced by deleting each inserted row from the parent.
  • The parent table has to be vacuumed periodically, since the deleted rows accumulate as dead tuples.

Another solution on the database side would be to create a view on the parent table and an instead of trigger. More details about it are presented here:

For those not comfortable with SQL code, the next solutions are Django specific.

Solution 2

If the pk is not required after saving the model, there is no point in trying to fetch it. Unfortunately, the exception is thrown anyway, so some code has to be written to avoid it.

As before, set up the Django project, populate the database and create the partitioning without returning a value:

SELECT partman.create_parent('public.sample_sample1', 'id', 'partman', '1000');

After partitioning is ready, the Django model needs one small change: the _do_insert() method of the Sample1 model is overridden as below.

class Sample1(models.Model):
    name = models.CharField(max_length=50)
    description = models.TextField(max_length=200)

    def _do_insert(self, manager, using, fields, update_pk, raw):
        return manager._insert([self], fields=fields, return_id=False,
                               using=using, raw=raw)

With this solution the Partman before trigger will return NULL, but Django no longer expects a returned pk for this model. It basically tells the ORM to use an insert statement without a returning clause.

After a save, the pk will be None.

Pros:

  • No overhead for inserts.
  • Affects only one model.

Cons:

  • The pk is not returned (further operations on the model which require a pk will fail).

Solution 3

Because in most cases the pk is required for further operations after saving the model, the second solution doesn't really solve the problem.

The setup for this is the same as for the second solution:

SELECT partman.create_parent('public.sample_sample1', 'id', 'partman', '1000');

After everything is set up, the following code must be added to the project:

from django.db.backends.postgresql_psycopg2 import features
features.DatabaseFeatures.can_return_id_from_insert = False

This will disable insert with returning clause for the Postgres backend. To fetch the pk, the Django ORM will make an extra query to fetch the last pk created.
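That extra query is, roughly, a sequence lookup (the exact quoting may differ between Django versions):

```sql
-- Fallback used when the backend cannot return the ID from the insert.
SELECT CURRVAL(pg_get_serial_sequence('"sample_sample1"', 'id'));
```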

Pros:

  • The pk is returned.
  • Easy setup.

Cons:

  • Overhead of an extra query per insert.
  • Affects all models, not just Sample1.

All the solutions presented so far are based on Postgres 9.5, which handles partitioning using triggers, and it is the triggers that introduce the problem.
Next, I will look into Postgres 10 partitioning and Django. Only the Postgres version changes; the rest remains the same.

Postgres 10 partitioning and Django

Postgres 10 implements partitioning natively; triggers are no longer used to dispatch data into the correct partition, hence the problem disappears. Although under the hood table inheritance is still used, triggers are not required in order to dispatch the data correctly [2]. With the new version, there are improvements to dispatching data, but nothing changed when it comes to partition management. Creating partitions is still done manually, and the table has to be declared as partitioned upfront [2]. Because of this, Partman still plays a role in setting up partitioning and managing partitions [7]. The good thing about the lack of triggers is that ORMs that depend on insert with returning clause will work as expected.

Next, I will go through the same type of exercise as in the previous solutions, but using Postgres 10. After setting up the Django project (migrating and populating with data), there are some extra steps needed to partition the sample1 table in Postgres 10. Because native partitioning needs the table to be declared as partitioned upfront, the table that Django created when running the migrations needs to be renamed (becoming a backup), and a similar table has to be created in its place, this time with partitioning enabled.

Solution 4

The schema of the table Django created will be useful for creating the new one:

/usr/lib/postgresql/10/bin/pg_dump -Fp -s -t 'public.sample_sample1_bk' -d solution4 -h -p 5433 -U postgres -W  > create_sample1.sql

In order to create the new table with partitioning, all indexes, primary keys and unique constraints have to be removed from the schema dump (see the file create_sample1.sql) [2]. Also, to avoid conflicts, the sequences have to be renamed in the SQL file.

Instead of the standard table creation statement from the schema dump, a new statement with partitioning has to be used:

CREATE TABLE sample_sample1 (
    id integer NOT NULL,
    name character varying(50) NOT NULL,
    description text NOT NULL
) PARTITION BY RANGE (id);
After the table is created, partman can take over and create the partitions:

SELECT partman.create_parent('public.sample_sample1', 'id', 'native', '1000', p_template_table:='public.sample_sample1_bk');

The Partman function is invoked somewhat differently than before: the partitioning type is 'native', which tells Partman to use Postgres 10 native partitioning, and a template table is used as a model for the partitions.

As before, there is no data inside the partitions yet, but unlike before, the data has to be moved from the backup table into the partitioned one:

INSERT INTO sample_sample1 (SELECT * FROM sample_sample1_bk WHERE id >= limit1 AND id < limit2);

The where clause allows inserting the data in batches, because partitions do not exist upfront for all the IDs. For the copy to succeed, partman.run_maintenance() has to be run as a background job, or manually after each insert that fills up the last partition, in order to create new partitions.
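Generating the limit1/limit2 boundaries for the batched copy can be sketched in Python (illustrative only; the helper name is mine):

```python
# Yield (limit1, limit2) half-open ID ranges for the batched INSERT ... SELECT,
# so partman.run_maintenance() can be run between batches to create the next
# partitions before they are filled.
def batch_ranges(max_id, batch_size=1000):
    for start in range(0, max_id + 1, batch_size):
        yield start, start + batch_size
```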

After the data is copied into the new partitioned table, the new sequence has to be updated to match the highest ID:

SELECT SETVAL('public.new_sample_sample1_id_seq', COALESCE(MAX(id), 1) ) FROM public.sample_sample1;

Compared with Postgres 9.5 partitioning, the setup on the database side is clearly more transparent (no triggers have to be created and maintained). Unfortunately, because Postgres 10 is very recent, Partman does not yet support a smooth transition from the old table to the new partitioned table, as it does with the previous version.

Pros:

  • Inserts with returning clause work without any patching.

Cons:

  • The setup is more complicated.


Postgres partitioning improves queries that use the partitioning criterion, but achieving this is not transparent to the software engineer and in some cases is not easy to implement.
Before choosing to implement partitioning in Postgres, it is best to research alternatives that may fit the context better.

A good example of when to use Postgres partitioning is intensive querying of data based on a timestamp from the most recent timeframe (such as payments, transactions, shipments, events...). If partitioning is done on the timestamp with a range of a month, for example, then all such queries will run only on the last partition, keeping the older partitions for data analysis in background jobs.
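With Partman, such a setup could be sketched as follows (the table and column names are hypothetical):

```sql
-- Monthly range partitioning on a timestamp column, using native partitioning.
SELECT partman.create_parent('public.payments', 'created_at', 'native', 'monthly');
```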



Sebastian Brestin

Sebastian founded Qwertee in 2017. He holds a BS in computer science from Babes Bolyai university (Cluj-Napoca, Romania). His expertise ranges from backend development to data engineering, and he has a keen interest in network and security related topics. His experience includes working in multinational corporations such as HP but also in a fast paced startup environment. Sebastian has a wide variety of interests such as learning about video game design or meeting up with the local startup community.