
How to Export and Back Up PostgreSQL Databases

This tutorial will show you how to export, import, and back up your PostgreSQL clusters and databases.

You should back up regularly. It's one of the most important things you can do as a sysadmin, a developer, or even a hobbyist. Losing data is detrimental to your business and your projects, and losing your family photos sucks too.

This article demonstrates the various ways to import and export PostgreSQL data so that it can be backed up. This process should be done regularly (or automated!), and always before updating or upgrading your PostgreSQL clusters.

PostgreSQL Admin Permissions

All of the commands in this article assume that you are running them from the Linux terminal as a user with the correct permissions.

On most default configurations, running everything as the postgres user is sufficient:

sudo su postgres

The above command will log you in as the default postgres user.
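
Alternatively, you can run a single command as the postgres user without switching shells entirely. For example, to list the databases on the cluster:

sudo -u postgres psql -l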

How to Export and Import Data Using PostgreSQL SQL Dump

The first method for exporting data from PostgreSQL is the pg_dump command.

This is the recommended way to export and import databases, whether for backup purposes or otherwise.

Exporting Using pg_dump

To export a PostgreSQL database from the active cluster, run the following pg_dump command:

pg_dump DATABASE_NAME > /path/to/export/DATABASE_NAME_EXPORT.pgsql

To do this, you will need to know the names of the databases you are exporting. You can list all of the databases on the cluster by running:

psql -l
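
As an aside, pg_dump also supports a compressed custom format that can be restored selectively with pg_restore. A minimal sketch, with placeholder paths:

pg_dump -Fc DATABASE_NAME > /path/to/export/DATABASE_NAME_EXPORT.dump
pg_restore -d DATABASE_NAME /path/to/export/DATABASE_NAME_EXPORT.dump

Note that pg_restore needs the target database to exist already (or pass the -C flag to have it created for you).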

To dump ALL databases into a single file run:

pg_dumpall > /path/to/export/ALL_DATABASES_EXPORT.pgsql

If your database is very large and you wish to split the export into multiple files, pipe the output of pg_dump to the split command:

pg_dump DATABASE_NAME | split -b 2G - /path/to/export/DATABASE_NAME_EXPORT_SPLIT.pgsql

The above command splits the dump into 2GB chunks. The same approach works with the pg_dumpall command.
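
You can also compress the dump on the fly before splitting it, which is often worthwhile for large databases. A sketch, assuming gzip is available and using placeholder paths:

pg_dump DATABASE_NAME | gzip | split -b 2G - /path/to/export/DATABASE_NAME_EXPORT.pgsql.gz.

To restore, reverse the pipeline: concatenate the chunks, decompress, and pipe the result into psql:

cat /path/to/export/DATABASE_NAME_EXPORT.pgsql.gz.* | gunzip | psql DATABASE_NAME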

Restoring Dumped PostgreSQL Databases

Database dumps created using the above method can be restored to the default running PostgreSQL cluster using the psql command:

psql DATABASE_NAME < /path/to/export/DATABASE_NAME_EXPORT.pgsql
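
Note that psql restores into an existing database. If DATABASE_NAME doesn't exist on the cluster yet, create it first with the createdb command:

createdb DATABASE_NAME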

…or if you dumped all of the databases into a single file, they can all be restored by running:

psql -f /path/to/export/ALL_DATABASES_EXPORT.pgsql postgres

You must have full administrative (superuser) rights on the cluster to restore a dump of all databases.

To restore split files, re-assemble them using the cat command and pipe the result to psql:

cat /path/to/export/DATABASE_NAME_EXPORT_SPLIT.pgsql* | psql DATABASE_NAME

Note the wildcard (*) character in the cat command above. split gives each output file a sequential suffix (aa, ab, ac, and so on), so the wildcard matches every chunk, and cat joins them in order before piping the result to psql for import.

File System Backup/Restore

This is not the recommended way to import and export data from PostgreSQL, so use it only if you have to.

PostgreSQL writes its data to the file system, so the data files themselves can be backed up and restored.

First, the database server must be shut down so that all data is written to disk and there is no chance of corruption. To do this, run:

sudo systemctl stop postgresql

If you are running multiple clusters, stop the specific cluster by passing its version and name to pg_ctlcluster:

sudo pg_ctlcluster VERSION_NUMBER CLUSTER_NAME stop
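
For example, to stop the version 11 cluster named main (matching the pg_lsclusters output shown below):

sudo pg_ctlcluster 11 main stop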

Next, find the path to the data directory for your cluster. This can be done by running the pg_lsclusters command:

pg_lsclusters

The output should look something like this:

Ver Cluster Port Status Owner    Data directory               Log file
9.1 main    5433 online postgres /var/lib/postgresql/9.1/main /var/log/postgresql/postgresql-9.1-main.log
11  main    5432 online postgres /var/lib/postgresql/11/main  /var/log/postgresql/postgresql-11-main.log

Identify the path to the data directory for the cluster you wish to back up, then archive it using the tar command:

tar -cf /path/backup/file/POSTGRES_BACKUP.tar /path/to/postgres/data/dir/main

Replace the two paths with where you want the backup archive written and with your PostgreSQL data directory, respectively.
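
If you want the archive compressed as well, add the z flag so that tar runs the output through gzip:

tar -czf /path/backup/file/POSTGRES_BACKUP.tar.gz /path/to/postgres/data/dir/main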

To restore the backup, extract the tar file back to its original location. tar strips the leading / from the stored paths when creating the archive, so extract it relative to the root of the file system:

tar -xvf /path/backup/file/POSTGRES_BACKUP.tar -C /
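
Once the files are back in place (and still owned by the postgres user), start the server again:

sudo systemctl start postgresql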

You may be tempted to try to back up individual database files from within the PostgreSQL data directory. Don't even try it: without the associated commit log files, they will be useless. Grab the whole data directory, or export individual databases using the pg_dump command as shown earlier in this article.

Continuous Archiving and Point-in-Time Recovery (PITR)

There is a third, somewhat esoteric method of backing up PostgreSQL databases: continuous archiving and point-in-time recovery (PITR). This involves backing up the write-ahead log (WAL), which records every change made to your PostgreSQL databases. By backing up these logs, you can replay the recorded transactions to recreate a database.

This is useful for recovering a database after a system crash: changes that may not yet have been written to the data files will be present in the log, so the system can be brought back to a known state. This is probably not necessary on developer machines, but it is highly useful in production.
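
As a taste of what the setup looks like, WAL archiving is switched on in postgresql.conf. A minimal sketch, assuming /mnt/backup/wal is a directory you've created for the archived segments (%p and %f are expanded by PostgreSQL to the path and file name of each WAL segment):

wal_level = replica
archive_mode = on
archive_command = 'cp %p /mnt/backup/wal/%f'

A server restart is required for archive_mode to take effect.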

As it’s a more complex process, I’ll leave it to the official PostgreSQL docs to get you up to speed.

Back Up Now!

Seriously, go back up your stuff when you’re finished reading this sentence. You should keep your data stored on different storage devices in at least 3 different physical locations (and/or in the cloud) to make sure that if there’s a disaster, you still get to keep your stuff.

Back up regularly, and if you're updating or doing maintenance on your systems or servers, back up before making any changes. Then back up your working configuration once everything's operating smoothly.
