pg_dump mydb -n 'myschema' -a -Fp -t myschema.mytable --column-inserts | gzip > /data1/insert_statements.sql.gz
Minimalistic Oracle contains a collection of practical examples from my encounters with Oracle technologies. When relevant, I also write about other technologies, like Linux or PostgreSQL. Many of the posts start with "how to" since they derive directly from my own personal experience. My goal is to provide simple examples, so that they can be easily adapted to other situations.
Showing posts with label pg_dump and pg_restore.
Friday, August 26, 2022
How to generate insert statements for a table in PostgreSQL
Thanks to Carlos Becker for pointing out the following solution on how to generate insert statements for a table in PostgreSQL.
Wednesday, April 6, 2022
Wrappers for pg_dump and pg_restore
Export:
#!/bin/bash
# pg_dump dumps a database as a text file or to other formats.
export SCRIPT_NAME=`basename $0`
export HOST=`uname -n`
export TS=`date +%m.%d.%y_%H_%M_%S`
export RUN_DATE=`date +%m.%d.%y`
export RUN_DIR=.
export LOG_DIR=/tmp
export DUMPDIR=/pgdata/export
export JOB_NAME="ExportPGDB"
export VERSION=1.0.0
export LOGFILE=${LOG_DIR}/${SCRIPT_NAME}_${TS}.log
exec &> ${LOGFILE}
echo "Starting export job at " `date`
pg_dump -Fd musicdb -n music -v -f ${DUMPDIR}/mussikkdbexport -j 2
echo "Ending job at " `date`
exit
Import:
#!/bin/bash
# pg_restore - restore a PostgreSQL database from an archive file created by pg_dump
export SCRIPT_NAME=`basename $0`
export HOST=`uname -n`
export TS=`date +%m.%d.%y_%H_%M_%S`
export RUN_DATE=`date +%m.%d.%y`
export RUN_DIR=.
export LOG_DIR=/tmp
export DUMPDIR=/pgdata/export
export JOB_NAME="ImportPGDB"
export VERSION=1.0.0
export LOGFILE=${LOG_DIR}/${SCRIPT_NAME}_${TS}.log
exec &> ${LOGFILE}
echo "Starting job at " `date`
pg_restore ${DUMPDIR}/mussikkdbexport -C -c -d postgres -j 4 -v
pg_restore ${DUMPDIR}/mussikkdbexport -c -d musicdb -j 4 -v
echo "Ending job at " `date`
exit
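The heart of both wrappers is the same timestamped-logfile pattern: build a log file name from a timestamp, then redirect all output there with exec. A minimal sketch of just that pattern, with the pg_dump/pg_restore call left out (the log file name here is a placeholder; the wrappers derive theirs from the script name):

```shell
# Sketch of the wrappers' logging pattern, without the actual pg_dump/pg_restore call.
LOG_DIR=/tmp
TS=$(date +%m.%d.%y_%H_%M_%S)
LOGFILE=${LOG_DIR}/pgjob_${TS}.log
(
  # A subshell keeps the redirection from affecting the calling shell;
  # the wrappers instead run "exec &> ${LOGFILE}" at script level.
  exec &> "${LOGFILE}"
  echo "Starting job at $(date)"
  # pg_dump or pg_restore would run here
  echo "Ending job at $(date)"
)
```

Everything the job prints, including errors, ends up in the timestamped log file, which is why the wrappers place the exec line before the first echo.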
More articles on pg_dump and pg_restore:
Monday, October 18, 2021
How to export a single table in PostgreSQL
Example of exporting a single table.
In the directory-format:
pg_dump -Fd mydb -t myschema.mytable -v -f /data/mydirectory -j 2
Where
-Fd = export into a directory-format archive
-t = table name
-v = verbose
-f = the target directory
-j = number of export jobs
Note that -f can be omitted for file-based output formats.
Since I am using the directory output format, however, the -f option is required, and it specifies the target directory instead of a file. In this case the directory is created by pg_dump, and it must not exist beforehand.
In plain text file format:
pg_dump mydb -n 'myschema' -a -Fp -t mytable | gzip > mydb_myschema_mytable.gz
Where
-Fp = plain text format
-a = data only, no metadata. In the case of plain text, this means no "create table" statement is generated
-t = table name
When the export is done, you may unpack mydb_myschema_mytable.gz and you will observe that the file contains COPY directives for each row.
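You don't have to unpack the file to see those COPY directives; zcat streams it. A sketch, using a tiny hand-made stand-in for the dump since the real one is not at hand (the file name and contents are illustrative only):

```shell
# Create a stand-in for the compressed plain-text dump (illustrative only;
# a real dump would come from the pg_dump | gzip pipeline above).
printf 'COPY myschema.mytable (id, name) FROM stdin;\n1\tfoo\n\\.\n' \
  | gzip > mydb_myschema_mytable.gz

# Inspect the first lines without unpacking; the COPY directive comes first,
# followed by one tab-separated line per row.
zcat mydb_myschema_mytable.gz | head -2
```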
In custom format:
pg_dump mydb -n 'myschema' -a -Fc -t mytable | gzip > mydb_myschema_mytable_custom.gz
Where all the options are identical to the plain text format, except for
-Fc = custom data format
Thursday, September 16, 2021
How to list the contents of a custom format PostgreSQL export file
List the content of a "custom format" export file created by pg_dump by specifying the -l flag:
pg_restore /data1/exports/musicdb_Fc.dmp -l
List the content of a "directory format" export file created by pg_dump by specifying the -l flag. Note that the path should point to the directory in which the toc.dat file resides, in my case /data1/exports/mydir:
pg_restore /data1/exports/mydir/ -l
In both cases, output will be something akin to
;
; Archive created at 2021-09-16 12:51:17 CEST
;     dbname: musicdb
;     TOC Entries: 29
;     Compression: -1
;     Dump Version: 1.13-0
;     Format: CUSTOM
;     Integer: 4 bytes
;     Offset: 8 bytes
;     Dumped from database version: 11.12
;     Dumped by pg_dump version: 11.12
;
;
; Selected TOC Entries:
;
10; 2615 45112854 SCHEMA - music postgres
3764; 0 0 ACL - SCHEMA music postgres
202; 1259 45112865 TABLE music albums music
203; 1259 45112868 TABLE music albums_1974 music
204; 1259 45112871 TABLE music albums_1979 music
201; 1259 45112863 SEQUENCE music albums_album_id_seq music
205; 1259 45112874 TABLE music albums_default music
207; 1259 45112879 TABLE music artists music
206; 1259 45112877 SEQUENCE music artists_artist_id_seq music
209; 1259 45112884 TABLE music bands music
208; 1259 45112882 SEQUENCE music bands_band_id_seq music
200; 1259 45112858 TABLE music songs music
199; 1259 45112856 SEQUENCE music songs_song_id_seq music
3750; 0 45112868 TABLE DATA music albums_1974 music
3751; 0 45112871 TABLE DATA music albums_1979 music
3752; 0 45112874 TABLE DATA music albums_default music
3754; 0 45112879 TABLE DATA music artists music
3756; 0 45112884 TABLE DATA music bands music
3748; 0 45112858 TABLE DATA music songs music
3765; 0 0 SEQUENCE SET music albums_album_id_seq music
3766; 0 0 SEQUENCE SET music artists_artist_id_seq music
3767; 0 0 SEQUENCE SET music bands_band_id_seq music
3768; 0 0 SEQUENCE SET music songs_song_id_seq music
3620; 2606 45112862 CONSTRAINT music songs songs_pkey music
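Since the listing enumerates every TOC entry, you can edit it down and feed it back to pg_restore with the -L flag to restore only selected items. A sketch of that workflow; the listing is inlined here for illustration, and the pg_restore calls are left commented out since they need the real archive:

```shell
# Normally you would capture the listing from the archive itself:
#   pg_restore /data1/exports/musicdb_Fc.dmp -l > full.lst
# Here, a few lines from the listing above stand in for it.
cat > full.lst <<'EOF'
202; 1259 45112865 TABLE music albums music
3754; 0 45112879 TABLE DATA music artists music
3620; 2606 45112862 CONSTRAINT music songs songs_pkey music
EOF

# Keep only the table data entries.
grep 'TABLE DATA' full.lst > restore.lst

# Then restore just those items:
#   pg_restore -L restore.lst /data1/exports/musicdb_Fc.dmp -d musicdb -v
cat restore.lst
```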
Tuesday, September 14, 2021
How to export and import a schema using the directory format
What exactly is the directory format used by pg_dump?
The documentation states:
Dumps can be output in script or archive file formats
and
The alternative archive file formats must be used with pg_restore to rebuild the database.
Do they bring any advantages over the plain-text format?
Archive file formats allow pg_restore to
be selective about what is restored
reorder the items prior to being restored
be portable across architectures
be used to examine the contents of an archive
use parallel workers during export and import
The documentation claims that pg_dump's archive file formats, used together with pg_restore, provide a flexible archival and transfer mechanism.
There are two types of archive file format output:
The “custom” format (-Fc)
The “directory” format (-Fd)
Both of these allow for selection and reordering of all archived items, support parallel restoration, and are compressed by default. Note, however, that only the “directory” format supports parallelization of the export utility pg_dump.
In other words, moving data with pg_dump + pg_restore should be an easy transition for most seasoned Oracle DBAs: the usage of these utilities is quite similar to Oracle Data Pump.
Export schema "music" in database "musicdb" using the directory format archive:
pg_dump -Fd musicdb -n music -v -f /export/musicdbexport -j 2
I am using two workers, indicated by the -j flag.
You need to point -f to a valid parent path, in my case /export. pg_dump will create the subdirectory "musicdbexport" for you, and according to the documentation it must not exist beforehand.
If you list the path after export, you'll see files like these:
cd /export/musicdbexport
total 40K
drwxr-xr-x 3 postgres postgres 4.0K Sep  7 16:22 ..
-rw-r--r-- 1 postgres postgres 8.0K Sep  7 16:22 toc.dat
-rw-r--r-- 1 postgres postgres   53 Sep  7 16:22 3756.dat.gz
-rw-r--r-- 1 postgres postgres  292 Sep  7 16:22 3754.dat.gz
-rw-r--r-- 1 postgres postgres   67 Sep  7 16:22 3751.dat.gz
-rw-r--r-- 1 postgres postgres  142 Sep  7 16:22 3748.dat.gz
-rw-r--r-- 1 postgres postgres   25 Sep  7 16:22 3750.dat.gz
drwx------ 2 postgres postgres 4.0K Sep  7 16:22 .
-rw-r--r-- 1 postgres postgres   88 Sep  7 16:22 3752.dat.gz
Transfer these files, including the toc.dat file, to the destination server.
Import schema "music" in database "musicdb" using the directory format archive:
pg_restore /pgdata/export -C -c -d postgres -j 4 -v
Where
-C means "create the target database"
-c means "drop the database objects before creating"
-j is the number of jobs started in parallel
Remember that the file toc.dat must exist in the directory; otherwise you'll see the error message
pg_restore: [archiver] directory "/pgdata/export" does not appear to be a valid archive ("toc.dat" does not exist)
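That check is easy to script before attempting a restore. A sketch, using a temporary directory as a stand-in for the real dump directory:

```shell
# Guard against the "toc.dat does not exist" error before restoring.
# mktemp -d stands in for the real dump directory, e.g. /pgdata/export.
DUMPDIR=$(mktemp -d)

check_archive() {
  if [ -f "$1/toc.dat" ]; then
    echo "valid directory-format archive: $1"
  else
    echo "not a valid archive: $1/toc.dat does not exist" >&2
    return 1
  fi
}

check_archive "${DUMPDIR}" || echo "refusing to run pg_restore"
touch "${DUMPDIR}/toc.dat"          # simulate a fully transferred archive
check_archive "${DUMPDIR}"          # now passes
# pg_restore "${DUMPDIR}" -C -c -d postgres -j 4 -v would be safe to run here
```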
Remember, you don't have to drop the database first, it's only an option you have. You could also choose to import the schema "music" directly into an existing database:
pg_restore /pgdata/export -d musicdb -j 4 -v
If you examine the log afterwards, you'll typically see output like this:
pg_restore: connecting to database for restore
So pg_restore is simply connecting to the default database (named "postgres") in order to create the database from which schema "music" was exported.
Following that, we see how the database "musicdb" is created:
pg_restore: processing item 3759 ENCODING ENCODING
pg_restore: processing item 3760 STDSTRINGS STDSTRINGS
pg_restore: processing item 3761 SEARCHPATH SEARCHPATH
pg_restore: processing item 3762 DATABASE musicdb
pg_restore: creating DATABASE "musicdb"
Then pg_restore connects to the new database and creates the schema in there:
pg_restore: connecting to new database "musicdb"
pg_restore: processing item 10 SCHEMA music
pg_restore: creating SCHEMA "music"
After that, the objects are imported one by one:
pg_restore: processing item 202 TABLE albums
pg_restore: creating TABLE "music.albums"
Like schema imports in Oracle, any users that may have grants on the objects being imported must exist beforehand for any GRANT statements to execute successfully.
In my case, these errors were thrown because I did not take care to create a user called "music" up front:
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 202; 1259 45112865 TABLE albums music
pg_restore: [archiver (db)] could not execute query: ERROR: role "music" does not exist
    Command was: ALTER TABLE music.albums OWNER TO music;
pg_dump is documented here
pg_restore is documented here
Thursday, August 13, 2020
How to perform a full export and import of an entire database in PostgreSQL
Export the database "db01" from your source postgres server:
pg_dump db01 -v -Fc >db01.dmp
The -v means "verbose" and -Fc means "custom format" (as opposed to "directory format", which would be indicated by -Fd).
Import the database "db01" in your destination postgres server
First, transfer the db01.dmp file from the source server to the target server using scp. Then run:
pg_restore -v -d db01 -1 db01.dmp
The -1 option runs the restore as a single transaction.
If the database doesn't exist, you need to connect to another database in the cluster first:
pg_restore -v -C -d postgres /exports/full_db_export.dmp
Export the database "db01" using the directory format:
pg_dump -Fd db01 -f /exports/full_db01 -v
This will create the directory /exports/full_db01, which contains all the files necessary for a restore. The -v means "verbose" and -Fd means "directory format".
Import the database "db01":
Again, transfer the needed files, this time the entire directory /exports/full_db01 from the source server to the target server using scp.
Then import the database, either by recreating the database indicated by the -C option:
pg_restore /exports/full_db01 -C -c -d postgres -j 2 -v
or by importing the objects into an existing database, in my case "db02":
pg_restore /exports/full_db01 -c -d db02 -j 2 -v
The official documentation for the pg_dump utility can be found here
The official documentation for the pg_restore utility can be found here
Monday, February 17, 2020
How to export and import a schema in PostgreSQL
Schema exports and imports
There are many ways to export a schema using pg_dump:
Export the schema scott from db01, data only, no metadata (object definitions):
pg_dump db01 -n 'scott' -a | gzip > db01_scott.gz
The redirection is equivalent to writing the dump to a file with the -f switch. Note that -f cannot be combined with a pipe: with -f, nothing is written to stdout, so to compress such a dump you would compress the file afterwards:
pg_dump db01 -n 'scott' -a -f db01_scott.txt
gzip db01_scott.txt
The -a option dumps only the data, not the schema (data definitions). Table data, large objects, and sequence values are dumped.
Export the schema music from the database musicdb, metadata only, no actual data:
pg_dump musicdb -n 'music' -s | gzip > musicdb.music_definitions.gz
Export schema scott using the custom postgres format
pg_dump -v --schema=scott db01 -Fc > db01_scott.dmp
Alternatively, use the -f switch to send the output to a specific file:
pg_dump -v --schema=scott db01 -Fc -f db01_scott.dmp
Export schema scott using the "plain" format like this:
pg_dump -v --schema=scott db01 -Fp > scott.db01.sql
Alternatively, you can use the -f switch to send output to a specific file:
pg_dump -v --schema=music musicdb -Fp -f musicdb_music.txt
Next, I am piping the resulting output directly to gzip for compression. Note that the -f option cannot be used here, since -f sends the output to a file instead of to stdout:
pg_dump -v --schema=music musicdb | gzip > musicdb_music.txt.gz
Whichever method you used, transfer the exported data to the destination server, for example with scp.
Import the schema "music" into the existing database "musicdb", metadata only, no actual data:
gunzip -c musicdb.music_definitions.gz | psql musicdb
Import the data for schema "music" into the existing database "musicdb", no metadata, only the actual data.
Note that this will throw errors upon import if the tables do not exist beforehand:
gunzip -c musicdb.music_actualdata.gz | psql musicdb
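The two dumps are meant to be applied in order: definitions first, then data, so that the COPY statements find their tables. A sketch of that sequence; the psql calls are commented out, and the .gz files are tiny stand-ins created on the spot (their contents are illustrative only):

```shell
# Stand-ins for the metadata-only and data-only exports (illustrative content).
echo 'CREATE TABLE songs (id int);' | gzip > musicdb.music_definitions.gz
printf 'COPY songs (id) FROM stdin;\n1\n\\.\n' | gzip > musicdb.music_actualdata.gz

# Step 1: load the object definitions first ...
gunzip -c musicdb.music_definitions.gz    # | psql musicdb
# Step 2: ... then the data, whose COPY statements need the tables to exist.
gunzip -c musicdb.music_actualdata.gz     # | psql musicdb
```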
For import using custom format, use pg_restore instead of psql.
Import the schema "music" into the existing database "musicdb":
pg_restore -d musicdb musicdb_Fc.dmp -v
Create the database "musicdb", then import the schema "music" into it:
pg_restore -d postgres musicdb_Fc.dmp -v -C
See also my posts about how to export and import using the directory format, and how to list the contents of a custom format export file.
You can find a wrapper for pg_dump and pg_restore in the article Wrappers for pg_dump and pg_restore
The official documentation for the pg_dump utility can be found here