Postgres Crash Course
These Postgres notes, like the old Oracle article, are focused on users coming from MySQL. So I’m assuming that you know the SQL basics and are familiar with command-line tools, database creation, and dump and import operations.
Remember that the Postgres documentation is your friend.
A PostgreSQL user account is called a role. By default, it expects a role matching a Unix user. Let’s create a new role for the currently logged-in user:
sudo -u postgres createuser -s $USER
This will create a superuser (-s), so we can create new databases, users, etc.
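To confirm the role exists, you can list all roles; the \du meta-command can be run as a one-off psql call against the default postgres database:
psql -d postgres -c '\du'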
Like MySQL, Postgres has a lot of terminal utilities, like psql (the default client), pg_dump, and pg_restore, and even some commands that are shortcuts for SQL commands, like createdb and createuser.
Create a database (replace <database>):
createdb <database>
To connect to the database and run SQL queries and commands:
psql -d <database>
Under psql, non-SQL commands start with a backslash, like \q to exit. You can check the help using \? for psql commands and \h for SQL commands.
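As a quick sample (see \? for the full list), these meta-commands list databases, list tables in the current database, describe a table, and show the current connection:
\l
\dt
\d <table>
\conninfo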
To connect to a remote database:
psql -U <user> -d <database> -h '<host>'
To run a single query from the command line:
psql -U <user> -d <database> -h '<host>' -W -c 'SELECT * FROM <table>'
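The -W flag forces a password prompt. If you need to run the same query non-interactively, libpq also reads the password from the PGPASSWORD environment variable (same placeholders as above):
PGPASSWORD='<password>' psql -U <user> -d <database> -h '<host>' -c 'SELECT * FROM <table>'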
String quotes
Strings must be quoted using single quotes:
UPDATE products SET name = 'cool-aid';
If you need to insert a single quote, use it twice:
UPDATE users SET name = 'Sant''Anna';
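If the string has many quotes, Postgres also accepts dollar-quoted strings, which need no escaping at all (an alternative to the doubling above):
UPDATE users SET name = $$Sant'Anna$$;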
Auto Increment
Postgres emulates the auto increment feature using regular SQL sequences, but with a handy shortcut called SERIAL.
So you can create an auto increment ID like this:
CREATE TABLE products(
  id SERIAL
);
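Postgres fills the column automatically on insert, and RETURNING hands the generated value back. With the single-column table above:
INSERT INTO products DEFAULT VALUES RETURNING id;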
There are 3 serial types:
| Name | Storage Size | Range |
|---|---|---|
| SMALLSERIAL | 2 bytes | 1 to 32,767 |
| SERIAL | 4 bytes | 1 to 2,147,483,647 |
| BIGSERIAL | 8 bytes | 1 to 9,223,372,036,854,775,807 |
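Under the hood, id SERIAL is just a shortcut: it is roughly equivalent to creating a sequence yourself and pointing the column default at it (a sketch of what Postgres generates for you):
CREATE SEQUENCE products_id_seq;
CREATE TABLE products(
  id integer NOT NULL DEFAULT nextval('products_id_seq')
);
ALTER SEQUENCE products_id_seq OWNED BY products.id;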
Dump (Database export and import)
To export the database, use the pg_dump utility:
pg_dump -F c --file=<database>-$(date "+%Y-%m-%d").sql.gz --dbname=<database> --no-owner
Explaining the non-obvious params:
- -F c: custom format, which is compressed by default
- --no-owner: skip object ownership, so the objects will belong to whichever user imports the data later
If you’re trying to dump from a remote host:
pg_dump -F c --file=<database>-$(date "+%Y-%m-%d").sql.gz --dbname=<database> --no-owner --username=<user> --host=<host> --port=5432
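To confirm the archive really is in the custom format (and to peek at what it contains), you can print its table of contents; <file> here stands for the dump generated above:
pg_restore --list <file>.sql.gz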
Import
Import with (replace <file> and <database>):
pg_restore -c <file>.sql.gz --dbname=<database>
The -c param will clean (drop) database objects before recreating them.
You can use --no-owner here too if you want to ignore object ownership and use the current user’s role.
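Putting it together, a typical local restore could look like this (a sketch; -c is only useful when the database objects already exist):
createdb <database>
pg_restore --no-owner --dbname=<database> <file>.sql.gz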
Allow local passwordless connections
IDEs and other tools will open local connections, and we can allow them to connect without asking for the user’s password.
Edit /etc/postgresql/<version>/main/pg_hba.conf (replacing <version> with your Postgres version) and change the METHOD in the relevant IPv4 section to trust:
# TYPE  DATABASE        USER            ADDRESS                 METHOD
# local is for Unix domain socket connections only
# local all             all                                     peer
# IPv4 local connections:
host    all             all             127.0.0.1/32            trust
# IPv6 local connections:
host    all             all             ::1/128                 trust
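Postgres only picks up pg_hba.conf changes after a configuration reload. The simplest way is from a superuser psql session (restarting the service also works):
SELECT pg_reload_conf();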
Troubleshooting
If you’re trying to import and the DROP DATABASE command is failing because someone is connected (i.e., your SQL script is producing ERROR: cannot drop the currently open database), terminate those connections first:
SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = '<database>';
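If you want to check what is connected before (or after) terminating, the same pg_stat_activity view shows the sessions:
SELECT pid, usename, application_name, state FROM pg_stat_activity WHERE datname = '<database>';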
If you’re trying to import a .gz file with pg_restore but you’re seeing pg_restore: error: input file does not appear to be a valid archive, it’s because the file was not generated with -F c but compressed after the dump. You can import it with:
gunzip -c <file>.gz | psql <database>