Managing EDB Postgres Distributed (PGD) databases

As described in the architecture document, EDB Postgres Distributed for Kubernetes is an operator created to deploy PGD databases. It provides an alternative to deployment with TPA and, by leveraging the Kubernetes ecosystem, offers self-healing and declarative control. The operator is also responsible for backup and restore operations. See Backup.

However, the operator doesn't manage every operation on a PGD cluster. The pods created by EDB Postgres Distributed for Kubernetes come with the PGD CLI installed, and you can use this tool, for example, to execute a switchover.

PGD CLI

Warning

Don't use the PGD CLI to create and delete resources. For example, avoid the create-proxy and delete-proxy commands. Provisioning of resources is under the control of the operator, and manual creation and deletion isn't supported.

As an example, the following walks through executing a switchover.

We recommend that you use the PGD CLI from proxy pods. To find them, get a pod listing for your cluster:

kubectl get pods -n my-namespace

NAME                   READY   STATUS    RESTARTS      AGE
location-a-1-1         1/1     Running   0             2h
location-a-2-1         1/1     Running   0             2h
location-a-3-1         1/1     Running   0             2h
location-a-proxy-0     1/1     Running   0             2h
location-a-proxy-1     1/1     Running   0             2h

The proxy nodes have proxy in the name. Choose one, and get a command prompt in it:

kubectl exec -n my-namespace -ti location-a-proxy-0 -- bash

You now have a bash session open in the proxy pod. The pgd command is available:

pgd

Available Commands:
  check-health       Checks the health of the EDB Postgres Distributed cluster.
  <- snipped ->
  switchover         Switches over to new write leader.
  <- snipped ->

You can work your way through gathering the information needed for the switchover. Start with the built-in help:

pgd switchover --help

The help output includes a usage example (group_a and bdr-a1 are placeholder names from the help text, not nodes in this cluster):

  $ pgd switchover --group-name group_a --node-name bdr-a1
  switchover is complete

Then list the groups and nodes to find the actual group name, the current write leader, and the candidate nodes:

pgd show-groups

Group      Group ID   Type   Parent Group Location Raft Routing Write Leader 
-----      --------   ----   ------------ -------- ---- ------- ------------ 
world      3239291720 global                       true true    location-a-2 
location-a 2135079751 data   world                 true true    location-a-1 

pgd show-nodes
Node         Node ID    Group      Type Current State Target State Status Seq ID 
----         -------    -----      ---- ------------- ------------ ------ ------ 
location-a-1 3165289849 location-a data ACTIVE        ACTIVE       Up     1      
location-a-2 3266498453 location-a data ACTIVE        ACTIVE       Up     2      
location-a-3 1403922770 location-a data ACTIVE        ACTIVE       Up     3     
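
With the group and node names from these listings, you can run the switchover itself. The following is a minimal sketch for this example cluster, promoting location-a-2 to write leader of the location-a group (the target node is illustrative; pick any active data node other than the current write leader):

pgd switchover --group-name location-a --node-name location-a-2

On success, the CLI reports switchover is complete, as in the help example. You can confirm the new write leader by running pgd show-groups again.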

Accessing the database

Use cases discusses using the database from within the Kubernetes cluster versus from outside it. Connectivity covers services, which are relevant for accessing the database from applications.

However you implement your system, your applications must connect through the proxy service to reap the benefits of PGD and of the increased self-healing capabilities added by the EDB Postgres Distributed for Kubernetes operator.

Important

As per the EDB Postgres for Kubernetes defaults, data nodes are created with a database called app and owned by a user named app, in contrast to the bdrdb database described in the EDB Postgres Distributed documentation. You can configure these values in the cnp section of the manifest. For reference, see Bootstrap in the EDB Postgres for Kubernetes documentation.
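
For illustration only, and assuming the cnp section accepts the same bootstrap stanza as an EDB Postgres for Kubernetes Cluster resource (verify the exact field names against the PGDGroup API reference), the relevant fragment of a PGDGroup manifest might look like this:

spec:
  # <- other PGDGroup fields snipped ->
  cnp:
    # Assumption: mirrors the Bootstrap section of an EDB Postgres for
    # Kubernetes Cluster. Check the PGDGroup API reference before relying on it.
    bootstrap:
      initdb:
        database: app
        owner: app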

You might, however, want access to your PGD data nodes for administrative tasks, using the psql CLI.

You can get a pod listing for your PGD cluster and kubectl exec into a data node:

kubectl exec -n my-namespace -ti location-a-1-1 -- psql

Once in the familiar territory of psql, remember that the default database is named app (see the important note above).

postgres=# \c app
You are now connected to database "app" as user "postgres".
app=# \x
Expanded display is on.
app=# select * from bdr.node_summary;
-[ RECORD 1 ]---------------------------------------
node_name              | location-a-1
node_group_name        | location-a
interface_connstr      | host=location-a-1-node user=streaming_replica sslmode=verify-ca port=5432 sslkey=/controller/certificates/streaming_replica.key sslcert=/controller/certificates/streaming_replica.crt sslrootcert=/controller/certificates/server-ca.crt application_name=location-a-1 dbname=app
peer_state_name        | ACTIVE
peer_target_state_name | ACTIVE

<- snipped ->

For your applications, use the non-privileged role (app by default).

You need the user credentials, which are stored in a Kubernetes secret:

kubectl get secrets 

NAME                     TYPE                       DATA   AGE
<- snipped ->
location-a-app           kubernetes.io/basic-auth   2      2h

This secret contains the username and password needed for the Postgres DSN, encoded in base64:

kubectl get secrets location-a-app -o yaml

apiVersion: v1
data:
  password: <base64-encoded-password>
  username: <base64-encoded-username>
kind: Secret
metadata:
  creationTimestamp: <timestamp>
  labels:

<- snipped ->
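
To build a connection string for your application, decode these values and point the connection at the proxy service. The following is a minimal sketch, assuming the names used in the examples above and a proxy service named location-a-proxy (run kubectl get services -n my-namespace to confirm the actual service name in your cluster):

# Decode the application user's credentials from the secret
kubectl get secret location-a-app -n my-namespace -o jsonpath='{.data.username}' | base64 -d
kubectl get secret location-a-app -n my-namespace -o jsonpath='{.data.password}' | base64 -d

# Connect through the proxy service from inside the cluster
# (the service name and port are assumptions; check kubectl get services)
psql "postgresql://app:<decoded-password>@location-a-proxy.my-namespace.svc:5432/app"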