postgresql – Using the array result of a select statement returns ERROR: pq: operator does not exist: integer = integer[]

The query in question is the following:

SELECT * FROM options WHERE id = any(SELECT option_ids FROM workshops WHERE id=3)
ERROR: pq: operator does not exist: integer = integer[]

Basically, the statement inside any() returns an array of ids which I want to use to fetch rows from the options table.

I don’t understand why it is trying to compare an integer to the whole array. Isn’t any() supposed to compare a single integer to the integers in an array?
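The catch is that ANY has two forms: with a subquery, each *row* of the subquery result is the comparison value, and here each row is a whole integer[], hence the error. A sketch of two common fixes, assuming options.id is an integer and workshops.option_ids is integer[]:

```sql
-- Unnest the array so the subquery yields one integer per row:
SELECT * FROM options
WHERE id IN (SELECT unnest(option_ids) FROM workshops WHERE id = 3);

-- Or keep ANY, but feed it an array expression (a scalar subquery) rather than a row set:
SELECT * FROM options
WHERE id = ANY ((SELECT option_ids FROM workshops WHERE id = 3)::int[]);
```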

postgresql – Must an index cover all columns to be used in a merge?

PostgreSQL can perform an index scan in both cases, but it prefers a sequential scan and a sort in the first case, because it thinks that will be faster. In the second case, an index-only scan is possible and faster.

You could of course add all table columns to the index, but that is usually unreasonable and may even exceed the size limit for an index entry.

If you have reason to believe that PostgreSQL is doing the wrong thing, you can test whether the query is faster with enable_seqscan set to off. If it is, perhaps you have to adjust random_page_cost to match your hardware.
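A minimal sketch of that experiment, with a hypothetical query standing in for the one being tuned:

```sql
SET enable_seqscan = off;                    -- affects this session only
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM some_table ORDER BY some_col;  -- hypothetical query; substitute your own
RESET enable_seqscan;                        -- back to the default planner behavior
```

Compare the plan and timing against the same EXPLAIN (ANALYZE) run with default settings before deciding to change any cost parameters.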

How to temporarily disable foreign keys in Amazon RDS Aurora PostgreSQL?

Hi, I am using DMS to load data from a physical server into AWS Aurora PostgreSQL, but I am getting errors for foreign key constraints.

I want to disable them using some stored procedure or function. Could you please help me?
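One common approach, a sketch: foreign keys in PostgreSQL are enforced by internal (RI) triggers, and setting the session to replica mode skips ordinary triggers, including those. For DMS specifically, the same statement can be applied through the target endpoint's afterConnectScript extra connection attribute so that every DMS connection starts in this mode:

```sql
-- Run in the loading session (requires sufficient privileges, e.g. rds_superuser):
SET session_replication_role = replica;   -- ordinary triggers, including FK (RI) triggers, are skipped
-- ... bulk load here ...
SET session_replication_role = DEFAULT;   -- restore normal constraint enforcement
```

Note that rows loaded this way are not validated against the constraints, so the data must already be consistent.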

postgresql – How to get the current value of sequence of a table in remote server using foreign data wrapper?

Remote server

create table tempparent (
    id          BIGSERIAL,
    empname     VARCHAR,
    CONSTRAINT uk_tempparent_id UNIQUE (id)
);

create table tempchild (
    id          BIGSERIAL,
    parentid    BIGINT,
    address     VARCHAR,
    CONSTRAINT uk_tempchild_id UNIQUE (id)
);

local server

CREATE FOREIGN TABLE tempparent (
    id          BIGINT,
    empname     VARCHAR
)
SERVER remoteserver
OPTIONS (schema_name 'public', table_name 'tempparent');

CREATE FOREIGN TABLE tempchild (
    id          BIGINT,
    parentid    BIGINT,
    address     VARCHAR
)
SERVER remoteserver
OPTIONS (schema_name 'public', table_name 'tempchild');

How can I get the currval of tempparent_id_seq from the local server?
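postgres_fdw maps tables and views, not sequences, so the sequence cannot be queried through the wrapper directly. One workaround sketch (object names assumed): wrap the sequence in a view on the remote server and map that view as a foreign table:

```sql
-- On the remote server:
CREATE VIEW tempparent_id_seq_view AS
    SELECT last_value FROM tempparent_id_seq;

-- On the local server:
CREATE FOREIGN TABLE tempparent_id_seq_view (last_value BIGINT)
    SERVER remoteserver
    OPTIONS (schema_name 'public', table_name 'tempparent_id_seq_view');

SELECT last_value FROM tempparent_id_seq_view;
```

Note that last_value read this way is the sequence's global last value, not a session-local currval, since the remote session that assigned the value is not yours.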

Restore PostgreSQL Data from root folder

I guess that you are talking about the data directory when you say “root folder”.

If the server was stopped while you copied the data directory, you are good. Just restore the files and start PostgreSQL.

If the server was running while you copied the files, you can also restore it and start the server, but your database will probably be corrupted. With some luck you will be able to run a pg_dump and salvage your data that way. If that fails, or the dump does not restore cleanly, you’ll need to call in somebody who is skilled with PostgreSQL database corruption.

sequelize.js – How to create this type of a m:n relationship in PostgreSQL with Sequelize?

I am fairly new to PostgreSQL and Sequelize, and I want to create this kind of a many to many relationship.

Users – Groups: basically, multiple groups can have multiple users and multiple users can belong to multiple groups.

User – Creates a group (becomes owner) – Can add other users to the group (can modify their permissions – add/remove from a group)

I want objects with relationships like this:

User Model :
name: String,
email: String,
groups: Array(UUID)

Group Model :
name: String,
owner: UUID,
members: Array({
  id: UUID,

How do I achieve this in PostgreSQL with Sequelize?

My current idea is to create a join table – user_groups – store the relationships there, and use the include method to fetch the foreign values, but I don’t think that will give me the same results as the above-mentioned model? How do I achieve relationships with a model like this?
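The join table is the idiomatic answer: in a relational schema the m:n relationship lives in its own table rather than in UUID arrays on either side, and Sequelize's belongsToMany (e.g. User.belongsToMany(Group, { through: 'user_groups' }) and the reverse) manages exactly this shape, with include reassembling the nested objects at query time. A sketch of the underlying tables, with column names assumed:

```sql
CREATE TABLE users (
    id    UUID PRIMARY KEY,
    name  TEXT NOT NULL,
    email TEXT NOT NULL UNIQUE
);

CREATE TABLE groups (
    id    UUID PRIMARY KEY,
    name  TEXT NOT NULL,
    owner UUID NOT NULL REFERENCES users (id)
);

-- The m:n relationship lives here, not in arrays on either side:
CREATE TABLE user_groups (
    user_id  UUID NOT NULL REFERENCES users (id),
    group_id UUID NOT NULL REFERENCES groups (id),
    role     TEXT,                     -- per-membership permissions
    PRIMARY KEY (user_id, group_id)
);
```

Extra per-membership attributes such as role above are how the "can modify their permissions" requirement is usually modeled; in Sequelize they become attributes of the through model.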

postgresql – Joining historical data in postgres

I have a table of data (omitted below), a table of state transitions for that data, and a table of snapshots of that data.

CREATE TABLE thing_event (thing int, created timestamp, old_state bool, new_state bool);
CREATE TABLE thing_snapshot (thing int, created timestamp, state bool);

CREATE INDEX thing_event_idx ON thing_event USING btree (thing, created);
CREATE INDEX thing_snapshot_idx ON thing_snapshot USING btree (thing, created);

INSERT INTO thing_event (thing, created, old_state, new_state)
SELECT generate_series, now() - interval '365' day * random(), random() > 0.5, random() > 0.5
FROM generate_series(1, 100000) JOIN (select 1 from generate_series(1, 10)) g2 on true;

INSERT INTO thing_snapshot (thing, created, state)
SELECT thing, created, new_state FROM thing_event;

I want to get the state of the data as of a historical state transition:

WITH cte AS
  (SELECT *, row_number() OVER (PARTITION BY sn.thing ORDER BY sn.created DESC)
   FROM thing_snapshot sn
   JOIN thing_event ev
     ON sn.thing = ev.thing AND sn.created <= ev.created)
SELECT * FROM cte WHERE row_number = 1;

Subquery Scan on cte  (cost=57554.25..512495.39 rows=16609 width=35) (actual time=105.736..2687.783 rows=100000 loops=1)
 Filter: (cte.row_number = 1)
 Rows Removed by Filter: 5400000
 ->  WindowAgg  (cost=57554.25..470971.97 rows=3321873 width=35) (actual time=105.736..2565.230 rows=5500000 loops=1)
   ->  Merge Join  (cost=57554.25..412839.20 rows=3321873 width=27) (actual time=105.724..1308.860 rows=5500000 loops=1)
     Merge Cond: (sn.thing = ev.thing)
     Join Filter: (sn.created <= ev.created)
     Rows Removed by Join Filter: 4500000
     ->  Gather Merge  (cost=57552.00..174018.48 rows=1000000 width=13) (actual time=105.673..228.029 rows=1000000 loops=1)
       Workers Planned: 2
       Workers Launched: 2
       ->  Sort  (cost=56551.98..57593.65 rows=416667 width=13) (actual time=97.910..122.637 rows=333333 loops=3)
         Sort Key: sn.thing, sn.created DESC
         Sort Method: external merge  Disk: 9272kB
         Worker 0:  Sort Method: external merge  Disk: 8520kB
         Worker 1:  Sort Method: external merge  Disk: 8704kB
         ->  Parallel Seq Scan on thing_snapshot sn  (cost=0.00..10536.67 rows=416667 width=13) (actual time=0.074..22.694 rows=333333 loops=3)
     ->  Materialize  (cost=0.42..64424.99 rows=1000000 width=14) (actual time=0.022..370.762 rows=9999991 loops=1)
       ->  Index Scan using thing_event_idx on thing_event ev  (cost=0.42..61924.99 rows=1000000 width=14) (actual time=0.017..147.555 rows=1000000 loops=1)

That all makes sense: roughly 5.5M rows is 1M events times an average of half of the ten snapshots per thing matching the join condition. But is there a faster way to pull this off? The Rows Removed by Filter: 5400000 would be nice to bypass somehow.

Here’s an alternative, simpler but slower:

SELECT *
FROM thing_event ev
JOIN LATERAL
  (SELECT *, row_number() OVER (PARTITION BY thing ORDER BY created DESC)
   FROM thing_snapshot WHERE thing = ev.thing AND created <= ev.created) sn ON row_number = 1;

Nested Loop  (cost=0.42..16562910.43 rows=1000000 width=35) (actual time=0.045..2946.091 rows=1000000 loops=1)
 ->  Seq Scan on thing_event ev  (cost=0.00..16370.00 rows=1000000 width=14) (actual time=0.011..39.413 rows=1000000 loops=1)
 ->  Subquery Scan on sn  (cost=0.42..16.54 rows=1 width=21) (actual time=0.001..0.003 rows=1 loops=1000000)
   Filter: (sn.row_number = 1)
   Rows Removed by Filter: 4
   ->  WindowAgg  (cost=0.42..16.50 rows=3 width=21) (actual time=0.001..0.003 rows=6 loops=1000000)
     ->  Index Scan Backward using thing_snapshot_idx on thing_snapshot  (cost=0.42..16.45 rows=3 width=13) (actual time=0.001..0.001 rows=6 loops=1000000)
           Index Cond: ((thing = ev.thing) AND (created <= ev.created))
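One variant worth trying, a sketch: DISTINCT ON keeps only the first row per thing under the given sort order, which matches the row_number() = 1 filter of the first query (up to which joined event row is retained for each thing) while avoiding the WindowAgg over all 5.5M join rows:

```sql
SELECT DISTINCT ON (sn.thing) *
FROM thing_snapshot sn
JOIN thing_event ev
  ON sn.thing = ev.thing AND sn.created <= ev.created
ORDER BY sn.thing, sn.created DESC;
```

Whether this is actually faster depends on the planner's chosen sort strategy, so it is worth comparing under EXPLAIN (ANALYZE) on the real data.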

postgresql – What is the data type of ’emp::regclass’?

regclass and the other reg types are “object identifier types”. As the documentation says:

The oid type is currently implemented as an unsigned four-byte integer.


The OID alias types have no operations of their own except for specialized input and output routines. These routines are able to accept and display symbolic names for system objects, rather than the raw numeric value that type oid would use.

So it is just a convenience: it really is the numeric object ID, but is displayed as the object name.

regclass can be cast to numerical data types: then it will become the oid value.

regclass can also be cast to text: then it will become the table name.
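A quick illustration of both casts, assuming a table named emp exists (as in the question):

```sql
CREATE TABLE emp (id int);

SELECT 'emp'::regclass::oid;   -- the numeric object ID of the table
SELECT 'emp'::regclass::text;  -- the table name: 'emp'
```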

postgresql – Simple website sql database

I’m getting into backend development with PostgreSQL and I would like to know how well my example would work as a real website database, just for storing data and then displaying it on the website.

create table clients (
    id SERIAL PRIMARY KEY,
    first_name VARCHAR(50) NOT NULL,
    last_name VARCHAR(50) NOT NULL,
    age INT CHECK (age >= 18) NOT NULL,
    email VARCHAR(100) NOT NULL,
    password VARCHAR(100) NOT NULL,
    card VARCHAR(70) DEFAULT ('undefined') UNIQUE NOT NULL,
    joined TIMESTAMP NOT NULL,
    country VARCHAR(50) DEFAULT ('undefined') NOT NULL,
    language VARCHAR(50) DEFAULT ('undefined') NOT NULL
);

insert into clients (first_name, last_name, age, email, password, joined, language) values ('Rustie', 'Matchell', 18, '', 'OSauq0z2suY', '2021-04-18 05:26:40', 'Kurdish');
insert into clients (first_name, last_name, age, email, password, card, joined, country, language) values ('Ulric', 'Hoggins', 20, '', 'M4hnFLJ5XeP', '30243414381012', '2021-02-20 08:07:13', 'China', 'Mongolian');
insert into clients (first_name, last_name, age, email, password, card, joined, country, language) values ('Sephira', 'Bayly', 26, '', 'INL57w6gXe', '5100138794351466', '2021-04-25 06:17:26', 'North Korea', 'Gujarati');
insert into clients (first_name, last_name, age, email, password, card, joined, country, language) values ('Hermine', 'Fassman', 29, '', '1UX4TApQMEuV', '3552094428434244', '2021-06-18 06:48:54', 'Indonesia', 'Albanian');


 id | first_name | last_name | age | email |   password   |       card       |       joined        |   country   | language
  1 | Rustie     | Matchell  |  18 |       | OSauq0z2suY  | undefined        | 2021-04-18 05:26:40 | undefined   | Kurdish
  2 | Ulric      | Hoggins   |  20 |       | M4hnFLJ5XeP  | 30243414381012   | 2021-02-20 08:07:13 | China       | Mongolian
  3 | Sephira    | Bayly     |  26 |       | INL57w6gXe   | 5100138794351466 | 2021-04-25 06:17:26 | North Korea | Gujarati
  4 | Hermine    | Fassman   |  29 |       | 1UX4TApQMEuV | 3552094428434244 | 2021-06-18 06:48:54 | Indonesia   | Albanian

postgresql – Writing custom server logs in Postgres

We have set up a Postgres instance that is writing logs directly to AWS CloudWatch. We would like to be able to write our own logs for the purpose of automated processing. Unfortunately our current experiments seemed to generate logs only on the client (caller) side:

CREATE OR REPLACE FUNCTION logInfo() RETURNS void AS $$
BEGIN
    RAISE NOTICE 'Hello World!';
    RAISE INFO 'Hello World!';
END;
$$ LANGUAGE plpgsql;

select logInfo();

Is there a way to write logs on the server side too?
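NOTICE and INFO are primarily client-facing severity levels; what reaches the server log is governed by log_min_messages. One way to write to the server log (and hence CloudWatch), a sketch: use RAISE LOG, which under the default log_min_messages setting is written to the server log but not sent to the client:

```sql
CREATE OR REPLACE FUNCTION logInfo() RETURNS void AS $$
BEGIN
    -- LOG goes to the server log; by default it is not shown to the caller:
    RAISE LOG 'Hello World!';
END;
$$ LANGUAGE plpgsql;

SELECT logInfo();
```

Alternatively, lowering log_min_messages (a parameter-group setting on RDS/Aurora) makes NOTICE and INFO messages appear in the server log as well.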