postgresql – Is there any possible way to get a progress estimate for ALTER COLUMN ... TYPE in Postgres?

I know there is currently no officially supported way, but I'm almost two hours in at 100% CPU on a 15,000,000-row table, changing an INTEGER to a SMALLINT, and I have no idea how long this will continue. Is there a file I can watch grow on the file system? Does it rewrite the table in place, or create a new file? Is there a way to pause this so I can run a benchmark on the same machine with fewer rows, to get a rows-per-second estimate for this operation?
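For what it's worth, an ALTER COLUMN ... TYPE that needs a rewrite copies the whole table into a brand-new file (a new relfilenode) rather than updating rows in place, so one rough progress signal is the size of the newest file appearing in the database's base/<dboid>/ subdirectory of the data directory. A minimal sketch, run before kicking off the ALTER (my_table is just a placeholder; size functions on the table will block behind the ALTER's exclusive lock once it is running):

SELECT pg_relation_filepath('my_table');              -- e.g. base/16384/16397
SELECT pg_size_pretty(pg_relation_size('my_table'));  -- current heap size, for comparison

While the rewrite runs, watching the new file grow toward the old table's size from the OS side gives a crude rows-per-second figure.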

Restore a PostgreSQL database from table files

Is it possible to restore a Postgres database when only the table files in $PGDATA/base/ are accessible? The system catalogs are empty.

Background

Someone with sudo executed rm -Rf under /var/lib. Random files were removed, but not the files of the database that interests me. Postgres did not start, even after I manually restored several files and directories. So I created a new $PGDATA and copied the database's files into it, expecting Postgres to pick them up automatically. Apparently Postgres relies heavily on its system catalogs to maintain the links between what is on disk and what the user sees.

operator does not exist: timestamp with time zone + integer in PostgreSQL

I ran into a strange problem in Postgres. I have two databases created at two different times.
They both run on the same PostgreSQL 9.6 version on the same machine. When I run the query SELECT now() + 30 in database 1, it works correctly.
The same query, copied and pasted into the second database, gives the error: operator does not exist: timestamp with time zone + integer

Can someone guide me on this?
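In case it helps narrow this down: stock PostgreSQL has no timestamptz + integer operator (date + integer exists, but now() returns timestamp with time zone), so the database where the query works presumably has an extra operator or cast defined in it. A quick sketch of spellings that work in any database without additions:

SELECT now() + interval '30 days';       -- timestamptz + interval
SELECT now() + 30 * interval '1 day';    -- convenient when the 30 comes from a parameter
SELECT current_date + 30;                -- date + integer is built in (yields a date)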


postgresql – How do I locate my Postgres data directory while the server is down?

I am on Mac 10.13.6. I rebooted my machine and Postgres, which normally starts on reboot, was not running. Unfortunately I can't remember how or where I installed it. I can locate instances of "pg_ctl":

sudo find / -name "pg_ctl"

returns

/usr/local/bin/pg_ctl
/usr/local/Cellar/postgresql/9.6.1/bin/pg_ctl
/usr/local/Cellar/postgresql/9.5.0/bin/pg_ctl
/usr/local/Cellar/postgresql/9.6.15/bin/pg_ctl
/usr/local/Cellar/postgresql@9.6/9.6.15/bin/pg_ctl
/usr/local/Cellar/libpq/11.5/bin/pg_ctl
/usr/local/Cellar/postgresql@9.5/9.5.19/bin/pg_ctl

However, I need to tell it a data directory for it to start, and I can't figure out how to find the data directory while the server is down. What kind of files should I search for to work out where the data directory is?

Postgresql Recursion Query – Database Administrators Stack Exchange

I have a table like this:

effective_from  archived      cal
2020-01-19      2020-01-20    15.3
2020-01-13      2020-01-19    42.2
2020-01-17      2020-01-18    13.6
2020-01-16      2020-01-17    11.2
2020-01-15      2020-01-16    7.2
2020-01-14      2020-01-15    7.2
2020-01-13      2020-01-15    8.6
2020-01-13      2020-01-14    4.2
2020-01-12      2020-01-13    3.7

I would like to write a query that returns, for each effective_from, the most recent row as determined by the archived column. I can do it the brute-force way: I sort the table by archived, then do a SELECT for each row in turn based on its effective_from value:

SELECT effective_from, archived, cal FROM stack_question ORDER BY archived ASC;

(SELECT * FROM stack_question WHERE effective_from <= '12-Jan-2020' ORDER BY archived DESC LIMIT 1)
UNION
(SELECT * FROM stack_question WHERE effective_from <= '13-Jan-2020' ORDER BY archived DESC LIMIT 1)
UNION
(SELECT * FROM stack_question WHERE effective_from <= '14-Jan-2020' ORDER BY archived DESC LIMIT 1)
UNION
(SELECT * FROM stack_question WHERE effective_from <= '13-Jan-2020' ORDER BY archived DESC LIMIT 1)
UNION
(SELECT * FROM stack_question WHERE effective_from <= '15-Jan-2020' ORDER BY archived DESC LIMIT 1)
UNION
(SELECT * FROM stack_question WHERE effective_from <= '16-Jan-2020' ORDER BY archived DESC LIMIT 1)
UNION
(SELECT * FROM stack_question WHERE effective_from <= '17-Jan-2020' ORDER BY archived DESC LIMIT 1)
UNION
(SELECT * FROM stack_question WHERE effective_from <= '13-Jan-2020' ORDER BY archived DESC LIMIT 1)
UNION
(SELECT * FROM stack_question WHERE effective_from <= '19-Jan-2020' ORDER BY archived DESC LIMIT 1) ORDER BY effective_from ASC;

This gives me the expected result:

2020-01-12  2020-01-13 3.7
2020-01-13  2020-01-19 42.2
2020-01-19  2020-01-20 15.3

But of course that's not really a viable way to do it. I could always write a function in PL/pgSQL or whatever, but I feel like there should be an efficient way to do it with just a SELECT. CTEs / recursion seem to be the most promising approach, but from what I've read I can't work out what it would look like for my example.

Does anyone have any rough suggestions on the best approach here?
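Not an authoritative answer, but one way this chain can be walked in a single statement is a recursive CTE combined with a LATERAL subquery (ORDER BY / LIMIT cannot sit directly in the recursive arm, but they are allowed inside a lateral join). The sketch below assumes the intent is: start from the earliest effective_from, then repeatedly jump to the row with the greatest archived among rows whose effective_from is not after the previous row's archived date:

WITH RECURSIVE chain AS (
    -- anchor: the earliest effective_from, preferring the latest archived on ties
    (SELECT effective_from, archived, cal
     FROM stack_question
     ORDER BY effective_from, archived DESC
     LIMIT 1)
    UNION ALL
    -- step: the "most recent" row reachable from the previous link
    SELECT n.effective_from, n.archived, n.cal
    FROM chain c
    CROSS JOIN LATERAL (
        SELECT effective_from, archived, cal
        FROM stack_question
        WHERE effective_from <= c.archived
          AND archived > c.archived       -- guard so the recursion cannot revisit a row
        ORDER BY archived DESC
        LIMIT 1
    ) n
)
SELECT * FROM chain ORDER BY effective_from ASC;

On the sample data this returns exactly the three expected rows; whether <= or = is the right condition in the lateral step depends on how gaps between one row's archived date and the next effective_from should be treated.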

postgresql: designing a multi-tenant database for a scenario with multiple types of users

I am developing a SaaS recruiting application that has multiple types of users, like Client, Recruiter, Panellist and Owner, etc. There is also a possibility that more user types will be added.

The Owner has an Organization, and that is what my tenant will be: I am treating each owner's organization as a tenant. I am using Django to develop this application and will use PostgreSQL's schema feature, so there will be one schema per tenant.

What I plan is to keep a User table in the public schema that stores generic user information such as name, surname, email, password, etc. The User table will be used to log in to the site. There will also be additional tables for each of the respective user types, each with its own unique set of columns. These additional tables will be specific to the tenant/organization.

Once the user logs in to the site, they can switch to their desired tenant/organization, similar to how workspaces work in Slack. Within the tenant/organization, the user can assume whichever of the user types mentioned above the Owner has assigned to them. So a user can be a Recruiter in one tenant and a Panellist in another.

There will be a UserOrganization junction table to track the fact that a User can belong to multiple Organizations.
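Purely as an illustration of the shared part of that design (the table and column names here are placeholders, not what Django would generate), the public-schema side might look roughly like this:

-- Shared tables kept in the public schema.
CREATE TABLE public.app_user (
    id         bigserial PRIMARY KEY,
    first_name text NOT NULL,
    last_name  text NOT NULL,
    email      text NOT NULL UNIQUE,
    password   text NOT NULL              -- password hash managed by Django
);

CREATE TABLE public.organization (
    id          bigserial PRIMARY KEY,
    name        text NOT NULL,
    schema_name text NOT NULL UNIQUE      -- the per-tenant PostgreSQL schema
);

-- Junction table: a user can belong to many organizations (tenants).
CREATE TABLE public.user_organization (
    user_id         bigint NOT NULL REFERENCES public.app_user (id),
    organization_id bigint NOT NULL REFERENCES public.organization (id),
    PRIMARY KEY (user_id, organization_id)
);

The role-specific tables (Recruiter, Panellist, etc.) would then live inside each tenant's schema and reference public.app_user.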

Here is an outline diagram of what I am thinking:

Outline diagram

NOTE: I didn't draw the schema connections for the tables in the second tenant because it was starting to get complicated, but assume they are present there.

My questions are:

postgresql – pcp_attach_node not working in pgpool 2

I have been configuring pgpool2 with 2 database nodes, and I need to attach the node that is currently down. When I run

postgres@pg-pool:~$ pcp_attach_node -p 9898 -n 1 -U postgres

I get the following error.

pcp_attach_node -p 9898 -n 1 -U postgres
pcp_attach_node: error while loading shared libraries: libpcp.so.1: cannot
open shared object file: No such file or directory

I'm using:

- pgpool-II-4.0.1
- ubuntu 18.04 LTS
- postgresql 10

I installed:

postgresql-contrib 
gcc 
make 
libpq-dev

pgpool installation:

./configure
make
make install

Contents of /usr/local/lib:

-rwxr-xr-x  1 root root     959 Mar 18 07:55 libpcp.la*
lrwxrwxrwx  1 root root      15 Mar 18 07:55 libpcp.so -> libpcp.so.1.0.0*
lrwxrwxrwx  1 root root      15 Mar 18 07:55 libpcp.so.1 -> libpcp.so.1.0.0*
-rwxr-xr-x  1 root root  244296 Mar 18 07:55 libpcp.so.1.0.0*

Your help would be great

migration: SQLite works, but migrated PostgreSQL database causes ERROR – Django 3.0

Situation

  • I have created a Django 3.0 project with a couple of apps.
  • I created the accounts application based on the following course and its GitHub repo
  • From that I created an application for authentication (acc)
  • All of this was done with a SQLite database
  • I had previously tried a PostgreSQL database for the initial application and it worked fine
  • but now, when I switch settings.py from SQLite to PostgreSQL, I get an error when I try to log in.
  • If I change settings.py back to SQLite, everything works perfectly (for example: authentication, user login, users doing things on the website with their own settings)
  • I use decorators.py to keep logged-in users from visiting the login and registration pages, and that is what errors when I switch to PostgreSQL. The only HttpResponse I use here is the one that contains the error message

decorators.py

from django.http import HttpResponse
from django.shortcuts import redirect

def unauthenticated_user(view_func):
    def wrapper_func(request, *args, **kwargs):
        if request.user.is_authenticated:
            return redirect('home')
        else:
            return view_func(request, *args, **kwargs)

    return wrapper_func

def allowed_users(allowed_roles=()):
    def decorator(view_func):
        def wrapper_func(request, *args, **kwargs):

            group = None
            if request.user.groups.exists():
                group = request.user.groups.all()[0].name  # name of the user's first group

            if group in allowed_roles:
                return view_func(request, *args, **kwargs)
            else:
                return HttpResponse('You are not authorized to view this page')
        return wrapper_func
    return decorator

ERROR

This happens if I log in while settings.py uses PostgreSQL. If I log out, everything works fine again. If I use SQLite I can log in and everything works perfectly.

ValueError at /
The view accounts.decorators.wrapper_function didn't return an HttpResponse object. It returned None instead.
Request Method: GET
Request URL:    http://localhost...
Django Version: 3.0
Exception Type: ValueError
Exception Value: The view accounts.decorators.wrapper_function didn't return an HttpResponse object. It returned None instead.
Exception Location: /Users/.../python3.7/site-packages/django/core/handlers/base.py in _get_response, line 126
Python Executable:  /Users/.../bin/python3
Python Version: 3.7.3
.....

Request information
USER MYUSERNAME
GET No GET data
POST No POST data
FILES  No FILES data
COOKIES ...
...

What I have tried

  • The guide I follow creates user groups, which I also did in my migrated PostgreSQL database, but I still got the same error as USER1 in the comments section.
    • This was the recommendation in the comments below the video:
    • "USER1: Found it, I forgot to change the user group!"
    • "-> USER2: Go to the admin panel and, in your user's section, add the client group in the chosen groups field."
    • I have done exactly that and it didn't work; the only difference is that I used the migrated PostgreSQL database while they used the original SQLite, which also works for me if I use it, but I want it to work with PostgreSQL.
  • I have data and tables in both databases, but PostgreSQL only has some old stuff while SQLite has everything.
    • I have tried to migrate from SQLite to PostgreSQL with this guide
    • I have successfully created a copy of the SQLite database
    • but when I change the settings to Postgres and try python manage.py migrate, it says Running migrations: No migrations to apply.
    • python manage.py loaddata db.json
    • The users are migrated from SQLite (I can log in with them, and if I mistype the username or password it won't let me in, just like with the SQLite-only users), but I don't see any of the data tables in PostgreSQL when I look with an IDE
  • I've talked to other people on forums; many said the decorator file is the problem, but the error only appears exactly when I change databases.
  • I created a new PostgreSQL database and tried to migrate everything (the migration no longer migrated everything). After trying to sign up with a new account, filling in the form and hitting submit, it gave me the following error message (see the catalog check sketched after this list):
DoesNotExist at /register/
Group matching query does not exist.
  • I've also created a PostgreSQL database hosted on AWS, as the course instructor did, migrated it and connected it to the server in the settings, but I still got the same error.
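One thing worth checking directly in the PostgreSQL database (assuming Django's default table names and the standard User/Group models) is whether the groups the signup view and decorator depend on actually made it across; "Group matching query does not exist" means the expected auth_group row is missing in that database:

-- Run against the PostgreSQL database that settings.py points at.
SELECT id, name FROM auth_group;                  -- should list the group the course relies on
SELECT user_id, group_id FROM auth_user_groups;   -- group memberships for the default User model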

Postgresql 12 replication works but no data

I have successfully configured replication in Postgresql 12:

  1. Ran pg_basebackup
  2. The data is copied
  3. PostgreSQL service started
  4. The service is running and the tables are present.

Replication works and is validated with select * from pg_stat_replication;.

The "problem" is that I don't see any data in the tables. What happens? Where's all the data?

postgresql – declare a variable in a bash script, set var="psql -c 'select * from something'", and run var

I am looking for a way to integrate psql statements into bash scripts for devops reasons.

The simplest way to do that would be:

CMD='psql -c "DROP DATABASE restore_database;"'

then use a bash function to execute the declaration:

EXEC () {
    # NOTE: $CMD is expanded with plain word splitting here, so the double
    # quotes stored inside the variable are passed to psql as literal
    # characters instead of being re-parsed by the shell.
    $CMD > /dev/null 2>&1
    if [ $? -eq 0 ]            # test the exit status with [ ], not ( )
    then
        RETVAR="DONE"
    else
        echo "Command: $CMD is not working."
        echo "- Exiting Function"
        exit 1                 # non-zero exit to signal failure
    fi
}

I can't believe this doesn't work …