
FATAL: role "root" does not exist #16

Open
patsevanton opened this issue Jul 30, 2021 · 21 comments
@patsevanton

docker-compose up 
Creating network "kafka_default" with the default driver
Creating kong-postgres ... done
Creating kong-migration ... done
Creating kong           ... done
Attaching to kong-postgres, kong-migration, kong
kong-migration    | Bootstrapping database...
kong-postgres     | ********************************************************************************
kong-postgres     | WARNING: POSTGRES_HOST_AUTH_METHOD has been set to "trust". This will allow
kong-postgres     |          anyone with access to the Postgres port to access your database without
kong-postgres     |          a password, even if POSTGRES_PASSWORD is set. See PostgreSQL
kong-postgres     |          documentation about "trust":
kong-postgres     |          https://www.postgresql.org/docs/current/auth-trust.html
kong-postgres     |          In Docker's default configuration, this is effectively any other
kong-postgres     |          container on the same system.
kong-postgres     | 
kong-postgres     |          It is not recommended to use POSTGRES_HOST_AUTH_METHOD=trust. Replace
kong-postgres     |          it with "-e POSTGRES_PASSWORD=password" instead to set a password in
kong-postgres     |          "docker run".
kong-postgres     | ********************************************************************************
kong-postgres     | The files belonging to this database system will be owned by user "postgres".
kong-postgres     | This user must also own the server process.
kong-postgres     | 
kong-postgres     | The database cluster will be initialized with locale "en_US.utf8".
kong-postgres     | The default database encoding has accordingly been set to "UTF8".
kong-postgres     | The default text search configuration will be set to "english".
kong-postgres     | 
kong-postgres     | Data page checksums are disabled.
kong-postgres     | 
kong-postgres     | fixing permissions on existing directory /var/lib/postgresql/data ... ok
kong-postgres     | creating subdirectories ... ok
kong-postgres     | selecting default max_connections ... 100
kong-postgres     | selecting default shared_buffers ... 128MB
kong-postgres     | selecting default timezone ... Etc/UTC
kong-postgres     | selecting dynamic shared memory implementation ... posix
kong-postgres     | creating configuration files ... ok
kong-postgres     | creating template1 database in /var/lib/postgresql/data/base/1 ... ok
kong-postgres     | initializing pg_authid ... ok
kong-postgres     | setting password ... ok
kong-postgres     | initializing dependencies ... ok
kong-postgres     | creating system views ... ok
kong-postgres     | loading system objects' descriptions ... ok
kong-postgres     | creating collations ... ok
kong-postgres     | creating conversions ... ok
kong-postgres     | creating dictionaries ... ok
kong-postgres     | setting privileges on built-in objects ... ok
kong-postgres     | creating information schema ... ok
kong-postgres     | loading PL/pgSQL server-side language ... ok
kong-postgres     | vacuuming database template1 ... ok
kong-postgres     | copying template1 to template0 ... ok
kong-postgres     | copying template1 to postgres ... ok
kong-postgres     | syncing data to disk ... 
kong-postgres     | WARNING: enabling "trust" authentication for local connections
kong-postgres     | You can change this by editing pg_hba.conf or using the option -A, or
kong-postgres     | --auth-local and --auth-host, the next time you run initdb.
kong-postgres     | ok
kong-postgres     | 
kong-postgres     | Success. You can now start the database server using:
kong-postgres     | 
kong-postgres     |     pg_ctl -D /var/lib/postgresql/data -l logfile start
kong-postgres     | 
kong-postgres     | waiting for server to start....LOG:  database system was shut down at 2021-07-30 07:31:46 UTC
kong-postgres     | LOG:  MultiXact member wraparound protections are now enabled
kong-postgres     | LOG:  database system is ready to accept connections
kong-postgres     | LOG:  autovacuum launcher started
kong-postgres     |  done
kong-postgres     | server started
kong-postgres     | CREATE DATABASE
kong-postgres     | 
kong-postgres     | 
kong-postgres     | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
kong-postgres     | 
kong-postgres     | LOG:  received fast shutdown request
kong-postgres     | waiting for server to shut down....LOG:  aborting any active transactions
kong-postgres     | LOG:  autovacuum launcher shutting down
kong-postgres     | LOG:  shutting down
kong-postgres     | LOG:  database system is shut down
kong-postgres     |  done
kong-postgres     | server stopped
kong-postgres     | 
kong-postgres     | PostgreSQL init process complete; ready for start up.
kong-postgres     | 
kong-postgres     | LOG:  database system was shut down at 2021-07-30 07:31:49 UTC
kong-postgres     | LOG:  MultiXact member wraparound protections are now enabled
kong-postgres     | LOG:  database system is ready to accept connections
kong-postgres     | LOG:  autovacuum launcher started
kong-postgres     | FATAL:  role "root" does not exist
kong              | 2021/07/30 07:31:57 [warn] 1#0: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/kong/nginx.conf:6
kong              | nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/kong/nginx.conf:6
kong              | 2021/07/30 07:31:58 [error] 1#0: init_by_lua error: /usr/local/share/lua/5.1/kong/cmd/utils/migrations.lua:20: New migrations available; run 'kong migrations up' to proceed
kong              | stack traceback:
kong              |     [C]: in function 'error'
kong              |     /usr/local/share/lua/5.1/kong/cmd/utils/migrations.lua:20: in function 'check_state'
kong              |     /usr/local/share/lua/5.1/kong/init.lua:475: in function 'init'
kong              |     init_by_lua:3: in main chunk
kong              | nginx: [error] init_by_lua error: /usr/local/share/lua/5.1/kong/cmd/utils/migrations.lua:20: New migrations available; run 'kong migrations up' to proceed
kong              | stack traceback:
kong              |     [C]: in function 'error'
kong              |     /usr/local/share/lua/5.1/kong/cmd/utils/migrations.lua:20: in function 'check_state'
kong              |     /usr/local/share/lua/5.1/kong/init.lua:475: in function 'init'
kong              |     init_by_lua:3: in main chunk
kong-migration    | migrating core on database 'kong'...
kong-migration    | core migrated up to: 000_base (executed)
kong-migration    | core migrated up to: 003_100_to_110 (executed)
kong-migration    | core migrated up to: 004_110_to_120 (executed)
kong-migration    | core migrated up to: 005_120_to_130 (executed)
kong-migration    | core migrated up to: 006_130_to_140 (executed)
kong-migration    | core migrated up to: 007_140_to_150 (executed)
kong-migration    | core migrated up to: 008_150_to_200 (executed)
kong-migration    | core migrated up to: 009_200_to_210 (executed)
kong-migration    | core migrated up to: 010_210_to_211 (executed)
kong-migration    | core migrated up to: 011_212_to_213 (executed)
kong-migration    | core migrated up to: 012_213_to_220 (executed)
kong-migration    | core migrated up to: 013_220_to_230 (executed)
kong-migration    | migrating acl on database 'kong'...
kong-migration    | acl migrated up to: 000_base_acl (executed)
kong-migration    | acl migrated up to: 002_130_to_140 (executed)
kong-migration    | acl migrated up to: 003_200_to_210 (executed)
kong-migration    | acl migrated up to: 004_212_to_213 (executed)
kong-migration    | migrating acme on database 'kong'...
kong-migration    | acme migrated up to: 000_base_acme (executed)
kong-migration    | migrating basic-auth on database 'kong'...
kong-migration    | basic-auth migrated up to: 000_base_basic_auth (executed)
kong-migration    | basic-auth migrated up to: 002_130_to_140 (executed)
kong-migration    | basic-auth migrated up to: 003_200_to_210 (executed)
kong-migration    | migrating bot-detection on database 'kong'...
kong-migration    | bot-detection migrated up to: 001_200_to_210 (executed)
kong-migration    | migrating hmac-auth on database 'kong'...
kong-migration    | hmac-auth migrated up to: 000_base_hmac_auth (executed)
kong-migration    | hmac-auth migrated up to: 002_130_to_140 (executed)
kong-migration    | hmac-auth migrated up to: 003_200_to_210 (executed)
kong-migration    | migrating ip-restriction on database 'kong'...
kong-migration    | ip-restriction migrated up to: 001_200_to_210 (executed)
kong-migration    | migrating jwt on database 'kong'...
kong-migration    | jwt migrated up to: 000_base_jwt (executed)
kong-migration    | jwt migrated up to: 002_130_to_140 (executed)
kong-migration    | jwt migrated up to: 003_200_to_210 (executed)
kong-migration    | migrating key-auth on database 'kong'...
kong-migration    | key-auth migrated up to: 000_base_key_auth (executed)
kong-migration    | key-auth migrated up to: 002_130_to_140 (executed)
kong-migration    | key-auth migrated up to: 003_200_to_210 (executed)
kong-migration    | migrating oauth2 on database 'kong'...
kong-migration    | oauth2 migrated up to: 000_base_oauth2 (executed)
kong-migration    | oauth2 migrated up to: 003_130_to_140 (executed)
kong-migration    | oauth2 migrated up to: 004_200_to_210 (executed)
kong              | 2021/07/30 07:31:59 [warn] 1#0: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/kong/nginx.conf:6
kong              | nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/kong/nginx.conf:6
kong-migration    | oauth2 migrated up to: 005_210_to_211 (executed)
kong-migration    | migrating rate-limiting on database 'kong'...
kong-migration    | rate-limiting migrated up to: 000_base_rate_limiting (executed)
kong-migration    | rate-limiting migrated up to: 003_10_to_112 (executed)
kong-migration    | rate-limiting migrated up to: 004_200_to_210 (executed)
kong-migration    | migrating response-ratelimiting on database 'kong'...
kong-migration    | response-ratelimiting migrated up to: 000_base_response_rate_limiting (executed)
kong-migration    | migrating session on database 'kong'...
kong-migration    | session migrated up to: 000_base_session (executed)
kong-migration    | session migrated up to: 001_add_ttl_index (executed)
kong-migration    | 41 migrations processed
kong-migration    | 41 executed
kong-migration    | Database is up-to-date
kong              | 2021/07/30 07:31:59 [notice] 1#0: using the "epoll" event method
kong              | 2021/07/30 07:31:59 [notice] 1#0: openresty/1.19.3.2
kong              | 2021/07/30 07:31:59 [notice] 1#0: built by gcc 10.3.1 20210424 (Alpine 10.3.1_git20210424) 
kong              | 2021/07/30 07:31:59 [notice] 1#0: OS: Linux 5.4.0-42-generic
kong              | 2021/07/30 07:31:59 [notice] 1#0: getrlimit(RLIMIT_NOFILE): 1048576:1048576
kong              | 2021/07/30 07:31:59 [notice] 1#0: start worker processes
kong              | 2021/07/30 07:31:59 [notice] 1#0: start worker process 23
kong              | 2021/07/30 07:31:59 [notice] 1#0: start worker process 24
kong-migration exited with code 0
kong              | 2021/07/30 07:31:59 [notice] 23#0: *2 [lua] warmup.lua:92: single_dao(): Preloading 'services' into the core_cache..., context: init_worker_by_lua*
kong              | 2021/07/30 07:31:59 [notice] 24#0: *1 [kong] init.lua:290 only worker #0 can manage, context: init_worker_by_lua*
kong              | 2021/07/30 07:31:59 [notice] 23#0: *2 [lua] warmup.lua:129: single_dao(): finished preloading 'services' into the core_cache (in 0ms), context: init_worker_by_lua*
kong-postgres     | FATAL:  role "root" does not exist
kong-postgres     | FATAL:  role "root" does not exist
kong-postgres     | FATAL:  role "root" does not exist
kong-postgres     | FATAL:  role "root" does not exist
kong-postgres     | FATAL:  role "root" does not exist
kong-postgres     | FATAL:  role "root" does not exist
kong-postgres     | FATAL:  role "root" does not exist
kong-postgres     | FATAL:  role "root" does not exist
kong-postgres     | FATAL:  role "root" does not exist
kong-postgres     | FATAL:  role "root" does not exist
@suyaser

suyaser commented Aug 6, 2021

Same issue here.
Did you reach any solution?

@crsanti

crsanti commented Oct 5, 2021

That's because you're running pg_isready without passing -U <username>. Can you verify?
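A minimal sketch of this suggestion as a compose healthcheck (the service name, user, and database here are assumptions for illustration, not taken from the thread's actual compose file):

```yaml
services:
  kong-postgres:
    image: postgres:12
    healthcheck:
      # pg_isready defaults to the current OS user -- root inside the
      # container -- unless -U is given, hence FATAL: role "root".
      test: ["CMD", "pg_isready", "-U", "kong", "-d", "kong"]
      interval: 10s
      timeout: 5s
      retries: 5
```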

@dolanor

dolanor commented Nov 9, 2021

That's because you're running pg_isready without passing -U <username>. Can you verify?

I put

…
      test: [ "CMD-SHELL", "pg_isready", "-U", "${POSTGRES_USER}" ]
…

and got the same problem.
I replaced ${POSTGRES_USER} with the actual username, and still get the spam in the logs.
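One plausible explanation for this case: in the exec form ["CMD", ...] no shell runs inside the container, so ${POSTGRES_USER} is interpolated by Compose on the host (from the host environment or a .env file), not from the container's environment:/env_file: values; if it resolves to empty there, pg_isready falls back to the OS user, root. A sketch under that assumption (image and user name hypothetical):

```yaml
services:
  db:
    image: postgres:12
    environment:
      POSTGRES_USER: kong
    healthcheck:
      # Exec form: "${POSTGRES_USER}" is substituted by Compose on the HOST;
      # if unset there it becomes "", and pg_isready uses the OS user (root).
      # test: ["CMD", "pg_isready", "-U", "${POSTGRES_USER}"]
      #
      # Shell form: "$$" escapes the dollar sign, so the CONTAINER's shell
      # expands POSTGRES_USER at runtime from the value set above.
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER"]
      interval: 10s
      timeout: 5s
      retries: 5
```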

@TiagoGouvea

TiagoGouvea commented Jan 14, 2022

Same thing here. Using test: ["CMD-SHELL", "pg_isready", "-U", "myUserName"] it keeps printing:

breakoutdb       | 2022-01-14 17:27:46.497 UTC [118] FATAL:  role "root" does not exist
breakoutdb       | 2022-01-14 17:27:51.861 UTC [132] FATAL:  role "root" does not exist
breakoutdb       | 2022-01-14 17:27:57.336 UTC [147] FATAL:  role "root" does not exist
breakoutdb       | 2022-01-14 17:28:02.722 UTC [163] FATAL:  role "root" does not exist
breakoutdb       | 2022-01-14 17:28:08.117 UTC [178] FATAL:  role "root" does not exist
breakoutdb       | 2022-01-14 17:28:13.475 UTC [193] FATAL:  role "root" does not exist

@imamfzn

imamfzn commented Jan 17, 2022

We need to pass the user and database arguments; for example, my command:

test: ["CMD", "pg_isready", "-U", "user", "-d", "kong_db"]

It works for me.

@TiagoGouvea

Even with user and database, role "root" does not exist continues here. :(

But thanks for the fast reply.

@patlehmann1

patlehmann1 commented Feb 15, 2022

After struggling for a while, I found that this test command worked for me.

test: [ "CMD", "pg_isready", "-q", "-d", "{YOUR_DATABASE_NAME}", "-U", "{YOUR_DATABASE_USERNAME}" ]

In my case, I had to use the literal username and database name for it to work:

test: [ "CMD", "pg_isready", "-q", "-d", "postgres", "-U", "postgres" ]

got it from here.

@patsevanton
Author

patsevanton commented Feb 15, 2022

I checked:

test: [ "CMD", "pg_isready", "-q", "-d", "postgres", "-U", "postgres" ]

and got the error:

kong-postgres   | FATAL:  role "postgres" does not exist

@patsevanton
Author

I checked:

test: [ "CMD", "pg_isready", "-q", "-d", "kong", "-U", "kong" ]

The role error is gone, but there is another error:

docker-compose up                                                                                                                                          0.2s
[+] Running 4/3
 - Network docker-compose-healthcheck_default  Created                                                                                0.0s 
 - Container kong-postgres                     Created                                                                                2.8s 
 - Container kong-migration                    Created                                                                                0.1s
 - Container kong                              Created                                                                                0.1s 
Attaching to kong, kong-migration, kong-postgres
kong-postgres   | ********************************************************************************
kong-postgres   | WARNING: POSTGRES_HOST_AUTH_METHOD has been set to "trust". This will allow
kong-postgres   |          anyone with access to the Postgres port to access your database without
kong-postgres   |          a password, even if POSTGRES_PASSWORD is set. See PostgreSQL
kong-postgres   |          documentation about "trust":
kong-postgres   |          https://www.postgresql.org/docs/current/auth-trust.html
kong-postgres   |          In Docker's default configuration, this is effectively any other
kong-postgres   |          container on the same system.
kong-postgres   |
kong-postgres   |          It is not recommended to use POSTGRES_HOST_AUTH_METHOD=trust. Replace
kong-postgres   |          it with "-e POSTGRES_PASSWORD=password" instead to set a password in
kong-postgres   |          "docker run".
kong-postgres   | ********************************************************************************
kong-postgres   | The files belonging to this database system will be owned by user "postgres".
kong-postgres   | This user must also own the server process.
kong-postgres   |
kong-postgres   | The database cluster will be initialized with locale "en_US.utf8".
kong-postgres   | The default database encoding has accordingly been set to "UTF8".
kong-postgres   | The default text search configuration will be set to "english".
kong-postgres   |
kong-postgres   | Data page checksums are disabled.
kong-postgres   |
kong-postgres   | fixing permissions on existing directory /var/lib/postgresql/data ... ok
kong-postgres   | creating subdirectories ... ok
kong-postgres   | selecting default max_connections ... 100
kong-postgres   | selecting default shared_buffers ... 128MB
kong-postgres   | selecting default timezone ... Etc/UTC
kong-postgres   | selecting dynamic shared memory implementation ... posix
kong-postgres   | creating configuration files ... ok
kong-postgres   | creating template1 database in /var/lib/postgresql/data/base/1 ... ok
kong-postgres   | initializing pg_authid ... ok
kong-postgres   | setting password ... ok
kong-postgres   | initializing dependencies ... ok
kong-postgres   | creating system views ... ok
kong-postgres   | loading system objects' descriptions ... ok
kong-postgres   | creating collations ... ok
kong-postgres   | creating conversions ... ok
kong-postgres   | creating dictionaries ... ok
kong-postgres   | setting privileges on built-in objects ... ok
kong-postgres   | creating information schema ... ok
kong-postgres   | loading PL/pgSQL server-side language ... ok
kong-postgres   | vacuuming database template1 ... ok
kong-postgres   | copying template1 to template0 ... ok
kong-postgres   | copying template1 to postgres ... ok
kong-postgres   | syncing data to disk ... ok
kong-postgres   |
kong-postgres   | Success. You can now start the database server using:
kong-postgres   |
kong-postgres   |     pg_ctl -D /var/lib/postgresql/data -l logfile start
kong-postgres   |
kong-postgres   |
kong-postgres   | WARNING: enabling "trust" authentication for local connections
kong-postgres   | You can change this by editing pg_hba.conf or using the option -A, or
kong-postgres   | --auth-local and --auth-host, the next time you run initdb.
kong-postgres   | waiting for server to start....LOG:  database system was shut down at 2022-02-15 04:04:37 UTC
kong-postgres   | LOG:  MultiXact member wraparound protections are now enabled
kong-postgres   | LOG:  autovacuum launcher started
kong-postgres   | LOG:  database system is ready to accept connections
kong-postgres   |  done
kong-postgres   | server started
kong-postgres   | CREATE DATABASE
kong-postgres   |
kong-postgres   |
kong-postgres   | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
kong-postgres   |
kong-postgres   | waiting for server to shut down....LOG:  received fast shutdown request
kong-postgres   | LOG:  aborting any active transactions
kong-postgres   | LOG:  autovacuum launcher shutting down
kong-postgres   | LOG:  shutting down
kong-postgres   | LOG:  database system is shut down
kong-postgres   |  done
kong-postgres   | server stopped
kong-postgres   |
kong-postgres   | PostgreSQL init process complete; ready for start up.
kong-postgres   |
kong-postgres   | LOG:  database system was shut down at 2022-02-15 04:04:38 UTC
kong-postgres   | LOG:  MultiXact member wraparound protections are now enabled
kong-postgres   | LOG:  autovacuum launcher started
kong-postgres   | LOG:  database system is ready to accept connections
kong-migration  | Bootstrapping database...
kong-migration  | migrating core on database 'kong'...
kong-migration  | core migrated up to: 000_base (executed)
kong-migration  | core migrated up to: 003_100_to_110 (executed)
kong-migration  | core migrated up to: 004_110_to_120 (executed)
kong-migration  | core migrated up to: 005_120_to_130 (executed)
kong-migration  | core migrated up to: 006_130_to_140 (executed)
kong-migration  | core migrated up to: 007_140_to_150 (executed)
kong-migration  | core migrated up to: 008_150_to_200 (executed)
kong-migration  | core migrated up to: 009_200_to_210 (executed)
kong-migration  | core migrated up to: 010_210_to_211 (executed)
kong-migration  | core migrated up to: 011_212_to_213 (executed)
kong-migration  | core migrated up to: 012_213_to_220 (executed)
kong-migration  | core migrated up to: 013_220_to_230 (executed)
kong-migration  | core migrated up to: 014_230_to_270 (executed)
kong-migration  | migrating acl on database 'kong'...
kong-migration  | acl migrated up to: 000_base_acl (executed)
kong-migration  | acl migrated up to: 002_130_to_140 (executed)
kong-migration  | acl migrated up to: 003_200_to_210 (executed)
kong-migration  | acl migrated up to: 004_212_to_213 (executed)
kong-migration  | migrating acme on database 'kong'...
kong-migration  | acme migrated up to: 000_base_acme (executed)
kong-migration  | migrating basic-auth on database 'kong'...
kong            | 2022/02/15 04:04:50 [warn] 1#0: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/kong/nginx.conf:6
kong            | nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/kong/nginx.conf:6
kong            | 2022/02/15 04:04:50 [error] 1#0: init_by_lua error: /usr/local/share/lua/5.1/kong/cmd/utils/migrations.lua:20: New migrations available; run 'kong migrations up' to proceed
kong            | stack traceback:
kong            |       [C]: in function 'error'
kong            |       /usr/local/share/lua/5.1/kong/cmd/utils/migrations.lua:20: in function 'check_state'
kong            |       /usr/local/share/lua/5.1/kong/init.lua:506: in function 'init'
kong            |       init_by_lua:3: in main chunk
kong            | nginx: [error] init_by_lua error: /usr/local/share/lua/5.1/kong/cmd/utils/migrations.lua:20: New migrations available; run 'kong migrations up' to proceed
kong            | stack traceback:
kong            |       [C]: in function 'error'
kong            |       /usr/local/share/lua/5.1/kong/cmd/utils/migrations.lua:20: in function 'check_state'
kong            |       /usr/local/share/lua/5.1/kong/init.lua:506: in function 'init'
kong            |       init_by_lua:3: in main chunk
kong-migration  | basic-auth migrated up to: 000_base_basic_auth (executed)
kong-migration  | basic-auth migrated up to: 002_130_to_140 (executed)
kong-migration  | basic-auth migrated up to: 003_200_to_210 (executed)
kong-migration  | migrating bot-detection on database 'kong'...
kong-migration  | bot-detection migrated up to: 001_200_to_210 (executed)
kong-migration  | migrating hmac-auth on database 'kong'...
kong-migration  | hmac-auth migrated up to: 000_base_hmac_auth (executed)
kong-migration  | hmac-auth migrated up to: 002_130_to_140 (executed)
kong-migration  | hmac-auth migrated up to: 003_200_to_210 (executed)
kong-migration  | migrating ip-restriction on database 'kong'...
kong-migration  | ip-restriction migrated up to: 001_200_to_210 (executed)
kong-migration  | migrating jwt on database 'kong'...
kong-migration  | jwt migrated up to: 000_base_jwt (executed)
kong-migration  | jwt migrated up to: 002_130_to_140 (executed)
kong-migration  | jwt migrated up to: 003_200_to_210 (executed)
kong-migration  | migrating key-auth on database 'kong'...
kong-migration  | key-auth migrated up to: 000_base_key_auth (executed)
kong-migration  | key-auth migrated up to: 002_130_to_140 (executed)
kong-migration  | key-auth migrated up to: 003_200_to_210 (executed)
kong-migration  | migrating oauth2 on database 'kong'...
kong-migration  | oauth2 migrated up to: 000_base_oauth2 (executed)
kong-migration  | oauth2 migrated up to: 003_130_to_140 (executed)
kong-migration  | oauth2 migrated up to: 004_200_to_210 (executed)
kong-migration  | oauth2 migrated up to: 005_210_to_211 (executed)
kong-migration  | migrating rate-limiting on database 'kong'...
kong-migration  | rate-limiting migrated up to: 000_base_rate_limiting (executed)
kong-migration  | rate-limiting migrated up to: 003_10_to_112 (executed)
kong-migration  | rate-limiting migrated up to: 004_200_to_210 (executed)
kong-migration  | migrating response-ratelimiting on database 'kong'...
kong-migration  | response-ratelimiting migrated up to: 000_base_response_rate_limiting (executed)
kong-migration  | migrating session on database 'kong'...
kong-migration  | session migrated up to: 000_base_session (executed)
kong-migration  | session migrated up to: 001_add_ttl_index (executed)
kong-migration  | 42 migrations processed
kong-migration  | 42 executed
kong-migration  | Database is up-to-date
kong exited with code 1
kong-migration exited with code 0
kong            | 2022/02/15 04:04:52 [warn] 1#0: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/kong/nginx.conf:6
kong            | nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/kong/nginx.conf:6
kong            | 2022/02/15 04:04:52 [notice] 1#0: using the "epoll" event method
kong            | 2022/02/15 04:04:52 [notice] 1#0: openresty/1.19.9.1
kong            | 2022/02/15 04:04:52 [notice] 1#0: built by gcc 10.3.1 20210424 (Alpine 10.3.1_git20210424)
kong            | 2022/02/15 04:04:52 [notice] 1#0: OS: Linux 5.10.60.1-microsoft-standard-WSL2
kong            | 2022/02/15 04:04:52 [notice] 1#0: getrlimit(RLIMIT_NOFILE): 1048576:1048576
kong            | 2022/02/15 04:04:52 [notice] 1#0: start worker processes
kong            | 2022/02/15 04:04:52 [notice] 1#0: start worker process 1098
kong            | 2022/02/15 04:04:52 [notice] 1#0: start worker process 1099
kong            | 2022/02/15 04:04:52 [notice] 1#0: start worker process 1100
kong            | 2022/02/15 04:04:52 [notice] 1#0: start worker process 1101
kong            | 2022/02/15 04:04:52 [notice] 1#0: start worker process 1102
kong            | 2022/02/15 04:04:52 [notice] 1#0: start worker process 1103
kong            | 2022/02/15 04:04:52 [notice] 1#0: start worker process 1104
kong            | 2022/02/15 04:04:52 [notice] 1#0: start worker process 1105
kong            | 2022/02/15 04:04:52 [notice] 1099#0: *3 [kong] init.lua:311 only worker #0 can manage, context: init_worker_by_lua*      
kong            | 2022/02/15 04:04:52 [notice] 1101#0: *4 [kong] init.lua:311 only worker #0 can manage, context: init_worker_by_lua*      
kong            | 2022/02/15 04:04:52 [notice] 1098#0: *1 [lua] warmup.lua:92: single_dao(): Preloading 'services' into the core_cache..., context: init_worker_by_lua*
kong            | 2022/02/15 04:04:52 [notice] 1103#0: *5 [kong] init.lua:311 only worker #0 can manage, context: init_worker_by_lua*      
kong            | 2022/02/15 04:04:52 [notice] 1105#0: *8 [kong] init.lua:311 only worker #0 can manage, context: init_worker_by_lua*      
kong            | 2022/02/15 04:04:52 [notice] 1100#0: *2 [kong] init.lua:311 only worker #0 can manage, context: init_worker_by_lua*      
kong            | 2022/02/15 04:04:52 [notice] 1098#0: *1 [lua] warmup.lua:129: single_dao(): finished preloading 'services' into the core_cache (in 0ms), context: init_worker_by_lua*
kong            | 2022/02/15 04:04:52 [notice] 1102#0: *7 [kong] init.lua:311 only worker #0 can manage, context: init_worker_by_lua*      
kong            | 2022/02/15 04:04:52 [notice] 1104#0: *6 [kong] init.lua:311 only worker #0 can manage, context: init_worker_by_lua*      
kong            | 2022/02/15 04:04:57 [crit] 1105#0: *15 [lua] balancers.lua:240: create_balancers(): failed loading initial list of upstreams: failed to get from node cache: could not acquire callback lock: timeout, context: ngx.timer
kong            | 2022/02/15 04:04:57 [crit] 1103#0: *12 [lua] balancers.lua:240: create_balancers(): failed loading initial list of upstreams: failed to get from node cache: could not acquire callback lock: timeout, context: ngx.timer
kong            | 2022/02/15 04:04:57 [crit] 1100#0: *14 [lua] balancers.lua:240: create_balancers(): failed loading initial list of upstreams: failed to get from node cache: could not acquire callback lock: timeout, context: ngx.timer
kong            | 2022/02/15 04:04:57 [crit] 1102#0: *16 [lua] balancers.lua:240: create_balancers(): failed loading initial list of upstreams: failed to get from node cache: could not acquire callback lock: timeout, context: ngx.timer
kong            | 2022/02/15 04:04:57 [crit] 1101#0: *13 [lua] balancers.lua:240: create_balancers(): failed loading initial list of upstreams: failed to get from node cache: could not acquire callback lock: timeout, context: ngx.timer
kong            | 2022/02/15 04:04:57 [crit] 1098#0: *17 [lua] balancers.lua:240: create_balancers(): failed loading initial list of upstreams: failed to get from node cache: could not acquire callback lock: timeout, context: ngx.timer
kong            | 2022/02/15 04:04:57 [crit] 1104#0: *18 [lua] balancers.lua:240: create_balancers(): failed loading initial list of upstreams: failed to get from node cache: could not acquire callback lock: timeout, context: ngx.timer

@patsevanton
Author

The error FATAL: role "root" does not exist is fixed by #17

@sp1thas

sp1thas commented Mar 19, 2022

My docker-compose file looks like:

version: "2.2"

services:
  results:
    image: postgres:12
    env_file:
      - config/server/base.env
      - config/server/${ENV}.env
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data

and the error FATAL: role "root" does not exist was still there.

The catch is probably that I'm setting the environment variables using env files. I finally fixed it using this test command:

      test: [ "CMD-SHELL", "pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}"]

@dolanor

dolanor commented Apr 5, 2022

The error FATAL: role "root" does not exist is fixed by #17

I can confirm this works!
Thanks!

@Yuri-Lima

My docker-compose file looks like:

version: "2.2"

services:
  results:
    image: postgres:12
    env_file:
      - config/server/base.env
      - config/server/${ENV}.env
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data

and the error FATAL: role "root" does not exist was still there.

The catch is probably the fact that I'm setting the environment variable using env files. I finally fixed it using this test command:

      test: [ "CMD-SHELL", "pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}"]

I confirm as well.
That works in my case.

Thanks so much

@birthdaysgift

birthdaysgift commented Jan 24, 2023

The catch is probably the fact that I'm setting the environment variable using env files. I finally fixed it using this test command:

      test: [ "CMD-SHELL", "pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}"]

Thanks, @sp1thas!
I've also figured out that same effect can be achieved in this way:

    test: [ "CMD-SHELL", "pg_isready -d $POSTGRES_DB -U $POSTGRES_USER"]

We can reference POSTGRES_DB and POSTGRES_USER with a single dollar sign $ and no curly braces {}, the same way we reference ordinary environment variables in a shell.

birthdaysgift pushed a commit to fenya123/forum123 that referenced this issue Jan 31, 2023
We want to deploy our changes automatically after merge into develop branch. To do that we need to setup basic Continuous Deployment pipeline.

In the scope of this task we need to create basic Continuous Deployment pipeline to automate deployment process for each push of `develop` branch.

Steps to do:

Docker compose

Add a new docker compose setup for the prod environment inside the `envs/prod` directory with the following services: application, database, reverse proxy. Each service's configuration is described in detail below.

App

- add `gunicorn` to `requirements-prod.txt`
  it will be used as a WSGI server for our flask app (see [flask docs](https://flask.palletsprojects.com/en/2.2.x/deploying/gunicorn/))
- add a configuration file for gunicorn inside `envs/prod/gunicorn/config.py`
- add a `Dockerfile` which builds our app with `requirements.txt` and `requirements-prod.txt`
- add `envs/prod/Dockerfile.dockerignore` to ignore cache directories and `.env` file with sensitive production data
- add `envs/prod/.gitignore` and ignore `envs/prod/.env` to prevent it from being accidentally committed
- add an `env_file` section to the app service in the docker compose file to provide sensitive data to our application via the `.env` file
- add a `depends_on` section to make our app depend on the database service
- add an `entrypoint` section which should first apply migrations and then run gunicorn with our config

Yes, we decided to use the simplest db migration model, where migrations are applied in the app container before the application starts. It's not a good practice for big, distributed infrastructures that also care about zero-downtime deployment and data consistency between migrations. You can read more about the correct approach to running migrations in a production environment [here](https://pythonspeed.com/articles/schema-migrations-server-startup/). For now, this simple approach is sufficient for our needs.

Also, we didn't expose any ports because we want our app to run behind a reverse proxy. All external requests will come to the reverse proxy and then be forwarded to our app, so the app is connected only to the reverse proxy server inside the Docker network.

Database

- update `src/config.py` to use the environment variables `POSTGRES_HOST`, `POSTGRES_PORT`, `POSTGRES_USER` and `POSTGRES_DB`
  since the postgres image requires these variables, it is more convenient if our application works with the same variables too, so we can provide one `.env` file to both the application and the database
- add `healthcheck` to postgres container (see [this](https://github.com/peter-evans/docker-compose-healthcheck#waiting-for-postgresql-to-be-healthy))
  also we need to provide `POSTGRES_USER` and `POSTGRES_DB` to the healthcheck script to avoid the `FATAL: role "root" does not exist` issue (see [this](peter-evans/docker-compose-healthcheck#16))
- add a volume to keep our production db data
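Putting the database steps above together, the service could be sketched roughly like this (service name, image tag, and paths are assumptions, not taken from the actual repository):

```yaml
services:
  database:
    image: postgres:15
    env_file:
      - ./envs/prod/.env   # provides POSTGRES_USER, POSTGRES_DB, etc.
    healthcheck:
      # pass user/db explicitly so pg_isready does not fall back to "root"
      test: ["CMD-SHELL", "pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}"]
      interval: 10s
      timeout: 5s
      retries: 5
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```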

Reverse proxy

- add `nginx` service to docker compose file, we will use it as a reverse proxy (see [flask docs](https://flask.palletsprojects.com/en/2.2.x/deploying/nginx/))
- add `envs/prod/nginx/nginx.conf` with simple configuration from [flask docs](https://flask.palletsprojects.com/en/2.2.x/deploying/nginx/#configuration)
- provide nginx config to nginx container via volumes (`./nginx/nginx.conf:/etc/nginx/nginx.conf:ro`)
- expose 80 port to make it externally accessible
- add reverse proxy configuration to flask (see [flask docs](https://flask.palletsprojects.com/en/2.2.x/deploying/proxy_fix/))
  to be able to change this configuration, we make these proxy options configurable via env variables and add them to our `src/config.py`

VPS

To host our application we need to find a VPS and configure it. We decided to deploy the application to our production host via simple commands run over ssh. From our GitHub workflow we want to execute commands via ssh that: stop the running containers, fetch the latest repository changes from GitHub, and run the containers with the latest changes (with the `--build` option).
To be able to do that, we need to set up our VPS host:
- install [docker-desktop](https://docs.docker.com/desktop/install/ubuntu/#install-docker-desktop)
- install [git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) and [configure](https://git-scm.com/book/en/v2/Getting-Started-First-Time-Git-Setup#_your_identity) username and email for it
- generate new ssh key for our production host and add it to GitHub (see [docs](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent))
- clone our repository to `/forum123` directory
- create `/forum123/envs/prod/.env` file and provide all configuration options listed in our `src/config.py`

After installing docker we had a problem with `docker.sock`, and [this](https://stackoverflow.com/questions/43569781/unable-to-start-docker-service-with-error-failed-to-start-docker-service-unit/43576628#43576628) SO answer was really helpful.

After trying to run our docker services on the VPS we got an error saying that port 80 was already occupied. An Apache server was running on that port, so we had to disable it (see [this](https://www.cyberciti.biz/faq/star-stop-restart-apache2-webserver/)).

Deploy workflow
- add `PROD_SSH_HOST`, `PROD_SSH_USERNAME` and `PROD_SSH_PASSWORD` GitHub secrets for this repo
- add a `.github/workflows/deploy.yml` workflow which should run on every push to `develop` branch
- add checkout action to our workflow
- add [ssh action](https://github.com/appleboy/ssh-action) which will use our secrets to connect via ssh and execute commands on the production host machine

We wanted to use the environment variables `PROD_APP_ROOT` and `DEPLOY_BRANCH_NAME` in the script run through ssh, but faced problems that seemed unresolved at the time (see [issue](appleboy/ssh-action#58)).

We also haven't found a solution for putting all the commands we want to run via ssh into a separate `.sh` file and using it in this action, since these commands run on the host machine while the `.sh` file lives in the CI environment running the deploy workflow. One way to do that could be to first copy the `.sh` deployment script to the host and then run something like `source ./deploy.sh` via ssh. But we didn't want to deal with an `scp` action in our workflow and decided to do it in a simpler way: just write all commands in the `ssh` GitHub action as is.
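The deploy workflow described above could look roughly like this (the secret names match those listed; the action version, compose file path, and commands are assumptions for illustration):

```yaml
name: deploy
on:
  push:
    branches: [develop]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Deploy over ssh
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.PROD_SSH_HOST }}
          username: ${{ secrets.PROD_SSH_USERNAME }}
          password: ${{ secrets.PROD_SSH_PASSWORD }}
          # commands run on the production host, written inline as described
          script: |
            cd /forum123
            docker compose -f envs/prod/docker-compose.yml down
            git pull origin develop
            docker compose -f envs/prod/docker-compose.yml up -d --build
```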
@sneko

sneko commented Feb 7, 2023

Also had this issue! Thanks all for bringing the solution. It seems related to formatting, since my old healthcheck worked with Postgres 13.7 but not 14.6.

I had previously:

      test: ['CMD-SHELL', 'psql', '-h', 'localhost', '-U', '$$POSTGRES_USER', '-c', 'select 1', '-d', '$$POSTGRES_DB']

And I switched to:

      test: ["CMD-SHELL", "psql -h localhost -U $${POSTGRES_USER} -d $${POSTGRES_DB} -c 'select 1'"]

Note I see everyone using pg_isready, but in my case, after debugging some race conditions in the past, I decided to use psql directly. Here's the comment explaining why I did this months ago:

Note: at first we tried pg_isready, but it's not reliable since the postgres container restarts the server at startup (to run some init scripts), so we ended up with broken connections...
The best is to try a real query to be sure it's up and running as advised in docker-library/postgres#146 (comment)

@timini

timini commented Apr 8, 2023

someone please update the docs with this fix!

@pleymor

pleymor commented Jun 30, 2023

My docker-compose file looks like:

version: "2.2"

services:
  results:
    image: postgres:12
    env_file:
      - config/server/base.env
      - config/server/${ENV}.env
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data

and the error FATAL: role "root" does not exist was still there.
The catch is probably the fact that I'm setting the environment variable using env files. I finally fixed it using this test command:

      test: [ "CMD-SHELL", "pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}"]

I confirm as well. That works in my case.

Thanks so much

It's working with $$ indeed, but also with a single $:

       test: [ "CMD-SHELL", "pg_isready -d $POSTGRES_DB -U $POSTGRES_USER"]

@pkit

pkit commented Aug 18, 2023

This doesn't work either; the healthcheck just keeps running and spams the logs with:

db_1        | 2023-08-18 20:24:37.733 UTC [362/1] [[unknown]:[unknown]] LOG:  connection received: host=127.0.0.1 port=45668
db_1        | 2023-08-18 20:24:37.734 UTC [362/2] [master:[unknown]] LOG:  connection authorized: user=master database=master application_name=pg_isready

For me the issue exists only in postgres images >=14; earlier images work fine.

@pkit

pkit commented Aug 18, 2023

Ok, here's a solution for anybody who needs the healthcheck for dependencies only:

    tmpfs:
      - /run
    healthcheck:
      test: [ "CMD-SHELL", "[ -r /var/run/postgresql/ready ] || ( pg_isready && touch /var/run/postgresql/ready)" ]

It will run only until it's ready and won't spam the logs.
It will produce FATAL: role "root" does not exist only once (at startup).
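The marker-file trick generalizes: run the noisy probe only until its first success, then short-circuit on the marker file (which lives on the `/run` tmpfs above, so it disappears on container restart). A minimal plain-shell sketch of the pattern, with `true` standing in for `pg_isready` and a hypothetical marker path:

```shell
#!/bin/sh
# Hypothetical marker path; pkit's compose snippet uses /var/run/postgresql/ready
MARKER=/tmp/pg-ready-marker

check() {
  # Run the probe (which logs a connection) only while the marker is absent;
  # once it has succeeded and the marker exists, succeed without probing.
  [ -r "$MARKER" ] || ( true && touch "$MARKER" )
}

rm -f "$MARKER"
check && echo "first check: healthy"   # runs the probe, creates the marker
check && echo "second check: healthy"  # short-circuits on the marker
```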

@seyhak

seyhak commented Jan 9, 2024

After struggling for a while, I found that this test command worked for me.

test: [ "CMD", "pg_isready", "-q", "-d", "{YOUR_DATABASE_NAME}", "-U", "{YOUR_DATABASE_USERNAME}" ]

For me, I had to use the literal username and db name for it to work:

test: [ "CMD", "pg_isready", "-q", "-d", "postgres", "-U", "postgres" ]

got it from here.

-q stands for --quiet; it just mutes the log output.

@NgatiaFrankline

NgatiaFrankline commented Jan 27, 2024

I had this error when I upgraded to the postgres 16 container.
It was coming from a healthcheck:

    healthcheck:
      test: /usr/bin/pg_isready || exit 1
      interval: 5s
      timeout: 10s
      retries: 120

It appears that the /usr/bin/pg_isready script uses the PGUSER environment variable, which I did not specify (it defaults to root when PGUSER is empty); instead, I was using POSTGRES_USER. So one has to specify both:

    environment:
      BUILD_ENV: docker
      POSTGRES_USER: postgres
      PGUSER: postgres

Further reading: https://stackoverflow.com/questions/60193781/postgres-with-docker-compose-gives-fatal-role-root-does-not-exist-error

TheBreaken added a commit to netlogix/docker that referenced this issue Feb 5, 2024
At the moment I get the error FATAL: role "root" does not exist after starting the database container. To fix that we need to define the postgres user.

The fix is from here peter-evans/docker-compose-healthcheck#16 (comment)

Related: CRM-677