FATAL: role "root" does not exist #16
Comments
Same issue here.

That's because you're running `pg_isready` without passing the user and database.

I put that in and got the same problem.

Same thing here.
We need to pass the user and database arguments. For example, this is the command in my compose file, and it works for me:

```yaml
test: ["CMD", "pg_isready", "-U", "user", "-d", "kong_db"]
```
It still fails for me even with the user and database arguments. But thanks for the fast reply.
After struggling for a while, I found a test command that worked for me: I had to use the literal username and database name instead of environment variables. Got it from here.
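For reference, a minimal sketch of what such a healthcheck looks like with the username and database name written out literally (`myuser` and `mydb` are placeholders, not names from this thread):

```yaml
healthcheck:
  # Literal names sidestep any env-var interpolation issues in compose.
  test: ["CMD-SHELL", "pg_isready -U myuser -d mydb"]
  interval: 10s
  timeout: 5s
  retries: 5
```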
When I check the first form, I get the error.

When I check the second form, there is no error.
The `FATAL: role "root" does not exist` error was fixed by #17.
My docker-compose file looks like:

```yaml
version: "2.2"
services:
  results:
    image: postgres:12
    env_file:
      - config/server/base.env
      - config/server/${ENV}.env
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
```

and I get the error. The catch is probably the fact that I'm setting the environment variables using env files. I finally fixed it using this test command:

```yaml
test: [ "CMD-SHELL", "pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}"]
```
I can confirm this works!
I confirm as well. Thanks so much.
Thanks, @sp1thas!

```yaml
test: [ "CMD-SHELL", "pg_isready -d $POSTGRES_DB -U $POSTGRES_USER"]
```

We can work with a single `$` as well.
We want to deploy our changes automatically after each merge into the `develop` branch. To do that, we need to set up a basic Continuous Deployment pipeline that automates the deployment process on every push to the `develop` branch. Steps to do:

Docker compose

Add a new docker compose setup for the prod environment inside the `envs/prod` directory with the following services: application, database, reverse proxy. The configuration of each service is described in detail below.

App

- add `gunicorn` to `requirements-prod.txt`; it will be used as a WSGI server for our flask app (see [flask docs](https://flask.palletsprojects.com/en/2.2.x/deploying/gunicorn/))
- add a configuration file for gunicorn inside `envs/prod/gunicorn/config.py`
- add a `Dockerfile` which builds our app with `requirements.txt` and `requirements-prod.txt`
- add `envs/prod/Dockerfile.dockerignore` to ignore cache directories and the `.env` file with sensitive production data
- add `envs/prod/.gitignore` and ignore `envs/prod/.env` to prevent it from being accidentally committed
- add an `env_file` section to the app service in the docker compose file to provide sensitive data to our application via the `.env` file
- add a `depends_on` section to make our app depend on the database service
- add an `entrypoint` section which should first apply migrations and then run gunicorn with our config

Yes, we decided to use the simplest db migration model, where migrations are applied in the app container before the application starts. It's not a good practice for big and distributed infrastructures that also care about zero-downtime deployment and data consistency between migrations. You can read more about the correct approach to doing migrations in a production environment [here](https://pythonspeed.com/articles/schema-migrations-server-startup/). For now, the simplest approach we chose is sufficient for our needs.
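The gunicorn configuration file mentioned above could be sketched roughly as follows. This is a minimal illustration, not the project's actual `envs/prod/gunicorn/config.py`; the bind address and worker count are assumptions:

```python
# envs/prod/gunicorn/config.py -- minimal sketch; all values here are
# assumptions for illustration, not the project's real configuration.
import multiprocessing

# Bind inside the docker network only; nginx will proxy requests here.
bind = "0.0.0.0:8000"

# Rule of thumb from the gunicorn docs: (2 * CPU cores) + 1 workers.
workers = multiprocessing.cpu_count() * 2 + 1

# Log to stdout/stderr so `docker logs` captures everything.
accesslog = "-"
errorlog = "-"
```

Gunicorn would then be started in the container entrypoint with something like `gunicorn -c envs/prod/gunicorn/config.py "src:create_app()"` (the app factory path is an assumption).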
Also, we didn't expose any ports because we want our app to run behind a reverse proxy. All external requests will come to the reverse proxy and then be redirected to our app, so the app will be connected only to the reverse proxy server inside the docker network.

Database

- update `src/config.py` to use the environment variables `POSTGRES_HOST`, `POSTGRES_PORT`, `POSTGRES_USER` and `POSTGRES_DB`; since the postgres image requires these variables, it is more convenient if our application works with the same variables too, so we can provide one `.env` file to both the application and the database
- add a `healthcheck` to the postgres container (see [this](https://github.com/peter-evans/docker-compose-healthcheck#waiting-for-postgresql-to-be-healthy)); we also need to provide `POSTGRES_USER` and `POSTGRES_DB` to the healthcheck script to avoid the `FATAL: role "root" does not exist` issue (see [this](peter-evans/docker-compose-healthcheck#16))
- add a volume to keep our production db data

Reverse proxy

- add an `nginx` service to the docker compose file; we will use it as a reverse proxy (see [flask docs](https://flask.palletsprojects.com/en/2.2.x/deploying/nginx/))
- add `envs/prod/nginx/nginx.conf` with the simple configuration from the [flask docs](https://flask.palletsprojects.com/en/2.2.x/deploying/nginx/#configuration)
- provide the nginx config to the nginx container via volumes (`./nginx/nginx.conf:/etc/nginx/nginx.conf:ro`)
- expose port 80 to make it externally accessible
- add reverse proxy configuration to flask (see [flask docs](https://flask.palletsprojects.com/en/2.2.x/deploying/proxy_fix/)); to be able to change this configuration, we will make these proxy options configurable via env variables and add them to our `src/config.py`

VPS

To host our application we need to find a VPS and configure it. We decided to deploy the application to our production host via simple commands run through ssh.
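The `src/config.py` change for the database could look roughly like this. It is a sketch only: the defaults, the `POSTGRES_PASSWORD` variable, and the connection-URL format are assumptions, not taken from the repository:

```python
# src/config.py -- sketch of reading the postgres-image-compatible
# variables; the defaults and POSTGRES_PASSWORD are assumptions.
import os

POSTGRES_HOST = os.environ.get("POSTGRES_HOST", "localhost")
POSTGRES_PORT = os.environ.get("POSTGRES_PORT", "5432")
POSTGRES_USER = os.environ.get("POSTGRES_USER", "postgres")
POSTGRES_DB = os.environ.get("POSTGRES_DB", "postgres")
POSTGRES_PASSWORD = os.environ.get("POSTGRES_PASSWORD", "")

# One connection URL that both the app and migrations can use;
# the same variables feed the postgres image and its healthcheck.
DATABASE_URL = (
    f"postgresql://{POSTGRES_USER}:{POSTGRES_PASSWORD}"
    f"@{POSTGRES_HOST}:{POSTGRES_PORT}/{POSTGRES_DB}"
)
```

Because the app reads exactly the variables the postgres image expects, a single `.env` file can be passed to both services via `env_file`.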
From our GitHub workflow we want to execute some commands via ssh to do these steps: stop the running containers, fetch the latest repository changes from GitHub, and run the containers with the latest changes (with the `--build` option). To be able to do that, we need to set up our VPS host:

- install [docker-desktop](https://docs.docker.com/desktop/install/ubuntu/#install-docker-desktop)
- install [git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) and [configure](https://git-scm.com/book/en/v2/Getting-Started-First-Time-Git-Setup#_your_identity) a username and email for it
- generate a new ssh key for our production host and add it to GitHub (see [docs](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent))
- clone our repository to the `/forum123` directory
- create a `/forum123/envs/prod/.env` file and provide all the configuration options listed in our `src/config.py`

After installing docker we had a problem with `docker.sock`, and [this](https://stackoverflow.com/questions/43569781/unable-to-start-docker-service-with-error-failed-to-start-docker-service-unit/43576628#43576628) SO answer was really helpful. After trying to run our docker services on the VPS, we got an error saying that port 80 was already occupied. It was the apache server running on that port, so we had to disable it (see [this](https://www.cyberciti.biz/faq/star-stop-restart-apache2-webserver/)).
Deploy workflow

- add `PROD_SSH_HOST`, `PROD_SSH_USERNAME` and `PROD_SSH_PASSWORD` GitHub secrets for this repo
- add a `.github/workflows/deploy.yml` workflow which should run on every push to the `develop` branch
- add a checkout action to our workflow
- add the [ssh action](https://github.com/appleboy/ssh-action) which will use our secrets to connect via ssh and execute commands on the production host machine

We wanted to use the environment variables `PROD_APP_ROOT` and `DEPLOY_BRANCH_NAME` in our script run through ssh, but faced problems that seemed to be unresolved at that moment (see [issue](appleboy/ssh-action#58)). Also, we haven't found a way to put all the commands we want to run via ssh into a separate `.sh` file and use it in this action, since these commands run on the host machine while the `.sh` file would live in the CI environment running the deploy workflow. One way to do that could be to first move our `.sh` deployment script to the host and then run something like `source ./deploy.sh` via ssh. But we didn't want to deal with an `scp` action in our workflow and decided to do it in a simpler way: just write all the commands in our `ssh` github action as is.
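Put together, the workflow described above might look something like the following sketch. The action versions and the exact deployment commands are assumptions; the secrets, branch name, and `/forum123` path come from the text:

```yaml
# .github/workflows/deploy.yml -- sketch; action versions and the
# inline script are assumptions, not the repository's actual workflow.
name: deploy
on:
  push:
    branches: [develop]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.PROD_SSH_HOST }}
          username: ${{ secrets.PROD_SSH_USERNAME }}
          password: ${{ secrets.PROD_SSH_PASSWORD }}
          # Commands written inline "as is", per the note above.
          script: |
            cd /forum123/envs/prod
            docker compose down
            git pull origin develop
            docker compose up --build -d
```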
Also had this issue! Thanks all for bringing the solution. It seems related to formatting: the old form worked with Postgres 13.7 but not 14.6. I previously had:

```yaml
test: ['CMD-SHELL', 'psql', '-h', 'localhost', '-U', '$$POSTGRES_USER', '-c', 'select 1', '-d', '$$POSTGRES_DB']
```

And I switched to:

```yaml
test: ['CMD-SHELL', 'psql -h localhost -U $${POSTGRES_USER} -c "select 1" -d $${POSTGRES_DB}']
```

Note I see everyone else using `pg_isready` rather than `psql`.
Someone please update the docs with this fix!
It's working with `$$` indeed, but also with a simple `$`:

```yaml
test: [ "CMD-SHELL", "pg_isready -d ${POSTGRES_DB} -U ${POSTGRES_USER}"]
```
That doesn't work for me either; the healthcheck just continues to run and spams the logs with the error. The issue for me exists only in the postgres image.
OK, a solution for anybody who needs the healthcheck for dependencies only:

```yaml
tmpfs:
  - /run
healthcheck:
  test: [ "CMD-SHELL", "[ -r /var/run/postgresql/ready ] || ( pg_isready && touch /var/run/postgresql/ready)" ]
```

It will only run until the database is ready and won't spam the logs.
I had this error when I upgraded to the postgres 16 container.
It appears that the `/usr/bin/pg_isready` script uses the `PGUSER` environment variable, which I did not specify (it defaults to `root` when `PGUSER` is empty); I was setting `POSTGRES_USER` instead. So you have to specify both.
Further reading: https://stackoverflow.com/questions/60193781/postgres-with-docker-compose-gives-fatal-role-root-does-not-exist-error
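Based on that comment, a sketch of a compose environment block that sets both variables (`myuser` and `mydb` are placeholders):

```yaml
environment:
  POSTGRES_USER: myuser   # read by the image's init scripts
  POSTGRES_DB: mydb
  PGUSER: myuser          # read by pg_isready/psql as the client default user
healthcheck:
  # With PGUSER set, a bare pg_isready no longer falls back to "root".
  test: ["CMD-SHELL", "pg_isready"]
```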
At the moment I get the error `FATAL: role "root" does not exist` after starting the database container. To fix that we need to define the postgres user. The fix is from peter-evans/docker-compose-healthcheck#16 (comment). Related: CRM-677