Little late to the party, but you could add a user to the default group, which creates the directory. Our Docker image extends the puckel/docker-airflow image; this was way before Airflow introduced production Docker image support in 1.10.10. When your docker-compose stack is up, you can run `docker-compose exec SERVICENAME bash`, check which group a given directory belongs to, and then add that group to your user's permissions in docker-compose.

This is especially true when running containers in OpenShift. Please see the official OpenShift guidelines: by default, OpenShift Enterprise runs containers using an arbitrarily assigned user ID. This provides additional security against processes escaping the container due to a container engine vulnerability and thereby achieving escalated permissions on the host node. For an image to support running as an arbitrary user, directories and files that may be written to by processes in the image should be owned by the root group and be read/writable by that group. Files to be executed should also have group execute permissions. Adding the following to your Dockerfile sets the directory and file permissions so that users in the root group can access them in the built image:

```
RUN chgrp -R 0 /some/directory && \
    chmod -R g+rwX /some/directory
```

Check what Airflow image your docker-compose.yaml is using and extend that image; in my case it's `apache/airflow:2.3.2`. In the same folder as your docker-compose file, create a new Airflow Docker image with your Python requirements installed. The Dockerfile and its entrypoint come from the 2.0 refactoring (master branch) and have been backported to the 1.10 branch; I propose to fix this in the 1.10 branch.

To start our Airflow webserver and scheduler, we have to run the commands below.

Airflow Webserver:

```
nohup airflow webserver -p 8080 > airflowwebserver.out &
```

Airflow Scheduler:

```
nohup airflow scheduler > airflowscheduler.out &
```
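As a concrete illustration of the root-group permission fix above, here is a minimal sketch run against a scratch directory rather than an image layer; the directory layout and the `example_dag.py` file are placeholders of my own, not paths from any real Airflow image:

```shell
# Minimal sketch of the root-group permission fix from the text,
# applied to a scratch directory instead of a Dockerfile RUN step.
dir=$(mktemp -d)
mkdir -p "$dir/dags"
touch "$dir/dags/example_dag.py"   # placeholder file, not a real DAG

# Hand ownership to the root group (GID 0) and grant it read/write.
# The capital X adds execute only on directories (and on files that
# are already executable), so plain files stay non-executable.
chgrp -R 0 "$dir" 2>/dev/null || true   # needs privileges; harmless if it fails locally
chmod -R g+rwX "$dir"

stat -c '%A' "$dir/dags"                 # group bits become rwx
stat -c '%A' "$dir/dags/example_dag.py"  # group bits become rw-
```

This is why the guideline uses `g+rwX` rather than `g+rwx`: an arbitrary UID in the root group can traverse and write the directories without every file in the image becoming executable.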
Check that the database container is up and running and that `airflow initdb` was executed.

By default, your Airflow config contains a `dags_folder` line; this tells Airflow to load DAGs from that folder, and in your case that path references a location inside the container.

It is common good practice in Docker/Kubernetes to create a non-root user to run the container's main process, of course, but it is also good practice to make that user a member of the root group (which grants no special rights by itself and is not a security issue).

There is a pretty detailed guide on how to achieve what you are looking for in the Airflow docs here. Depending on your requirements, this may be as easy as extending the original image with a `FROM` directive in a new Dockerfile, or you may need to customize the image to suit your needs.
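The "check which group owns the directory, then add your user to it" step described earlier can be sketched as a small shell check. The `/tmp` target and the `user:` compose hint in the comment are illustrative assumptions; inside a running stack you would wrap this in `docker-compose exec SERVICENAME bash`:

```shell
# Sketch: find which group owns a directory and whether the current
# user is already a member of it. /tmp is a placeholder target; in a
# compose stack, run this via `docker-compose exec SERVICENAME bash`.
target=/tmp
grp=$(stat -c '%G' "$target")
echo "directory $target is owned by group: $grp"

if id -nG | tr ' ' '\n' | grep -qx "$grp"; then
  echo "current user already belongs to group '$grp'"
else
  # e.g. set a matching UID:GID with a `user:` entry in docker-compose
  echo "current user is NOT in '$grp'; grant that group in your compose file"
fi
```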