If you’re running the Celery Executor, tasks can run on any worker at any given time. For that reason, there’s no guarantee that a downstream task will find a file written to local disk by a prior task. Additionally, every time a worker restarts (i.e. on every astro airflow deploy), all files written to its local file system are removed.
For these reasons, we’d recommend always writing files to an external object store (AWS S3, GCP GCS, Azure Blob Storage, etc.) rather than to the worker’s local file system.
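For example, here’s a minimal sketch of handing data between two tasks via S3 instead of the worker’s local disk. It assumes Airflow 2.x with the Amazon provider package installed; the bucket name, object key, and aws_default connection ID are placeholders you’d replace with your own:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.amazon.aws.hooks.s3 import S3Hook

BUCKET = "my-intermediate-data"  # hypothetical bucket reachable from your AWS connection
KEY = "handoff/output.csv"       # hypothetical object key

def write_file():
    # Upload the result to S3 instead of the worker's local disk,
    # so the downstream task can read it from any worker.
    S3Hook(aws_conn_id="aws_default").load_string(
        string_data="col_a,col_b\n1,2\n",
        key=KEY,
        bucket_name=BUCKET,
        replace=True,
    )

def read_file():
    # Download the same object; this works no matter which worker
    # picks up the task, and survives worker restarts.
    data = S3Hook(aws_conn_id="aws_default").read_key(key=KEY, bucket_name=BUCKET)
    print(data)

with DAG(
    dag_id="s3_file_handoff_example",
    start_date=datetime(2023, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    write = PythonOperator(task_id="write_file", python_callable=write_file)
    read = PythonOperator(task_id="read_file", python_callable=read_file)
    write >> read
```

Because both tasks read and write through S3, it doesn’t matter whether they land on the same worker or whether the workers restart between runs.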
Related forum posts: Setting on_failure and on_success callbacks, and another on writing files to workers.