Airflow local executor node failure

We are planning to run Airflow in LocalExecutor mode using Postgres. I understand that with this mode parallelism is possible, but all of the processes (scheduler and workers) run on one server. How do I recover from a node failure? Is there any option here, or am I forced to use the Celery executor with RabbitMQ, etc.? We don't have a lot of DAGs, so I don't need the Celery executor for scale. But these DAGs are critical enough that they need to be brought back up within a few minutes of a node failure. What are my options using the LocalExecutor?

Thanks…

Hi @satfactor - welcome to our community!

You are correct in your assumption that fault-tolerant execution requires a distributed worker setup using Celery and a queueing system like Redis or RabbitMQ. Additionally, you can easily spin up lightweight, fault-tolerant Airflow environments on Astronomer Cloud. Because of the way we've architected our platform on Kubernetes, it's quite easy to run a highly available Airflow environment on the LocalExecutor; we take care of maintaining availability and protecting against node failures under the hood.
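If you do decide to move to Celery later, the switch is mostly configuration. Here is a minimal sketch of the relevant `airflow.cfg` sections, assuming Redis as the broker; the hostnames and credentials are illustrative placeholders, and note that in recent Airflow versions the metadata connection key lives under `[database]` rather than `[core]`:

```ini
; airflow.cfg — minimal CeleryExecutor sketch (hostnames/credentials are
; illustrative assumptions, not drop-in production values)
[core]
executor = CeleryExecutor
; shared metadata DB — in newer Airflow this key is [database] sql_alchemy_conn
sql_alchemy_conn = postgresql+psycopg2://airflow:airflow@postgres-host:5432/airflow

[celery]
; message broker the scheduler uses to hand tasks to workers
broker_url = redis://redis-host:6379/0
; where workers record task results
result_backend = db+postgresql://airflow:airflow@postgres-host:5432/airflow
```

With this in place you run `airflow celery worker` on two or more machines; if one worker node dies, the others keep pulling tasks from the queue, which is exactly the fault-tolerance property the LocalExecutor's single-node design can't give you on its own.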