Tasks fail with no logs on Kubernetes Executor

When the worker pod can’t start up at all, no logs are generated for the task. The most common cause is that the pod is stuck in the Pending state because of resource contention for memory, CPU, or ephemeral storage. If you have kubectl access to the cluster, you can list the pending pods and run kubectl describe against one of them; the events in the output usually indicate why the pod is stuck in Pending.
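
For example, assuming the worker pods run in a namespace named airflow (substitute whatever namespace your deployment actually uses), something like the following will surface the pending pods and the scheduler events behind them:

```bash
# List worker pods that are stuck in the Pending phase
kubectl get pods -n airflow --field-selector=status.phase=Pending

# Describe one of them; the Events section at the end of the output usually
# explains why it cannot be scheduled (e.g. insufficient CPU, memory,
# or ephemeral storage on the available nodes)
kubectl describe pod <pending-pod-name> -n airflow
```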

The order of events is:

  1. Airflow puts the task into the queued state.
  2. The Kubernetes executor sends a request via the Kubernetes API to create the worker pod.
  3. The pod enters Pending and waits for resources to become available.
  4. worker_pods_pending_timeout seconds pass (300 by default).
  5. Airflow declares the task to have timed out.
  6. The task is marked as failed, and no logs are available because the pod never ran.
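
The timeout in step 4 is an Airflow configuration option. A minimal sketch of raising it via an environment variable, assuming a recent Airflow release where the option lives in the kubernetes_executor section (older 2.x releases used the kubernetes section instead); the 600-second value is only an example:

```bash
# Equivalent to setting worker_pods_pending_timeout = 600 under
# [kubernetes_executor] in airflow.cfg (the default is 300 seconds)
export AIRFLOW__KUBERNETES_EXECUTOR__WORKER_PODS_PENDING_TIMEOUT=600
```

Raising the timeout only gives the scheduler more time to place the pod; if pods stay pending because of genuine resource contention, the underlying memory, CPU, or ephemeral storage requests still need to be addressed.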