'NoneType' object has no attribute 'create_dagrun'

When hitting the “Trigger DAG” button I get an error message:

AttributeError: 'NoneType' object has no attribute 'create_dagrun'

Any ideas what this could be?
(It works for the example_dag but not for the DAGs I’ve added. What attribute am I missing? Perhaps it’s because I have a single task in my DAG?)

I suspect you’re running locally from WSL and haven’t mounted your disk before running astro airflow start (I hit this problem all the time). That means your scheduler doesn’t have access to read your DAGs.

sudo mount --bind /mnt/c /c
cd /c …

Thank you for the input but I am running on astronomer remotely.
Maybe it’s related to this:

When I click “view code” I can’t even see the code of the DAG:
[Errno 2] No such file or directory: '/usr/local/airflow/dags/'

Strangely, the DAG works fine … but no code view and no “trigger dag” …

@AndrewHarmon, any ideas?


Hard to say without seeing the scheduler logs. I would debug this locally by using astro airflow start; you can view the logs of the scheduler container it spins up.

Thanks @AndrewHarmon. I ran it locally, and there I can view the source code via “View code”. But the very same DAG, once deployed, gives me errors. Could the path to the DAGs have been corrupted somehow? I’m running a very basic installation created with astro airflow init.

What is your Astro CLI version (run astro version)? And what does your Dockerfile look like? astro airflow start should spin up containers that are basically identical to the ones running in the cloud.

Astro CLI Version: 0.7.5-2

FROM astronomerinc/ap-airflow:0.9.0-1.10.3-onbuild

It would be great if you could take a look at our cluster …

Your webserver is failing. Can you try upping the AU allocated to it?

That worked, thanks! A bit cryptic, those error messages … :wink:

Also, you may want to set is_delete_operator_pod=True on the KubernetesPodOperator so old pods don’t hang around after they finish.
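For reference, a minimal sketch of what that looks like in a DAG file. All names here (task id, pod name, namespace, image) are hypothetical, and the actual operator instantiation is left in a comment since it requires a full Airflow 1.10.x install with the Kubernetes extras:

```python
# Hypothetical keyword arguments for a KubernetesPodOperator that
# cleans up its worker pod once the task completes.
pod_kwargs = {
    "task_id": "my_task",            # hypothetical task id
    "name": "my-task-pod",           # name of the pod in the cluster
    "namespace": "default",
    "image": "python:3.7-slim",
    "is_delete_operator_pod": True,  # delete the pod when it finishes
}

# In a real DAG file (Airflow 1.10.x contrib import path):
# from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator
# task = KubernetesPodOperator(dag=dag, **pod_kwargs)
```

Without that flag, completed pods stay in the cluster in a Completed/Error state until something garbage-collects them.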

Thanks! Done! But it takes a bit of time to see these changes, even if I hit “Refresh DAG”.

Yeah, it may take a minute for the new webserver service to come up, and then it still has to do its Airflow work (parsing DAGs, etc.). It can take a minute or two.


Hey @lapidus, I know it’s been quite a while, but I just wanted to check in here and let you (and anyone else interested) know a bit more about what we’re doing to fix these webserver issues in core Airflow.

In Airflow versions that predate 1.10.10, the webserver had to parse all of the DAG files, which meant you needed to up your webserver resources as you added more DAGs. In 1.10.10+, one of our colleagues here at Astronomer (@kaxil) introduced a DAG serialization feature that allows the webserver to grab the DAG state directly from the underlying database, which significantly decreases the load on the webserver and allows for more optimal resource consumption.

As soon as we have a 1.10.10 image that’s compatible with Astronomer Cloud/Enterprise, we’ll update our docs to show how to enable DAG serialization and link to them from here.
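For anyone reading this later, DAG serialization in 1.10.10 is opt-in via a core config flag. A hedged sketch using the environment-variable form (the option names below match the 1.10.10 config reference; verify them against the docs for your version):

```shell
# Enable DAG serialization so the webserver reads DAGs from the DB
# instead of parsing the DAG files itself.
export AIRFLOW__CORE__STORE_SERIALIZED_DAGS=True

# Optional: minimum interval (seconds) between updates of a DAG's
# serialized representation in the database.
export AIRFLOW__CORE__MIN_SERIALIZED_DAG_UPDATE_INTERVAL=30
```

The equivalent airflow.cfg settings live under the [core] section (store_serialized_dags, min_serialized_dag_update_interval).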