Error: column dag.last_parsed_time does not exist

After deploying an image, I see the following exception when navigating to the deployment webserver:

Ooops!
Something bad has happened.
Please consider letting us know by creating a bug report using GitHub.

Python version: 3.7.10
Airflow version: 2.0.2
Node: heliocentric-exploration-2937-webserver-5cb9fdb9d6-vn5gw

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1277, in _execute_context
    cursor, statement, parameters, context
  File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 608, in do_execute
    cursor.execute(statement, parameters)
psycopg2.errors.UndefinedColumn: column dag.last_parsed_time does not exist
LINE 1: ...AS dag_is_subdag, dag.is_active AS dag_is_active, dag.last_p...
                                                             ^

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2447, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1952, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1821, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/local/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
    raise value
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1950, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1936, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/usr/local/lib/python3.7/site-packages/airflow/www/auth.py", line 34, in decorated
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/airflow/www/views.py", line 497, in index
    filter_dag_ids = current_app.appbuilder.sm.get_accessible_dag_ids(g.user)
  File "/usr/local/lib/python3.7/site-packages/airflow/www/security.py", line 273, in get_accessible_dag_ids
    return {dag.dag_id for dag in accessible_dags}
  File "/usr/local/lib/python3.7/site-packages/sqlalchemy/orm/query.py", line 3535, in __iter__
    return self._execute_and_instances(context)
  File "/usr/local/lib/python3.7/site-packages/sqlalchemy/orm/query.py", line 3560, in _execute_and_instances
    result = conn.execute(querycontext.statement, self._params)
  File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1011, in execute
    return meth(self, multiparams, params)
  File "/usr/local/lib/python3.7/site-packages/sqlalchemy/sql/elements.py", line 298, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1130, in _execute_clauseelement
    distilled_params,
  File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1317, in _execute_context
    e, statement, parameters, cursor, context
  File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1511, in _handle_dbapi_exception
    sqlalchemy_exception, with_traceback=exc_info[2], from_=e
  File "/usr/local/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
    raise exception
  File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1277, in _execute_context
    cursor, statement, parameters, context
  File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 608, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedColumn) column dag.last_parsed_time does not exist
LINE 1: ...AS dag_is_subdag, dag.is_active AS dag_is_active, dag.last_p...
                                                             ^

[SQL: SELECT dag.dag_id AS dag_dag_id, dag.root_dag_id AS dag_root_dag_id, dag.is_paused AS dag_is_paused, dag.is_subdag AS dag_is_subdag, dag.is_active AS dag_is_active, dag.last_parsed_time AS dag_last_parsed_time, dag.last_pickled AS dag_last_pickled, dag.last_expired AS dag_last_expired, dag.scheduler_lock AS dag_scheduler_lock, dag.pickle_id AS dag_pickle_id, dag.fileloc AS dag_fileloc, dag.owners AS dag_owners, dag.description AS dag_description, dag.default_view AS dag_default_view, dag.schedule_interval AS dag_schedule_interval, dag.concurrency AS dag_concurrency, dag.has_task_concurrency_limits AS dag_has_task_concurrency_limits, dag.next_dagrun AS dag_next_dagrun, dag.next_dagrun_create_after AS dag_next_dagrun_create_after
FROM dag]
(Background on this error at: )

The issue looks like an incompatibility between the deployment's Airflow version and my image's, but I have confirmed that my image uses 2.0.0 (pulled from Quay) and that the deployment's Airflow version is also 2.0.0.
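For reference, one quick way to double-check which Astronomer Certified image a project builds from is to read the Dockerfile's FROM line. A minimal sketch; the sample Dockerfile written here stands in for the project's real one:

```shell
# Create a sample Dockerfile standing in for the real project file,
# then extract the image tag from its FROM line.
printf 'FROM quay.io/astronomer/ap-airflow:2.0.0-buster-onbuild\n' > Dockerfile

image_tag=$(sed -n 's/^FROM quay.io\/astronomer\/ap-airflow:\(.*\)$/\1/p' Dockerfile)
echo "image tag: $image_tag"   # image tag: 2.0.0-buster-onbuild
```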

I also tried deploying to a 2.0.2 deployment, but there I get an email telling me the versions are incompatible and that I first need to upgrade (downgrade?) the deployment:

Your recent attempt to upgrade Analytics Service 2.0 (Airflow 2.0.2) to Airflow 2.0.0 failed. To upgrade your Deployment on Astronomer to a new version of Apache Airflow, you must:

  1. Initialize the Airflow Upgrade via the Deployment Settings page of the Astronomer UI or via the CLI ($ astro deployment airflow upgrade), THEN
  2. Change the Astronomer Certified Image in your Dockerfile to match your new, desired version of Apache Airflow
  3. Deploy changes to Astronomer ($ astro deploy)

Please complete the steps above before deploying again to Astronomer. Until you take action, your Airflow Deployment will continue to run 2.0.2. For guidelines on how to cancel this upgrade and other information, refer to our [Airflow Versioning]

Hey @cnunezdb - thank you for reaching out, and I’m sorry you’re having trouble here. From the looks of your error, the problem here has to do with a migrations job that typically handles the DB migration when you upgrade Airflow versions.

A few things:

  • Are you running on Astronomer Cloud, or Astronomer Enterprise (self-hosted)?
  • If you’re on Astronomer Cloud, can you tell me what your deployment “release name” is? Should be something like: astro-galactic-5678
  • Once you change your Dockerfile to a 2.0.2 image and deploy your change, you unfortunately CANNOT downgrade back to 2.0.0.
  • The Airflow version in the Astro UI must match the Airflow version that corresponds to the Airflow image tag in your Dockerfile

I’d suggest the following:

  1. Go to the Astronomer UI, and select Upgrade > Airflow 2.0.2
  2. Then, go to your Dockerfile and change the FROM statement to: FROM quay.io/astronomer/ap-airflow:2.0.2-buster-onbuild
  3. Then, deploy your Dockerfile change to Astronomer with $ astro deploy

That should do the trick. If you already changed your Dockerfile to 2.0.2 but haven’t done step 1, just make sure to do step 1 and then skip to step 3 and deploy again.
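The three steps above can be sketched as shell commands. Everything here is illustrative: the astro commands are shown as comments because they require a logged-in CLI, the Dockerfile written below stands in for the project's real one, and the sed pattern assumes the 2.0.0 onbuild image from earlier in the thread:

```shell
# 1. Initialize the upgrade (Astronomer UI: Deployment Settings > Upgrade,
#    or via the CLI):
#      astro deployment airflow upgrade

# Sample Dockerfile standing in for the project's real one:
printf 'FROM quay.io/astronomer/ap-airflow:2.0.0-buster-onbuild\n' > Dockerfile

# 2. Point the FROM line at the new Astronomer Certified image:
sed -i.bak 's|ap-airflow:2\.0\.0-buster-onbuild|ap-airflow:2.0.2-buster-onbuild|' Dockerfile
grep '^FROM' Dockerfile

# 3. Deploy the change:
#      astro deploy
```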

Want to let me know if this works? We know this is a confusing process, and we actually just revised this email to be clearer. For more information, you can also refer to our “Upgrade Airflow on Astronomer” doc.

I was able to find the answer. The base image I was running was 2.0.0 (quay.io/astronomer/ap-airflow:2.0.0-buster-onbuild), while the Airflow version actually running was 2.0.2, as reported by astro dev run version. This was the state right after astro dev init.

After changing the base image to quay.io/astronomer/ap-airflow:2.0.2-buster-onbuild and targeting a deployment running Airflow 2.0.2, the deployment worked.
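A hedged way to catch this kind of skew before deploying is to compare the tag in the Dockerfile against what Airflow itself reports. The values below are stubbed to reproduce the situation in this thread; in a real project they would come from grep '^FROM' Dockerfile and from astro dev run version:

```shell
# Stubbed inputs reproducing the mismatch described above:
dockerfile_version="2.0.0"   # would come from the Dockerfile's FROM tag
runtime_version="2.0.2"      # would come from: astro dev run version

if [ "$dockerfile_version" != "$runtime_version" ]; then
  echo "version skew: image says $dockerfile_version, Airflow runs $runtime_version"
fi
```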


@cnunezdb Hm, so you just ran $ astro dev init, your base image was 2.0.0 and your Airflow Deployment running on Astronomer was running 2.0.2?

We’ll be making 2.0.2 the default base image in the Astro CLI here soon, since that’s the case at deployment creation via the Astro UI/CLI.

In any case, glad you figured it out and that you’re up and running on Airflow’s latest :slightly_smiling_face:

Also, when I run astro dev init -v 2.0.2, the base image I get is FROM quay.io/astronomer/ap-airflow:1.10.5-11-buster-onbuild. I guess that must be because 2.0.2 is not a recognized version string, but in that case the init tool should perhaps warn or fail. Also, what are the valid version strings for that command?

Hm, so you just ran $ astro dev init , your base image was 2.0.0 and your Airflow Deployment running on Astronomer was running 2.0.2?

Correct.

Hi @cnunezdb - hm, that’s certainly not expected behavior, so apologies for that. What version of the Astro CLI are you running?

If you’re on 0.25.0 latest, astro dev init -v --2.0.2 should actually give you
FROM quay.io/astronomer/ap-airflow:2.0.0-buster-onbuild. Here’s some output from my machine:

$ astro dev init -v --2.0.2
Initializing Airflow project
Not connected to Astronomer, pulling Airflow development files from 2.0.0-buster-onbuild
Initialized empty astronomer project in…

It’s better than 1.10.5-11, but that’s still a bug on our side :slight_smile: I’ll make sure to get that reported. Thank you again for your patience, and glad you’re all set for now!