Is Astronomer IO down?

Astronomer Cloud:
Our DAG tasks are queued but never picked up. Previous runs succeeded. Here is the message we are getting:
Exception:
Executor reports task instance finished (failed) although the task says its queued. (Info: None) Was the task killed externally?
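
In case it helps anyone diagnose the same symptom: here is a rough sketch of how the stuck tasks can be confirmed from the metadata database using Airflow 2.0's ORM helpers. The 15-minute cutoff is an arbitrary choice for illustration, not anything Airflow prescribes.

```python
# Sketch: list task instances sitting in "queued" longer than a cutoff.
# Uses Airflow 2.0 ORM helpers; the 15-minute threshold is arbitrary.
from datetime import timedelta

from airflow.models import TaskInstance
from airflow.utils import timezone
from airflow.utils.session import create_session
from airflow.utils.state import State

cutoff = timezone.utcnow() - timedelta(minutes=15)

with create_session() as session:
    stuck = (
        session.query(TaskInstance)
        .filter(TaskInstance.state == State.QUEUED)
        .filter(TaskInstance.queued_dttm < cutoff)
        .all()
    )
    for ti in stuck:
        print(ti.dag_id, ti.task_id, ti.execution_date, ti.queued_dttm)
```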

This seems related to the Slack outage today; it is back up and running now. For what it's worth, our DAGs are not tied to Slack at all, so maybe Astronomer.io has some dependency on Slack.

Down again. No details in the logs except for "Executor reports task instance finished (failed) although the task says its queued. (Info: None) Was the task killed externally?" The execution environment is Kubernetes, but the DAG tasks are queued under Celery and never get picked up. This looks like an Astronomer bug.
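
Since the tasks queue under Celery, one way to narrow this down is to ask the workers directly whether the queued messages ever reach them. A sketch, assuming the Celery app exposed by Airflow 2.0's CeleryExecutor module; if every call comes back empty while the UI still shows queued tasks, the broker or workers are the likely culprit:

```python
# Sketch: inspect the Celery workers behind Airflow's CeleryExecutor.
# The import path is the Airflow 2.0 location of the Celery app object.
from airflow.executors.celery_executor import app

inspector = app.control.inspect()
print("active:", inspector.active())      # tasks currently executing
print("reserved:", inspector.reserved())  # tasks prefetched, not yet started
print("stats:", inspector.stats())        # per-worker stats; None = no reply
```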

When I click on the task, I get this error:
Ooops!
Something bad has happened.
Please consider letting us know by creating a bug report using GitHub.

Python version: 3.7.9
Airflow version: 2.0.0+astro.1
Node: traditional-protostar-xxxxx

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2447, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1952, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1821, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/local/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
    raise value
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1950, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1936, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/usr/local/lib/python3.7/site-packages/airflow/www/auth.py", line 34, in decorated
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/airflow/www/decorators.py", line 60, in wrapper
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/airflow/www/views.py", line 1193, in task
    attr = getattr(ti, attr_name)
  File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 817, in previous_start_date_success
    return self.get_previous_start_date(state=State.SUCCESS)
  File "/usr/local/lib/python3.7/site-packages/airflow/utils/session.py", line 65, in wrapper
    return func(*args, session=session, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 801, in get_previous_start_date
    return prev_ti and pendulum.instance(prev_ti.start_date)
  File "/usr/local/lib/python3.7/site-packages/pendulum/__init__.py", line 174, in instance
    raise ValueError("instance() only accepts datetime objects.")
ValueError: instance() only accepts datetime objects.
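
Reading the traceback, the "Ooops" page appears to come from rendering the previous_start_date_success attribute on the Task Instance Details view: get_previous_start_date finds a previous task instance, but pendulum.instance() only accepts real datetime objects, so it blows up if that instance's start_date is None (e.g. because the run never actually started). A minimal sketch of that reading; the None value is my assumption from the traceback, not something confirmed in the logs:

```python
import pendulum

# Assumption: prev_ti exists but never started, so prev_ti.start_date is None.
start_date = None

try:
    pendulum.instance(start_date)  # what the Task Instance Details view does
except ValueError as exc:
    print(exc)  # instance() only accepts datetime objects.

# A guarded variant converts only when a datetime is actually present:
safe = pendulum.instance(start_date) if start_date is not None else None
print(safe)  # None
```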

Hi @ven-govindarajan! Thanks for reporting this, and sorry to hear you ran into trouble testing Airflow 2.0 on Astronomer.

In terms of Astronomer Cloud, we are not aware of any out-of-the-ordinary downtime or outages for any of our customers on January 4th, and we were not dependent on or impacted by Slack's outage. If there is ever a platform-wide interruption, you can always check status.astronomer.io.

For your Airflow Deployment in particular, there could be a number of reasons why your tasks weren’t executed as expected. Have you continued to see those errors more recently? Can you share more detail on the DAGs you’re running?

Given the variety of possible root causes, I'd ultimately recommend reaching out to Astronomer Support: we have a team of Airflow + Astro experts who are ready to help get you up and running.

Thanks, @ven-govindarajan! Excited to have you onboard.