Task instances exiting without failure and retrying even though retries are disabled

Hi,

I have a DAG using TaskFlow on Airflow 2.0.0 that has been failing and retrying a task without producing an error. The task, a Python operator, triggers an external report service for a list of users and waits for each report to finish before triggering the next:

[2022-07-12 18:10:52,847] {creator_reporting.py:92} INFO - Requested Report
[2022-07-12 18:10:52,848] {creator_reporting.py:104} INFO - Awaiting Report 
[2022-07-12 18:10:53,867] {creator_reporting.py:121} INFO - sleep - report enqueued
[2022-07-12 18:11:04,603] {creator_reporting.py:124} INFO - Report ready for 
[2022-07-12 18:11:04,604] {creator_reporting.py:142} INFO - Report finished in state succeeded
[2022-07-12 18:11:04,604] {creator_reporting.py:149} INFO - appending row
[2022-07-12 18:11:06,291] {creator_reporting.py:92} INFO - Requested Report
[2022-07-12 18:11:06,292] {creator_reporting.py:104} INFO - Awaiting Report 
[2022-07-12 18:11:07,459] {creator_reporting.py:121} INFO - sleep - report enqueued
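
For context, here is a simplified sketch of what the task does; the report-service helpers are placeholders rather than the real client code, and the comments map roughly to the log lines above:

import time

from airflow.decorators import task


def request_report(user_id):
    # Placeholder: triggers a report for one user in the external service.
    ...


def get_report_state(report_id):
    # Placeholder: returns the report state, e.g. "enqueued", "running", "succeeded".
    ...


@task(retries=0)  # retries are disabled on the task itself
def run_reports_task(user_ids):
    rows = []
    for user_id in user_ids:
        report_id = request_report(user_id)        # "Requested Report"
        state = get_report_state(report_id)        # "Awaiting Report"
        while state in ("enqueued", "running"):    # "sleep - report enqueued"
            time.sleep(10)
            state = get_report_state(report_id)
        # "Report ready for ..." / "Report finished in state succeeded"
        rows.append((user_id, state))              # "appending row"
    return rows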

However, the task will unexpectedly initiate a retry, even though I have retries disabled:

[2022-07-12 18:11:19,798] {taskinstance.py:1043} INFO - 
--------------------------------------------------------------------------------
[2022-07-12 18:11:19,799] {taskinstance.py:1044} INFO - Starting attempt 2 of 1
[2022-07-12 18:11:19,799] {taskinstance.py:1045} INFO - 
[2022-07-12 18:11:19,823] {taskinstance.py:1064} INFO - Executing <Task(_PythonDecoratedOperator): run_reports_task> on 2022-07-12T18:08:14.962868+00:00
[2022-07-12 18:11:19,866] {standard_task_runner.py:52} INFO - Started process 209 to run task
[2022-07-12 18:11:19,874] {standard_task_runner.py:76} INFO - Running:
[2022-07-12 18:11:19,881] {standard_task_runner.py:77} INFO - Job 97445: Subtask run_reports_task
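
Retries are disabled along these lines (simplified; the DAG name, schedule, and user list are placeholders, and run_reports_task is the task sketched above):

from datetime import datetime

from airflow.decorators import dag


@dag(
    schedule_interval="@daily",              # placeholder schedule
    start_date=datetime(2022, 7, 1),
    catchup=False,
    default_args={"retries": 0},             # retries also disabled via default_args
)
def creator_reporting():
    # run_reports_task is the @task(retries=0) defined in the sketch above
    run_reports_task(["user_a", "user_b"])   # placeholder user list


reporting_dag = creator_reporting()

So the "attempt 2 of 1" above happens even though the task should only be allowed a single try.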

I’ve noticed that the exits coincide with a SIGTERM being sent to GPID 180 (this excerpt is from a different run, hence the different timestamps), but I’m not sure what’s causing it:

07/11 16:14:39[2022-07-11 20:14:39,579] {process_utils.py:100} INFO - Sending Signals.SIGTERM to GPID 180
07/11 16:14:39[2022-07-11 20:14:39,579] {scheduler_job.py:1308} INFO - Exited execute loop
07/11 16:14:39[2022-07-11 20:14:39,576] {process_utils.py:66} INFO - Process psutil.Process(pid=180, status='terminated', exitcode=0, started='20:09:46') (180) terminated with exit code 0
07/11 16:14:39[2022-07-11 20:14:39,242] {process_utils.py:100} INFO - Sending Signals.SIGTERM to GPID 180

I thought scheduler_zombie_task_threshold might be killing the job, but the job appears to fail before that threshold is reached. I also haven’t been able to reproduce this issue locally; it only happens on our Astronomer deployment. Any recommendations on what could be the cause?
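
For reference, the effective threshold can be checked from inside the scheduler container with something like this (a minimal sketch; I believe the option can also be overridden via the AIRFLOW__SCHEDULER__SCHEDULER_ZOMBIE_TASK_THRESHOLD environment variable):

from airflow.configuration import conf

# Effective zombie threshold in seconds; the Airflow 2.0 default is 300,
# and the SIGTERM above arrives well before that much time has elapsed.
print(conf.getint("scheduler", "scheduler_zombie_task_threshold"))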