KubernetesPodOperator, Docker and Node

This might be helpful to others, so I'm asking here …

If I want to run the basic Node.js file below on Astronomer, what are some best practices?

// importData.js
import { dataImporter } from "my-npm-module"

dataImporter.run()

In the KubernetesPodOperator I have to point to an image:

from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator  # Airflow 1.10-era import path

KubernetesPodOperator(
    is_delete_operator_pod=False,
    namespace="astronomer-cloud-****",
    image="ubuntu:16.04",
    cmds=["bash", "-cx"],
    arguments=["echo 10 && pwd"],
    labels={"foo": "bar"},
    name="airflow-test-pod",
    in_cluster=True,
    task_id="my_js_code",
    get_logs=True,
)

Would I have to bundle my Node.js file into a pre-built Docker image, or is there a way for this to happen on the fly?

It seems a bit cumbersome to build a separate Dockerized project containing this Node.js file, bake it into a custom Ubuntu/Node image, push that to a registry, and then reference it in my KubernetesPodOperator.

Am I missing something here? Could this be handled better within the scope of my Astronomer Airflow project?

Do you need to run the KubernetesPodOperator? Could you just add nodejs to packages.txt and then use the BashOperator to run your Node code?
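
For example, a minimal sketch of that approach, assuming nodejs is listed in packages.txt and importData.js is shipped in the project's include/ folder (the task_id and the path below are just placeholders, based on Astronomer's default AIRFLOW_HOME of /usr/local/airflow):

from airflow.operators.bash_operator import BashOperator  # Airflow 1.10 path; in Airflow 2 use airflow.operators.bash

# Runs the Node script that is bundled with the Airflow project image itself,
# so no separate container image or registry is needed.
run_import_data = BashOperator(
    task_id="run_import_data",
    bash_command="node /usr/local/airflow/include/importData.js",
)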

@AndrewHarmon, hmmm. I thought the KubernetesPodOperator was a better fit for this, but I could try the BashOperator.

Still curious about the need to push the Docker image to a registry in this case, though. Is that always a requirement for the KubernetesPodOperator with custom images? Or is there any chance the Dockerfiles can be specified within the Airflow project?

The KubernetesPodOperator has to pull an image from a registry to run it, just like docker run. If you are using some kind of CI/CD process, you could incorporate a build step that builds your image and pushes it to a Docker registry before your code is deployed to Astronomer.
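
A rough sketch of what that could look like; the registry, image name, and tag are placeholders, not anything Astronomer prescribes:

# In the CI/CD pipeline, before deploying to Astronomer (shell steps shown as comments):
#   docker build -t my-registry.example.com/data-import:1.0.0 .
#   docker push my-registry.example.com/data-import:1.0.0
#
# The DAG then references the pushed image:
from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator  # Airflow 1.10-era import path

import_data = KubernetesPodOperator(
    namespace="astronomer-cloud-****",
    image="my-registry.example.com/data-import:1.0.0",  # placeholder image reference
    cmds=["node"],
    arguments=["importData.js"],  # assumes the script is baked into the image's working directory
    name="data-import-pod",
    task_id="my_js_code",
    in_cluster=True,
    get_logs=True,
)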