We recently upgraded the module version we are using from 0.32.0 to 0.34.2. However, after the upgrade, the runners that were spun up stopped registering themselves with GitHub.
Overview of the Issue
Upon closer inspection of the logs, we noticed that the binaries syncer lambda was erroring out with the following error:
{
"stack": "Runtime.HandlerNotFound: index.handler is undefined or not exported\n at Object.module.exports.load (/var/runtime/UserFunction.js:246:11)\n at Object.<anonymous> (/var/runtime/index.js:43:30)\n at Module._compile (internal/modules/cjs/loader.js:1085:14)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:1114:10)\n at Module.load (internal/modules/cjs/loader.js:950:32)\n at Function.Module._load (internal/modules/cjs/loader.js:790:12)\n at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:76:12)\n at internal/main/run_main_module.js:17:47",
"message": "index.handler is undefined or not exported"
}
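For context, Runtime.HandlerNotFound means the zip deployed for the syncer lambda does not contain an index.js that exports a function named handler. As a rough illustration (not the module's actual code), the Node.js runtime expects an entry file shaped like this:

// index.js - entry file the runtime loads when the handler setting is "index.handler".
// The error above indicates this export was missing from the deployed zip,
// e.g. because the wrong zip was referenced or its file layout was off.
exports.handler = async (event) => {
  // the real syncer logic would live here; this is only a placeholder
  return { statusCode: 200 };
};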
Since our S3 bucket containing the GitHub Actions runner binary was cleared prior to the upgrade, no new binaries were put in place. The EC2 instances (rightfully) errored out while attempting to download the binary from S3, as the file did not exist:
Downloading the GH Action runner from s3 bucket s3://<REDACTED>/actions-runner-linux.tar.gz
fatal error
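A quick way to confirm the tarball really is missing from the bucket (the bucket name is a placeholder; the key matches the download log above) is a head request against the object, for example with the Node.js AWS SDK:

// check-binary.js - verifies the runner tarball the instances try to download exists.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

s3.headObject({ Bucket: 'REDACTED-bucket', Key: 'actions-runner-linux.tar.gz' })
  .promise()
  .then(() => console.log('runner binary is present'))
  .catch((err) => {
    if (err.code === 'NotFound') {
      console.log('runner binary is missing, so the syncer never uploaded it');
    } else {
      throw err;
    }
  });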
Motivation for or Use Case
The regular binaries syncer lambda is used to download the GitHub Actions runner release.
Forest Version(s)
Module version 0.34.2
Reproduce the Error
Deploy the infrastructure fresh without a pre-existing binary in the S3 bucket.
Workaround
Currently there are two workarounds:
Manually download the runner binary and upload it to the S3 bucket (see the sketch below)
Downgrade the module
We downgraded the module back to 0.32.0, which fixed the problem, and the binary was re-uploaded to the S3 bucket.
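For the first workaround, something along these lines puts a manually downloaded runner tarball at the key the start script expects (the bucket name and local file path are assumptions, and the tarball itself still has to be fetched from the actions/runner releases page first):

// upload-binary.js - workaround 1: put a manually downloaded runner tarball
// into the distribution bucket under the key the instances download.
const AWS = require('aws-sdk');
const fs = require('fs');

const s3 = new AWS.S3();

s3.upload({
  Bucket: 'REDACTED-bucket',                                   // placeholder bucket name
  Key: 'actions-runner-linux.tar.gz',                          // key from the download log above
  Body: fs.createReadStream('./actions-runner-linux.tar.gz'),  // local copy of the release tarball
})
  .promise()
  .then(() => console.log('runner binary uploaded'))
  .catch(console.error);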
Sorry for the hassle. I re-downloaded the zip file from the 0.34.2 release and compared its SHA to the one I downloaded previously, and they are different. I'm fairly sure this was a mistake on my part: I somehow misnamed and referenced the wrong zip.
I did attempt to re-deploy the 0.34.2 version of the module multiple times before opening the issue, but it didn't occur to me to re-download the zips to make sure I hadn't mixed up the lambda zip files.
Now that I've done that, it is working as expected.
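For anyone hitting something similar, the check that caught this was simply hashing the zip that had been deployed against a freshly downloaded copy of the release asset; the file paths below are examples only:

// compare-zips.js - compare the SHA-256 of the deployed lambda zip with a fresh download.
const crypto = require('crypto');
const fs = require('fs');

function sha256(path) {
  return crypto.createHash('sha256').update(fs.readFileSync(path)).digest('hex');
}

console.log('deployed zip  :', sha256('./deployed/runner-binaries-syncer.zip')); // example path
console.log('fresh download:', sha256('./fresh/runner-binaries-syncer.zip'));    // example path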