
Docker image assumes running in AWS infrastructure for S3 storage method #53

Closed
nullrocket opened this issue Sep 16, 2018 · 13 comments

@nullrocket

I don't run the Docker image on EC2, or even on AWS infrastructure, so the following fails:

// src/files/s3/S3Store.ts
// Unconditionally loads credentials from the EC2 instance metadata
// service, which is only reachable from inside AWS.
AWS.config.credentials = new AWS.EC2MetadataCredentials({
    httpOptions: { timeout: 5000 },
    maxRetries: 10,
});

Commenting it out and passing AWS credentials as environment variables to docker run works, but a flag on the s3 config in config.js would probably be better, unless I'm missing some other way to override it.
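For reference, a minimal sketch of that workaround (a hypothetical patch, not the shipped code): replace the hardcoded block above so the SDK reads the standard AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY variables. AWS.EnvironmentCredentials is part of aws-sdk v2.

// src/files/s3/S3Store.ts (hypothetical patch)
// Read credentials from the AWS_* environment variables instead of
// unconditionally querying the EC2 instance metadata service.
AWS.config.credentials = new AWS.EnvironmentCredentials('AWS');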

@MarshallOfSound
Contributor

but a flag on the s3 config in config.js would probably be better

Yeah, the best way to handle this would probably be a credentials object on the s3 object that would override using the EC2 metadata service.
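A rough sketch of what that could look like in config.js; the credentials key here is illustrative, not a final API:

  s3: {
    // Hypothetical: if `credentials` is present, S3Store would use it
    // instead of constructing EC2MetadataCredentials.
    credentials: {
      accessKeyId: process.env.AWS_ACCESS_KEY_ID,
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
    },
    bucketName: process.env.S3_BUCKET,
    cloudfront: null,
  },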

@nullrocket
Author

I can create a pull request for this.

@MarshallOfSound
Contributor

@nullrocket That's awesome! To make things go smoothly, please remember to sign our CLA: https://github.com/atlassian/nucleus#contributors

@MarshallOfSound
Contributor

Fixed in #46

@dannycarrera

I don't think this issue is resolved. When not running on AWS EC2, even after adding an endpoint to s3.config.init, the following error is thrown:

{ CredentialsError: Missing credentials in config
nucleus_1  |     at IncomingMessage.<anonymous> (/opt/service/node_modules/aws-sdk/lib/util.js:864:34)
nucleus_1  |     at emitNone (events.js:111:20)
nucleus_1  |     at IncomingMessage.emit (events.js:208:7)
nucleus_1  |     at endReadableNT (_stream_readable.js:1064:12)
nucleus_1  |     at _combinedTickCallback (internal/process/next_tick.js:139:11)
nucleus_1  |     at process._tickDomainCallback (internal/process/next_tick.js:219:9)
nucleus_1  |   message: 'Missing credentials in config',
nucleus_1  |   retryable: false,
nucleus_1  |   time: 2020-02-29T22:48:55.220Z,
nucleus_1  |   code: 'CredentialsError',
nucleus_1  |   originalError: 
nucleus_1  |    { message: 'Could not load credentials from EC2MetadataCredentials',
nucleus_1  |      retryable: false,
nucleus_1  |      time: 2020-02-29T22:48:55.220Z,
nucleus_1  |      code: 'CredentialsError' } }

@damienallen

damienallen commented Mar 5, 2020

Also having the same issue here, trying to connect to DigitalOcean Spaces.

I've tried using the recommended environment variables and a mounted config file (via AWS_SHARED_CREDENTIALS_FILE), to no avail. Does anyone have a working non-AWS setup? Otherwise, it may be wise to reopen this issue.
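One untested possibility for the shared-credentials-file route, assuming config.js can require the bundled aws-sdk and that everything in s3.init is handed to the AWS.S3 constructor: build the provider yourself with AWS.SharedIniFileCredentials, which reads the file named by AWS_SHARED_CREDENTIALS_FILE (falling back to ~/.aws/credentials).

const AWS = require('aws-sdk'); // at the top of config.js

  s3: {
    init: {
      endpoint: process.env.S3_ENDPOINT, // e.g. a DigitalOcean Spaces endpoint
      credentials: new AWS.SharedIniFileCredentials({ profile: 'default' }),
    },
    bucketName: process.env.S3_BUCKET,
    cloudfront: null,
  },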

@dannycarrera

@damienallen I'm interested in using DigitalOcean Spaces too. I didn't want to derail this issue, so I made a new one, #84. I'd be happy to help get support for DigitalOcean Spaces implemented. If you've made progress, would you be interested in sharing? We can continue this conversation on #84.

@masterkain

I deploy on Scaleway using their Object Storage service and can't get this to work. I made sure everything is configured correctly, but it still fails on EC2MetadataCredentials; I even tried forcing the region, to no avail:

  s3: {
    init: {
      endpoint: process.env.S3_ENDPOINT, // The alternate endpoint to reach the S3 instance at
      s3ForcePathStyle: process.env.S3_PATH_STYLE === 'true', // Always use path-style URLs (env vars are strings, so compare explicitly)
      region: process.env.AWS_REGION
    },

    bucketName: process.env.S3_BUCKET, // The name of your S3 bucket

    cloudfront: null
    // cloudfront: { // If you don't have CloudFront set up and just want to use the S3 bucket, set this to null
    //   distributionId: '', // The CloudFront distribution ID, used for invalidating files
    //   publicUrl: '', // Fully qualified URL for the root of the CloudFront proxy for the S3 bucket
    // }
  },
│ Thu, 07 May 2020 00:08:21 GMT nucleus:s3 Deciding to write file (either because overwrite is enabled or the key didn't exist)
│ { Error: connect ENETUNREACH 169.254.169.254:80
│     at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1191:14)
│   message: 'Missing credentials in config',
│   errno: 'ENETUNREACH',
│   code: 'CredentialsError',
│   syscall: 'connect',
│   address: '169.254.169.254',
│   port: 80,
│   time: 2020-05-07T00:11:30.400Z,
│   originalError:
│    { message: 'Could not load credentials from EC2MetadataCredentials',
│      errno: 'ENETUNREACH',
│      code: 'CredentialsError',
│      syscall: 'connect',
│      address: '169.254.169.254',
│      port: 80,
│      time: 2020-05-07T00:11:30.399Z,
│      originalError:
│       { errno: 'ENETUNREACH',
│         code: 'ENETUNREACH',
│         syscall: 'connect',
│         address: '169.254.169.254',
│         port: 80,
│         message: 'connect ENETUNREACH 169.254.169.254:80' } } }

@b-zurg

b-zurg commented May 9, 2020

This might be related: aws/aws-sdk-js#692

I was able to connect to a local Minio server, to test the S3 update interaction, with the following s3 configuration. What basically did it was passing the credentials into the S3 configuration directly. The s3ForcePathStyle: true and signatureVersion: "v4" settings were necessary for connecting to Minio, but consult your object storage provider's documentation to see whether they apply.

  s3: {
    init: {
      endpoint: "http://127.0.0.1:9000",  // local Minio server
      s3ForcePathStyle: true,             // Minio expects path-style URLs
      signatureVersion: "v4",             // Minio requires v4 request signing
      accessKeyId: process.env.AWS_ACCESS_KEY_ID,
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
    },
    bucketName: 'test-update',
    cloudfront: null
  },

@vgribok

vgribok commented May 26, 2020

I am getting the same error when running the app as a Fargate/ECS container, with the AWS ECS task role granted full access to the S3 bucket. In the configuration I have only specified the bucket name and set the file strategy to "s3". An example of configuring the app for a full-AWS setup would be appreciated.
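Purely a sketch and untested, but assuming the bundled aws-sdk is new enough to ship AWS.ECSCredentials and that s3.init is passed through to the AWS.S3 constructor, you could try pointing the SDK at the container credentials endpoint that Fargate task roles expose:

const AWS = require('aws-sdk'); // at the top of config.js

  s3: {
    init: {
      // Fargate publishes task-role credentials at the container credentials
      // endpoint; AWS.ECSCredentials finds it via the
      // AWS_CONTAINER_CREDENTIALS_RELATIVE_URI variable that ECS injects.
      credentials: new AWS.ECSCredentials({ httpOptions: { timeout: 5000 } }),
      region: process.env.AWS_REGION,
    },
    bucketName: process.env.S3_BUCKET,
    cloudfront: null,
  },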

@b-zurg

b-zurg commented May 26, 2020

Nice! I have it successfully running in a very similar fashion (ECS + Fargate + Parameter Store for environment variables), but it took a while to get things right. I'll have a write-up I can share in the coming days.

@markelrod

@b-zurg Did you ever write up the details on how you configured nucleus to run on Fargate?

@brunodasilvalenga

Got it working by setting:

  s3: {
    init: {
      accessKeyId: process.env.AWS_ACCESS_KEY_ID,
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
    },
  },

I'm running the container inside EKS. It would be good if we could use the assumed role, but the bundled SDK version is too old and does not support it.

Thanks for the input @b-zurg
