Easy-to-use SFTP (SSH File Transfer Protocol) server with OpenSSH. This is an automated build linked to the Debian and Alpine repositories.
- Define users as command arguments, via STDIN, or mounted in /etc/sftp-users.conf (syntax: user:pass[:e][:uid[:gid[:dir1[,dir2]...]]] ...).
- Set the UID/GID manually for your users if you want them to make changes to your mounted volumes with permissions matching your host filesystem.
- Add directory names at the end if you want to create them and/or set user ownership. Perfect when you just want a fast way to upload something without mounting any directories, or when you want to make sure a directory is owned by a user (chown -R).
- Mount volumes in the user's home directory. Not supported together with the s3fs addition.
- The users are chrooted to their home directory, so you must mount the volumes in separate directories inside the user's home directory (/home/user/mounted-directory).
- s3fs is currently only supported with a single user and directory. Adding mounts for multiple users should be simple, but was not part of my original use case; for now, the last user specified wins.
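To illustrate the user-spec syntax above, here is a minimal bash sketch of how one spec string decomposes into its fields (the user name, password, UID, GID, and directories are made up for the example):

```shell
#!/bin/bash
# Hypothetical spec: user alice, password secret, uid 1001, gid 100,
# and two directories to create in her chrooted home.
spec='alice:secret:1001:100:upload,incoming'

# Split on ':' the way the syntax user:pass[:e][:uid[:gid[:dirs]]] defines the fields.
IFS=':' read -r user pass uid gid dirs <<< "$spec"

echo "user=$user uid=$uid gid=$gid dirs=$dirs"
# prints: user=alice uid=1001 gid=100 dirs=upload,incoming
```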
docker run \
    -e S3_BUCKET_NAME=yourbucket[:/OPTIONAL_SUBDIR] \
    -e AWSACCESSKEYID=your_aws_access_key_id \
    -e AWSSECRETACCESSKEY=your_aws_secret_key \
    --security-opt apparmor:unconfined \
    --cap-add mknod --cap-add sys_admin \
    --device=/dev/fuse \
    -p 21:22 -d chessracer/sftp-s3fs \
    testuser:pass:::sync_to_s3
User "testuser" with password "pass" can log in with sftp and upload files to /home/testuser/sync_to_s3 inside the container. Files uploaded this way are synced to the S3 bucket named in S3_BUCKET_NAME. The provided AWSACCESSKEYID must be associated with a role that has access permissions for that bucket.
Run the provided docker-compose.yml file, providing values in the environment for AWSACCESSKEYID, AWSSECRETACCESSKEY, S3_BUCKET_NAME, USERNAME, and PASSWORD, e.g.:
AWSACCESSKEYID=your_aws_access_key_id AWSSECRETACCESSKEY=your_aws_secret_key S3_BUCKET_NAME=your_bucket_name USERNAME=testuser PASSWORD=pass docker-compose up -d
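For reference, a sketch of what such a compose file might look like, assuming the same image, port mapping, capabilities, and command format as the docker run example above (check the repository's docker-compose.yml for the authoritative version):

```yaml
# Sketch only -- see the provided docker-compose.yml in the repository.
version: '3'
services:
  sftp:
    image: chessracer/sftp-s3fs
    ports:
      - "21:22"
    environment:
      - AWSACCESSKEYID
      - AWSSECRETACCESSKEY
      - S3_BUCKET_NAME
    security_opt:
      - apparmor:unconfined
    cap_add:
      - mknod
      - sys_admin
    devices:
      - /dev/fuse
    command: "${USERNAME}:${PASSWORD}:::sync_to_s3"
```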
sftp -P 21 testuser@172.17.0.1:sync_to_s3
sftp> put somefile
sftp> ls
somefile
Store users in a config file. NOTE: multiple users are not yet supported with s3fs (last-in wins for the s3fs mount).
docker run \
-v /host/users.conf:/etc/sftp-users.conf:ro \
-v /host/share:/home/foo/share \
-v /host/documents:/home/foo/documents \
-v /host/http:/home/bar/http \
-p 2222:22 -d atmoz/sftp
/host/users.conf:
foo:123:1001
bar:abc:1002
Add :e behind the password to mark it as encrypted. Use single quotes if using a terminal.
docker run \
-v /host/share:/home/foo/share \
-p 2222:22 -d atmoz/sftp \
'foo:$1$0G2g0GSt$ewU0t6GXG15.0hWoOX8X9.:e:1001'
Tip: you can use atmoz/makepasswd to generate encrypted passwords:
echo -n "your-password" | docker run -i --rm atmoz/makepasswd --crypt-md5 --clearfrom=-
Mount all public keys in the user's .ssh/keys/ directory. All keys are automatically appended to .ssh/authorized_keys.
docker run \
-v /host/id_rsa.pub:/home/foo/.ssh/keys/id_rsa.pub:ro \
-v /host/id_other.pub:/home/foo/.ssh/keys/id_other.pub:ro \
-v /host/share:/home/foo/share \
-p 2222:22 -d atmoz/sftp \
foo::1001
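With the public keys mounted as above, clients can then authenticate with the matching private key instead of a password; for example (assuming /host/id_rsa is the private half of the mounted id_rsa.pub):

```
sftp -P 2222 -i /host/id_rsa foo@localhost
```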
Put your programs in /etc/sftp.d/ and they will automatically run when the container starts. See the next section for an example.
If you are using --volumes-from or just want to make a custom directory available in a user's home directory, you can add a script to /etc/sftp.d/ that bind-mounts after the container starts.
#!/bin/bash
# File mounted as: /etc/sftp.d/bindmount.sh
# Just an example (make your own)

# $1 = source dir, $2 = target dir, $3 = optional mount flag (e.g. --read-only)
function bindmount() {
    if [ -d "$1" ]; then
        mkdir -p "$2"
    fi
    mount --bind $3 "$1" "$2"
}

# Remember permissions, you may have to fix them:
# chown -R :users /data/common

bindmount /data/admin-tools /home/admin/tools
bindmount /data/common /home/dave/common
bindmount /data/common /home/peter/common
bindmount /data/docs /home/peter/docs --read-only
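Calling mount inside the container requires elevated privileges (the CAP_SYS_ADMIN capability). A sketch of a run command that mounts the script above, with hypothetical host paths and user names:

```
docker run \
    --cap-add SYS_ADMIN \
    -v /host/bindmount.sh:/etc/sftp.d/bindmount.sh:ro \
    -v /host/data:/data \
    -p 2222:22 -d atmoz/sftp \
    admin:somepass:1001
```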