GCP support for stateless, HA installations #2768
Something like this. I've built/tested this locally, against Terraform-managed GCS buckets. I'd prefer not to have any code in there whatsoever relating to the provisioning of GCS buckets, but am following the pattern set in the S3 code.
Thanks for the interest @joshdurbin and for the work to add a Google Cloud Storage backend. For us to productise this, it would be good to have a complete HA GCP story.
I'm going to leave this open to gather more feedback from the community and team.
@benarent Agreed. We're looking to house all of Teleport's data in stateless, GCP-native resources. In fact, I'm presently working to get event and non-event storage off the ground in Firestore and should have something workable by the end of the week or very early next. Should I update this issue to reflect a broader GCP story?
Yes @joshdurbin, that would be great. Let me know how you get on.
I was not familiar with Firestore specifically, but looking at it, it seems like it supports server-side encryption by default, which is very nice. @benarent can you put this on the next product planning meeting so we can discuss this with @kontsevoy?
@joshdurbin, a question for you: who is going to use this feature in production (is this a hack project, production company policy, etc.)?
@klizhentas I'm in the process of building several clusters in our environment(s) at BC. We'd prefer to keep our data in Google's systems/services, hence this effort, and we'd also prefer not to maintain additional state storage systems (etcd, NFS, or even NFS via GCS/FUSE). Ideally this work would get brought into core Teleport; until then, we're running custom builds. @benarent Current state of things: I should have cluster state working tomorrow, I'm a bit behind schedule. Currently implemented are:
That work is all in the BC fork.
@klizhentas @benarent Alright, this work is close to done. I've been testing the changes locally against GCS and Firestore for the last few days. The data in the screenshots is junk data for a local cluster, no secrets. Indexes are still created manually; I'll add support for ensuring their creation in the next day or two.

Events are stored in Firestore with the document ID equal to the session ID plus event type. Backend documents cannot use the raw key as their document ID, since key values violate Firestore document ID requirements (they contain forward slashes, periods, etc.). Instead, the document IDs are SHA1 hashes of the key, which keeps fetches fast from a known point rather than requiring a query.

Both the Firestore event "handler" and the Firestore backend have ticker-based expired-entry removal: the event "handler" evaluates the timestamp on the record, while the backend evaluates the "expires" property, if set. The Firestore backend uses a document snapshot query stream to consume document changes for the collection, allowing all auth servers' watchers to receive updates.

I'll issue a PR once I drop in the programmatic index creation and clean up some error handling.
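The key-to-document-ID mapping described above can be sketched roughly like this; the function name and sample key are illustrative, not the fork's actual code:

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// docID derives a Firestore-safe document ID from a raw backend key.
// Raw keys such as "/tokens/node-join.token" contain forward slashes
// and periods, which Firestore document IDs forbid, so the key is
// SHA1-hashed and hex-encoded. The hash is deterministic, so a known
// key can still be fetched directly by ID instead of via a query.
func docID(key []byte) string {
	sum := sha1.Sum(key)
	return hex.EncodeToString(sum[:])
}

func main() {
	fmt.Println(docID([]byte("/tokens/node-join.token")))
}
```

The trade-off is that the stored ID is opaque, which is presumably why a later comment mentions making keys human-readable again.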
@kontsevoy has approved the feature in the product, so I'll be helping you along the way. Once you are ready to review, let's do a Zoom review kick-off session.
@joshdurbin can you please post the configuration of Teleport with the new changes? We would like to review it as well.
Woot. What I'm using right now, outside GCP's environment, on my laptop is:

```yaml
storage:
  type: firestore
  collection_name: cluster-data
  credentials_path: /var/lib/teleport/gcs_creds
  project_id: bc-jdurbin
  audit_events_uri: 'firestore://events?projectID=bc-jdurbin&credentialsPath=/var/lib/teleport/gcs_creds'
  audit_sessions_uri: 'gs://teleport-session-storage-2?credentialsPath=/var/lib/teleport/gcs_creds&projectID=bc-jdurbin'
```

In GCP you'd likely make use of attached compute service accounts and forego credentials files.
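With an attached compute service account, the credential fields would presumably just drop out, with the Google client libraries falling back to Application Default Credentials. A hedged sketch, with illustrative bucket and project names:

```yaml
storage:
  type: firestore
  collection_name: cluster-data
  project_id: example-project
  audit_events_uri: 'firestore://events?projectID=example-project'
  audit_sessions_uri: 'gs://example-session-bucket?projectID=example-project'
```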
Full config being used at the moment is:

```yaml
#
# Sample Teleport configuration file.
#
teleport:
  nodename: C02WG09CHTDH
  data_dir: /var/lib/teleport
  pid_file: /var/run/teleport.pid
  auth_token: cluster-join-token
  auth_servers:
    - 0.0.0.0:3025
  connection_limits:
    max_connections: 15000
    max_users: 250
  log:
    output: stderr
    severity: DEBUG
  ca_pin: ""
  storage:
    type: firestore
    collection_name: cluster-data
    credentials_path: /var/lib/teleport/gcs_creds
    project_id: bc-jdurbin
    audit_events_uri: 'firestore://events?projectID=bc-jdurbin&credentialsPath=/var/lib/teleport/gcs_creds'
    audit_sessions_uri: 'gs://teleport-session-storage-2?credentialsPath=/var/lib/teleport/gcs_creds&projectID=bc-jdurbin'
auth_service:
  enabled: "yes"
  listen_addr: 0.0.0.0:3025
  tokens:
    - proxy,node:cluster-join-token
  session_recording: ""
  client_idle_timeout: 0s
  disconnect_expired_cert: false
  keep_alive_count_max: 0
ssh_service:
  enabled: "yes"
proxy_service:
  enabled: "yes"
  listen_addr: 0.0.0.0:3023
  web_listen_addr: 0.0.0.0:3080
  tunnel_listen_addr: 0.0.0.0:3024
  https_key_file: /var/lib/teleport/webproxy_key.pem
  https_cert_file: /var/lib/teleport/webproxy_cert.pem
```
Updates are in after running through the PR. Keys are now human-readable.
Tests for the Firestore backend and Firestore events sub-system are configured by default to use the gcloud emulator, but can easily be modified to hit live Firestore via creds or GCE SA bindings. GCS uses FakeGCS for tests, essentially an in-code emulator.
I'm just reviewing this open issue, since it looks like this will make 4.2, and wanted to get ahead with some documentation. Will we cut a GCP-specific release, since I see it's not enabled by default? d346f2b#diff-beec5651c04d7af5273733679b64c00c
Just to indicate an early preference, I would prefer us not to need separate artefacts for this. Our downloads page is already getting crowded. |
To be honest, I followed what was there for S3/DynamoDB; however, I never had to supply the flags to get S3/DynamoDB or the GCS/Firestore support. I never looked into why that was the case and duplicated the docs from S3/DynamoDB. I'm not sure I follow with regards to downloads: do you currently split things out on your downloads page? It doesn't look like it: https://gravitational.com/teleport/download/. Same with the enterprise build; things are not broken out for S3/DynamoDB support presently.
@webvictim @benarent I think having Teleport support both GCP and AWS out of the box is OK, as it does not affect the binary too much. FIPS is a bit of a different case, because it has to be recompiled with BoringCrypto.
I've added a
I think most of this work is already done? |
Howdy, I'm looking to add Google GCS support to Teleport for recording storage. I have an ask to see this built out for a proposed rollout within several Google projects/environments, and, based on scrollback in the Gravitational Community Slack forum, it seems a few others have asked for similar functionality. I have reviewed the implementation for S3, which is straightforward enough; essentially:
- create `lib/events/gcssessions` with `lib/events/gcssessions/gcshandler.go`, mirroring the functionality in `lib/events/s3sessions/s3handler.go`
- add `SchemeGCS` to `constants.go`
- hook into `initUploadHandler` in `lib/service/service.go`
- extend `TestParseSessionsURI` in `lib/utils/utils_test.go`
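The kind of URI parsing `TestParseSessionsURI` exercises can be sketched with only the standard library; the helper name below is illustrative, not Teleport's actual function:

```go
package main

import (
	"fmt"
	"net/url"
)

// parseSessionsURI splits an audit sessions URI such as
// "gs://bucket?projectID=..." into its scheme, bucket (host), and
// query parameters. Dispatching on the scheme ("s3" vs "gs") is how
// a handler (S3 or GCS) would be selected at startup.
func parseSessionsURI(raw string) (scheme, bucket string, params url.Values, err error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", "", nil, err
	}
	return u.Scheme, u.Host, u.Query(), nil
}

func main() {
	scheme, bucket, params, err := parseSessionsURI("gs://teleport-session-storage?projectID=example-project")
	if err != nil {
		panic(err)
	}
	fmt.Println(scheme, bucket, params.Get("projectID"))
}
```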
Adding GCS as a storage backend would require an additional dependency, which is currently at version 0.40.0: