
proposal: support google cloud logging service plugin #5474

Closed
shuaijinchao opened this issue Nov 10, 2021 · 1 comment · Fixed by #5538
Labels: enhancement (New feature or request)
Issue description

Hi, Community

Currently, Alibaba Cloud is the only cloud service provider to which APISIX can ship logs for storage and analysis.

As one of the world's largest cloud service providers, Google has a very large user base.

Therefore, I propose that APISIX support synchronizing logs to Google Cloud Logging in the form of a plugin. This would both meet users' diverse log storage and analysis needs and enrich the APISIX ecosystem.

Environment

  • apisix version (cmd: apisix version):
  • OS (cmd: uname -a):
  • OpenResty / Nginx version (cmd: nginx -V or openresty -V):
  • etcd version, if have (cmd: run curl http://127.0.0.1:9090/v1/server_info to get the info from server-info API):
  • apisix-dashboard version, if have:
  • the plugin runner version, if the issue is about a plugin runner (cmd: depended on the kind of runner):
  • luarocks version, if the issue is about installation (cmd: luarocks --version):

shuaijinchao commented Nov 11, 2021

Name

The plugin name is: google-cloud-logging

Configuration

{
    "inactive_timeout":10,
    "max_retry_count":0,
    "buffer_duration":60,
    "resource":{
        "type":"global"
    },
    "log_id":"syslog",
    "auth_config":{
        "private_key":"-----BEGIN RSA PRIVATE KEY-----KEY-----END RSA PRIVATE KEY-----",
        "token_uri":"http://127.0.0.1:1980/google/logging/token",
        "scopes":[
            "https://apisix.apache.org/logs:admin"
        ],
        "entries_uri":"http://127.0.0.1:1980/google/logging/entries",
        "project_id":"apisix"
    },
    "retry_delay":1,
    "batch_max_size":1
}
  • auth_config: the Google service account config (semi-optional; exactly one of auth_config or auth_file must be configured)
  • auth_config.private_key: the private key of the Google service account
  • auth_config.project_id: the project ID of the Google service account
  • auth_config.token_uri: the token URI of the Google service account
  • auth_config.scopes: the access scopes of the Google service account, refer to: https://developers.google.com/identity/protocols/oauth2/scopes#logging
  • auth_config.entries_uri: the Google Cloud Logging service API endpoint
  • auth_file: path to the Google service account JSON file (semi-optional; exactly one of auth_config or auth_file must be configured)
  • resource: the Google monitored resource, refer to: https://cloud.google.com/logging/docs/reference/v2/rest/v2/MonitoredResource
  • log_id: the Google Cloud Logging log ID, refer to: https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry
  • max_retry_count: max number of retries before removing a batch from the processing pipeline
  • retry_delay: number of seconds to delay before retrying when processing fails
  • buffer_duration: max age in seconds of the oldest entry in a batch before the batch must be processed
  • inactive_timeout: max age in seconds before the buffer is flushed when inactive
  • batch_max_size: max size of each batch
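The "semi-optional" rule above (exactly one of auth_config or auth_file) can be illustrated with a small validation sketch. This is hypothetical Python for illustration only; the actual plugin would enforce this constraint through its Lua JSON schema in APISIX.

```python
def validate_auth(conf: dict) -> None:
    """Enforce the semi-optional rule: exactly one of auth_config
    or auth_file must be present (illustrative sketch only)."""
    has_config = "auth_config" in conf
    has_file = "auth_file" in conf
    if has_config and has_file:
        raise ValueError("auth_config and auth_file are mutually exclusive")
    if not (has_config or has_file):
        raise ValueError("one of auth_config or auth_file is required")

# Either form alone is accepted:
validate_auth({"auth_config": {"project_id": "apisix"}, "log_id": "syslog"})
validate_auth({"auth_file": "/path/to/service-account.json"})
```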

Details

  1. Collect and assemble the request information in the APISIX log phase.
  2. On the first interaction with the Google Cloud Logging service, request an access token; once obtained, the token is cached in the worker node's memory.
  3. After a valid token is obtained, put the request information into the batch processing queue. When the queue reaches the batch_max_size or batch_timeout threshold, the queued data is synchronized to the Google Cloud Logging service.
  4. Before each request is sent, check whether the token is about to expire, and refresh it if so.
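The four steps above could be sketched roughly as follows. This is illustrative Python, not the plugin's actual Lua implementation; the class and callback names and the 60-second refresh margin are assumptions for the sketch.

```python
import time

class GoogleLoggingSender:
    """Sketch of the flow: cache a token per worker, queue entries,
    flush when batch_max_size is reached, and refresh the token
    shortly before it expires."""

    def __init__(self, fetch_token, send_entries, batch_max_size=1):
        self.fetch_token = fetch_token    # requests token_uri (step 2)
        self.send_entries = send_entries  # posts a batch to entries_uri (step 3)
        self.batch_max_size = batch_max_size
        self.token = None
        self.expires_at = 0.0
        self.queue = []

    def _ensure_token(self):
        # Steps 2 and 4: fetch on first use; refresh when the cached
        # token is within 60 seconds of expiring (assumed margin).
        if self.token is None or time.time() > self.expires_at - 60:
            self.token, ttl = self.fetch_token()
            self.expires_at = time.time() + ttl

    def log(self, entry):
        # Step 1: entry is the request info assembled in the log phase.
        self._ensure_token()
        self.queue.append(entry)
        # Step 3: flush once the batch threshold is reached.
        if len(self.queue) >= self.batch_max_size:
            batch, self.queue = self.queue, []
            self.send_entries(self.token, batch)
```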
