
feat(logging): log stream provider to cache log stream #3646


Merged

Conversation

@NikaHsn (Member) commented Aug 30, 2023

Issue #, if available:

Description of changes:

  • add caching to log stream provider
  • clean up the sync logic
  • TODO: add log rotation to queued item store

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

@NikaHsn NikaHsn requested a review from a team as a code owner August 30, 2023 00:09
@NikaHsn NikaHsn force-pushed the feat/logging/cloudwatch branch from 56385a6 to e76bebb Compare August 30, 2023 00:17
Comment on lines 203 to 207
if (totalByteSize + size >= _maxLogEventsBatchSize ||
    logEvents.length >= _maxNumberOfLogEventsInBatch ||
    (logEvents.length > 1 &&
        currentLogEvent.timestamp - logEvents.first.timestamp >=
            _maxLogEventsTimeSpanInBatch)) {
Contributor

Please break these out into variables
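
One way to read this suggestion, sketched against the snippet above (the local variable names are illustrative, not from the PR):

// Illustrative only: pull each sub-condition into a named local before combining.
final isBatchSizeExceeded =
    totalByteSize + size >= _maxLogEventsBatchSize;
final isMaxEventCountReached =
    logEvents.length >= _maxNumberOfLogEventsInBatch;
final isTimeSpanExceeded = logEvents.length > 1 &&
    currentLogEvent.timestamp - logEvents.first.timestamp >=
        _maxLogEventsTimeSpanInBatch;

if (isBatchSizeExceeded || isMaxEventCountReached || isTimeSpanExceeded) {
  // Same body as the original `if`: close out the current batch here.
}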

);

verify(
  () => mockCloudWatchLogStreamProvider.logStream,
Contributor

Same question. Why not use the real log stream provider here?

    return _client.putLogEvents(request).result;
  } on Exception {
    rethrow;
  }
}

Stream<_LogBatch> _getLogBatchesToSync() async* {
  final queuedLogs = (await _logStore.getAll()).toList();
Contributor

These can be streamed

Member Author

Do you mean the log store should return a Stream instead of a Future, or do you mean creating a stream from _logStore.getAll()?

Contributor

Yes, _logStore.getAll() can be a Stream; then this method becomes a transformer from batches of log records in the DB to batches that can be synced.
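
A rough sketch of that shape, assuming a hypothetical getAll() that returns a Stream of queued items (the QueuedItem field access and batching thresholds below are stand-ins that mirror names visible elsewhere in the PR, not its actual implementation):

// Hypothetical stream-based variant: the store emits items one at a time
// and this method transforms them into sync-ready batches.
Stream<List<QueuedItem>> _getLogBatchesToSync() async* {
  var batch = <QueuedItem>[];
  var batchByteSize = 0;

  await for (final item in _logStore.getAll()) {
    final itemSize = item.value.length;
    if (batch.isNotEmpty &&
        (batchByteSize + itemSize >= _maxLogEventsBatchSize ||
            batch.length >= _maxNumberOfLogEventsInBatch)) {
      yield batch;
      batch = <QueuedItem>[];
      batchByteSize = 0;
    }
    batch.add(item);
    batchByteSize += itemSize;
  }

  if (batch.isNotEmpty) {
    yield batch;
  }
}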

@override
Future<String> get logStream async {
FutureOr<String> get logStream async {
Contributor

Making this FutureOr is a decent idea, but by marking this async it will always return a Future.

In the cached case, this is the same as:

Future<String> get logStream {
  if (_cachedLogStreams.contains(logStreamName)) {
    return Future.value(logStreamName);
  }
}

Further, if you await the return of a FutureOr it is implicitly wrapped in Future.value and you cannot use the result synchronously.

That is to say, to fully leverage FutureOr, you need to switch on the return and treat it as a sealed class over T and Future<T>.

https://dartpad.dev/?id=dc5c257693d0d454c819ffcf38b626f0
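
To illustrate, a small hypothetical call site that switches on the FutureOr result so a cached value can be used synchronously (the provider type follows the PR's naming; the function itself is made up):

import 'dart:async';

// Hypothetical call site: only go async when the provider actually returns
// a Future; use a cached String synchronously, without an event-loop hop.
void printLogStreamName(CloudWatchLogStreamProvider provider) {
  final FutureOr<String> result = provider.logStream;
  if (result is String) {
    // Cache hit: the value is usable synchronously.
    print('log stream (cached): $result');
  } else {
    // Cache miss: result is promoted to Future<String>; handle it asynchronously.
    result.then((name) => print('log stream (fetched): $name'));
  }
}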

Member Author

Updated to Future.

@NikaHsn NikaHsn force-pushed the feat/logging/cloudwatch branch 4 times, most recently from cf1f3ed to 949a371 Compare August 30, 2023 19:41
}
logger.error('Failed to sync batched logs to CloudWatch', e);
final response = await _sendToCloudWatch(events);
if (response.rejectedLogEventsInfo?.tooNewLogEventStartIndex != null) {
Contributor

There are two other properties to this, expiredLogEventEndIndex and tooOldLogEventEndIndex, which will need to be handled. I'm guessing later iterations will take into account partial completion as well since currently it is an all-or-nothing approach for a batch?

Member Author

Correct, added a TODO.
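
For context, a rough sketch of what that TODO might eventually cover, reacting to all three indices of RejectedLogEventsInfo (the requeue and dropFromStore helpers are hypothetical, and the inclusive/exclusive handling of the end indices would need to be confirmed against the CloudWatch API docs):

// Hypothetical follow-up: handle every field of rejectedLogEventsInfo.
final info = response.rejectedLogEventsInfo;
if (info != null) {
  final tooNewStart = info.tooNewLogEventStartIndex;
  if (tooNewStart != null) {
    // Timestamps too far in the future: keep these queued and retry later.
    requeue(events.sublist(tooNewStart));
  }

  // Events up to these indices were rejected permanently (past retention
  // or older than the service limit); drop them from the local store.
  final lastRejectedIndex =
      [info.tooOldLogEventEndIndex, info.expiredLogEventEndIndex]
          .whereType<int>()
          .fold<int>(-1, (a, b) => a > b ? a : b);
  if (lastRejectedIndex >= 0) {
    dropFromStore(events.sublist(0, lastRejectedIndex + 1));
  }
}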

Future<String> get logStream;
/// Returns the log stream name from the cache. If the cache misses, it
/// calls [createLogStream] and returns the log stream name.
FutureOr<String> get logStream;
Contributor

I would maybe change the name to currentLogStream or similar to distinguish from createLogStream. Is this class going to be public?

Member Author

The DefaultCloudWatchLogStreamProvider is not going to be public. Should I move it to cloudwatch_logger_plugin.dart and make it private?

when(() => mockCloudWatchLogStreamProvider.logStream)
    .thenAnswer((_) async => Future<String>.value('log stream name'));

when(() => mockQueuedItemStore.addItem(any(), any()))
Contributor

Then there should be tests for the log stream provider as well since that's currently not being tested.

@NikaHsn NikaHsn force-pushed the feat/logging/cloudwatch branch from 949a371 to 060eb22 Compare August 31, 2023 00:14
@NikaHsn NikaHsn force-pushed the feat/logging/cloudwatch branch from 060eb22 to 121fb88 Compare August 31, 2023 16:29
@NikaHsn NikaHsn merged commit 5206d87 into aws-amplify:feat/logging/cloudwatch Aug 31, 2023