Add obsreport to receiverhelper scrapers #1961
Conversation
Codecov Report
```
@@            Coverage Diff             @@
##           master    #1961      +/-   ##
==========================================
- Coverage   91.88%   91.86%    -0.03%
==========================================
  Files         276      279        +3
  Lines       16495    16587       +92
==========================================
+ Hits        15157    15238       +81
- Misses        922      929        +7
- Partials      416      420        +4
```
Continue to review full report at Codecov.
Review comment on:

```go
span := trace.FromContext(scraperCtx)
```
Is this span guaranteed to always be the one created by `StartMetricsScrapeOp`? Should we explicitly return the span from `StartMetricsScrapeOp`, and accept it as an argument in `EndMetricsScrapeOp` instead?

Or alternatively, make `StartMetricsScrapeOp` return an additional struct which contains the span and has an `End()` or `RecordResult()` function:
```go
func StartMetricsScrapeOp(...) (context.Context, *MetricsScrapeOp) {
	...
	ctx, span := trace.StartSpan(scraperCtx, spanName)
	return ctx, &MetricsScrapeOp{ctx: ctx, span: span}
}

// or you could instead make this unexported, and add an interface
type MetricsScrapeOp struct {
	ctx  context.Context
	span trace.Span
}

func (m *MetricsScrapeOp) End(numScrapedMetrics int, err error) {
	// use m.ctx and m.span
}
```
That's a very good point. I like your idea of returning a struct with an `End()` function.

I don't want to change this here though, as all the other `obsreport_*.go` files follow this same pattern. Probably worth doing this in a followup PR.
Overall looks good, thanks for adding this!
Review comment on:

```go
// See the License for the specific language governing permissions and
// limitations under the License.

package obsreport
```
Should this go in obsreport (or must it for technical reasons?) or could the scraper stuff all be kept together?
It's much more convenient adding this here as it can make use of many of the helper functions for testing, etc. I think it is also more consistent to put it here (even though Scrapers aren't a first-class component).
@james-bebbington any updates on this?
Sorry, just got back to this now and addressed the feedback. I also updated this so that, for the generated traces & metrics, scrapers are scoped to the receiver they belong to, and the scraper name is optional, i.e. it will not be included in traces & metrics if set to an empty string. This is useful in the case of a simple receiver with only one scraper.
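The receiver-scoped naming described here could be sketched as follows. The helper name `scraperSpanName` and the exact span-name shape are assumptions loosely based on the tracez URL in the PR description, not the merged API:

```go
package main

import "fmt"

// scraperSpanName sketches the naming rule: the scraper is scoped under
// the receiver it belongs to, and an empty scraper name is simply omitted
// (useful for a simple receiver with only one scraper).
func scraperSpanName(receiver, scraper string) string {
	if scraper == "" {
		return fmt.Sprintf("scraper/%s/MetricsScraped", receiver)
	}
	return fmt.Sprintf("scraper/%s/%s/MetricsScraped", receiver, scraper)
}

func main() {
	fmt.Println(scraperSpanName("hostmetrics", "process"))
	// → scraper/hostmetrics/process/MetricsScraped
	fmt.Println(scraperSpanName("simplereceiver", ""))
	// → scraper/simplereceiver/MetricsScraped
}
```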
This PR now breaks contrib-test because it adds a |
Review comment on:

```go
// PartialScrapeError can be used to signal that a subset of metrics
// failed to be scraped.
type PartialScrapeError struct {
```
I could be wrong, but it feels like this and the obsreport scraping code would ideally all be in the same package (or subpackages) instead of spread across core. @bogdandrutu any thoughts?
Yea, I'm not sure about this one. I felt the `obsreport` stuff should stay together, and since that package needs to reference `PartialScrapeError` to compute metric stats, I needed to put this error in a separate package as well. `consumererror` seems to be the most appropriate place.
let's leave it there for the moment. @james-bebbington you promised me an issue to discuss making Scraper a top level component.
Did I? 😅

After applying this library to the hostmetrics receiver, my thoughts were that we could make this library even more usable if we:

- Add `ScraperFactory` interfaces
- Add code to simplify creation of scrapers via factories to `receiver/receiverhelper`

At that point, the Scraper interfaces will have many similarities in structure with other components, and it may arguably make sense to:

- Move interfaces from `receiver/receiverhelper/scraper.go` into `component/scraper.go`

I don't think that scrapers should be a "top level component" though, in the sense that they should be configurable at the top level like receivers / processors / exporters. Here's a doc Jay and I were using a while back to discuss the idea of making Scraper a top level component, which eventually led us to do none of the things in that doc and implement this as a library in receiverhelper instead: https://docs.google.com/document/d/15WKpYvw23rQ6WnBhoE2wdk4szfjlXEXg8WYYvBcKshA/edit
Please move this to an issue so we don't forget about this
Add obsreport to receiverhelper scraper library:

- `consumererror.PartialScrapeError` for denoting that a scrape was partially successful and recording the number of metrics that failed to be scraped
- `receiverhelper.CombineScrapeErrors` to support combining partial scrape errors
- `obsreport` with Trace & Metric observability support for Scrapers. Obsreport records the number of failed scrapes based on the partial scrape error count if relevant

Blocks #1949
Sample Output: (see #1949)
Note in these examples we see quite a lot of errors scraping process data due to not running the Collector in "Administrative mode" on Windows:
Traces (from http://localhost:55679/debug/tracez?zspanname=scraper%2fprocess%2fMetricsScraped&ztype=2&zsubtype=0): *(screenshot omitted)*

Metrics (from http://localhost:8888/metrics): *(screenshot omitted)*