PSQL and SQL profiler #7086
Conversation
…ILED_STATEMENT_ID and MON$CALL_STACK.MON$COMPILED_STATEMENT_ID.
Are there any objections to integrating this into master, or is there anything still to discuss?
In firebird-devel we talked about an ability to profile other connections. Has it been added, or are you going to add it later?
I think this change would not invalidate the current design, so it does not make much sense to continue adding features before an initial integration and some feedback on what we have now.
I don't mind if so
I'd prefer the default plugin to be used without naming in
Once we support profiling other connections (I agree this is a top-priority feature), should we extend
While I don't mind
Why
Given that the default profiler plugin is supposed to be really usable (not just a draft example of how things should be coded), why are its tables/views created dynamically rather than being part of the ODS? Are they expected to change significantly over time?
I really miss timings (count/min/max/total) for statements as a whole. It would be inconvenient to measure them with external tools and then look into the PROF$ tables/views for details; it would be handier to have everything in a single place, especially if we speak about multiple executions of a single prepared/cached statement.
I've attempted to collect execution times in the past, but wanted to split them into total/cpu/wait parts, with the wait part also being detailed (I/O, lock, latch, pause, etc). Given that extra measurements are not always dirt cheap, I had doubts they should be presented via MON$ tables unconditionally. Now it looks like it could be integrated with your profiler design after it's committed and thus measured on demand. But the question is how deep we need to dive into the CPU time. Is it OK to calculate it as
I feel the profiler package also needs a routine
Talking about configuration: because profiling is entirely engine-related, I'd say that this setting should belong in Engine14.conf, not firebird.conf.
My idea was to have an FB namespace to differentiate things from users' objects, as they can also use the dollar sign. But since it's a new naming convention, I would not have a problem changing it. We also have
How would these stored timings be different from aggregating the request-based timings per
Data (even when flushed) is stored as part of the user transaction, which may be rolled back later. I do not see a way that automatic flush would work with this, or would be less confusing than manually flushing the data before reading it. But this could be useful in the case of profiling other connections.
This looks good to me.
Automatic flushing could behave as if it were executed in an autonomous transaction. Rollback will surely not be possible, but one may always delete the rows manually. This should be well documented, of course.
They cannot be aggregated from
On 5/1/22 19:02, Adriano dos Santos Fernandes wrote:
Given the default profiler plugin is supposedly to be really
usable (not just a draft example of how things should be coded),
why its tables/views are created dynamically rather than being
part of ODS? Are they expected to change significantly with time?
1. Then it's not really a plugin design
I want to explicitly agree with Adriano here.
I see no reason for one particular plugin to require its tables in the ODS.
Change RDB$PROFILER.START_SESSION parameters order and put defaults on them.
…SSION parameters.
Use autonomous transaction in flush.
I've implemented profiling of other attachments with this commit set. Its semantics are documented in the readme. To avoid confusion, the user's transaction is not passed to plugins anymore, and flush always starts its own transaction.
…of MSVC. Note: it is not yet documented for the newly released MSVC 17.1.
Assume _MSC_VER will be increased to >= 2000 when/if the VC CRT library gets a new version number in its suffix.
Add some static_asserts.
Add parameter FLUSH_INTERVAL to START_SESSION.
I think we need the TOTAL time of requests (as stored in …). Then it's possible to calculate MIN/MAX per STATEMENT (as stored in …).
AFAIK there is no way to directly get the "wait time" from the OS. It's the elapsed time minus the thread's CPU time (user + kernel). I think we can pass the elapsed and total thread CPU time. The helper views could also calculate the total wait time. We should decide if the APIs that currently receive only … Or if we would need to add another interface which would be extensible, and plugin writers would call its methods to get timings. The downside of this is that it will be slower.
Yes, there is no way to get wait time from the system. But we can measure "logical waits" ourselves at most points where we check out from the engine. Some of them won't be honest (I/O wait time will actually be I/O CPU time when reads are performed from the filesystem cache, for example), but this is OK as they're expected to be short and unlikely to be noticeable as "top" waits. If the CPU time is also measured, then we could calculate "wait" as "total time - CPU time" and it should be more or less real, but I doubt it's going to be useful. So I'm somewhat skeptical about whether we really need CPU time...
Add ProfilerStats interface and pass it to plugin instead of runTime parameter. Rename *_TIME columns to *_ELAPSED_TIME.
Does anyone see a blocking point to having this merged into master?
On 5/4/22 11:39, Vlad Khorsun wrote:
We also have |PLG$*| tables, so maybe we should name them
|PLG$PROF*|?
This looks good to me.
What does it mean? PLG == ???
That's just a prefix like RDB$; see PLG$SRP_USERS in the security database.
On 04/05/2022 07:49, Vlad Khorsun wrote:
So, if a user calls RDB$PROFILER.START_SESSION and RDB$PROFILER.FLUSH in
the same (own) transaction, will the profiler create metadata and attempt to
use it (to INSERT something) in the same transaction?
No, and actually things happen a bit differently.
The default plugin uses the transaction from startSession to query a
sequence to generate the session id.
There is the init method. Init receives an attachment and a transaction.
The default plugin uses the attachment to start a new transaction to
create the metadata.
Init is called by the engine once per database/plugin when a profiler
session is about to be created.
Adriano
@@@ QA issue @@@