Defuse throttled_func when it's accidentally engaged #18235

Merged Nov 25, 2024 · 3 commits
2 changes: 1 addition & 1 deletion src/cascadia/WinRTUtils/inc/ThrottledFunc.h
@@ -113,7 +113,7 @@ class ThrottledFunc : public std::enable_shared_from_this<ThrottledFunc<leading,
             {
                 try
                 {
-                    std::apply(self->_func, self->_storage.take());
+                    self->_storage.apply(self->_func);
                 }
                 CATCH_LOG();
             }
45 changes: 25 additions & 20 deletions src/inc/til/throttled_func.h
@@ -30,12 +30,22 @@ namespace til
             }
         }
 
-        std::tuple<Args...> take()
+        void apply(const auto& func)
         {
-            std::unique_lock guard{ _lock };
-            auto pendingRunArgs = std::move(*_pendingRunArgs);
-            _pendingRunArgs.reset();
-            return pendingRunArgs;
+            decltype(_pendingRunArgs) args;
+            {
+                std::unique_lock guard{ _lock };
+                args = std::exchange(_pendingRunArgs, std::nullopt);
[Member Author] As suggested by Dustin, this now uses std::exchange.

+            }
+            // Theoretically it should always have a value, because the throttled_func
+            // should not call the callback without there being a reason.
+            // But in practice a failure here was observed at least once.
+            // It's unknown to me what caused it, so the best we can do is avoid a crash.
+            assert(args.has_value());
+            if (args)
+            {
+                std::apply(func, *args);
+            }
         }
 
         explicit operator bool() const
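
For context on the change above, here is a minimal, self-contained sketch of the pattern the new apply() uses: swap the pending arguments out under the lock with std::exchange, then invoke the callback outside the lock and tolerate the empty case. The pending_storage type, its emplace() helper, and the int payload are illustrative only, not the actual til internals.

#include <cassert>
#include <mutex>
#include <optional>
#include <tuple>
#include <utility>

// Hypothetical simplification of the storage side of a throttled function:
// it holds the most recently submitted arguments until they are consumed.
template<typename... Args>
struct pending_storage
{
    // Record (or overwrite) the pending arguments for the next run.
    template<typename... MakeArgs>
    void emplace(MakeArgs&&... args)
    {
        std::unique_lock guard{ _lock };
        _pendingRunArgs.emplace(std::forward<MakeArgs>(args)...);
    }

    // Swap the pending arguments out under the lock, then invoke the
    // callback outside of it. If nothing was pending (the callback was
    // "accidentally engaged"), do nothing instead of crashing.
    template<typename F>
    void apply(const F& func)
    {
        std::optional<std::tuple<Args...>> args;
        {
            std::unique_lock guard{ _lock };
            args = std::exchange(_pendingRunArgs, std::nullopt);
        }
        assert(args.has_value()); // documents the expectation in debug builds
        if (args)
        {
            std::apply(func, *args);
        }
    }

private:
    std::mutex _lock;
    std::optional<std::tuple<Args...>> _pendingRunArgs;
};

int main()
{
    pending_storage<int> storage;
    storage.emplace(42);
    storage.apply([](int value) { assert(value == 42); });
    return 0;
}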
@@ -60,10 +70,12 @@
             return _isPending.exchange(true, std::memory_order_relaxed);
         }
 
-        std::tuple<> take()
+        void apply(const auto& func)
         {
-            reset();
-            return {};
+            if (_isPending.exchange(false, std::memory_order_relaxed))
+            {
+                func();
+            }
         }
 
         void reset()
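
The argument-less (leading) specialization does the equivalent consume-then-run step with a single atomic flag rather than a lock. A hedged sketch of that idea, with an invented flag_storage type standing in for the real storage:

#include <atomic>
#include <cstdio>

// Hypothetical stand-in for the argument-less storage: a single flag
// that records whether a run is pending.
struct flag_storage
{
    // Returns the previous value; the caller schedules the timer only
    // when this transitions from false to true.
    bool mark_pending()
    {
        return _isPending.exchange(true, std::memory_order_relaxed);
    }

    // Consume the flag and run the callback only if a run was actually
    // pending; a spurious callback becomes a no-op instead of a crash.
    template<typename F>
    void apply(const F& func)
    {
        if (_isPending.exchange(false, std::memory_order_relaxed))
        {
            func();
        }
    }

private:
    std::atomic<bool> _isPending{ false };
};

int main()
{
    flag_storage storage;
    storage.apply([] { std::puts("not reached: nothing pending"); });
    storage.mark_pending();
    storage.apply([] { std::puts("runs exactly once"); });
    return 0;
}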
@@ -171,31 +183,24 @@
         void flush()
         {
             WaitForThreadpoolTimerCallbacks(_timer.get(), true);
-            if (_storage)
-            {
-                _trailing_edge();
-            }
+            _timer_callback(nullptr, this, nullptr);
[Member Author] Reading this code I've realized that I can ever so slightly reduce the binary size by moving _trailing_edge into the _timer_callback and calling _timer_callback directly.

[Member] It depends honestly. If all of timer_callback is larger and less deduplicatable with COMDAT folding, it could be larger overall.

[Member Author] Hm, I think I see what you mean. It does have the additional benefit that it swallows the exception, just like the regular timer callback. This makes it more predictable IMO, because it now behaves identically.

         }
 
     private:
         static void __stdcall _timer_callback(PTP_CALLBACK_INSTANCE /*instance*/, PVOID context, PTP_TIMER /*timer*/) noexcept
         try
         {
-            static_cast<throttled_func*>(context)->_trailing_edge();
-        }
-        CATCH_LOG()
-
-        void _trailing_edge()
-        {
+            const auto self = static_cast<throttled_func*>(context);
             if constexpr (Leading)
             {
-                _storage.reset();
+                self->_storage.reset();
             }
             else
             {
-                std::apply(_func, _storage.take());
+                self->_storage.apply(self->_func);
             }
         }
         CATCH_LOG()
 
         wil::unique_threadpool_timer _createTimer()
         {
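
To illustrate the flush() discussion above (waiting for in-flight timer callbacks, then invoking the callback function directly), here is a hedged Win32 sketch. The tiny_throttler class and its members are invented for this example; only WaitForThreadpoolTimerCallbacks, CreateThreadpoolTimer, SetThreadpoolTimer, and CloseThreadpoolTimer are real APIs, and error handling is kept to a bare minimum.

#include <Windows.h>
#include <cstdio>
#include <cstdlib>

// Invented example type; not the actual throttled_func implementation.
struct tiny_throttler
{
    tiny_throttler()
    {
        _timer = CreateThreadpoolTimer(&_timer_callback, this, nullptr);
        if (!_timer)
        {
            std::abort(); // error handling kept minimal for the sketch
        }
    }

    ~tiny_throttler()
    {
        CloseThreadpoolTimer(_timer);
    }

    // Schedule the callback to fire once, 100ms from now.
    void schedule()
    {
        LARGE_INTEGER delay;
        delay.QuadPart = -100 * 10000LL; // relative time in 100ns units
        FILETIME dueTime;
        dueTime.dwLowDateTime = delay.LowPart;
        dueTime.dwHighDateTime = static_cast<DWORD>(delay.HighPart);
        SetThreadpoolTimer(_timer, &dueTime, 0, 0);
    }

    // Wait for a callback that is already running, then run the callback
    // directly. Because the callback may end up running both from the timer
    // and from here, it has to tolerate finding nothing pending, which is
    // exactly what the storage apply() change above provides.
    void flush()
    {
        WaitForThreadpoolTimerCallbacks(_timer, TRUE);
        _timer_callback(nullptr, this, nullptr);
    }

private:
    static VOID CALLBACK _timer_callback(PTP_CALLBACK_INSTANCE, PVOID context, PTP_TIMER) noexcept
    {
        const auto self = static_cast<tiny_throttler*>(context);
        // Real code would consume pending state here and swallow exceptions.
        std::printf("callback on %p\n", static_cast<void*>(self));
    }

    PTP_TIMER _timer = nullptr;
};

int main()
{
    tiny_throttler throttler;
    throttler.schedule();
    throttler.flush();
    return 0;
}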