Add Support for Local Whisper Models and Integrate llmR for Summarization #23
Conversation
Removed the LLM-related functions since they will be provided by the llmR package.

- Added `use_whisper_local_stt` function to support local Whisper models via Python with reticulate (see the sketch below).
- Added `use_mlx_whisper_local_stt` function for MLX Whisper models, optimized for macOS with Apple Silicon.
- Updated `perform_speech_to_text` to use `whisper_local` as the default model.
- Enhanced `speech_to_summary_workflow` to display the selected speech-to-text model.
- Updated documentation and NAMESPACE to export the new functions.
- Added `reticulate` to the Suggests field in DESCRIPTION for Python integration.

Update README to describe the use of llmR for summarisation and the addition of the new local models for speech-to-text.
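As a rough illustration of how a local Whisper model can be driven from R through reticulate, here is a minimal sketch. It assumes the Python `openai-whisper` package is installed in the active Python environment; the function name and defaults below are hypothetical and not the actual minutemaker implementation.

```r
# Minimal sketch, not the actual minutemaker implementation.
# Assumes the Python "openai-whisper" package is installed and importable.
local_whisper_transcribe <- function(audio_path, model_name = "base") {
  rlang::check_installed("reticulate")        # dependency check, as in the PR
  whisper <- reticulate::import("whisper")    # Python module providing Whisper models
  model <- whisper$load_model(model_name)     # loads (and caches) the requested model
  result <- model$transcribe(audio_path)      # returns a list with $text and $segments
  result$text
}

# Example usage (with a hypothetical local file):
# transcript <- local_whisper_transcribe("meeting_audio.wav", model_name = "small")
```

The MLX variant described in the PR would presumably follow the same pattern against the `mlx-whisper` Python package, which is optimized for Apple Silicon.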
Enhancements

- Added `use_whisper_local_stt` and `use_mlx_whisper_local_stt` to support local Whisper models via Python with reticulate, with the second being optimized for macOS with Apple Silicon (Commit: 69e4f5e).
- Integrated the `llmR` package for LLM interactions, removing redundant LLM-related functions (Commit: 2331b46).
- Updated `perform_speech_to_text` to use `whisper_local` as the default model and enhanced `speech_to_summary_workflow` to display the selected speech-to-text model (Commit: 69e4f5e).

Fixes

- Used `rlang::check_installed` for better package management (Commit: 3227b0d); a sketch of this pattern follows the list.

Documentation

- Updated the README to describe the use of `llmR` for summarization and the addition of new local models for speech-to-text (Commit: 8bff883).
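For readers unfamiliar with this dependency-handling approach, the following is a brief sketch of the `rlang::check_installed()` pattern for optional (Suggests) packages; the surrounding function is hypothetical and not taken from minutemaker.

```r
# Hypothetical wrapper illustrating rlang::check_installed() for a Suggests dependency.
transcribe_locally <- function(audio_path) {
  # In interactive sessions this prompts the user to install the missing package,
  # instead of failing later with an obscure error.
  rlang::check_installed(
    "reticulate",
    reason = "to run local Whisper models through Python"
  )
  # ... the actual transcription call would go here ...
  invisible(audio_path)
}
```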
Summary

This pull request introduces significant enhancements to the `minutemaker` package by adding support for local Whisper models, integrating the `llmR` package for LLM interactions, and improving the speech-to-text workflow. Additionally, it fixes dependency management issues and updates the documentation to reflect these changes.

Poem

In the realm of code so bright,
Local models take their flight.
With `llmR` we now align,
Summaries are more divine.
Dependencies fixed with care,
Documentation now laid bare.
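Since summarization is now delegated to llmR, the workflow presumably builds a prompt and hands it off to that package. The sketch below is a loose illustration only: `prompt_llm()` and the named-message format are assumptions about llmR's interface, not verified usage.

```r
# Loose sketch of handing summarization off to the llmR package.
# NOTE: prompt_llm() and the message format are assumptions about llmR's interface,
# shown only to illustrate the delegation; check llmR's documentation for real usage.
summarise_transcript_sketch <- function(transcript_text) {
  rlang::check_installed("llmR", reason = "to generate summaries via an LLM")
  llmR::prompt_llm(
    c(
      system = "You write concise meeting minutes from a transcript.",
      user = transcript_text
    )
  )
}
```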
Summary by CodeRabbit

New Features

- Added local speech-to-text support through `use_whisper_local_stt` and `use_mlx_whisper_local_stt`.
- Enhanced `speech_to_summary_workflow` to display the selected speech-to-text model.

Bug Fixes

- Improved dependency management.

Documentation

- Updated the README to describe summarization via the `llmR` package.

Chores

- Updated the package version to 0.12.0 and updated metadata.