
Add support for API Key Per Request #66


Closed
BrianBorge wants to merge 1 commit from the api-key-per-request branch

Conversation

@BrianBorge commented Mar 25, 2025

This PR addresses #55 by adding support for passing an API Key per request.

Proposed API (implemented in this PR)

# Anthropic Example
chat = RubyLLM.chat
response = chat.with_model('claude-3-7-sonnet-20250219')
               .with_api_key(ENV.fetch('SOME_OTHER_ANTHROPIC_API_KEY', nil))
               .ask("What's your favorite algorithm?")

puts response.content
#=> I don't have personal favorites since I don't ... 

# OpenAI Example
chat = RubyLLM.chat
response = chat.with_api_key(ENV.fetch('OTHER_OPENAI_API_KEY', nil))
               .ask("What's your favorite algorithm?")

puts response.content
#=> I don't have personal preferences, but I can tell you about some ...

# Setting key on chat initialization
response = RubyLLM.chat(api_key: ENV.fetch('OTHER_OPENAI_API_KEY', nil)).ask("What's your favorite algorithm?")
puts response.content
#=> As an AI, I don't have personal preferences or feelings, but ...

Embedding Example

response = RubyLLM.embed("What's your favorite algorithm?", api_key: ENV.fetch('OTHER_OPENAI_API_KEY', nil))
puts response
#=> #<RubyLLM::Embedding:0x00000001295eca20>

Paint Example

response = RubyLLM.paint "a sunset over mountains in watercolor style", api_key: ENV.fetch('OPENAI_API_KEY', nil)
puts response
#=> #<RubyLLM::Image:0x000000012b75f658>

Questions/Notes

  • I'm unsure how this should work for Bedrock, which needs more than a single API key
  • Should the specs use a different API key?

@pricetodd commented
@BrianBorge thanks!
I was just about to submit a very similar PR, the only difference being the ability to set the api_key in the RubyLLM.chat call as well, i.e.:

chat = RubyLLM.chat(model: 'claude-3-7-sonnet', api_key: 'my_api_key')

What do you think about including that?

@BrianBorge (Author) commented

@pricetodd - thanks for bringing that up -- just included it.

@crmne (Owner) commented Mar 26, 2025

Thanks for taking this on! While I appreciate the implementation, I think we can make this API sing with a bit more Ruby magic.

Instead of adding one-off parameters, let's embrace the beauty of configuration blocks and our existing chainable interface. Here's what the API should look like:

# Beautiful per-instance configuration that matches our global style
chat = RubyLLM.chat.with_config do |config|
  config.anthropic_api_key = "different_key"
  config.request_timeout = 60
end

# Chainable like everything else
RubyLLM.embed("text").with_config do |config|
  config.openai_api_key = "different_key"
end

This feels much more Ruby-like and matches our existing methods like .with_tool and .with_model. It also gives us room to grow as providers need more than just API keys (looking at you, AWS Bedrock).
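
For example, this is exactly where Bedrock-style providers benefit: they need several credentials rather than one key, and a config block carries them all naturally. A quick sketch; the bedrock_* attribute names here are illustrative, not final:

# Sketch: a provider that needs several credentials, not just one API key.
# The bedrock_* attribute names below are assumptions, not the gem's API.
chat = RubyLLM.chat.with_config do |config|
  config.bedrock_api_key    = ENV['AWS_ACCESS_KEY_ID']
  config.bedrock_secret_key = ENV['AWS_SECRET_ACCESS_KEY']
  config.bedrock_region     = 'us-east-1'
end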

Would you like to revise the PR to implement this pattern? The key pieces would be (rough sketch after the list):

  1. Making our Configuration class support per-instance configs that inherit from the global config
  2. Adding the .with_config chainable method to both Chat and one-off operations
  3. Updating providers to respect instance-level configuration, if needed.
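
For concreteness, here's a rough sketch of how pieces 1 and 2 might hang together; every name here is illustrative rather than final:

# Sketch only: per-instance configuration inheriting from the global one.
# Configuration#merge, Chat#with_config, and RubyLLM.config are assumptions.
module RubyLLM
  class Configuration
    attr_accessor :openai_api_key, :anthropic_api_key, :request_timeout

    # Returns a copy of this configuration with the block's overrides applied,
    # so untouched attributes keep the values they inherited.
    def merge
      copy = dup
      yield copy if block_given?
      copy
    end
  end

  class Chat
    # Chainable per-instance configuration, defaulting to the global config.
    def with_config(&block)
      @config = (@config || RubyLLM.config).merge(&block)
      self
    end
  end
end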

Let me know if you'd like guidance on any of those pieces. Looking forward to seeing this land with the new pattern!

/cc #55

@BrianBorge (Author) commented

> Instead of adding one-off parameters, let's embrace the beauty of configuration blocks and our existing chainable interface

Great, love it. This approach makes sense. It's easy for me to get tunnel vision and do the quick/easy thing when I see an issue, so I appreciate the nudge toward the more elegant solution.

> Would you like to revise the PR to implement this pattern?

Yes. I'll update the PR -- thanks for the guidance. I'll re-request a review once I have a draft of the revised implementation.

@BrianBorge force-pushed the api-key-per-request branch from c547dee to 623098d on March 27, 2025
@BrianBorge (Author) commented Mar 27, 2025

@crmne – I think we may want to consider changing the internal implementation and API of one-off methods like embed and paint to behave more like Chat.

Right now, RubyLLM.embed("text") immediately performs the embedding and returns the result, which makes chaining something like .with_config awkward to implement internally.

I’d love to hear your thoughts on which direction you’d prefer before I implement one or the other for the one-off methods.

Example of the one-off style with .with_config (based on your suggestion)

result = RubyLLM.embed("text").with_config do |config|
  config.openai_api_key = "different_key"
end

puts result.vectors

Example of initializing and using embed like Chat

embedding = RubyLLM.embed

result = embedding.with_config do |config|
  config.openai_api_key = "different_key"
end.embed("test")

puts result.vectors

Thoughts/Notes

  • Changing the implementation might make it easier to support more chainable methods on the one-off methods in the future
  • It would also make embed and paint consistent with how chat behaves
  • The main downside is that this would be a breaking change for anyone using the current one-off method pattern

My preference is to avoid a breaking change and write the code to make your suggestion work, but I wanted to lay out the alternatives and get your thoughts first. A rough sketch of the Chat-like variant follows.
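
To make the trade-off concrete, here's what the Chat-like (breaking) variant could look like internally; the class and method names are purely illustrative:

# Sketch only: a builder-style embedding object that defers the API call
# until #embed is invoked, mirroring how Chat instances behave.
module RubyLLM
  class EmbeddingRequest
    def initialize
      @config = RubyLLM.config # assumed accessor for the global config
    end

    # Chainable per-instance configuration, same shape as RubyLLM.configure.
    def with_config
      @config = @config.dup
      yield @config
      self
    end

    # The embedding is only performed here, not at construction time.
    def embed(text)
      Embedding.from_text(text, config: @config) # hypothetical call
    end
  end
end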

@crmne added the "enhancement" (New feature or request) label on Apr 2, 2025
@crmne linked issue #55 on Apr 3, 2025 that may be closed by this pull request
@crmne (Owner) commented Apr 5, 2025

Hey @BrianBorge! I love how this conversation is getting at the heart of configuration patterns in Ruby. Let me add my 2¢ on this.

Rather than getting into competing styles of one-off instance configs, I think we should elevate the abstraction. What we really want here is a first-class Context object that can carry its own complete configuration state. Here's what I'm thinking:

# Global defaults work as before
RubyLLM.configure do |config|
  config.openai_api_key = ENV['OPENAI_API_KEY']
end

# But now we can create isolated contexts with their own config
context = RubyLLM.context do |config|
  config.openai_api_key = "team_a_key"
  config.request_timeout = 60
end

# Each context works independently
context.chat(...)
context.embed("Hello")

# Different contexts don't interfere 
context_a = RubyLLM.context { |c| c.openai_api_key = "team_a_key" }
context_b = RubyLLM.context { |c| c.anthropic_api_key = "team_b_key" }

# Perfect for concurrent usage
Fiber.new { context_a.chat(...) }.resume
Fiber.new { context_b.chat(...) }.resume

This gives us:

  1. A proper object to contain configuration state (instead of mixing it into every operation)
  2. Clean isolation between different configurations
  3. Thread/fiber safety by design (each context carries its own state)
  4. No magic - just plain old Ruby objects working together

The core idea here is that instead of trying to bolt configuration onto individual operations, we make the configuration context a first-class citizen that knows how to perform operations.
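
In sketch form (all names illustrative, none of this is final), the context is just a plain object that owns a private copy of the configuration and performs operations through it:

# Sketch only: Context holds an isolated copy of the configuration and
# delegates operations to it. Constructor signatures here are assumptions.
module RubyLLM
  class Context
    attr_reader :config

    def initialize(base_config)
      @config = base_config.dup # isolated copy; mutations never leak out
      yield @config if block_given?
    end

    def chat(**options)
      Chat.new(config: @config, **options) # hypothetical keyword argument
    end

    def embed(text)
      Embedding.from_text(text, config: @config) # hypothetical call
    end
  end

  # Builds a Context seeded from the global defaults, assuming RubyLLM.config
  # returns the global Configuration.
  def self.context(&block)
    Context.new(config, &block)
  end
end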

What do you think? This feels like a more "whole solution" that solves both the immediate need and gives us a better foundation for the future.

@BrianBorge (Author) commented

Thread safety and future-proofing? Love it. I'll re-request a review once I have a revised implementation. Again, thanks for your guidance.

@crmne (Owner) commented Apr 20, 2025

Made my own implementation in 5e73fe3.

@crmne closed this on Apr 20, 2025
@BrianBorge deleted the api-key-per-request branch on April 23, 2025