update landing page with new model name selection
st3w4r committed Jul 26, 2024
1 parent e0853c7 commit 6c6c64c
Showing 1 changed file with 19 additions and 1 deletion.
20 changes: 19 additions & 1 deletion landing/index.html
@@ -305,6 +305,7 @@ <h2>Features</h2>
<li>Store <b>multiple provider configurations</b></li>
<li>Set a <b>default provider</b></li>
<li>Choose provider via <b>command-line flag</b></li>
<li>Set <b>model name</b> per provider</li>
<li>Stream output in <b>real-time</b></li>
</ul>
</section>
@@ -313,13 +314,30 @@ <h2>Configuration</h2>
<h3>Config file</h3>
<p>Configurations are managed through:</p>
<pre><code class="bash">~/.config/jsonthat/config.yaml</code></pre>
<p>The config file now stores multiple provider configurations and the default provider.</p>
<p>Command to display configuration:</p>
<pre><code class="bash">$ jt --config</code></pre>
<p>The config file stores multiple provider configurations and the default provider.</p>
<pre><code class="bash">default_provider: openai
providers:
claude:
api_key: *****
mistral:
api_key: *****
model: open-mistral-nemo
ollama:
api_url: http://localhost:11434
model: llama3.1
openai:
api_key: *****
model: gpt-4o-mini
</code></pre>
</section>
<section>
<h3>Environment Variables</h3>
<p>To configure the CLI tool, you can set the following environment variables:</p>
<pre><code class="bash">export LLM_PROVIDER='openai'
export LLM_API_KEY='your_api_key_here'
export LLM_MODEL='gpt-4o-mini'
export OLLAMA_API_URL='http://127.0.0.1:11434'
export OLLAMA_MODEL='llama3'
</code></pre>
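
For context, a minimal shell sketch of how the per-provider model settings in this diff might be exercised. Only `jt --config` appears on the landing page itself; the stdin-piping usage and the `--provider` flag name are illustrative assumptions based on the listed features, not confirmed by this commit.

# Inspect the stored configuration (as documented above).
$ jt --config

# With default_provider set to openai, a run would use its configured
# model (gpt-4o-mini); piping text on stdin is assumed for illustration.
$ echo "Jay is 30 and lives in Paris" | jt

# Hypothetical flag name: select the ollama entry so its configured
# model (llama3.1) is used instead of the default provider's.
$ echo "Jay is 30 and lives in Paris" | jt --provider ollama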
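Likewise, a sketch of configuring a run purely through the environment variables listed above; the variable names come from the snippet, while the stdin usage and the precedence between these variables and the config file are assumptions for illustration.

# Target a local Ollama instance using only environment variables
# (names taken from the landing page snippet above).
$ export LLM_PROVIDER='ollama'
$ export OLLAMA_API_URL='http://127.0.0.1:11434'
$ export OLLAMA_MODEL='llama3'

# Assumed usage: pipe free-form text in and receive JSON on stdout.
$ echo "Jay is 30 and lives in Paris" | jt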
