Add shiny.ui.Chat #1453 (Merged)
Commits (101 total; changes shown are from 67 of them). All commits by cpsievert.
63aa066 wip shiny.chat experiments
8894244 Use custom messages for streaming
807b034 Clean up
38ed666 Switch from input to output binding
0ed3b56 Make streaming easier
ac547e2 Improved generic interface
d4bb411 Avoid a binding altogether
0e57c89 Use strategy pattern to normalize messages and also a way to register…
61f3139 Add ollama support and example
76381d5 Sanitize HTML before putting it in a message
c1dcf02 Require an active session when initializing Chat(), add chat_ui() for…
63e49a7 Introduce placeholder message concept
f6b75b5 First pass at code highlighting and copy to clipboard
69768a9 Display user messages differently; refactor
9f19a24 Add support for AsyncIterable in append_message_stream(); add nonbloc…
bf09580 Make input autoresize
04ad941 Make appending of message streams non-blocking by default
98eb35a Make sure message content doesn't blow out of its container
1869a1c More UI improvements
3dfb275 Refactor; better code highlighting
370ed19 Leverage inheritance in a more sane way
e12cca0 Add langchain BaseChatModel support
532b4f2 Updates for recent anthropic release; be more careful not to error ou…
8abe0e3 Better error handling
112f897 Various improvements; address some feedback
0ae748e Allow user to type while receiving a message (but prevent sending unt…
77d723d Use .ui() method to display; move initial messages to constructor
e61e19e Move more UI logic to client
91aa45c Add user_input_transformer; don't display system messages; separate s…
2a424c9 Flesh out docstrings; few other improvements
e9deba0 Simplify/improve highlight logic
7d7c006 Add recipes example
eaba0be Separate concerns between user/assistant message components
dcf29e8 Add assistant_response_transformer
91e36d0 Move on_error back to on_user_submit
2598c10 Make user input id accessible (things like shiny_validate might want it)
76840b1 First pass at imposing a token limit
d7ea9cb Clean up some ui API details
ec68450 Refactor/improve message types
14df00b Fix get_messages() logic; embrace FullMessage inside internal state
a2c37da wip provide a default tokenizer; remember pre&post transform response…
065bdd0 Add set_user_input() method; improve styling; other refactoring/fixes
343dbc8 Use subclassing to provide transforms (this way the transform also ha…
65a4dbc Revert subclass transforms; support returning a ChatMessage from tran…
f12423a Fix error handling in append_message_stream()
bea3634 Show actual errors when we have proof of errors not needing sanitization
458ce27 Merge branch 'main' into chat-llms
9beef48 Merge branch 'main' into chat-llms
4e7ef05 Tweak/refactor styles; add dark mode example
b756a0a Prevent chat effects from accumulating; clean-up multi-provider example
2a2d1c6 Re-organize examples
a753022 wip enqueue pending messages while streaming to ensure FIFO
776293e DRY
e4b58b2 Improve transform API
84d4aad Generate typestubs
1b77b4d Debug
26ffe2a Debug
4f6b22d Fixes
e6efae6 Merge branch 'main' into chat-llms
18466f0 More fixes
1553fb2 More fixes
08f9f16 Quote more types
870c47f Try requiring latest google-generativeai
78a5717 Move chat packages to dev not test
4a7ba8a Add workarounds for google-generativeai not supporting Python 3.8
bb919e3 Get rid of typestubs
0633240 Merge branch 'main' into chat-llms
03b03c7 Add requirements for recipes app to dev
865c4b7 Add some lead-in commentary to each example
53f66ff Add a couple enterprise examples
a499c2a Accumulate and flush pending messages server-side instead of client-side
b4e3e36 get_user_message -> get_user_input; add transform parameter
cdb2695 Make get_user_input() sync not async
cf19247 Fix handling of None return values in transform_user_input
4f22d1a First pass at adding tests
fbb8344 Fix typing issue
f95e07b Mock an API key
ac43f59 Fix more typing issues
2843e38 Require anthropic 0.28
634f547 Fix check for anthropic type for older Python versions
47275a4 Revert anthropic requirement
5338c41 Merge branch 'main' into chat-llms
db56942 More tests
030b695 Doc improvements
297ecfd Recommend dotenv for managing credentials
347b842 Accumulate message chunks before transform, store, and send
aa9218b Tokenize the pre-transformed response; make get_user_input() sync for…
a760241 Improved TransformedMessage type/logic; pass accumulated message to t…
00f7785 Leverage stored messages inside .get_user_input()
bc30c87 Merge branch 'main' into chat-llms
7e5b09b .get_messages() -> .messages(); .get_user_input() -> .user_input()
fc891f7 Fix typing compatibility
b447e65 Fix and add more tests
bab0d92 Fix type and typo
4ea1784 Address feedback
d8b0a7f load_dotenv() returns True
bf88638 Update tests
849710b Docstring improvements
11f9dae Merge branch 'main' into chat-llms
cf904a1 Remove runtime check in .ui() for Express mode
5ab0b32 Update changelog
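
The example apps below show the new component in action. The core pattern, distilled from them: construct a ui.Chat, display it with .ui(), and register an @chat.on_user_submit callback that reads the conversation with chat.get_messages() (renamed to .messages() later in the PR, per commit 7e5b09b) and appends a response. A minimal sketch, with an illustrative echo response standing in for a real LLM call:

```python
from shiny.express import ui

# Create the chat component (optionally seeded with initial messages)
chat = ui.Chat(
    id="chat",
    messages=[{"role": "assistant", "content": "Hello! Ask me anything."}],
)
chat.ui()


# Runs each time the user submits a message
@chat.on_user_submit
async def _():
    # The conversation is a sequence of {"role", "content"} dicts, ready to
    # forward to an LLM client (see the provider examples below)
    user_text = chat.get_messages()[-1]["content"]
    await chat.append_message(f"You said: {user_text}")
```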
New file: recipe extractor example app
@@ -0,0 +1,49 @@
# from langchain_openai import ChatOpenAI
from openai import AsyncOpenAI
from utils import recipe_prompt, scrape_page_with_url

from shiny.express import ui

ui.page_opts(
    title="Recipe Extractor Chat",
    fillable=True,
    fillable_mobile=True,
)

# Initialize the chat (with a system prompt and starting message)
chat = ui.Chat(
    id="chat",
    messages=[
        {"role": "system", "content": recipe_prompt},
        {
            "role": "assistant",
            "content": "Hello! I'm a recipe extractor. Please enter a URL to a recipe page. For example, <https://www.thechunkychef.com/epic-dry-rubbed-baked-chicken-wings/>",
        },
    ],
)

chat.ui(placeholder="Enter a recipe URL...")

llm = AsyncOpenAI()


# A function to transform user input. If an exception occurs, it appends an
# error message to the chat and returns None, "short-circuiting" the
# conversation (no response is generated) and asking the user to try again.
@chat.transform_user_input
async def try_scrape_page(input: str) -> str | None:
    try:
        return await scrape_page_with_url(input)
    except Exception:
        await chat.append_message(
            "I'm sorry, I couldn't extract content from that URL. Please try again."
        )
        return None


@chat.on_user_submit
async def _():
    response = await llm.chat.completions.create(
        model="gpt-4o", messages=chat.get_messages(), temperature=0, stream=True
    )
    await chat.append_message_stream(response)
New file: utils.py (prompt and scraping helpers for the recipe example)
@@ -0,0 +1,106 @@
import aiohttp
from bs4 import BeautifulSoup

recipe_prompt = """
You are RecipeExtractorGPT.
Your goal is to extract recipe content from text and return a JSON representation of the useful information.

The JSON should be structured like this:

```
{
  "title": "Scrambled eggs",
  "ingredients": {
    "eggs": "2",
    "butter": "1 tbsp",
    "milk": "1 tbsp",
    "salt": "1 pinch"
  },
  "directions": [
    "Beat eggs, milk, and salt together in a bowl until thoroughly combined.",
    "Heat butter in a large skillet over medium-high heat. Pour egg mixture into the hot skillet; cook and stir until eggs are set, 3 to 5 minutes."
  ],
  "servings": 2,
  "prep_time": 5,
  "cook_time": 5,
  "total_time": 10,
  "tags": [
    "breakfast",
    "eggs",
    "scrambled"
  ],
  "source": "https://recipes.com/scrambled-eggs/",
}
```

The user will provide text content from a web page.
It is not very well structured, but the recipe is in there.
Please look carefully for the useful information about the recipe.
IMPORTANT: Return the result as JSON in a Markdown code block surrounded with three backticks!
"""


async def scrape_page_with_url(url: str, max_length: int = 14000) -> str:
    """
    Given a URL, scrape the web page and return its contents, with the URL
    prepended to the beginning of the text.

    Parameters
    ----------
    url:
        The URL to scrape
    max_length:
        Max length of recipe text to process. This is to prevent the model from
        running out of tokens. 14000 bytes translates to approximately 3200 tokens.
    """
    contents = await scrape_page(url)
    # Trim the string so that the prompt and reply will fit in the token limit. It
    # would be better to trim by tokens, but that requires using the tiktoken
    # package, which can be very slow to load when running on containerized
    # servers, because it needs to download the model from the internet each time
    # the container starts.
    contents = contents[:max_length]
    return f"From: {url}\n\n" + contents


async def scrape_page(url: str) -> str:
    # Asynchronously send an HTTP request to the URL.
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            if response.status != 200:
                raise aiohttp.ClientError(f"An error occurred: {response.status}")
            html = await response.text()

    # Parse the HTML content using BeautifulSoup
    soup = BeautifulSoup(html, "html.parser")

    # Remove script and style elements
    for script in soup(["script", "style"]):
        script.decompose()

    # List of element IDs or class names to remove
    elements_to_remove = [
        "header",
        "footer",
        "sidebar",
        "nav",
        "menu",
        "ad",
        "advertisement",
        "cookie-banner",
        "popup",
        "social",
        "breadcrumb",
        "pagination",
        "comment",
        "comments",
    ]

    # Remove unwanted elements by ID or class name
    for element in elements_to_remove:
        for e in soup.find_all(id=element) + soup.find_all(class_=element):
            e.decompose()

    # Extract text from the remaining HTML tags
    text = " ".join(soup.stripped_strings)

    return text
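
As the comment in scrape_page_with_url notes, trimming by tokens would be more accurate than trimming by bytes. For reference, a rough sketch of token-based trimming with tiktoken (not part of this PR; the encoding name and limit are assumptions chosen to match gpt-4-class models):

```python
import tiktoken


def trim_by_tokens(text: str, max_tokens: int = 3200) -> str:
    # cl100k_base is the encoding used by gpt-4-era OpenAI models; pick the
    # encoding that matches the model you actually call
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    return enc.decode(tokens[:max_tokens])
```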
New file: Anthropic Claude example app
@@ -0,0 +1,28 @@
from anthropic import AsyncAnthropic

from shiny.express import ui

ui.page_opts(
    title="Hello Anthropic Claude Chat",
    fillable=True,
    fillable_mobile=True,
)

# Create and display empty chat
chat = ui.Chat(id="chat")
chat.ui()

# Create the LLM client (assumes ANTHROPIC_API_KEY is set in the environment)
client = AsyncAnthropic()


# On user submit, generate and append a response
@chat.on_user_submit
async def _():
    response = await client.messages.create(
        model="claude-3-opus-20240229",
        messages=chat.get_messages(),
        stream=True,
        max_tokens=1000,
    )
    await chat.append_message_stream(response)
New file: Google Gemini example app
@@ -0,0 +1,37 @@
from google.generativeai import GenerativeModel

from shiny.express import ui

ui.page_opts(
    title="Hello Google Gemini Chat",
    fillable=True,
    fillable_mobile=True,
)

# Create and display empty chat
chat = ui.Chat(id="chat")
chat.ui()

# Create an LLM client
client = GenerativeModel()


# On user submit, generate and append a response
@chat.on_user_submit
async def _():
    messages = chat.get_messages()

    # Convert messages to the format expected by Google's API
    contents = [
        {
            "role": "model" if x["role"] == "assistant" else x["role"],
            "parts": x["content"],
        }
        for x in messages
    ]

    response = client.generate_content(
        contents=contents,
        stream=True,
    )
    await chat.append_message_stream(response)
New file: LangChain example app
@@ -0,0 +1,36 @@
from langchain_openai import ChatOpenAI

from shiny.express import ui

ui.page_opts(
    title="Hello LangChain Chat Models",
    fillable=True,
    fillable_mobile=True,
)

# Create and display an empty chat UI
chat = ui.Chat(id="chat")
chat.ui()

# Create the chat model
llm = ChatOpenAI()

# --------------------------------------------------------------------
# To use a different model, replace the line above with any model that
# subclasses langchain's BaseChatModel. For example, to use Anthropic:
#   from langchain_anthropic import ChatAnthropic
#   llm = ChatAnthropic(model="claude-3-sonnet-20240229")
# For more information, see the langchain documentation:
# https://python.langchain.com/v0.1/docs/modules/model_io/chat/quick_start/
# --------------------------------------------------------------------


# Define a callback to run when the user submits a message
@chat.on_user_submit
async def _():
    # Get all the messages currently in the chat
    messages = chat.get_messages()
    # Create an async generator from the messages
    stream = llm.astream(messages)
    # Append the response stream into the chat
    await chat.append_message_stream(stream)
New file: Ollama example app
@@ -0,0 +1,24 @@
import ollama

from shiny.express import ui

ui.page_opts(
    title="Hello Ollama Chat",
    fillable=True,
    fillable_mobile=True,
)

# Create and display empty chat
chat = ui.Chat(id="chat")
chat.ui()


# On user submit, generate and append a response
@chat.on_user_submit
async def _():
    response = ollama.chat(
        model="llama3",
        messages=chat.get_messages(),
        stream=True,
    )
    await chat.append_message_stream(response)
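
Note that ollama.chat(..., stream=True) returns a regular (synchronous) generator, while the commit history shows append_message_stream() also gained AsyncIterable support, so both kinds of iterable should pass straight through. A sketch with a hand-rolled generator (the plain-string chunks are an assumption about what the stream normalizer accepts):

```python
def fake_stream():
    # Stand-in for a provider's streaming response
    yield "Hello, "
    yield "world!"


@chat.on_user_submit
async def _():
    await chat.append_message_stream(fake_stream())
```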
New file: OpenAI example app
@@ -0,0 +1,36 @@
# pyright: basic
from openai import AsyncOpenAI

from shiny.express import ui

ui.page_opts(
    title="Hello OpenAI Chat",
    fillable=True,
    fillable_mobile=True,
)

# Create a chat instance, with an initial message
chat = ui.Chat(
    id="chat",
    messages=[
        {"content": "Hello! How can I help you today?", "role": "assistant"},
    ],
    # assistant_response_transformer=lambda x: HTML(f"<h1>{x}</h1>"),
)

# Display the chat
chat.ui()

# Create the LLM client (assumes OPENAI_API_KEY is set in the environment)
client = AsyncOpenAI()


# On user submit, generate and append a response
@chat.on_user_submit
async def _():
    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=chat.get_messages(),
        stream=True,
    )
    await chat.append_message_stream(response)
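
All of these examples assume the provider's API key is set in the environment, and the commit history recommends dotenv for managing credentials. A typical setup (the .env contents shown are placeholders):

```python
# pip install python-dotenv; put e.g. OPENAI_API_KEY=sk-... in a .env file
from dotenv import load_dotenv

# Loads key=value pairs from .env into the process environment;
# returns True if an env file was found and loaded
load_dotenv()
```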