Getting Started Tutorial
Let's create a simple flow that helps illustrate the core concepts of pyllments by building and serving a chat application with a persistent chat history.
Please make sure that you've installed pyllments if you want to follow along.
If you don't care about building flows, you can hop on over to the recipes section to run pre-built applications. Or skip to the very end to run our example flow.
1. Creating your first Element
The fundamental building block of pyllments is, as you may have guessed, an Element.
An element is composed of a Model that handles the business logic, Ports that handle the communication between elements, and optionally, Views that handle the frontend representation of the element.
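from pyllments.elements import ChatInterfaceElement  # (1)

chat_interface_el = ChatInterfaceElement()  # (2)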
- (1) Import the ChatInterfaceElement from the pyllments.elements subpackage. This is how to import an element efficiently.
- (2) Create an instance of the ChatInterfaceElement.
2. Adding more elements
Now that we're getting the hang of it, let's create a couple more.
from pyllments.elements import LLMChatElement, HistoryHandlerElement
llm_chat_el = LLMChatElement(model_name='gpt-4o')  # (1)
history_handler_el = HistoryHandlerElement(  # (2)
    history_token_limit=1000,
    tokenizer_model='gpt-4o'
)
- (1) Create an instance of the LLMChatElement with the model name set to 'gpt-4o'. LLMChatElement uses the LiteLLM naming system and is compatible with the chat models supported by LiteLLM. All you need is the corresponding API key in a .env file.
- (2) Create an instance of the HistoryHandlerElement with the token limit set to 1000 tokens as measured by the gpt-4o tokenizer. This is the default tokenizer and can be expected to be a good enough estimate for most use cases.
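For example, since 'gpt-4o' resolves to an OpenAI model under LiteLLM's naming scheme, a minimal .env file would just contain your OpenAI key (the variable name depends on your provider):

OPENAI_API_KEY=sk-...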
Creating Context
So that our chatbot isn't completely lame, we want to combine our query with some context: a history of the previous messages, as well as a system prompt it can use to guide its responses.
Under the hood, the ContextBuilderElement uses a payload conversion mapping from port types to generate the MessagePayloads to be used as context. (See Here)
from pyllments.payloads import MessagePayload  # import path assumed

context_builder_el = ContextBuilderElement(
    input_map={  # (1)
        'system_prompt_constant': {  # (2)
            'role': 'system',
            'message': 'You are actually a pirate and will respond as such.'
        },
        'history': {'payload_type': list[MessagePayload]},  # (3)
        'query': {'payload_type': MessagePayload}  # (4)
    },
    emit_order=['system_prompt_constant', '[history]', 'query']  # (5)
)
- (1) The input_map is a mandatory argument to the ContextBuilderElement, as it describes the inputs we will be using to build our context.
- (2) One type of input is the constant. It is converted to a message of a specified role. It must have the _constant suffix. (The other input types are ports and templates.)
- (3) The history input is a port that expects a list[MessagePayload] type.
- (4) The query input is a port that expects a MessagePayload.
- (5) The emit_order argument is a list of the input keys in the order we want them to be emitted. When all inputs are available, we emit a list of messages. The square brackets around [history] indicate that it is optional.
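To make the emission rules concrete, here is a sketch (in comments, since the exact MessagePayload repr isn't shown here) of what gets emitted in the two cases:

# Case 1: only 'query' has arrived -- '[history]' is optional, so we emit anyway:
#   [system_prompt_constant, query]
#   -> [<system: 'You are actually a pirate...'>, <user: 'Hello!'>]
#
# Case 2: a list[MessagePayload] is waiting at 'history' when 'query' arrives:
#   [system_prompt_constant, history..., query]
#   -> [<system: ...>, <user: ...>, <assistant: ...>, <user: 'Any good loot?'>]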
3. Your first flow
Let's take a moment to think about what we want to achieve. We are creating a chat interface which uses an LLM to respond to messages while also taking into account the history of the conversation.
Below, you can see that each individual element has its own unique set of input and output ports, as well as a designated Payload type it either emits or receives. In this case, we're only using the MessagePayload and list[MessagePayload] types. For an output port to connect to an input port, its payload type must be compatible with the input port's payload type.
Port name nomenclature
The _emit_input suffix tends to be used to signify that upon the reception of a Payload, the Element will emit a Payload in return.
For the ContextBuilderElement to connect to the LLMChatElement, the messages_emit_input port of the LLMChatElement must be able to accept a list[MessagePayload] type.
To facilitate the proper communication between the elements:
- When we type a message into the ChatInterfaceElement and hit send, in addition to rendering it in the chatfeed, it emits a MessagePayload through the user_message_output port.
  - The ContextBuilderElement receives the MessagePayload through the query port. More on this below.
  - The HistoryHandlerElement receives a MessagePayload through the messages_input port. This message is incorporated into the running history it contains. This does not trigger an emission.
- When the ContextBuilderElement receives the MessagePayload through the query port, the condition is satisfied for the emission of a list of messages. Remember, the history port is optional, so it need not receive any payload for us to trigger the emission. It is simply ignored from the emit_order when no history is present at that port.
- As the LLMChatElement receives the list of messages, it sends them to the LLM we have specified and emits a MessagePayload response through the message_output port.
- The MessagePayload is received by the ChatInterfaceElement and rendered in the chatfeed. However, we should note that the message_emit_input port also triggers the emission of that very same message after it has streamed to the chatfeed, this time out of the assistant_message_output port.
- The message is received by the HistoryHandlerElement in its message_emit_input port. This triggers it to emit its message history as a list[MessagePayload] to the ContextBuilderElement. Now, when we send a new message through our interface, the history will be included in the context.
4. Connecting the elements
Now that we have a flow in mind, connecting the elements is a breeze.
chat_interface_el.ports.user_message_output > context_builder_el.ports.query
chat_interface_el.ports.user_message_output > history_handler_el.ports.messages_input
chat_interface_el.ports.assistant_message_output > history_handler_el.ports.message_emit_input
history_handler_el.ports.messages_output > context_builder_el.ports.history
context_builder_el.ports.messages_output > llm_chat_el.ports.messages_emit_input
llm_chat_el.ports.message_output > chat_interface_el.ports.message_emit_input
The ports are accessed using dot notation on the ports attribute of the element. In the case of llm_chat_el.ports.message_output > chat_interface_el.ports.message_emit_input, we are connecting an output port of the LLMChatElement to an input port of the ChatInterfaceElement using the > operator, with the output port on its left-hand side. It is equivalent to llm_chat_el.ports.message_output.connect(chat_interface_el.ports.message_emit_input).
5. Creating the views
After connecting the elements, we can create the views responsible for generating the visual components of our application.
import panel as pn  # (1)

interface_view = chat_interface_el.create_interface_view(width=600, height=800)  # (2)
chat_history_view = history_handler_el.create_context_view(width=220)  # (3)
model_selector_view = llm_chat_el.create_model_selector_view()  # (4)

main_view = pn.Column(  # (5)
    model_selector_view,
    pn.Spacer(height=10),
    pn.Row(
        interface_view,
        pn.Spacer(width=10),
        chat_history_view
    ),
    styles={'width': 'fit-content'}
)
- (1) The panel library is imported to help with the view layout. The front end of pyllments is built using Panel, and supports rendering Panel widgets and panes within pyllments applications.
- (2) interface_view is created by calling the create_interface_view method of the ChatInterfaceElement. This view is a wrapper around the chat_input_view, chat_feed_view, and send_button_view. The height and width are specified in pixels.
- (3) chat_history_view is created by calling the create_context_view method of the HistoryHandlerElement. This view contains the current chat history which is sent to the LLM. Here, only the width is specified, as the height will stretch to fit its container.
- (4) model_selector_view is created by calling the create_model_selector_view method of the LLMChatElement. This view allows us to select the model we wish to chat with. The width isn't specified because we want it to stretch to fit its container.
- (5) Lastly, we use the Panel row and column layout helpers to organize the views. The spacers are used to create some visual space between the views and neaten things up.
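Because the front end renders ordinary Panel components, you can slot your own widgets and panes into the layout. A minimal sketch (the Markdown title is a hypothetical addition, not part of the tutorial's flow):

title = pn.pane.Markdown('# Pirate Chat')

main_view = pn.Column(
    title,  # an ordinary Panel pane rendered alongside pyllments views
    model_selector_view,
    pn.Spacer(height=10),
    pn.Row(
        interface_view,
        pn.Spacer(width=10),
        chat_history_view
    ),
    styles={'width': 'fit-content'}
)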
6. Serve your flow as an application
To create an application from your flow, you must create a function decorated with the @flow decorator that returns a view object. Every time the page is reloaded, the code in that function will be executed. This means that you have the option of instantiating the elements every single time the page is reloaded, or reusing them.
from pyllments import flow

# Option 1: elements created once at import time and reused across page loads
# {{ Element creation here }}

@flow
def my_flow():
    # {{ View creation here }}
    return main_view

from pyllments import flow

# Option 2: elements and views recreated on every page load
@flow
def my_flow():
    # {{ Element and view creation here }}
    return main_view
Make sure that your .env file is in your working directory or one of its parent directories. (Alternatively, you can specify the path to the .env file using the --env flag.)
Save your code as a Python file, my_flow.py, and serve it:
pyllments serve my_flow.py
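If your .env file isn't in your working directory or a parent, point to it explicitly (config/.env is a hypothetical path):

pyllments serve my_flow.py --env config/.env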
Add a --logging flag to see under the hood.
pyllments serve --help
Usage: pyllments serve [OPTIONS] FILENAME

 Start a Pyllments server

╭─ Arguments ──────────────────────────────────────────────────────────────────╮
│ *  filename      TEXT  [default: None] [required]                            │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --logging          --no-logging           Enable logging.                    │
│                                           [default: no-logging]              │
│ --logging-level                  TEXT     Set logging level. [default: INFO] │
│ --no-gui           --no-no-gui            Don't look for GUI components.     │
│                                           [default: no-no-gui]               │
│ --port                           INTEGER  Port to run server on.             │
│                                           [default: 8000]                    │
│ --env                            TEXT     Path to .env file. [default: None] │
│ --host         -H                TEXT     Network interface to bind the      │
│                                           server to. Defaults to localhost   │
│                                           (127.0.0.1) for safer local        │
│                                           development. [default: 127.0.0.1]  │
│ --profile          --no-profile           Enable profiling output.           │
│                                           [default: no-profile]              │
│ --config       -c                TEXT     Additional configuration options   │
│                                           for the served file. Provide       │
│                                           either multiple key=value pairs or │
│                                           a single dictionary literal        │
│                                           (e.g. '{"key": "value"}').         │
│ --help                                    Show this message and exit.        │
╰──────────────────────────────────────────────────────────────────────────────╯
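For example, to serve on a different port and expose the app on your local network, you could combine the flags above like so:

pyllments serve my_flow.py --logging --host 0.0.0.0 --port 8001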
7. Putting it all together
import panel as pn
from pyllments import flow
from pyllments.elements import (
ChatInterfaceElement,
LLMChatElement,
HistoryHandlerElement,
ContextBuilderElement
)
from pyllments.payloads import MessagePayload  # import path assumed

chat_interface_el = ChatInterfaceElement()
llm_chat_el = LLMChatElement(model_name='gpt-4o')
history_handler_el = HistoryHandlerElement(
    history_token_limit=1000,
    tokenizer_model='gpt-4o'
)
context_builder_el = ContextBuilderElement(
    input_map={
        'system_prompt_constant': {
            'role': 'system',
            'message': 'You are actually a pirate and will respond as such.'
        },
        'history': {'payload_type': list[MessagePayload]},
        'query': {'payload_type': MessagePayload}
    },
    emit_order=['system_prompt_constant', '[history]', 'query']
)

chat_interface_el.ports.user_message_output > context_builder_el.ports.query
chat_interface_el.ports.user_message_output > history_handler_el.ports.messages_input
chat_interface_el.ports.assistant_message_output > history_handler_el.ports.message_emit_input
history_handler_el.ports.messages_output > context_builder_el.ports.history
context_builder_el.ports.messages_output > llm_chat_el.ports.messages_emit_input
llm_chat_el.ports.message_output > chat_interface_el.ports.message_emit_input

interface_view = chat_interface_el.create_interface_view(width=600, height=800)
chat_history_view = history_handler_el.create_context_view(width=220)
model_selector_view = llm_chat_el.create_model_selector_view()

@flow
def my_flow():
    main_view = pn.Column(
        model_selector_view,
        pn.Spacer(height=10),
        pn.Row(
            interface_view,
            pn.Spacer(width=10),
            chat_history_view
        ),
        styles={'width': 'fit-content'}
    )
    return main_view
CLI:
pyllments serve my_flow.py --logging