Getting Started Tutorial
Let's create a simple flow that illustrates the core concepts of pyllments by building and serving a chat application with a persistent chat history.
Please make sure that you've installed pyllments if you want to follow along.
If you don't care about building flows, you can hop on over to the recipes section to run pre-built applications. Or skip to the very end to run our example flow.
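If you still need to install it, pyllments is distributed as a regular Python package (assuming you're installing from PyPI under the same name):

```bash
pip install pyllments
```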
1. Creating your first Element
The fundamental building block of pyllments is, as you may have guessed, an Element.
An element is composed of a Model that handles the business logic, Ports that handle the communication between elements, and, optionally, Views that handle the frontend representation of the element.
```python
from pyllments.elements import ChatInterfaceElement  # (1)

chat_interface_el = ChatInterfaceElement()  # (2)
```

1. Import the `ChatInterfaceElement` from the `pyllments.elements` subpackage. This is how to import an element efficiently.
2. Create an instance of the `ChatInterfaceElement`.
2. Adding more elements
Now that we're getting the hang of it, let's create a couple more.
```python
from pyllments.elements import LLMChatElement, HistoryHandlerElement

llm_chat_el = LLMChatElement(model_name='gpt-4o')  # (1)
history_handler_el = HistoryHandlerElement(  # (2)
    history_token_limit=1000,
    tokenizer_model='gpt-4o'
)
```

1. Create an instance of the `LLMChatElement` with the model name set to 'gpt-4o'. `LLMChatElement` uses the LiteLLM naming system and is compatible with the chat models supported by LiteLLM. All you need is the corresponding API key in an `.env` file.
2. Create an instance of the `HistoryHandlerElement` with the token limit set to 1000 tokens as measured by the gpt-4o tokenizer. This is the default tokenizer and can be expected to be a good enough estimate for most use cases.
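Since 'gpt-4o' is an OpenAI model, LiteLLM resolves its credentials from the `OPENAI_API_KEY` environment variable, so a minimal `.env` for this tutorial could look like this (the placeholder value is yours to fill in):

```
OPENAI_API_KEY=sk-...
```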
Creating Context
To avoid a completely lame chatbot, we want to combine our query with some context: a history of the previous messages, as well as a system prompt the LLM can use to guide its responses.
Under the hood, the ContextBuilderElement uses a payload conversion mapping from port types to generate MessagePayloads to be used as context.
```python
from pyllments.elements import ContextBuilderElement
from pyllments.payloads import MessagePayload  # assumed import path for MessagePayload

context_builder_el = ContextBuilderElement(
    input_map={  # (1)
        'system_prompt_constant': {  # (2)
            'role': 'system',
            'message': 'You are actually a pirate and will respond as such.'
        },
        'history': {'payload_type': list[MessagePayload]},  # (3)
        'query': {'payload_type': MessagePayload}  # (4)
    },
    emit_order=['system_prompt_constant', '[history]', 'query']  # (5)
)
```

1. The `input_map` is a mandatory argument to the `ContextBuilderElement`, as it describes the inputs we will be using to build our context.
2. One type of input is the constant. It is converted to a message of the specified role. It must have the `_constant` suffix. (The other types are ports and templates.)
3. The `history` input is a port that expects a `list[MessagePayload]` type.
4. The `query` input is a port that expects a single `MessagePayload`.
5. The `emit_order` argument is a list of the input keys in the order we want them to be emitted. When all inputs are available, we emit a list of messages. The square brackets around `[history]` indicate that it is optional.
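To make the emission concrete, here is a rough sketch of the list that gets emitted once `query` arrives, using plain dicts in place of the actual `MessagePayload` objects:

```python
# Illustrative only: the real flow passes MessagePayload objects, not dicts.
system_prompt = {'role': 'system', 'content': 'You are actually a pirate and will respond as such.'}
history = [
    {'role': 'user', 'content': 'Ahoy!'},
    {'role': 'assistant', 'content': 'Ahoy, matey!'},
]
query = {'role': 'user', 'content': 'What be a flow?'}

# emit_order=['system_prompt_constant', '[history]', 'query'] yields, in order:
context = [system_prompt, *history, query]  # history is simply skipped when absent
```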
3. Your first flow
Let's take a moment to think about what we want to achieve. We are creating a chat interface which uses an LLM to respond to messages while also taking into account the history of the conversation.
Each individual element has its own unique set of input and output ports, as well as a designated Payload type it either emits or receives. In this case, we're only using the `MessagePayload` and `list[MessagePayload]` types. For an output port to connect to an input port, its payload type must be compatible with the input port's payload type.
Port name nomenclature
The `_emit_input` suffix tends to signify that, upon the reception of a Payload, the Element will emit a Payload in return.
For the ContextBuilderElement to connect to the LLMChatElement, the `messages_emit_input` port of the LLMChatElement must be able to accept a `list[MessagePayload]` type.
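For instance, using the ports from this tutorial, the first connection below is valid because both sides deal in `list[MessagePayload]`, while the second would not be, since a single-message output cannot feed a list-typed input (a sketch, kept as comments):

```python
# Compatible (this exact connection is made in step 4 below):
#   context_builder_el.ports.messages_output > llm_chat_el.ports.messages_emit_input
# Incompatible, so not allowed (single MessagePayload into a list-typed input):
#   llm_chat_el.ports.message_output > llm_chat_el.ports.messages_emit_input
```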
To facilitate the proper communication between the elements (the sketch after this list traces the same path):

- When we type a message into the `ChatInterfaceElement` and hit send, in addition to rendering it in the chat feed, it emits a `MessagePayload` through the `user_message_output` port.
  - The `ContextBuilderElement` receives the `MessagePayload` through the `query` port. More on this below.
  - The `HistoryHandlerElement` receives the `MessagePayload` through the `messages_input` port. This message is incorporated into the running history it contains. This does not trigger an emission.
- When the `ContextBuilderElement` receives the `MessagePayload` through the `query` port, the condition is satisfied for the emission of a list of messages. Remember, the `history` port is optional, so it need not receive any payload for us to trigger the emission. It is simply skipped in the `emit_order` when no history is present at that port.
- As the `LLMChatElement` receives the list of messages, it sends them to the LLM we have specified and emits a `MessagePayload` response through the `message_output` port.
- The `MessagePayload` is received by the `ChatInterfaceElement` and rendered in the chat feed. Note that the `message_emit_input` port also triggers the emission of that very same message once it has finished streaming to the chat feed, this time out of the `assistant_message_output` port.
- The message is received by the `HistoryHandlerElement` in its `message_emit_input` port. This triggers it to emit its message history as a `list[MessagePayload]` to the `ContextBuilderElement`. Now, when we send a new message through our interface, the history will be included in the context.
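To summarize, here is the path a single turn takes through the flow, with the port names used in the connection code below:

```python
# 1. ChatInterface.user_message_output -> ContextBuilder.query
#                                      -> HistoryHandler.messages_input  (stored; no emission yet)
# 2. ContextBuilder.messages_output -> LLMChat.messages_emit_input
#    (emits [system_prompt, *history?, query] once 'query' arrives)
# 3. LLMChat.message_output -> ChatInterface.message_emit_input  (reply streams into the feed)
# 4. ChatInterface.assistant_message_output -> HistoryHandler.message_emit_input
# 5. HistoryHandler.messages_output -> ContextBuilder.history  (included on the next turn)
```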
4. Connecting the elements
Now that we have a flow in mind, connecting the elements is a breeze.
```python
chat_interface_el.ports.user_message_output > context_builder_el.ports.query
chat_interface_el.ports.user_message_output > history_handler_el.ports.messages_input
# feed the streamed assistant reply into the history; this triggers the history emission
chat_interface_el.ports.assistant_message_output > history_handler_el.ports.message_emit_input
history_handler_el.ports.messages_output > context_builder_el.ports.history
context_builder_el.ports.messages_output > llm_chat_el.ports.messages_emit_input
llm_chat_el.ports.message_output > chat_interface_el.ports.message_emit_input
```

The ports are accessed using dot notation on the `ports` attribute of the element. In the case of `llm_chat_el.ports.message_output > chat_interface_el.ports.message_emit_input`, we are connecting an output port of the LLMChatElement to an input port of the ChatInterfaceElement using the `>` operator, with the output port on the left-hand side. It is equivalent to `llm_chat_el.ports.message_output.connect(chat_interface_el.ports.message_emit_input)`.
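Note also that a single output port can feed several inputs, as `user_message_output` does above. And if you prefer a method call over the operator, the two spellings below are interchangeable:

```python
# These two lines make the same connection:
llm_chat_el.ports.message_output > chat_interface_el.ports.message_emit_input
# llm_chat_el.ports.message_output.connect(chat_interface_el.ports.message_emit_input)
```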
5. Creating the views
After connecting the elements, we can create the views responsible for generating the visual components of our application.
```python
import panel as pn  # (1)

interface_view = chat_interface_el.create_interface_view(width=600, height=800)  # (2)
chat_history_view = history_handler_el.create_context_view(width=220)  # (3)
model_selector_view = llm_chat_el.create_model_selector_view()  # (4)

main_view = pn.Column(  # (5)
    model_selector_view,
    pn.Spacer(height=10),
    pn.Row(
        interface_view,
        pn.Spacer(width=10),
        chat_history_view
    ),
    styles={'width': 'fit-content'}
)
```

1. The Panel library is imported to help with the view layout. The front end of pyllments is built using Panel, and it supports rendering Panel widgets and panes within pyllments applications.
2. `interface_view` is created by calling the `create_interface_view` method of the `ChatInterfaceElement`. This view is a wrapper around the `chat_input_view`, `chat_feed_view`, and `send_button_view`. The height and width are specified in pixels.
3. `chat_history_view` is created by calling the `create_context_view` method of the `HistoryHandlerElement`. This view contains the current chat history which is sent to the LLM. Here, only the width is specified, as the height will stretch to fit its container.
4. `model_selector_view` is created by calling the `create_model_selector_view` method of the `LLMChatElement`. This view allows us to select the model we wish to chat with. The width isn't specified because we want it to stretch to fit its container.
5. Lastly, we use the Panel row and column layout helpers to organize the views. The spacers create some visual space between the views and neaten things up.
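And since `main_view` is ordinary Panel, you can mix in any other Panel component; for example, a hypothetical header pane (not part of the tutorial flow):

```python
header = pn.pane.Markdown('## Pirate chat')  # any Panel pane or widget works here
main_view = pn.Column(header, main_view)
```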
6. Serve your flow as an application
To create an application from your flow, create a function decorated with the @flow decorator that returns a view object. Every time the page is reloaded, the code in that function is executed. This means you have the option of instantiating the elements once and reusing them across reloads, or re-instantiating them on every reload, as the two variants below show.
Reusing the same elements across reloads, by creating them at module level:

```python
from pyllments import flow

# {{ Element creation here }}

@flow
def my_flow():
    # {{ View creation here }}
    return main_view
```

Creating fresh elements on every reload, by creating them inside the function:

```python
from pyllments import flow

@flow
def my_flow():
    # {{ Element and view creation here }}
    return main_view
```

Make sure that your `.env` file is in your working directory or one of its parent directories. (Alternatively, you can specify the path to the `.env` file using the `--env` flag.)
Save your code as a Python file, `my_flow.py`, and serve it:

```bash
pyllments serve my_flow.py
```

Add a `--logging` flag to see under the hood.
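The serve command accepts a number of other options, shown in the full help output below; for instance, to pick a different port and point at an explicit env file (illustrative values):

```bash
pyllments serve my_flow.py --port 8001 --env ../.env
```

With the defaults, the app is served at http://127.0.0.1:8000.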
```
pyllments serve --help

 Usage: pyllments serve [OPTIONS] FILENAME

 Start a Pyllments server

╭─ Arguments ──────────────────────────────────────────────────────────────────────────╮
│ *  filename      TEXT  [default: None] [required]                                    │
╰──────────────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────────────╮
│ --logging          --no-logging             Enable logging. [default: no-logging]    │
│ --logging-level                    TEXT     Set logging level. [default: INFO]       │
│ --no-gui           --no-no-gui              Don't look for GUI components.           │
│                                             [default: no-no-gui]                     │
│ --port                             INTEGER  Port to run server on. [default: 8000]   │
│ --env                              TEXT     Path to .env file. [default: None]       │
│ --host             -H              TEXT     Network interface to bind the server     │
│                                             to. Defaults to localhost (127.0.0.1)    │
│                                             for safer local development.             │
│                                             [default: 127.0.0.1]                     │
│ --profile          --no-profile             Enable profiling output.                 │
│                                             [default: no-profile]                    │
│ --config           -c              TEXT     Additional configuration options for     │
│                                             the served file. Provide either          │
│                                             multiple key=value pairs or a single     │
│                                             dictionary literal (e.g. '{"key":        │
│                                             "value"}').                              │
│ --help                                      Show this message and exit.              │
╰──────────────────────────────────────────────────────────────────────────────────────╯
```

7. Putting it all together
```python
import panel as pn
from pyllments import flow
from pyllments.elements import (
    ChatInterfaceElement,
    LLMChatElement,
    HistoryHandlerElement,
    ContextBuilderElement
)
from pyllments.payloads import MessagePayload  # assumed import path for MessagePayload

chat_interface_el = ChatInterfaceElement()
llm_chat_el = LLMChatElement(model_name='gpt-4o')
history_handler_el = HistoryHandlerElement(
    history_token_limit=1000,
    tokenizer_model='gpt-4o'
)
context_builder_el = ContextBuilderElement(
    input_map={
        'system_prompt_constant': {
            'role': 'system',
            'message': 'You are actually a pirate and will respond as such.'
        },
        'history': {'payload_type': list[MessagePayload]},
        'query': {'payload_type': MessagePayload}
    },
    emit_order=['system_prompt_constant', '[history]', 'query']
)

chat_interface_el.ports.user_message_output > context_builder_el.ports.query
chat_interface_el.ports.user_message_output > history_handler_el.ports.messages_input
# feed the streamed assistant reply into the history; this triggers the history emission
chat_interface_el.ports.assistant_message_output > history_handler_el.ports.message_emit_input
history_handler_el.ports.messages_output > context_builder_el.ports.history
context_builder_el.ports.messages_output > llm_chat_el.ports.messages_emit_input
llm_chat_el.ports.message_output > chat_interface_el.ports.message_emit_input

interface_view = chat_interface_el.create_interface_view(width=600, height=800)
chat_history_view = history_handler_el.create_context_view(width=220)
model_selector_view = llm_chat_el.create_model_selector_view()

@flow
def my_flow():
    main_view = pn.Column(
        model_selector_view,
        pn.Spacer(height=10),
        pn.Row(
            interface_view,
            pn.Spacer(width=10),
            chat_history_view
        ),
        styles={'width': 'fit-content'}
    )
    return main_view
```

CLI:

```bash
pyllments serve my_flow.py --logging
```

