Use Opsmate for Automation
Opsmate is designed to be used predominantly via the CLI and Web UI. That said, it is also trivial to use it for high-level automation via the Python runtime. You can think of Opsmate as the "AppleScript" for your production environment.
In this cookbook we will show you how to use Opsmate to perform automation tasks.
Prerequisites
- You have an OpenAI API key (an Anthropic API key also works, as Opsmate is LLM provider agnostic).
- You have Opsmate installed - see getting started for more details.
Setup
First, let's set our API keys.
import getpass
import os
def _set_if_undefined(var: str) -> None:
    if os.environ.get(var):
        return
    os.environ[var] = getpass.getpass(var)

_set_if_undefined("OPENAI_API_KEY")  # Feel free to comment this out and use the Anthropic API key instead
_set_if_undefined("ANTHROPIC_API_KEY")
Introducing dino
Under the hood, Opsmate is powered by dino (short for "Dino IS NOT Opsmate"), a lightweight framework that allows you to write LLM-powered scripts in a functional manner.
Here are some of the core design principles of dino:
- Enable end-developers to write code in a high-level, functional manner.
- Abstract the implementation details of an execution procedure away from the code and delegate them to LLM tool calls, so that end-developers can focus on the business logic.
- Structured outputs out of the box rather than raw text, allowing easy validation, chaining, and integration with other functions, libraries, and tools.
- The LLM is swappable, allowing you to use different LLM providers without changing the code.
Getting Started
Let's start with a simple script that will show you how to use Opsmate for scripting.
from opsmate.dino import dino

@dino(model="gpt-4o-mini", response_model=str)
async def extract_phone_number(text: str):
    """
    Extract phone number digits from the text
    """
    return text

phone_number = await extract_phone_number("My phone number is 123-456-7890")
assert phone_number == "1234567890"
print(phone_number)
1234567890
In the script above, we have defined a function extract_phone_number that takes a text as input and returns the extracted phone number as output.
The @dino decorator takes the following arguments:
- model: the LLM model to use.
- response_model: the type of the output.
Note the "Extract phone number digits from the text" docstring in the function definition. It is used as the system prompt for the LLM, helping it understand the purpose of the function.
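Conceptually, the decorator reads the docstring at decoration time and the return value at call time. The following is a simplified illustration of that idea, not dino's actual implementation (toy_dino is a hypothetical stand-in):

```python
import inspect

def toy_dino(model: str, response_model: type):
    """A toy stand-in for @dino, showing only where the prompt pieces come from."""
    def decorator(fn):
        system_prompt = inspect.getdoc(fn)  # the docstring becomes the system prompt
        def wrapper(*args, **kwargs):
            user_message = fn(*args, **kwargs)  # the return value becomes the user message
            # a real implementation would send these to the LLM and parse the
            # response into response_model; here we just expose the request
            return {"model": model, "system": system_prompt, "user": user_message}
        return wrapper
    return decorator

@toy_dino(model="gpt-4o-mini", response_model=str)
def extract_phone_number(text: str):
    """Extract phone number digits from the text"""
    return text

request = extract_phone_number("My phone number is 123-456-7890")
print(request["system"])  # -> Extract phone number digits from the text
```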
Structured Outputs
Structured output is one of the core features of dino. It allows you to define the output type of your function in a structured manner, and dino will automatically parse the LLM's response into that type.
In the example above we defined the output type as str. dino also supports more complex and nuanced structures, namely Pydantic models.
Here is an example of structured output:
from pydantic import BaseModel, Field, field_validator
from typing import List

class UserInfo(BaseModel):
    name: str = Field(description="The name of the user")
    phone_number: str = Field(description="The phone number of the user, must be all digits")

    @field_validator("phone_number")
    def validate_phone_number(cls, v: str) -> str:
        if not v.isdigit():
            raise ValueError("Phone number must be all digits")
        return v

@dino(model="gpt-4o-mini", response_model=List[UserInfo])
async def extract_user_info(text: str):
    """
    Extract all the user information from the text
    """
    return text

user_infos = await extract_user_info("""
You can call Matt at 123-456-7890. John's number is the same except for the last digit being 1.
""")

assert len(user_infos) == 2
assert user_infos[0].name == "Matt"
assert user_infos[0].phone_number == "1234567890"
assert user_infos[1].name == "John"
assert user_infos[1].phone_number == "1234567891"
print(user_infos)
[UserInfo(name='Matt', phone_number='1234567890'), UserInfo(name='John', phone_number='1234567891')]
In the example above we not only defined the output type as a list of UserInfo, but also made sure that phone_number is all digits. Under the hood, if the LLM returns a phone number that is not all digits, dino will automatically retry the call to get a better result.
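The retry loop is driven by ordinary Pydantic validation. Here is a minimal sketch of the failure that triggers it, using plain Pydantic with no LLM involved:

```python
from pydantic import BaseModel, Field, ValidationError, field_validator

class UserInfo(BaseModel):
    name: str = Field(description="The name of the user")
    phone_number: str = Field(description="The phone number of the user, must be all digits")

    @field_validator("phone_number")
    def validate_phone_number(cls, v: str) -> str:
        if not v.isdigit():
            raise ValueError("Phone number must be all digits")
        return v

# A response containing dashes fails validation; this is the kind of error
# dino can feed back to the LLM when it retries
try:
    UserInfo(name="Matt", phone_number="123-456-7890")
    needs_retry = False
except ValidationError:
    needs_retry = True
print("needs retry:", needs_retry)  # -> needs retry: True
```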
Note that dino also comes with nuanced type hinting. If you hover over the extract_user_info function, your IDE/text editor will show it typed as follows:
(function) def extract_user_info(text: str) -> Awaitable[List[UserInfo]]
Extract all the user information from the text
Tool Calls
Most of the time you will need Opsmate to interact with the production environment, in which case you will need "tool calls", as the LLM alone has no knowledge of your system.
Here is an example of how to use tool calls to achieve your goal:
from opsmate.plugins import PluginRegistry as plugin
import structlog
import logging

structlog.configure(
    wrapper_class=structlog.make_filtering_bound_logger(logging.ERROR),
)
logger = structlog.get_logger(__name__)

plugin.discover()

# You can also import it directly via
# from opsmate.tools.command_line import ShellCommand as shell
shell = plugin.get_tool("ShellCommand")

class Info(BaseModel):
    cpus: int = Field(description="The number of vCPUs on the machine")
    rams: int = Field(description="The number of GB of RAM on the machine")

@dino(model="gpt-4o-mini", response_model=Info, tools=[shell])
async def run_command(instruction: str):
    """
    As a sys admin with access to a workstation, given the instruction,
    run the cli and return the result
    """
    return instruction

result = await run_command("How many cpus and rams on this machine?")
print(result)
cpus=8 rams=29
In the example above, we have defined a function run_command that takes an instruction as input and returns the result of the command as output.
We have also defined the output type as Info, a Pydantic model with two fields, cpus and rams, each carrying a description. This again is one of the benefits of using Pydantic for structured output:
- The annotations document the output format for clarity.
- They are also sent to the LLM as part of the prompt, allowing the LLM to understand the expected output format.
- Validation is performed by the Python runtime to ensure the legitimacy of the output.
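You can see exactly what travels with the model by inspecting its JSON schema, which Pydantic (v2 shown here) generates from the annotations; this is standard Pydantic behaviour rather than anything dino-specific:

```python
from pydantic import BaseModel, Field

class Info(BaseModel):
    cpus: int = Field(description="The number of vCPUs on the machine")
    rams: int = Field(description="The number of GB of RAM on the machine")

# Pydantic turns the field annotations into a JSON schema; the field
# descriptions travel with it, which is how they can reach the LLM
schema = Info.model_json_schema()
print(schema["properties"]["cpus"]["description"])
print(schema["properties"]["rams"]["description"])
```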
We have also attached the shell tool to the function, which lets the LLM invoke the ShellCommand tool.
Finally, we have called the run_command function with the instruction "How many cpus and rams on this machine?" and printed the result.
Built-in Tools
To see all the available tools you can run plugin.get_tools(). This will return a list of all the available tools, including:
- The builtin tools, which are shipped with Opsmate and shown in the table below.
- The custom tools you have defined - we will cover this in a later section.
import pandas as pd
df = pd.DataFrame(plugin.get_tools().items(), columns=["Tool Name", "Description"])
df["Description"] = df["Tool Name"].apply(lambda x: plugin.get_tool(x).__doc__.strip())
df
| | Tool Name | Description |
|---|---|---|
| 0 | FileAppend | FileAppend tool allows you to append to a file |
| 1 | FileDelete | FileDelete tool allows you to delete a file |
| 2 | FileRead | FileRead tool allows you to read a file |
| 3 | FileWrite | FileWrite tool allows you to write to a file |
| 4 | FilesFind | FilesFind tool allows you to find files in a d... |
| 5 | FilesList | FilesList tool allows you to list files in a d... |
| 6 | HttpCall | HttpCall tool allows you to call a URL\n Su... |
| 7 | HttpGet | HttpGet tool allows you to get the content of ... |
| 8 | HttpToText | HttpToText tool allows you to convert an HTTP ... |
| 9 | KnowledgeRetrieval | Knowledge retrieval tool allows you to search ... |
| 10 | ShellCommand | ShellCommand tool allows you to run shell comm... |
| 11 | SysEnv | SysEnv tool allows you to get the environment ... |
| 12 | SysStats | SysStats tool allows you to get the stats of a... |
| 13 | current_time | Get the current time in %Y-%m-%dT%H:%M:%SZ format |
| 14 | datetime_extraction | You are tasked to extract the datetime range f... |
"Agentic" via LLM Call as a Tool
By now you might wonder: can we make an LLM call itself a tool call? The answer is yes, and this is a powerful feature of dino and dtool.
Here is an example of using an LLM call as a tool call:
from opsmate.dino import dino
from opsmate.dino.tools import dtool
from typing import Annotated

@dtool
@dino("gpt-4o-mini", response_model=str, tools=[shell])
async def k8s_agent(
    question: Annotated[str, "The question to solve"],
) -> str:
    """
    k8s_agent is a tool that solves a problem using kubectl.
    """
    return f"answer the question: {question}"

@dino("gpt-4o", response_model=str, tools=[k8s_agent])
async def sre_manager(query: str):
    """
    You are a world class SRE manager who manages a team of SREs.
    """
    return f"answer the query: {query}"

result = await sre_manager("How many pods are running in the cluster?")
print(result)
There are 19 pods currently running in the Kubernetes cluster.
Model Swapping
The model you provide to the dino decorator is used as the sane default for executing the function, but it is also trivial to swap the model at runtime. The example below demonstrates how:
from opsmate.dino import dino
from typing import Literal

brand = Literal["openai", "anthropic"]

@dino(model="gpt-4o-mini", response_model=brand)
async def query_model():
    """
    Who creates ya?
    """
    return "The language model provider"

result = await query_model()
assert result == "openai"
print(f"When the model is gpt-4o-mini, the result is {result}")

result = await query_model(model="claude-3-5-sonnet-20241022")
assert result == "anthropic"
print(f"When the model is claude-3-5-sonnet-20241022, the result is {result}")
When the model is gpt-4o-mini, the result is openai
When the model is claude-3-5-sonnet-20241022, the result is anthropic
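As an aside, a Literal response model is enforced with ordinary Pydantic validation, so an out-of-vocabulary answer is rejected before it ever reaches your code. A quick sketch using Pydantic v2's TypeAdapter, with no LLM involved:

```python
from typing import Literal
from pydantic import TypeAdapter, ValidationError

brand = Literal["openai", "anthropic"]
adapter = TypeAdapter(brand)

print(adapter.validate_python("openai"))  # a valid answer passes through unchanged

try:
    adapter.validate_python("mistral")  # anything outside the Literal is rejected
    rejected = False
except ValidationError:
    rejected = True
print("rejected:", rejected)  # -> rejected: True
```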