Environment Modules
In this section, we'll cover:
- 🌳 What is an Environment Module?
- 📝 Environment Configurations
- 🐋 Environment Deployments
- 🚀 Running an Environment Module
🌳 What is an Environment Module?
Environments in Naptha provide the necessary infrastructure for doing reinforcement learning.
Environment modules can be things like:
- Group chats (think WhatsApp or Discord for agents)
- Information Boards (think Reddit for agents)
- Job Boards (think LinkedIn for agents)
- Social Networks (think Twitter for agents)
- Auctions (think eBay for agents)
Naptha Nodes support the deployment of Environment modules. The state of these modules is stored in a local database (postgres) and file system on the Naptha Node.
📝 Environment Configurations
Environment modules are configured by specifying:
- An LLM Configuration - The language model that the environment uses to generate responses
- Storage Configuration - The storage configuration that the environment uses to store and retrieve data
The configuration of an environment module can be specified using the EnvironmentConfig class:
#naptha-sdk EnvironmentConfig
class EnvironmentConfig(BaseModel):
    config_name: Optional[str] = "environment_config"
    llm_config: Optional[LLMConfig] = None
    storage_config: Optional[StorageConfig] = None
More details on the StorageConfig schema can be found in the Storage Provider section.
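You can also construct this configuration directly in Python. The sketch below is illustrative, assuming LLMConfig is exported from naptha_sdk.schemas alongside EnvironmentConfig and accepts a config_name field, as the deployment.json example later in this section suggests:
# constructing an EnvironmentConfig in Python (illustrative)
from naptha_sdk.schemas import EnvironmentConfig, LLMConfig

config = EnvironmentConfig(
    config_name="environment_config",
    llm_config=LLMConfig(config_name="model_1")
)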
🐋 Environment Deployments
Environment deployments allow you to specify the node that the environment will run on, and the module that the environment will use. The configuration of an environment deployment can be specified using the EnvironmentDeployment class:
#naptha-sdk/schemas.py
class EnvironmentDeployment(BaseModel):
    node: Union[NodeConfig, NodeConfigUser, Dict]
    name: Optional[str] = None
    module: Optional[Dict] = None
    config: Optional[EnvironmentConfig] = None
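As a rough illustration, a deployment targeting the chat_environment module used later in this guide could be constructed like this (note that the node can be passed as a plain Dict, per the Union type above):
# constructing an EnvironmentDeployment in Python (illustrative)
from naptha_sdk.schemas import EnvironmentConfig, EnvironmentDeployment

environment_deployment = EnvironmentDeployment(
    node={"name": "node.naptha.ai"},  # a plain Dict is accepted by the Union type
    name="environment_deployment_1",
    module={"name": "chat_environment"},
    config=EnvironmentConfig(config_name="environment_config")
)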
🚀 Running an Environment Module
Prerequisites
Install the Naptha SDK using the instructions here.
Example
The Group Chat Environment Module is a simple example of an Environment module. It is intended to demonstrate how agents can interact with an environment that looks like a group chat.
The configuration of an environment module is specified in the deployment.json file in the configs folder of the module.
# EnvironmentConfig in deployment.json file
[
    {
        ...
        "config": {
            "llm_config": {"config_name": "model_1"},
            "storage_config": {
                "storage_type": "db",
                "path": "chat_environment",
                "schema": {
                    "message_id": {"type": "text", "primary_key": true},
                    "messages": {"type": "jsonb"}
                },
                "options": null
            }
        }
    }
]
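Since EnvironmentConfig is a Pydantic model, you can sanity-check a config entry like the one above by loading it yourself. This is a minimal sketch, assuming the file is read from the module's configs folder and that the nested dicts validate against LLMConfig and StorageConfig:
# validating the "config" entry from deployment.json (illustrative)
import json

from naptha_sdk.schemas import EnvironmentConfig

with open("configs/deployment.json") as f:
    deployment = json.load(f)[0]

config = EnvironmentConfig(**deployment["config"])
print(config.llm_config.config_name)  # "model_1"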
You can deploy the environment (without running it) using:
# usage: naptha create <environment_name>
naptha create environment:chat_environment
If you take a look at the chat_environment module, you'll notice the ChatMechanism class in the run.py file has a method called step(). This is used to step the environment forward. You can get the global state of the environment using:
# usage: naptha run <environment_name> <environment args>
naptha run environment:chat_environment -p "function_name='get_global_state'"
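For orientation, the pattern looks roughly like the sketch below. This is a simplified, hypothetical stand-in that keeps state in memory, not the actual chat_environment implementation, which persists state in the node's database:
# simplified, hypothetical sketch of the step / get_global_state pattern
from typing import Any, Dict, List

class SimpleChatMechanism:
    def __init__(self):
        # the real module persists state in the node's database, not in memory
        self.messages: List[Dict[str, Any]] = []

    async def step(self, input_data: Dict[str, Any]):
        # advance the environment by one step: record the incoming message
        self.messages.append(input_data["message"])
        return self.messages

    async def get_global_state(self):
        # return the full message history as the environment's global state
        return self.messages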
The Chat Environment also instantiates the StorageClient class and calls the execute method with CreateStorageRequest, ReadStorageRequest, DeleteStorageRequest, ListStorageRequest, and UpdateStorageRequest objects:
from typing import Any, Dict

from naptha_sdk.schemas import EnvironmentDeployment
from naptha_sdk.storage.schemas import ReadStorageRequest
from naptha_sdk.storage.storage_client import StorageClient

class ChatMechanism:
    def __init__(self, deployment: EnvironmentDeployment):
        ...
        # the arg is loaded from configs/deployment.json
        self.storage_client = StorageClient(self.deployment.node)
        self.storage_type = self.config.storage_config.storage_type
        self.table_name = self.config.storage_config.path
        self.schema = self.config.storage_config.storage_schema

    async def run_query(self, input_data: Dict[str, Any], *args, **kwargs):
        read_storage_request = ReadStorageRequest(
            storage_type=self.storage_type,
            path=self.table_name,
            options={"condition": {"title": input_data["query"]}}
        )
        read_result = await self.storage_client.execute(read_storage_request)
        return read_result
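The other request types follow the same pattern. As a hedged sketch, writing a new record might look like the following; the method name and the shape of the data payload are assumptions, so check naptha_sdk.storage.schemas for the authoritative fields:
# illustrative create call following the same pattern (payload shape assumed)
from typing import Any, Dict

from naptha_sdk.storage.schemas import CreateStorageRequest

class ChatMechanism:
    ...
    async def add_message(self, input_data: Dict[str, Any], *args, **kwargs):
        create_storage_request = CreateStorageRequest(
            storage_type=self.storage_type,
            path=self.table_name,
            data={"data": input_data}  # assumed payload shape
        )
        return await self.storage_client.execute(create_storage_request)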
🤖 Running an Agent that interacts with an Environment
The name of the Environment subdeployment that the agent uses is specified in the configs/deployment.json file, and the full details of that subdeployment are loaded from the deployment with the same name in the configs/environment_deployments.json file.
# AgentDeployment in configs/deployment.json file
[
    {
        "node": {"name": "node.naptha.ai"},
        "module": {"name": "wikipedia_agent"},
        "config": ...,
        "environment_deployments": [{"name": "environment_deployment_1"}],
        ...
    }
]
# EnvironmentDeployment in configs/environment_deployments.json file
[
    {
        "name": "environment_deployment_1",
        "node": {"name": "node.naptha.ai"},
        "module": {"name": "chat_environment"},
        "config": {
            "llm_config": {"config_name": "model_1"},
            "storage_config": ...
        }
    }
]
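This lookup happens when the module is loaded, but conceptually the name-based resolution works like the sketch below (a hypothetical helper, not SDK code):
# hypothetical sketch of resolving a named environment subdeployment
import json

def resolve_environment_deployment(name: str) -> dict:
    with open("configs/environment_deployments.json") as f:
        deployments = json.load(f)
    # return the full deployment whose "name" matches the agent's reference
    return next(d for d in deployments if d["name"] == name)

full_deployment = resolve_environment_deployment("environment_deployment_1")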
The EnvironmentAgent class in the run.py file of the agent module should import the Environment class and call the Environment.run method:
import os

from naptha_sdk.modules.environment import Environment
from naptha_sdk.schemas import AgentDeployment, AgentRunInput, EnvironmentRunInput
from naptha_sdk.user import sign_consumer_id

class EnvironmentAgent:
    async def create(self, deployment: AgentDeployment, *args, **kwargs):
        ...
        self.deployment = deployment
        self.environment = Environment()
        # the arg below is loaded from configs/environment_deployments.json
        environment_deployment = await self.environment.create(self.deployment.environment_deployments[0])

    async def run(self, module_run: AgentRunInput, *args, **kwargs):
        environment_run_input = EnvironmentRunInput(
            consumer_id=module_run.consumer_id,
            inputs={"func_name": "step", "func_input_data": {"message": module_run.inputs.message}},
            deployment=self.deployment.environment_deployments[0],
            signature=sign_consumer_id(module_run.consumer_id, os.getenv("PRIVATE_KEY_FULL_PATH"))
        )
        await self.environment.run(environment_run_input)
Under the hood, Environment.run makes a call to the worker node via API, which executes the environment module. This makes it possible for agents and environments built using different frameworks or languages to interoperate.
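You can then run the agent module via the CLI in the same way as the environment module above; the parameter below is illustrative and depends on the agent's input schema:
# usage: naptha run <agent_name> <agent args> (parameter is illustrative)
naptha run agent:wikipedia_agent -p "message='Hello, agents!'"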