A TypeScript toolkit for integrating Large Language Models (LLMs) with tool execution capabilities. It handles tool calls, manages chat sessions, and executes operations on behalf of the model.
- **Clean API**: Simple interface with minimal setup required
- **Type Safety**: Full TypeScript support with type definitions
- **Session Management**: Session handling with abort controls and permission inheritance
- **Context**: System prompts with environment-aware ContextSys integration
- **Tool Execution**: Tool registration with error handling and validation
- **Permission System**: Approval controls for tool execution with `approve`, `deny`, and `allow_all` options
- **Security**: Security measures with tool usage restrictions and validation
- **Streaming Support**: API for both streaming and standard responses
- **Event-Driven**: Real-time callbacks for thinking, messages, tool calls, and results
> [!NOTE]
> This toolkit provides LLM-tool integration with permission controls. The Orchestrator handles chat interactions, tool execution, and user permissions, while ContextSys provides environment context for AI responses.
```mermaid
sequenceDiagram
    participant User
    participant Orchestrator
    participant ContextSys
    participant ChatManager
    participant ToolExecutor
    participant LLM
    participant Tools
    User->>Orchestrator: Send chat request
    Orchestrator->>ContextSys: Get system prompt
    ContextSys-->>Orchestrator: Return context
    Orchestrator->>ChatManager: Add user message + system context
    Orchestrator->>LLM: Send request with tools
    LLM-->>Orchestrator: Response with tool calls
    loop For each tool call
        Orchestrator->>User: Request permission
        User-->>Orchestrator: Approve/Deny/Allow All
        alt Permission Granted
            Orchestrator->>ToolExecutor: Execute tool
            ToolExecutor->>Tools: Run tool logic
            Tools-->>ToolExecutor: Return result
            ToolExecutor-->>Orchestrator: Tool result
            Orchestrator->>ChatManager: Add tool result to session
        else Permission Denied
            Orchestrator->>User: Abort session
        end
    end
    Orchestrator->>LLM: Send updated context
    LLM-->>Orchestrator: Final response
    Orchestrator->>ChatManager: Add final message
    Orchestrator-->>User: Return session with abort capability
```
- Core: Tool execution logic and validation
- Integrator: Chat orchestration, session management, and permission handling
- Interfaces: TypeScript type definitions for all components
- Schemas: Tool schema definitions for LLM integration
- Utils: Utility functions for ID generation and common operations
- **Create Ollama Account**
  - Sign up at ollama.com
  - Create an API key by visiting ollama.com/settings/keys
- **Clone Repository**

  ```bash
  git clone https://github.com/NeaByteLab/LLM-Toolkit.git
  cd LLM-Toolkit
  ```

- **Environment Setup**

  ```bash
  # Create environment file
  echo "OLLAMA_KEY=your_api_key_here" > .env

  # Edit .env and replace with your actual API key
  # OLLAMA_KEY=your_actual_api_key_here
  ```

- **Install Dependencies**

  ```bash
  npm install
  ```

Run the demo:

```bash
npx tsx ./src/index.ts
```

This demonstrates the complete toolkit with:
- Permission system (denies TerminalCmd, approves FileCreate/FileEdit)
- Streaming responses
- Auto-abort after 3 seconds
- All event callbacks (a hedged wiring sketch follows this list)
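A minimal sketch of that wiring is shown below. Apart from `chat()` and `onAskPermission`, which appear later in this README, the callback names (`onThinking`, `onMessage`, `onToolCall`, `onToolResult`) are illustrative placeholders rather than the toolkit's confirmed option names:

```typescript
// Illustrative sketch: `orchestrator` is an Orchestrator instance created elsewhere
// in your setup; only chat() and onAskPermission are confirmed by this README.
const session = await orchestrator.chat('Create a TODO.md file', {
  onAskPermission: (data) => {
    // Mirror the demo policy: deny terminal commands, approve everything else
    return data.toolName === 'TerminalCmd' ? { action: 'deny' } : { action: 'approve' }
  },
  onThinking: (text) => console.log('[thinking]', text),
  onMessage: (text) => process.stdout.write(text), // streamed tokens
  onToolCall: (call) => console.log('[tool call]', call),
  onToolResult: (result) => console.log('[tool result]', result)
})
```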
```
src/
├── schemas/              # Tool schema definitions
├── core/
│   ├── base/             # Tool implementation logic
│   └── ToolExecutor.ts   # Tool registration & execution
```
- **Create Schema** (`/src/schemas/YourTool.ts`)

  ```typescript
  export default {
    type: 'function',
    function: {
      name: 'your_tool_name',
      description: 'What your tool does',
      parameters: {
        type: 'object',
        properties: {
          param1: { type: 'string', description: 'Description' }
        },
        required: ['param1']
      }
    }
  }
  ```
- **Implement Logic** (`/src/core/base/YourTool.ts`)

  ```typescript
  export default class YourTool {
    private readonly param1: string

    constructor(args: SchemaYourTool) {
      const { param1 } = args
      this.param1 = param1
    }

    async execute(): Promise<string> {
      const resValidate = this.validate()
      if (resValidate !== 'ok') {
        return resValidate
      }
      // Your logic here
      return 'Success message'
    }

    private validate(): string {
      if (typeof this.param1 !== 'string') {
        return '`param1` must be a string.'
      }
      return 'ok'
    }
  }
  ```

  (A sketch of the `SchemaYourTool` argument type follows the registration step below.)
- **Register in `ToolExecutor.ts`**

  ```typescript
  // Add import
  import YourTool from '@core/base/YourTool'
  import type { SchemaYourTool } from '@root/interfaces/index'

  // Add to switch statement
  case 'your_tool_name':
    return new YourTool(args as SchemaYourTool).execute()
  ```
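The implementation and registration steps above reference a `SchemaYourTool` type imported from `@root/interfaces/index`. As a rough guide, a matching definition for this example would look like the sketch below; the property shape is inferred from the example schema, not copied from the toolkit's actual interfaces file:

```typescript
// Hypothetical shape of SchemaYourTool, inferred from the example schema above.
// The real definition lives under src/interfaces/ and may differ.
export interface SchemaYourTool {
  /** Matches the `param1` property declared in /src/schemas/YourTool.ts */
  param1: string
}
```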
**System Prompt** (`/src/integrator/ContextSys.ts`)

To edit the AI's behavior and personality:

- Modify the `getSystemPrompt()` method (a sketch follows this list)
- Add or remove capabilities, guidelines, or instructions
- Customize the AI agent's behavior and personality
- Update security guidelines or tool usage rules
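As a rough illustration only (the prompt text and method body here are invented, not the toolkit's actual `ContextSys` content), a customized `getSystemPrompt()` might look like this:

```typescript
// Illustrative sketch: the real ContextSys implementation and prompt text differ.
export class ContextSys {
  getSystemPrompt(): string {
    return [
      'You are a helpful coding assistant.',
      'Capabilities: read, create, and edit files through the registered tools.',
      'Guidelines: always ask before running terminal commands.',
      'Security: never expose API keys or modify files outside the project root.'
    ].join('\n')
  }
}
```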
**Context Information** (`/src/integrator/ContextEnv.ts`)

To edit the environment context:

- Modify `getContext()` to change the format or add/remove information
- Add new methods to gather additional system information (see the sketch after this list)
- Customize the time format in `getTimeInfo()`
- Add more OS details in `getOSInfo()`
- Include additional path information in `getPathInfo()`
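For example, gathering one extra piece of environment information might look roughly like the following; only the method names `getContext()` and `getOSInfo()` come from this README, while `getNodeInfo()` and all method bodies are illustrative assumptions:

```typescript
import * as os from 'node:os'

// Illustrative sketch of extending ContextEnv; the real class differs.
export class ContextEnv {
  getContext(): string {
    // Hypothetical: append the new section to the context string
    return [this.getOSInfo(), this.getNodeInfo()].join('\n')
  }

  getOSInfo(): string {
    return `OS: ${os.platform()} ${os.release()} (${os.arch()})`
  }

  // Hypothetical new method: report the Node.js runtime version
  getNodeInfo(): string {
    return `Node.js: ${process.version}`
  }
}
```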
The toolkit includes a permission system for controlling tool execution:
- `approve`: Allow this specific tool call
- `deny`: Block this tool call and abort the session
- `allow_all`: Approve this call and all future calls in this session
```typescript
const session = await orchestrator.chat(message, {
  onAskPermission: (data) => {
    console.log(`Permission requested for: ${data.toolName}`)
    // Custom permission logic
    if (data.toolName === 'TerminalCmd') {
      return { action: 'deny' } // Block terminal commands
    }
    return { action: 'approve' } // Allow everything else
  },
  // ... other options
})
```

```typescript
// Abort specific session
session.abort()

// Check if session is active
const isActive = session.isActive()

// Abort all sessions
orchestrator.abort()
```

- Auto-creation: Sessions are created when you call `chat()`
- Context injection: System prompt is added on the first message
- Permission tracking: Session remembers "allow all" settings (see the sketch below)
- Termination: Abort stops all operations
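Putting these pieces together, a session that approves every tool call up front and aborts itself after a timeout (similar to the bundled demo's 3-second auto-abort) could be wired roughly like this; the message text and timeout value are illustrative:

```typescript
// Illustrative wiring based on the chat/abort API shown above.
const session = await orchestrator.chat('List the files in this project', {
  onAskPermission: () => {
    // Approve this call and every later call in the session
    return { action: 'allow_all' }
  }
})

// Safety net: stop the session if it is still running after 3 seconds
setTimeout(() => {
  if (session.isActive()) {
    session.abort()
  }
}, 3000)
```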
- Embedding module (see its README): Text vectorization and similarity search using transformer models
- Provides semantic search, text similarity, and content clustering capabilities
- Independent module for advanced text processing features
This project is licensed under the MIT license. See the LICENSE file for more info.