# AI-Integrated Programming with MTLLM
This guide covers different ways you can use MTLLM to build AI-integrated software in Jaclang. From simple AI-powered functions to complex multi-agent systems, MTLLM provides the tools to seamlessly integrate Large Language Models into your applications. For truly agentic behavior that can reason, plan, and act autonomously, MTLLM offers the ReAct method with tool integration.
## Supported Models
MTLLM uses LiteLLM under the hood, allowing seamless integration with a wide range of models.
Note
There are many other supported models and model-serving platforms available through LiteLLM; please check the LiteLLM documentation for model names.
## Key MTLLM Features
LLM integration is a first-class feature in Jaclang, enabling you to build AI-powered applications with minimal effort. Here are some of the key features:
- Zero Prompt Engineering: Define function signatures and let MTLLM handle implementation
- Type Safety: Maintain strong typing while adding AI capabilities
- Tool Integration: Connect AI functions to external APIs and services
- Context Aware Methods: AI-powered methods that understand object context
- Structured Outputs: Generate complex, typed data structures automatically
- Media Support: Handle images and videos as inputs and outputs
- ReAct Method: Build agentic applications that can reason and use tools
## Intelligent Functions

### Basic Functions
Transform any function into an intelligent agent by adding the `by llm` declaration. Instead of writing manual API calls and prompt engineering, simply define the function signature and let MTLLM handle the implementation:
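A minimal sketch of such a function (the model name and `translate` function are hypothetical, assuming the `Model` interface from the mtllm package):

```jac
import from mtllm { Model }

glob llm = Model(model_name="gpt-4o");

"""Translate the given text into the target language."""
def translate(text: str, target_language: str) -> str by llm();

with entry {
    print(translate("Good morning", "French"));
}
```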
These functions become intelligent agents that can understand natural language inputs and produce contextually appropriate outputs.
### Enhanced Functions with Reasoning
Add the `method='Reason'` parameter to enable step-by-step reasoning for complex tasks:
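For example, a trip-planning function (hypothetical, assuming an `llm` model object is already defined as above):

```jac
"""Plan a day-by-day itinerary for the trip."""
def plan_trip(destination: str, days: int) -> str by llm(method='Reason');

with entry {
    print(plan_trip("Kyoto", 3));
}
```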
### Structured Output Functions
MTLLM excels at generating structured outputs. Define functions that return complex types:
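A sketch of a structured-output function (the `Recipe` object and field names are hypothetical; MTLLM constrains the output to the declared return type):

```jac
obj Recipe {
    has name: str;
    has ingredients: list[str];
    has prep_time_minutes: int;
}

"""Generate a simple recipe for the requested dish."""
def create_recipe(dish: str) -> Recipe by llm();

with entry {
    recipe = create_recipe("pancakes");
    print(recipe.name, recipe.ingredients);
}
```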
A more complex example of using object schemas to add context for the LLM and to constrain structured output generation is explained in the game level generation example.
## Instance Context-Aware MTP Methods
Transform methods into intelligent components that can reason about their state and context:
### Basic Methods
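A minimal sketch of an intelligent method (the `Assistant` object is hypothetical; the method can draw on the instance's attributes as context, assuming an `llm` model object is defined):

```jac
obj Assistant {
    has persona: str = "friendly science tutor";

    """Answer the question in this assistant's persona."""
    def answer(question: str) -> str by llm();
}

with entry {
    tutor = Assistant();
    print(tutor.answer("Why is the sky blue?"));
}
```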
### Complex AI-Integrated Workflows with Objects
Create sophisticated multi-agent systems using object methods:
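One way to sketch such a workflow, with the output of one intelligent method feeding the next (the `Researcher`/`Writer` objects are hypothetical, assuming an `llm` model object is defined):

```jac
obj Researcher {
    """Summarize key facts about the topic as bullet points."""
    def research(topic: str) -> str by llm();
}

obj Writer {
    """Write a short article from the research notes."""
    def write(notes: str) -> str by llm();
}

with entry {
    notes = Researcher().research("solar energy");
    print(Writer().write(notes));
}
```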
## Adding Explicit Context for Functions, Methods and Objects
When building AI-integrated applications, providing the right amount of context is crucial for optimal performance. MTLLM offers multiple ways to add context to your functions and objects without over-engineering prompts.
### Adding Context with Docstrings
Docstrings serve as crucial context for your intelligent functions. MTLLM uses docstrings to understand the function's purpose and expected behavior. Keep them concise and focused - they should guide the LLM, not replace its reasoning.
Key principles for effective docstrings:
- Be specific about the function's purpose
- Mention return format for complex outputs
- Avoid detailed instructions - let the LLM reason
- Keep them under one sentence when possible
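Following these principles, a docstring can stay short while still steering the model (a hypothetical review-analysis function, assuming an `llm` model object is defined):

```jac
"""Summarize the review in one sentence and state its sentiment."""
def analyze_review(review: str) -> str by llm();
```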
### Adding Context with Semantic Strings (Semstrings)
For more complex scenarios where you need to describe object attributes or function parameters without cluttering your code, Jaclang provides semantic strings using the `sem` keyword. This is particularly useful for:
- Describing object attributes with domain-specific meaning
- Adding context to parameters without verbose docstrings
- Maintaining clean code while providing rich semantic information
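A sketch of semstrings attached to object attributes (the `Patient` object and its fields are hypothetical; the `sem` declarations give the LLM domain-specific meaning for otherwise terse names):

```jac
obj Patient {
    has mrn: str;
    has bp: str;
}

sem Patient.mrn = "Medical record number uniquely identifying the patient";
sem Patient.bp = "Blood pressure reading in systolic/diastolic mmHg format";
```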
### Additional Context with `incl_info`

Use `incl_info` to provide additional context to LLM methods for context-aware processing:
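A sketch of selectively including instance state (the `TravelAgent` object is hypothetical, and the exact shape of the `incl_info` argument, a tuple of attributes here, is an assumption; check the MTLLM reference for the precise signature):

```jac
obj TravelAgent {
    has budget: float = 1500.0;
    has preferences: list[str] = ["beach", "hiking"];

    """Suggest a destination matching this traveler's profile."""
    def suggest() -> str by llm(incl_info=(self.budget, self.preferences));
}

with entry {
    print(TravelAgent().suggest());
}
```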
### When to Use Each Approach
- Docstrings: Use for function-level context and behavior description
- Semstrings: Use for attribute-level descriptions and domain-specific terminology
- incl_info: Use to selectively include relevant object state in method calls
The `sem` keyword can be used in separate implementation files, allowing for cleaner code organization and better maintainability.
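Regular functions can also be overridden with LLM behavior at the call site; a minimal sketch (the `greet` and `format_data` functions are hypothetical, assuming an `llm` model object is defined):

```jac
def greet(name: str) -> str {
    return f"Hello {name}";
}

def format_data(data: dict) -> str {
    return str(data);
}

with entry {
    print(greet("Alice"));            # normal call
    print(greet("Alice") by llm());   # call-site override with LLM behavior
    print(format_data({"name": "Alice", "age": 30}) by llm());
}
```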
In this example:

- `greet("Alice")` executes the normal function and returns `"Hello Alice"`
- `greet("Alice") by llm()` overrides the function with LLM behavior, potentially returning a more natural or contextual greeting
- `format_data(user_data) by llm()` transforms simple data formatting into an intelligent, human-readable presentation
## Tool-Using Agents with ReAct
The ReAct (Reasoning and Acting) method enables true agentic behavior by allowing agents to reason about problems and use external tools to solve them. This is where functions become genuinely agentic - they can autonomously decide what tools they need and how to use them.
Any function can be made agentic by adding the `by llm(tools=[...])` declaration. This allows the function to use external tools to solve problems, making it capable of reasoning and acting like an agent.
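A sketch of a tool-using agent (the `get_weather` tool is a hypothetical stand-in for a real API call; the agent decides on its own whether and when to invoke it):

```jac
import from mtllm { Model }

glob llm = Model(model_name="gpt-4o");

def get_weather(city: str) -> str {
    # Hypothetical stand-in for a real weather API call.
    return f"Sunny, 24C in {city}";
}

"""Answer the user's question, using tools when needed."""
def assistant(question: str) -> str by llm(tools=[get_weather]);

with entry {
    print(assistant("What's the weather like in Paris?"));
}
```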
### What Makes ReAct Truly Agentic?
The ReAct method demonstrates genuine agentic behavior because:
- Autonomous Reasoning: The agent analyzes the problem independently
- Tool Selection: It decides which tools are needed and when to use them
- Adaptive Planning: Based on tool results, it adjusts its approach
- Goal-Oriented: It works towards solving the complete problem, not just individual steps
A full tutorial on building an agentic application is available here.
## Streaming Outputs
The streaming feature allows you to receive tokens from LLM functions in real-time, enabling dynamic interactions and responsive applications. This is particularly useful for generating content like essays, code, or any long-form text where you want to display results as they are produced.
In the invoke parameters, you can set `stream=True` to enable streaming:
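A sketch of consuming a streamed function (hypothetical `write_essay` function; this assumes a streamed call yields tokens that can be iterated over, so check the MTLLM reference for the exact consumption pattern):

```jac
"""Write a short essay on the given topic."""
def write_essay(topic: str) -> str by llm(stream=True);

with entry {
    # Print tokens as they arrive rather than waiting for the full essay.
    for token in write_essay("the history of computing") {
        print(token, end="");
    }
}
```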
Note

`stream=True` only supports outputs of type `str`, and tool calling is not currently supported in streaming mode; it will be supported in the future.