The Arch-Function-Chat collection builds upon Katanemo's Arch-Function collection by extending its capabilities beyond function calling. This new collection maintains the state-of-the-art (SOTA) function-calling performance of the original collection while adding powerful new features that make it even more versatile in real-world applications.
In addition to function calling capabilities, this collection now offers:
Clarify & refine: Generates natural follow-up questions to collect missing information for function calling (see the example below)
Interpret & respond: Provides human-friendly responses based on function execution results
Context management: Maintains context in complex multi-turn interactions
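For illustration (a hypothetical exchange, following the JSON output formats defined in the Quickstart below), a user asking "What is the weather?" without naming a city would trigger a clarification turn such as:

```json
{"required_functions": ["get_weather"], "clarification": "Which city would you like the weather for?"}
```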
Note: Arch-Function-Chat is now the primary LLM used in the open source Arch Gateway - an AI-native proxy for agents. For more details about the project, check out the GitHub README.
Requirements
The code of Arch-Function-Chat-1.5B is supported in the Hugging Face transformers library, and we advise you to install the latest version:
pip install "transformers>=4.37.0"
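If you want to sanity-check the installed version at runtime, a small guard (illustrative, not part of the original card) could look like:

```python
from packaging.version import Version  # packaging is a dependency of transformers
import transformers

# Enforce the transformers >= 4.37.0 requirement stated above
assert Version(transformers.__version__) >= Version("4.37.0"), (
    f"transformers {transformers.__version__} is too old; "
    'run: pip install "transformers>=4.37.0"'
)
```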
How to use
We use the following example to illustrate how to use our model to perform function calling tasks. Please note that our model works best with our provided prompt format, which allows us to extract JSON output similar to OpenAI's function calling.
Quickstart
import json
from typing import Any, Dict, List
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "katanemo/Arch-Function-Chat-1.5B"
model = AutoModelForCausalLM.from_pretrained(
model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Please use our provided prompt for best performance
TASK_PROMPT = (
"You are a helpful assistant designed to assist with the user query by making one or more function calls if needed.""\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{tools}\n</tools>""\n\nYour task is to decide which functions are needed and collect missing parameters if necessary."
)
FORMAT_PROMPT = (
"\n\nBased on your analysis, provide your response in one of the following JSON formats:"'\n1. If no functions are needed:\n```json\n{"response": "Your response text here"}\n```''\n2. If functions are needed but some required parameters are missing:\n```json\n{"required_functions": ["func_name1", "func_name2", ...], "clarification": "Text asking for missing parameters"}\n```''\n3. If functions are needed and all required parameters are available:\n```json\n{"tool_calls": [{"name": "func_name1", "arguments": {"argument1": "value1", "argument2": "value2"}},... (more tool calls as required)]}\n```'
)
# Define available tools
tools = [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get the current weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "str",
"description": "The city and state, e.g. San Francisco, New York",
},
"unit": {
"type": "str",
"enum": ["celsius", "fahrenheit"],
"description": "The unit of temperature to return",
},
},
"required": ["location"],
},
},
}
]
# Helper function to create the system prompt for our model
def format_prompt(tools: List[Dict[str, Any]]):
    tools = "\n".join(
        [json.dumps(tool["function"], ensure_ascii=False) for tool in tools]
    )
    return TASK_PROMPT.format(tools=tools) + FORMAT_PROMPT
system_prompt = format_prompt(tools)
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": "What is the weather in Seattle?"},
]
# return_dict=True yields a BatchEncoding that can be unpacked into generate()
model_inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt", return_dict=True
).to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=32768)
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
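The model replies with one of the three JSON formats defined in FORMAT_PROMPT above, wrapped in a ```json fence. As a minimal sketch (not part of the original card), the snippet below parses that reply and dispatches a format-3 tool call; the fence-stripping regex, the TOOL_REGISTRY dispatch table, and the get_weather stub are illustrative assumptions:

```python
import re

# Hypothetical stub for the get_weather tool declared above; a real
# application would call an actual weather API here.
def get_weather(location: str, unit: str = "celsius") -> str:
    return f"Sunny, 18 degrees {unit} in {location}"

TOOL_REGISTRY = {"get_weather": get_weather}

def parse_response(text: str) -> dict:
    # The model wraps its JSON in a ```json ... ``` fence; strip it if present.
    match = re.search(r"```json\s*(.*?)\s*```", text, re.DOTALL)
    return json.loads(match.group(1) if match else text)

parsed = parse_response(response)
if "tool_calls" in parsed:
    # Format 3: all required parameters were available, so execute each call.
    for call in parsed["tool_calls"]:
        result = TOOL_REGISTRY[call["name"]](**call["arguments"])
        print(f"{call['name']} -> {result}")
elif "clarification" in parsed:
    # Format 2: the model is asking the user for missing parameters.
    print(parsed["clarification"])
else:
    # Format 1: a direct natural-language answer, no function call needed.
    print(parsed["response"])
```

To exercise the collection's interpret & respond capability, a natural next step would be to append the tool result to messages and run generation again, letting the model turn the raw result into a human-friendly answer.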
License
The Katanemo Arch-Function collection is distributed under the Katanemo license.