- `stream`: if `false` the response will be returned as a single response object, rather than a stream of objects
- `keep_alive`: controls how long the model will stay loaded into memory following the request (default: `5m`)

### Tool calling

Tool calling is supported by providing a list of tools in the `tools` parameter. The model will generate a response that includes a list of tool calls. See the [Chat request (Streaming with tools)](#chat-request-streaming-with-tools) example below.

Models can also explain the result of a tool call in the response. See the [Chat request (With history, with tools)](#chat-request-with-history-with-tools) example below.

[See models with tool calling capabilities](https://ollama.com/search?c=tool).

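The tool-calling flow can be wired into a minimal client loop. The sketch below is illustrative and not part of the API itself: it assumes a local `get_weather` stub backing the tool, and it dispatches `tool_calls` from an already-received response message (the HTTP call is omitted).

```python
import json

# Hypothetical local implementation backing the "get_weather" tool.
def get_weather(city: str) -> str:
    return f"The weather in {city} is mild."  # stub result

TOOL_REGISTRY = {"get_weather": get_weather}

def dispatch_tool_calls(message: dict) -> list[dict]:
    """Run each tool call in an assistant message and build the
    `role: tool` messages to append to the conversation history."""
    results = []
    for call in message.get("tool_calls", []):
        fn = call["function"]
        output = TOOL_REGISTRY[fn["name"]](**fn["arguments"])
        results.append({"role": "tool", "content": output, "tool_name": fn["name"]})
    return results

# A response message shaped like the tool-calling examples in this section.
assistant_message = {
    "role": "assistant",
    "content": "",
    "tool_calls": [
        {"function": {"name": "get_weather", "arguments": {"city": "Tokyo"}}}
    ],
}
print(json.dumps(dispatch_tool_calls(assistant_message)))
```

The `role: tool` messages produced here are what gets appended to `messages` on the next request, as shown in the history example.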
### Structured outputs

Structured outputs are supported by providing a JSON schema in the `format` parameter. The model will generate a response that matches the schema. See the [Chat request (Structured outputs)](#chat-request-structured-outputs) example below.

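As a sketch of how a structured-outputs request body can be assembled client-side, with the schema passed directly in `format` (the schema and its field names here are invented for illustration):

```python
import json

# Hypothetical schema: constrain the reply to a small weather object.
weather_schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "temperature_celsius": {"type": "number"},
    },
    "required": ["city", "temperature_celsius"],
}

payload = {
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "What is the weather in Tokyo?"}],
    # The JSON schema goes directly in `format`; the model's reply
    # should then parse as an instance of this schema.
    "format": weather_schema,
    "stream": False,
}
print(json.dumps(payload, indent=2))
```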
### Examples

#### Chat request (Streaming)

##### Request

}
```

#### Chat request (Streaming with tools)

##### Request

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [
    {
      "role": "user",
      "content": "what is the weather in tokyo?"
    }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the weather in a given city",
        "parameters": {
          "type": "object",
          "properties": {
            "city": {
              "type": "string",
              "description": "The city to get the weather for"
            }
          },
          "required": ["city"]
        }
      }
    }
  ],
  "stream": true
}'
```

##### Response

A stream of JSON objects is returned:

```json
{
  "model": "llama3.2",
  "created_at": "2025-07-07T20:22:19.184789Z",
  "message": {
    "role": "assistant",
    "content": "",
    "tool_calls": [
      {
        "function": {
          "name": "get_weather",
          "arguments": {
            "city": "Tokyo"
          }
        }
      }
    ]
  },
  "done": false
}
```

Final response:

```json
{
  "model": "llama3.2",
  "created_at": "2025-07-07T20:22:19.19314Z",
  "message": {
    "role": "assistant",
    "content": ""
  },
  "done_reason": "stop",
  "done": true,
  "total_duration": 182242375,
  "load_duration": 41295167,
  "prompt_eval_count": 169,
  "prompt_eval_duration": 24573166,
  "eval_count": 15,
  "eval_duration": 115959084
}
```

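Since a streaming response arrives as newline-delimited JSON, a client accumulates tool calls chunk by chunk until a `"done": true` object arrives. A minimal parser sketch, operating on captured lines rather than a live connection:

```python
import json

def collect_tool_calls(lines):
    """Parse newline-delimited JSON chunks and gather any tool calls
    until a chunk with "done": true arrives."""
    calls = []
    for line in lines:
        chunk = json.loads(line)
        calls.extend(chunk["message"].get("tool_calls", []))
        if chunk.get("done"):
            break
    return calls

# Two chunks shaped like the streaming-with-tools response above.
stream = [
    '{"message": {"role": "assistant", "content": "", "tool_calls": '
    '[{"function": {"name": "get_weather", "arguments": {"city": "Tokyo"}}}]}, "done": false}',
    '{"message": {"role": "assistant", "content": ""}, "done": true}',
]
calls = collect_tool_calls(stream)
print(calls[0]["function"]["name"])  # get_weather
```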
#### Chat request (No streaming)

##### Request

}
```

#### Chat request (No streaming, with tools)

##### Request

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [
    {
      "role": "user",
      "content": "what is the weather in tokyo?"
    }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the weather in a given city",
        "parameters": {
          "type": "object",
          "properties": {
            "city": {
              "type": "string",
              "description": "The city to get the weather for"
            }
          },
          "required": ["city"]
        }
      }
    }
  ],
  "stream": false
}'
```

##### Response

```json
{
  "model": "llama3.2",
  "created_at": "2025-07-07T20:32:53.844124Z",
  "message": {
    "role": "assistant",
    "content": "",
    "tool_calls": [
      {
        "function": {
          "name": "get_weather",
          "arguments": {
            "city": "Tokyo"
          }
        }
      }
    ]
  },
  "done_reason": "stop",
  "done": true,
  "total_duration": 3244883583,
  "load_duration": 2969184542,
  "prompt_eval_count": 169,
  "prompt_eval_duration": 141656333,
  "eval_count": 18,
  "eval_duration": 133293625
}
```

#### Chat request (Structured outputs)

##### Request

}
```

#### Chat request (With history, with tools)

##### Request

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [
    {
      "role": "user",
      "content": "what is the weather in Toronto?"
    },
    // the message from the model appended to history
    {
      "role": "assistant",
      "content": "",
      "tool_calls": [
        {
          "function": {
            "name": "get_weather",
            "arguments": {
              "city": "Toronto"
            }
          }
        }
      ]
    },
    // the tool call result appended to history
    {
      "role": "tool",
      "content": "11 degrees celsius",
      "tool_name": "get_weather"
    }
  ],
  "stream": false,
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the weather in a given city",
        "parameters": {
          "type": "object",
          "properties": {
            "city": {
              "type": "string",
              "description": "The city to get the weather for"
            }
          },
          "required": ["city"]
        }
      }
    }
  ]
}'
```

##### Response

```json
{
  "model": "llama3.2",
  "created_at": "2025-07-07T20:43:37.688511Z",
  "message": {
    "role": "assistant",
    "content": "The current temperature in Toronto is 11°C."
  },
  "done_reason": "stop",
  "done": true,
  "total_duration": 890771750,
  "load_duration": 707634750,
  "prompt_eval_count": 94,
  "prompt_eval_duration": 91703208,
  "eval_count": 11,
  "eval_duration": 90282125
}
```

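The history pattern above — the model's `tool_calls` turn followed by a `role: tool` result — can be scripted when building the follow-up request. This sketch only assembles the new `messages` list; no HTTP call is made, and the helper name is our own:

```python
def extend_history(messages, assistant_message, tool_results):
    """Append the model's tool-calling turn and the tool outputs so the
    next /api/chat request lets the model explain the results."""
    return messages + [assistant_message] + [
        {"role": "tool", "content": content, "tool_name": name}
        for name, content in tool_results
    ]

history = [{"role": "user", "content": "what is the weather in Toronto?"}]
assistant_turn = {
    "role": "assistant",
    "content": "",
    "tool_calls": [
        {"function": {"name": "get_weather", "arguments": {"city": "Toronto"}}}
    ],
}
new_history = extend_history(
    history, assistant_turn, [("get_weather", "11 degrees celsius")]
)
print([m["role"] for m in new_history])  # ['user', 'assistant', 'tool']
```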
#### Chat request (with images)

##### Request