Reference

Step LLM (:llm)

Quick Answer

Use this reference for the :llm step schema, prompt/message patterns, templates, and model-call configuration.

The :llm step performs model calls via an :llm-provider connection.

Canonical Shape

Core fields:

| Field | Type | Required | Notes |
|---|---|---|---|
| :connection | keyword/string | Recommended | Slot or connection id |
| :messages | vector | Yes* | Explicit chat messages |
| :prompt | string/map | Yes* | Prompt shorthand |
| :system | string | No | System prompt shorthand |
| :input | map | No | Canonical input envelope |
| :template | keyword | No | :llm-prompt template id |
| :data | map | No | Template data |
| :model | string | No | Model override |
| :provider | keyword/string | No | Provider override |
| :temperature, :top-p, :stop | scalar | No | Generation controls |
| :max-tokens | int | No | Response token cap |
| :output / :response-format | map/keyword | No | Structured output config |
| :json-schema | map | No | Structured JSON schema |
| :tools | map/vector | No | Agentic tools config |
| :openai | map | No | OpenAI Responses-specific options (:responses) |
| :available-steps | vector | No | Auto-tool step set |
| :max-iterations | int | No | Agentic loop bound |
| :auth | map | No | Explicit auth if not using a connection |

* Provide :messages, :prompt/:system, or :input.
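As a sketch of the two prompt styles above: the shorthand form and the explicit form. The :ai connection id, the prompt text, and the :role/:content keys on message maps are illustrative assumptions, not taken from this reference.

```clojure
;; Shorthand form: :prompt plus optional :system.
'(flow/step :llm :classify
   {:connection :ai
    :system "You are a strict classifier."
    :prompt "Label the sentiment of: {{text}}"})

;; Explicit form: chat turns via :messages (use instead of :prompt/:system).
;; The :role/:content message-map keys are assumed, following common chat APIs.
'(flow/step :llm :classify
   {:connection :ai
    :messages [{:role :system :content "You are a strict classifier."}
               {:role :user :content "Label the sentiment of: I love it."}]})
```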

Limits And Behavior

  • Use either :messages or :prompt/:system, not both.
  • Prefer templates for long prompts.
  • :tools belongs on the step config, not inside templates.
  • For most production flows, bind a reusable :llm-provider connection via :requires.
  • :max-tokens policy:
    • platform-managed keys are capped by platform limit
    • user-provided keys are not platform-clamped (provider/account limits still apply)
  • OpenAI hosted shell can be configured via :openai {:responses ...}.
  • Phase 1 limits:
    • :openai {:responses {:transport :websocket}} is not supported.
    • :openai {:responses {:shell {:environment {:type :local}}}} is not supported.
    • OpenAI Responses-only options require direct OpenAI Responses routing (not Azure/custom Chat-compatible endpoints).
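A hypothetical sketch of binding a reusable :llm-provider connection via :requires, as the list above recommends. Only the :llm-provider connection type, the per-step :connection slot, and the :max-tokens field come from this reference; the exact :requires and flow-map shape is an assumption.

```clojure
;; Assumed flow-definition shape; :requires binds a reusable connection slot.
'{:requires [{:id :ai
              :type :llm-provider}]   ;; reusable :llm-provider connection
  :steps [(flow/step :llm :draft
            {:connection :ai          ;; resolves the :ai slot bound above
             :prompt "Draft a status update."
             ;; On platform-managed keys this is clamped by the platform limit;
             ;; user-provided keys are subject only to provider/account limits.
             :max-tokens 512})]}
```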

Canonical Example

;; In the flow definition:
;; :templates [{:id :summary
;;              :type :llm-prompt
;;              :system "You are concise."
;;              :prompt "Summarize in 3 bullets:\n{{text}}"}]
;; :functions [{:id :llm-input
;;              :language :clojure
;;              :code "(fn [input] {:text (:text input)})"}]
'(let [prepared (flow/step :function :prepare-llm-input
                  {:ref :llm-input
                   :input (flow/input)})
       result (flow/step :llm :summarize
                {:connection :ai
                 :model "gpt-4o-mini"
                 :template :summary
                 :data prepared
                 :output {:format :json}
                 :tools {:mode :propose
                         :allowed ["http" "db" "function"]}
                 :max-iterations 5})]
   result)
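Structured output can combine :output and :json-schema from the field table. A minimal sketch; the schema body and field names (:invoice-number, :total) are illustrative, and the JSON Schema keywords follow the standard spec rather than anything stated in this reference.

```clojure
'(flow/step :llm :extract
   {:connection :ai
    :prompt "Extract the invoice number and total from: {{doc}}"
    :output {:format :json}                       ;; structured output config
    :json-schema {:type "object"                  ;; standard JSON Schema keywords
                  :properties {:invoice-number {:type "string"}
                               :total {:type "number"}}
                  :required ["invoice-number" "total"]}})
```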

OpenAI Responses Hosted Shell (Phase 1)

'(flow/step :llm :support-agent
   {:connection :ai
    :model "gpt-5.2"
    :prompt "Analyze this support email and draft a reply."
    :openai {:responses {:shell {:environment {:type :container_auto
                                               :network-policy {:type :allowlist
                                                                :allowed-domains ["gmail.googleapis.com"]}}}
                         :tool-choice "required"
                         :store false}}})

As of Feb 26, 2026