OLLAMA

Generates text with Ollama from user input passed as a command-line argument.

ollama.toml

[log]
    path  = "-"
    level = "INFO"

[[inlets.args]]

[[flows.ollama]]
    address = "http://127.0.0.1:11434" # <- OLLAMA API
    model = "phi3"                     # <- Model
    stream = true                      # <- Response mode; see the difference under Result
    timeout = "120s"

[[outlets.file]]
    path = "-"
    format = "json"
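Under the hood, the `flows.ollama` step corresponds to a request against Ollama's `/api/generate` endpoint. A minimal Python sketch of the request it would issue, assuming the config above (the request is only constructed here, not sent, so it runs without a local server):

```python
import json
import urllib.request

# Request body for Ollama's /api/generate endpoint,
# mirroring the [[flows.ollama]] settings above.
payload = {
    "model": "phi3",                  # model from the config
    "prompt": "hello, who are you?",  # value of --prompt
    "stream": True,                   # response mode from the config
}

req = urllib.request.Request(
    "http://127.0.0.1:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# With a local Ollama server running, it could be sent with:
# with urllib.request.urlopen(req, timeout=120) as resp:
#     body = resp.read()
```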

Execute

tine run ollama.toml -- --prompt="hello, who are you?"

Result

  • When stream = false

  • When stream = true
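The shape of the output differs between the two modes: with `stream = false` Ollama returns a single JSON object whose `response` field holds the whole completion, while with `stream = true` it returns newline-delimited JSON chunks, each carrying a fragment of the text. A small Python sketch with made-up response data (real responses carry more fields) illustrating how the two forms relate:

```python
import json

# Hypothetical stream=false output: one object, full text at once.
non_streamed = '{"model": "phi3", "response": "Hello! I am Phi-3.", "done": true}'
full_text = json.loads(non_streamed)["response"]

# Hypothetical stream=true output: newline-delimited JSON chunks.
streamed = "\n".join([
    '{"model": "phi3", "response": "Hello! ", "done": false}',
    '{"model": "phi3", "response": "I am Phi-3.", "done": true}',
])

# Concatenating the "response" fragment of each chunk
# reassembles the same final text.
chunks = [json.loads(line) for line in streamed.splitlines()]
streamed_text = "".join(c["response"] for c in chunks)
```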
