Documentation ¶
Overview ¶
Package providers implements connectors for the AI model service providers supported by MindTrial.
Index ¶
- Variables
- func AssertTurnsAvailable(ctx context.Context, logger logging.Logger, task config.Task, currentTurn int) error
- func DefaultAnswerFormatInstruction(task config.Task) string
- func DefaultResponseFormatInstruction(responseFormat config.ResponseFormat) (string, error)
- func DefaultTaskFileNameInstruction(file config.TaskFile) string
- func DefaultUnstructuredResponseInstruction() string
- func MapToJSONSchema(schemaMap map[string]interface{}) (*jsonschema.Schema, error)
- func NewErrNoActionableContent(stopReason []byte) error
- func ResultJSONSchema(responseFormat config.ResponseFormat) (*jsonschema.Schema, error)
- func ResultJSONSchemaRaw(responseFormat config.ResponseFormat) (map[string]interface{}, error)
- func UnmarshalUnstructuredResponse(ctx context.Context, logger logging.Logger, content []byte, result *Result) error
- func WrapErrGenerateResponse(err error) error
- func WrapErrRetryable(err error) error
- type Alibaba
- type Answer
- type Anthropic
- type CompletionAccumulator
- type CompletionHandler
- type Deepseek
- type ErrAPIResponse
- type ErrNoActionableContent
- type ErrUnmarshalResponse
- type GoogleAI
- type MistralAI
- type MoonshotAI
- type OpenAI
- type OpenRouter
- type Provider
- type ResponseFormat
- type Result
- type Usage
- type XAI
Constants ¶
This section is empty.
Variables ¶
var (
	// ErrUnknownProviderName is returned when provider name is not recognized.
	ErrUnknownProviderName = errors.New("unknown provider name")
	// ErrCreateClient is returned when provider client initialization fails.
	ErrCreateClient = errors.New("failed to create client")
	// ErrInvalidModelParams is returned when model parameters are invalid.
	ErrInvalidModelParams = errors.New("invalid model parameters for run")
	// ErrIncompatibleResponseFormat is returned when disable-structured-output is used with a non-text response format.
	ErrIncompatibleResponseFormat = errors.New("disable-structured-output requires response format to be plain text")
	// ErrCompileResponseSchema is returned when response schema compilation fails.
	ErrCompileResponseSchema = errors.New("failed to compile response schema")
	// ErrMalformedSchema is returned when raw schema data cannot be converted to a valid schema object.
	ErrMalformedSchema = errors.New("malformed schema")
	// ErrGenerateResponse is returned when response generation fails.
	ErrGenerateResponse = errors.New("failed to generate response")
	// ErrNoResponseCandidates is returned when the model response contains no candidates.
	ErrNoResponseCandidates = fmt.Errorf("%w: model response contained no response candidates", ErrGenerateResponse)
	// ErrCreatePromptRequest is returned when request generation fails.
	ErrCreatePromptRequest = errors.New("failed to create prompt request")
	// ErrFeatureNotSupported is returned when a requested feature is not supported by the provider.
	ErrFeatureNotSupported = errors.New("feature not supported by provider")
	// ErrFileNotSupported is returned when a task context file is not supported by the provider.
	ErrFileNotSupported = fmt.Errorf("%w: file type", ErrFeatureNotSupported)
	// ErrFileUploadNotSupported is returned when file upload is not supported by the provider.
	ErrFileUploadNotSupported = fmt.Errorf("%w: file upload", ErrFeatureNotSupported)
	// ErrToolUse is returned when tool use fails.
	ErrToolUse = errors.New("tool use failed")
	// ErrToolSetup is returned when tool setup/configuration fails.
	ErrToolSetup = errors.New("tool setup failed")
	// ErrToolNotFound is returned when a requested tool is not found in available tools.
	ErrToolNotFound = errors.New("tool not found in available tools")
	// ErrRetryable is returned when an operation can be retried.
	ErrRetryable = errors.New("retryable error")
	// ErrStreamResponse is returned when a streaming response cannot be properly assembled.
	ErrStreamResponse = errors.New("failed to assemble streaming response")
	// ErrMaxTurnsExceeded is returned when the conversation loop exceeds the configured
	// maximum number of turns.
	ErrMaxTurnsExceeded = fmt.Errorf("%w: maximum conversation turns exceeded", ErrGenerateResponse)
)
Functions ¶
func AssertTurnsAvailable ¶ added in v0.15.0
func AssertTurnsAvailable(ctx context.Context, logger logging.Logger, task config.Task, currentTurn int) error
AssertTurnsAvailable logs the current conversation turn and enforces the configured maximum turn limit. Returns nil if no limit is configured or the limit has not been exceeded, or ErrMaxTurnsExceeded if the turn count exceeds the limit.
func DefaultAnswerFormatInstruction ¶
DefaultAnswerFormatInstruction generates default answer formatting instruction for a given task to be passed to the AI model.
func DefaultResponseFormatInstruction ¶
func DefaultResponseFormatInstruction(responseFormat config.ResponseFormat) (string, error)
DefaultResponseFormatInstruction generates default response formatting instruction for the given response format to be passed to AI models that require it.
func DefaultTaskFileNameInstruction ¶
DefaultTaskFileNameInstruction generates default task file name instruction to be passed to AI models that require it.
func DefaultUnstructuredResponseInstruction ¶ added in v0.14.0
func DefaultUnstructuredResponseInstruction() string
DefaultUnstructuredResponseInstruction generates the default instruction for unstructured response mode.
func MapToJSONSchema ¶ added in v0.11.0
func MapToJSONSchema(schemaMap map[string]interface{}) (*jsonschema.Schema, error)
MapToJSONSchema converts a JSON schema represented as a map to a jsonschema.Schema object.
func NewErrNoActionableContent ¶ added in v0.15.0
NewErrNoActionableContent creates a standardized generation error when the model provided neither actionable tool calls nor parseable text at a terminal stop reason.
func ResultJSONSchema ¶
func ResultJSONSchema(responseFormat config.ResponseFormat) (*jsonschema.Schema, error)
ResultJSONSchema generates a JSON schema for the Result type with the given response format injected into the final_answer field. If responseFormat is a schema, it will be used for the final_answer.content field. If responseFormat is a string, the entire final_answer field will be replaced with a string type constraint.
func ResultJSONSchemaRaw ¶ added in v0.4.0
func ResultJSONSchemaRaw(responseFormat config.ResponseFormat) (map[string]interface{}, error)
ResultJSONSchemaRaw generates a JSON schema for the Result type as a map with the given response format injected into the final_answer field.
func UnmarshalUnstructuredResponse ¶ added in v0.14.0
func UnmarshalUnstructuredResponse(ctx context.Context, logger logging.Logger, content []byte, result *Result) error
UnmarshalUnstructuredResponse parses a raw response in unstructured output mode. It unmarshals content directly into result.FinalAnswer as a string, bypassing the standard Result structure. This is used when DisableStructuredOutput is enabled and the model returns only the final answer. The Title and Explanation fields are populated with placeholder values since the model does not provide structured metadata in this mode.
func WrapErrGenerateResponse ¶ added in v0.4.0
WrapErrGenerateResponse wraps an error as a generate response error, preserving the original error chain.
func WrapErrRetryable ¶ added in v0.4.0
WrapErrRetryable wraps an error as retryable, preserving the original error chain.
Types ¶
type Alibaba ¶ added in v0.10.1
type Alibaba struct {
// contains filtered or unexported fields
}
Alibaba implements the Provider interface for Alibaba models. The Qwen models from Alibaba Cloud support OpenAI-compatible interfaces allowing them to be used with the existing OpenAI provider implementation.
func NewAlibaba ¶ added in v0.10.1
func NewAlibaba(cfg config.AlibabaClientConfig, availableTools []config.ToolConfig) *Alibaba
NewAlibaba creates a new Alibaba provider instance with the given configuration.
type Answer ¶ added in v0.9.0
type Answer struct {
// Content contains the actual answer content that follows the user-defined response format.
// For plain text response format, this will be a string.
// For structured schema-based response format, this will be an object that conforms to the schema.
Content interface{} `json:"content" validate:"required"`
}
Answer wraps the final answer content to separate it from response metadata.
func (Answer) MarshalJSON ¶ added in v0.9.0
MarshalJSON implements json.Marshaler for Answer.
func (*Answer) UnmarshalJSON ¶ added in v0.9.0
UnmarshalJSON implements json.Unmarshaler for Answer. It supports unmarshaling from:
- a string (for plain text answers)
- a JSON primitive (number, boolean) stored directly as the answer content
- a structured object with a "content" field (for structured answers)
JSON objects must conform to the Answer schema (i.e., contain a "content" field). Arrays and objects without a "content" field are rejected.
type Anthropic ¶
type Anthropic struct {
// contains filtered or unexported fields
}
Anthropic implements the Provider interface for Anthropic generative models.
func NewAnthropic ¶
func NewAnthropic(cfg config.AnthropicClientConfig, availableTools []config.ToolConfig) *Anthropic
NewAnthropic creates a new Anthropic provider instance with the given configuration.
type CompletionAccumulator ¶ added in v0.15.0
type CompletionAccumulator interface {
// AddChunk feeds a streaming chunk to the accumulator. Returns false if
// the chunk could not be accumulated (indicating a stream error).
AddChunk(ctx context.Context, logger logging.Logger, chunk openai.ChatCompletionChunk) bool
// Result returns the fully accumulated ChatCompletion after streaming ends.
Result() *openai.ChatCompletion
}
CompletionAccumulator handles the accumulation of streaming chat completion chunks into a final ChatCompletion response. It is used by handleStreamingRequest to delegate chunk processing.
type CompletionHandler ¶ added in v0.15.0
type CompletionHandler interface {
CompletionAccumulator
// IsTerminalStopReason reports whether the response should terminate the
// conversation loop and trigger response parsing.
IsTerminalStopReason(stopReason string) bool
// ToParam converts a response message into a request parameter for the next
// conversation turn. Implementations may inject non-standard fields captured
// during streaming or extracted from non-streaming message metadata.
ToParam(ctx context.Context, logger logging.Logger, message openai.ChatCompletionMessage) openai.ChatCompletionMessageParamUnion
}
CompletionHandler extends CompletionAccumulator with response message conversion. It manages the full lifecycle of a chat completion API call: accumulating streaming chunks and converting response messages into request parameters for subsequent turns.
Delegating providers can supply a custom implementation to preserve non-standard fields (e.g. reasoning_content) that the SDK's ChatCompletionAccumulator drops. A fresh instance is created per API call via the NewCompletionHandler factory.
type Deepseek ¶
type Deepseek struct {
// contains filtered or unexported fields
}
Deepseek implements the Provider interface for DeepSeek generative models.
func NewDeepseek ¶
func NewDeepseek(cfg config.DeepseekClientConfig, availableTools []config.ToolConfig) (*Deepseek, error)
NewDeepseek creates a new DeepSeek provider instance with the given configuration.
type ErrAPIResponse ¶ added in v0.7.2
type ErrAPIResponse struct {
// Cause is the underlying error that caused the API call to fail.
Cause error
// Body contains the raw HTTP response body returned by the provider API when available.
Body []byte
}
ErrAPIResponse holds additional information about an API error returned by a provider, including the raw HTTP response body when available.
func NewErrAPIResponse ¶ added in v0.7.2
func NewErrAPIResponse(cause error, body []byte) *ErrAPIResponse
NewErrAPIResponse creates a new ErrAPIResponse instance.
func (*ErrAPIResponse) Error ¶ added in v0.7.2
func (e *ErrAPIResponse) Error() string
func (*ErrAPIResponse) LogFields ¶ added in v0.15.0
func (e *ErrAPIResponse) LogFields() map[string]any
LogFields implements the logging.StructuredError interface.
func (*ErrAPIResponse) Unwrap ¶ added in v0.7.2
func (e *ErrAPIResponse) Unwrap() error
type ErrNoActionableContent ¶ added in v0.15.0
type ErrNoActionableContent struct {
// StopReason contains the provider-specific terminal reason (finish_reason/stop_reason/etc.).
StopReason []byte
}
ErrNoActionableContent is returned when the model response is terminal but contains neither actionable tool calls nor parseable text.
func (*ErrNoActionableContent) Error ¶ added in v0.15.0
func (e *ErrNoActionableContent) Error() string
func (*ErrNoActionableContent) LogFields ¶ added in v0.15.0
func (e *ErrNoActionableContent) LogFields() map[string]any
LogFields implements the logging.StructuredError interface.
func (*ErrNoActionableContent) Unwrap ¶ added in v0.15.0
func (e *ErrNoActionableContent) Unwrap() error
type ErrUnmarshalResponse ¶
type ErrUnmarshalResponse struct {
// Cause is the underlying error that caused the unmarshaling to fail.
Cause error
// RawMessage is the raw message that failed to be unmarshaled.
RawMessage []byte
// StopReason contains the reason why the AI model stopped generating the response.
StopReason []byte
}
ErrUnmarshalResponse is returned when response unmarshaling fails.
func NewErrUnmarshalResponse ¶
func NewErrUnmarshalResponse(cause error, rawMessage []byte, stopReason []byte) *ErrUnmarshalResponse
NewErrUnmarshalResponse creates a new ErrUnmarshalResponse instance.
func (*ErrUnmarshalResponse) Error ¶
func (e *ErrUnmarshalResponse) Error() string
func (*ErrUnmarshalResponse) LogFields ¶ added in v0.15.0
func (e *ErrUnmarshalResponse) LogFields() map[string]any
LogFields implements the logging.StructuredError interface.
func (*ErrUnmarshalResponse) Unwrap ¶ added in v0.7.2
func (e *ErrUnmarshalResponse) Unwrap() error
type GoogleAI ¶
type GoogleAI struct {
// contains filtered or unexported fields
}
GoogleAI implements the Provider interface for Google AI generative models.
func NewGoogleAI ¶
func NewGoogleAI(ctx context.Context, cfg config.GoogleAIClientConfig, availableTools []config.ToolConfig) (*GoogleAI, error)
NewGoogleAI creates a new GoogleAI provider instance with the given configuration. It returns an error if client initialization fails.
type MistralAI ¶ added in v0.4.0
type MistralAI struct {
// contains filtered or unexported fields
}
MistralAI implements the Provider interface for Mistral AI generative models.
func NewMistralAI ¶ added in v0.4.0
func NewMistralAI(cfg config.MistralAIClientConfig, availableTools []config.ToolConfig) (*MistralAI, error)
NewMistralAI creates a new Mistral AI provider instance with the given configuration.
type MoonshotAI ¶ added in v0.12.2
type MoonshotAI struct {
// contains filtered or unexported fields
}
MoonshotAI implements the Provider interface for Moonshot AI models. The Kimi models from Moonshot AI support OpenAI-compatible interfaces allowing them to be used with the existing OpenAI provider implementation.
func NewMoonshotAI ¶ added in v0.12.2
func NewMoonshotAI(cfg config.MoonshotAIClientConfig, availableTools []config.ToolConfig) *MoonshotAI
NewMoonshotAI creates a new Moonshot AI provider instance with the given configuration.
func (MoonshotAI) Name ¶ added in v0.12.2
func (m MoonshotAI) Name() string
type OpenAI ¶
type OpenAI struct {
// contains filtered or unexported fields
}
OpenAI implements the Provider interface for OpenAI generative models.
func NewOpenAI ¶
func NewOpenAI(cfg config.OpenAIClientConfig, availableTools []config.ToolConfig) *OpenAI
NewOpenAI creates a new OpenAI provider instance with the given configuration.
type OpenRouter ¶ added in v0.13.4
type OpenRouter struct {
// contains filtered or unexported fields
}
OpenRouter implements the Provider interface for models reachable via OpenRouter.
func NewOpenRouter ¶ added in v0.13.4
func NewOpenRouter(cfg config.OpenRouterClientConfig, availableTools []config.ToolConfig) *OpenRouter
NewOpenRouter creates a new OpenRouter provider instance with the given configuration. Injects OpenRouter attribution headers derived from MindTrial metadata into every request.
func (OpenRouter) Name ¶ added in v0.13.4
func (o OpenRouter) Name() string
type Provider ¶
type Provider interface {
// Name returns the provider's unique identifier.
Name() string
// Run executes a task using specified configuration and returns the result.
Run(ctx context.Context, logger logging.Logger, cfg config.RunConfig, task config.Task) (result Result, err error)
// Close releases resources when the provider is no longer needed.
Close(ctx context.Context) error
}
Provider interacts with AI model services.
func NewProvider ¶
func NewProvider(ctx context.Context, cfg config.ProviderConfig, availableTools []config.ToolConfig) (Provider, error)
NewProvider creates a new AI model provider based on the given configuration. It returns an error if the provider name is unknown or initialization fails.
type ResponseFormat ¶ added in v0.13.4
type ResponseFormat string
ResponseFormat specifies the response format mode for the internal OpenAI-compatible provider. This is an internal type; provider wrappers map user-facing formats to these internal values.
const (
	// ResponseFormatJSONSchema uses strict json_schema mode without adding response format instructions to the prompt.
	// This is the default behavior when ResponseFormat is nil.
	ResponseFormatJSONSchema ResponseFormat = "json-schema"
	// ResponseFormatLegacySchema uses strict json_schema mode but adds response format instructions to the prompt.
	// Use this for legacy providers that require explicit JSON formatting guidance (e.g., Alibaba Qwen).
	ResponseFormatLegacySchema ResponseFormat = "legacy-json-schema"
	// ResponseFormatJSONObject uses json_object mode and adds response format instructions to the prompt.
	// Use this for providers that only support basic JSON object responses (e.g., Moonshot Kimi).
	ResponseFormatJSONObject ResponseFormat = "json-object"
	// ResponseFormatText uses text mode and adds response format instructions to the prompt.
	// The provider attempts to repair the text response into valid JSON.
	ResponseFormatText ResponseFormat = "text"
)
func (ResponseFormat) Ptr ¶ added in v0.13.4
func (r ResponseFormat) Ptr() *ResponseFormat
Ptr returns a pointer to the ResponseFormat value.
type Result ¶
type Result struct {
// Title is a brief summary of the response.
Title string `` /* 267-byte string literal not displayed */
// Explanation is a detailed explanation of the answer.
Explanation string `` /* 345-byte string literal not displayed */
// FinalAnswer contains the final answer to the task's query.
FinalAnswer Answer `json:"final_answer" jsonschema:"title=Final Answer" validate:"required"`
// contains filtered or unexported fields
}
Result represents the structured response received from an AI model.
func (Result) GetDuration ¶
GetDuration returns the time duration it took to generate this result.
func (Result) GetFinalAnswerContent ¶ added in v0.9.0
func (r Result) GetFinalAnswerContent() interface{}
GetFinalAnswerContent returns the actual final answer content wrapped in the `FinalAnswer` field. This is a convenience method to access `Result.FinalAnswer.Content` directly.
func (Result) GetPrompts ¶
GetPrompts returns the prompts used to generate this result.
type Usage ¶
type Usage struct {
// InputTokens used by the input if available.
InputTokens *int64 `json:"-"`
// OutputTokens used by the output if available.
OutputTokens *int64 `json:"-"`
// ToolUsage contains per-tool execution metrics collected during the run if available.
ToolUsage map[string]tools.ToolUsage `json:"-"`
}
Usage represents aggregated usage statistics for a response, including both token consumption and tool execution metrics when available.
type XAI ¶ added in v0.7.2
type XAI struct {
// contains filtered or unexported fields
}
XAI implements the Provider interface for xAI.
func NewXAI ¶ added in v0.7.2
func NewXAI(cfg config.XAIClientConfig, availableTools []config.ToolConfig) (*XAI, error)
NewXAI creates a new xAI provider instance with the given configuration.
Source Files ¶
Directories ¶
| Path | Synopsis |
|---|---|
| execution | Package execution provides unified provider execution patterns for the MindTrial application. |
| tools | Package tools provides implementations for executing tools as part of MindTrial's function calling capabilities. |