# AIProvider Class API Reference

The `AIProvider` class is an abstract base class that defines the interface for all AI provider implementations in ClientAI. It ensures consistency across different providers.
## Class Definition

Bases: `ABC`

Abstract base class for AI providers.
Source code in `clientai/ai_provider.py`.
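To make the contract concrete, here is a minimal sketch of a provider subclass. The class name `EchoProvider` and its echo behavior are hypothetical, invented purely for illustration; a real implementation would wrap an actual provider SDK and map every parameter documented below onto that SDK's request format.

```python
import json
from typing import Any, Iterator, List, Optional, Union

from clientai.ai_provider import AIProvider


class EchoProvider(AIProvider):
    """Hypothetical provider that echoes its input instead of calling an API."""

    def generate_text(
        self,
        prompt: str,
        model: str,
        system_prompt: Optional[str] = None,
        return_full_response: bool = False,
        stream: bool = False,
        json_output: bool = False,
        temperature: Optional[float] = None,
        top_p: Optional[float] = None,
        **kwargs: Any,
    ) -> Union[str, Iterator[str]]:
        # A real implementation would forward these parameters to the
        # provider's SDK; here we simply fabricate a response.
        if json_output:
            # Mimic a provider's native JSON mode by returning valid JSON.
            return json.dumps({"model": model, "echo": prompt})
        response = f"[{model}] {prompt}"
        if stream:
            # Mimic streaming by yielding the response word by word.
            return iter(response.split())
        return response

    def chat(
        self,
        messages: List[dict],
        model: str,
        system_prompt: Optional[str] = None,
        return_full_response: bool = False,
        stream: bool = False,
        json_output: bool = False,
        temperature: Optional[float] = None,
        top_p: Optional[float] = None,
        **kwargs: Any,
    ) -> Union[str, Iterator[str]]:
        # Echo the content of the most recent message in the conversation.
        return self.generate_text(
            messages[-1]["content"],
            model=model,
            stream=stream,
            json_output=json_output,
        )
```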
## `chat` (abstractmethod)

`chat(messages, model, system_prompt=None, return_full_response=False, stream=False, json_output=False, temperature=None, top_p=None, **kwargs)`
Engage in a chat conversation.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `messages` | `List[Message]` | A list of message dictionaries, each containing 'role' and 'content'. | *required* |
| `model` | `str` | The name or identifier of the AI model to use. | *required* |
| `system_prompt` | `Optional[str]` | Optional system prompt to guide model behavior. | `None` |
| `return_full_response` | `bool` | If True, return the full response object instead of just the chat content. | `False` |
| `stream` | `bool` | If True, return an iterator for streaming responses. | `False` |
| `json_output` | `bool` | If True, format the response as valid JSON. Each provider uses its native JSON support mechanism. | `False` |
| `temperature` | `Optional[float]` | Optional temperature value controlling randomness. Usually between 0.0 and 2.0; lower values make the output more focused and deterministic, higher values more creative and variable. | `None` |
| `top_p` | `Optional[float]` | Optional nucleus sampling parameter controlling diversity. Usually between 0.0 and 1.0; lower values focus the output on likely tokens, higher values allow more diverse selections. | `None` |
| `**kwargs` | `Any` | Additional keyword arguments specific to the provider's API. | `{}` |
Returns:

| Name | Type | Description |
| --- | --- | --- |
| `GenericResponse` | `GenericResponse` | The chat response, either as a string, a dictionary, or an iterator for streaming responses. |
Note

When `json_output` is True:

- OpenAI/Groq use `response_format={"type": "json_object"}`
- Replicate adds `output="json"` to input parameters
- Ollama uses the `format="json"` parameter

Temperature ranges:

- OpenAI: 0.0 to 2.0 (default: 1.0)
- Ollama: 0.0 to 2.0 (default: 0.8)
- Replicate: model-dependent
- Groq: 0.0 to 2.0 (default: 1.0)

Top-p ranges:

- OpenAI: 0.0 to 1.0 (default: 1.0)
- Ollama: 0.0 to 1.0 (default: 0.9)
- Replicate: model-dependent
- Groq: 0.0 to 1.0 (default: 1.0)
Source code in `clientai/ai_provider.py`.
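As a usage sketch, the calls below show how a conversation might be passed to `chat`, continuing with the hypothetical `EchoProvider` defined earlier; the model name is a placeholder, and the parameter values are illustrative only.

```python
provider = EchoProvider()  # hypothetical subclass from the sketch above

messages = [{"role": "user", "content": "Explain nucleus sampling briefly."}]

# Plain string response with mildly constrained sampling.
reply = provider.chat(
    messages,
    model="example-model",  # placeholder model name
    system_prompt="You are a concise assistant.",
    temperature=0.3,
    top_p=0.9,
)
print(reply)

# stream=True returns an iterator of chunks instead of a full string.
for chunk in provider.chat(messages, model="example-model", stream=True):
    print(chunk, end=" ")
```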
## `generate_text` (abstractmethod)

`generate_text(prompt, model, system_prompt=None, return_full_response=False, stream=False, json_output=False, temperature=None, top_p=None, **kwargs)`
Generate text based on a given prompt.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `prompt` | `str` | The input prompt for text generation. | *required* |
| `model` | `str` | The name or identifier of the AI model to use. | *required* |
| `system_prompt` | `Optional[str]` | Optional system prompt to guide model behavior. | `None` |
| `return_full_response` | `bool` | If True, return the full response object instead of just the generated text. | `False` |
| `stream` | `bool` | If True, return an iterator for streaming responses. | `False` |
| `json_output` | `bool` | If True, format the response as valid JSON. Each provider uses its native JSON support mechanism. | `False` |
| `temperature` | `Optional[float]` | Optional temperature value controlling randomness. Usually between 0.0 and 2.0; lower values make the output more focused and deterministic, higher values more creative and variable. | `None` |
| `top_p` | `Optional[float]` | Optional nucleus sampling parameter controlling diversity. Usually between 0.0 and 1.0; lower values focus the output on likely tokens, higher values allow more diverse selections. | `None` |
| `**kwargs` | `Any` | Additional keyword arguments specific to the provider's API. | `{}` |
Returns:

| Name | Type | Description |
| --- | --- | --- |
| `GenericResponse` | `GenericResponse` | The generated text response, full response object, or an iterator for streaming responses. |
Note

When `json_output` is True:

- OpenAI/Groq use `response_format={"type": "json_object"}`
- Replicate adds `output="json"` to input parameters
- Ollama uses the `format="json"` parameter

Temperature ranges:

- OpenAI: 0.0 to 2.0 (default: 1.0)
- Ollama: 0.0 to 2.0 (default: 0.8)
- Replicate: model-dependent
- Groq: 0.0 to 2.0 (default: 1.0)

Top-p ranges:

- OpenAI: 0.0 to 1.0 (default: 1.0)
- Ollama: 0.0 to 1.0 (default: 0.9)
- Replicate: model-dependent
- Groq: 0.0 to 1.0 (default: 1.0)
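Finally, a sketch of calling `generate_text` with JSON output, again using the hypothetical `EchoProvider` from earlier; with a real provider, `json_output=True` engages the native JSON mechanisms listed in the note above, so parsing the result with `json.loads` is the usual way to consume it.

```python
import json

provider = EchoProvider()  # hypothetical subclass from the sketch above

raw = provider.generate_text(
    "List three prime numbers.",
    model="example-model",  # placeholder model name
    json_output=True,       # mapped to the provider's native JSON mode
    temperature=0.0,        # low temperature keeps structured output stable
)

# With json_output=True the response should parse as valid JSON.
data = json.loads(raw)
print(data)
```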