Text Generation with ClientAI¶
This guide explores how to use ClientAI for text generation tasks across different AI providers. You'll learn about the various options and parameters available for generating text.
Table of Contents¶
- Basic Text Generation
- Advanced Parameters
- Streaming Responses
- Provider-Specific Features
- Best Practices
Basic Text Generation¶
To generate text using ClientAI, use the generate_text method:
from clientai import ClientAI
client = ClientAI('openai', api_key="your-openai-api-key")
response = client.generate_text(
    "Write a short story about a robot learning to paint.",
    model="gpt-3.5-turbo"
)
print(response)
This will generate a short story based on the given prompt.
Advanced Parameters¶
ClientAI supports various parameters to fine-tune text generation:
response = client.generate_text(
    "Explain the theory of relativity",
    model="gpt-4",
    max_tokens=150,
    temperature=0.7,
    top_p=0.9,
    presence_penalty=0.1,
    frequency_penalty=0.1
)
- max_tokens: Maximum number of tokens to generate
- temperature: Controls randomness (0.0 to 1.0)
- top_p: Nucleus sampling parameter
- presence_penalty: Penalizes new tokens based on their presence in the text so far
- frequency_penalty: Penalizes new tokens based on their frequency in the text so far
Note: Available parameters may vary depending on the provider.
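As a quick illustration of how these parameters shape output, a lower temperature tends to make generations more deterministic, while a higher one encourages variety. A minimal sketch (the prompts and values here are illustrative, not prescriptive):

# Lower temperature: more predictable output, suited to factual answers
factual = client.generate_text(
    "List three facts about the Moon",
    model="gpt-3.5-turbo",
    temperature=0.2
)

# Higher temperature: more varied output, suited to creative writing
creative = client.generate_text(
    "Write an opening line for a mystery novel",
    model="gpt-3.5-turbo",
    temperature=0.9
)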
Streaming Responses¶
For long-form content, you can use streaming to get partial responses as they're generated:
for chunk in client.generate_text(
    "Write a comprehensive essay on climate change",
    model="gpt-3.5-turbo",
    stream=True
):
    print(chunk, end="", flush=True)
This allows for real-time display of generated text, which can be useful for user interfaces or long-running generations.
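If you also need the complete text once streaming finishes, you can accumulate the chunks as you display them. A minimal sketch reusing the call above:

full_text = ""
for chunk in client.generate_text(
    "Write a comprehensive essay on climate change",
    model="gpt-3.5-turbo",
    stream=True
):
    full_text += chunk  # keep the full response for later use
    print(chunk, end="", flush=True)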
Provider-Specific Features¶
Different providers may offer unique features. Here are some examples:
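The examples below use one client per provider. As a sketch of how these clients might be constructed (the exact keyword arguments, such as api_key and host, are assumptions here and may differ; check your provider's setup):

openai_client = ClientAI('openai', api_key="your-openai-api-key")
replicate_client = ClientAI('replicate', api_key="your-replicate-api-key")
ollama_client = ClientAI('ollama', host="your-ollama-host")  # assumed keyword argument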
OpenAI¶
response = openai_client.generate_text(
    "Translate the following to French: 'Hello, how are you?'",
    model="gpt-3.5-turbo"
)
Replicate¶
response = replicate_client.generate_text(
    "Generate a haiku about mountains",
    model="meta/llama-2-70b-chat:latest"
)
Ollama¶
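For example, a model served by a local Ollama instance (assuming llama2 has already been pulled locally):

response = ollama_client.generate_text(
    "Explain quantum computing in simple terms",
    model="llama2"
)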
Best Practices¶
- Prompt Engineering: Craft clear and specific prompts for better results. For example:

good_prompt = "Write a detailed description of a futuristic city, focusing on transportation and architecture."

- Model Selection: Choose appropriate models based on your task complexity and requirements.

- Error Handling: Always handle potential errors in text generation:

try:
    response = client.generate_text("Your prompt here", model="gpt-3.5-turbo")
except Exception as e:
    print(f"An error occurred: {e}")

- Rate Limiting: Be mindful of rate limits imposed by providers. Implement appropriate delays or queuing mechanisms for high-volume applications (see the retry sketch after this list).

- Content Filtering: Implement content filtering or moderation for user-facing applications to ensure appropriate outputs.

- Consistency: For applications requiring consistent outputs, consider using lower temperature values or implementing your own post-processing.
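For rate limiting in particular, a simple retry loop with exponential backoff often suffices. A minimal sketch (the retry count, delays, and the broad Exception catch are illustrative placeholders; prefer your provider's specific rate-limit error if it exposes one):

import time

def generate_with_retry(client, prompt, model, max_retries=3):
    # Retry with exponential backoff: wait 1s, then 2s, doubling each attempt
    for attempt in range(max_retries):
        try:
            return client.generate_text(prompt, model=model)
        except Exception:  # ideally catch the provider's rate-limit error instead
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error to the caller
            time.sleep(2 ** attempt)

response = generate_with_retry(client, "Your prompt here", model="gpt-3.5-turbo")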
By following these guidelines and exploring the various parameters and features available, you can effectively leverage ClientAI for a wide range of text generation tasks across different AI providers.