AI_ASK
The AI_ASK function generates text responses using a Large Language Model (LLM) based on a user-provided prompt and optional tabular data context. It connects to an OpenAI-compatible chat completions API, with Mistral AI as the default provider.
LLMs are transformer-based neural networks trained on massive text corpora to predict the next token in a sequence. When given a prompt, the model generates contextually relevant text by sampling from a probability distribution over its vocabulary. This implementation uses the Mistral AI Chat Completions API, which supports instruction-following models fine-tuned for conversational responses.
The function accepts an Excel range as the data parameter, which is serialized as JSON and appended to the prompt. This allows the model to analyze tabular data such as survey results, sales figures, or incident reports directly from the spreadsheet.
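The serialization step can be sketched as follows (the prompt text and range values below are illustrative, mirroring the survey example later in this page):

```python
import json

# A 2D list as Excel passes a range: header row plus data rows
data = [
    ["Question", "Score"],
    ["Team collaboration", 4.5],
    ["Workload", 3.2],
]

prompt = "Summarize the key findings:"

# The range is serialized as JSON and appended to the prompt
message = prompt + "\n\nData to analyze:\n" + json.dumps(data, indent=2)
print(message)
```

The model receives a single user message containing both the instruction and the JSON-encoded table, so no special tool or function-calling support is required.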
Two key parameters control the model’s output behavior:
- Temperature controls randomness in token selection. Lower values (e.g., 0.2) produce more deterministic, focused outputs, while higher values (e.g., 0.7) increase creativity and diversity. Mistral recommends values between 0.0 and 0.7 for most use cases.
- Max tokens limits the length of the generated response. The combined token count of the prompt and response cannot exceed the model’s context window.
The function uses codestral-2508 as the default model, but any model available through the configured API endpoint can be specified. For available Mistral models, see the Mistral model overview. The api_url parameter accepts any OpenAI-compatible endpoint, enabling use with alternative providers or self-hosted models.
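Because the request body follows the OpenAI chat completions shape, switching providers only changes the URL and model ID, not the payload. A minimal sketch (the localhost URL and model name are placeholder assumptions for a self-hosted server):

```python
import json

# The same payload shape works for any OpenAI-compatible endpoint
payload = {
    "model": "my-local-model",  # placeholder model ID
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 0.5,
    "max_tokens": 250,
}
api_url = "http://localhost:8000/v1/chat/completions"  # placeholder self-hosted endpoint
body = json.dumps(payload)
```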
For more details on the API, see the Mistral Chat Completions API documentation and the Mistral AI GitHub repository.
This example function is provided as-is without any representation of accuracy.
Excel Usage
=AI_ASK(prompt, api_key, data, temperature, max_tokens, model, api_url)
- prompt (str, required): The question, instruction, or task for the AI model.
- api_key (str, required): API key for authentication.
- data (list[list], optional, default: None): 2D list (Excel range) to provide additional context.
- temperature (float, optional, default: 0.5): Controls randomness/creativity (0.0 to 2.0).
- max_tokens (int, optional, default: 250): Maximum number of tokens to generate (5 to 5000).
- model (str, optional, default: "codestral-2508"): Model ID to use.
- api_url (str, optional, default: "https://api.mistral.ai/v1/chat/completions"): OpenAI-compatible API endpoint URL.
Returns (str): The AI-generated response, or an error message if the request fails or parameters are invalid.
Example 1: Demo case 1
Inputs:
| prompt | temperature | max_tokens | model |
|---|---|---|---|
| Summarize the key findings from the employee engagement survey in 1 sentence: | 0.5 | 250 | codestral-2508 |

data:
| Question | Score |
|---|---|
| Team collaboration | 4.5 |
| Workload | 3.2 |
| Career advancement | 3 |
| Management support | 4 |
Excel formula:
=AI_ASK("Summarize the key findings from the employee engagement survey in 1 sentence:", {"Question","Score";"Team collaboration",4.5;"Workload",3.2;"Career advancement",3;"Management support",4}, 0.5, 250, "codestral-2508")
Expected output:
"The employee engagement survey revealed that while team collaboration and management support scored highly (4.5 and 4, respectively), workload and career advancement received lower ratings (3.2 and 3), indicating areas needing improvement."
Example 2: Demo case 2
Inputs:
| prompt | temperature | max_tokens | model |
|---|---|---|---|
| Provide a brief analysis of the quarterly sales performance in 1 sentence: | 0.5 | 250 | codestral-2508 |

data:
| Region | Q1 | Q2 | Q3 | Q4 |
|---|---|---|---|---|
| North | 120 | 135 | 150 | 160 |
| South | 100 | 110 | 120 | 130 |
| Central | 90 | 95 | 100 | 105 |
Excel formula:
=AI_ASK("Provide a brief analysis of the quarterly sales performance in 1 sentence:", {"Region","Q1","Q2","Q3","Q4";"North",120,135,150,160;"South",100,110,120,130;"Central",90,95,100,105}, 0.5, 250, "codestral-2508")
Expected output:
"The sales performance shows steady growth across all regions, with the North region leading in sales and the Central region trailing, though all regions experienced quarter-over-quarter increases."
Example 3: Demo case 3
Inputs:
| prompt | data | temperature | max_tokens | model |
|---|---|---|---|---|
| Summarize the following incident report in 1 sentence: | On April 10th, a system outage affected order processing for 2 hours. The IT team resolved the issue by updating server configurations. No data loss occurred. | 0.5 | 250 | codestral-2508 |
Excel formula:
=AI_ASK("Summarize the following incident report in 1 sentence:", {"On April 10th, a system outage affected order processing for 2 hours. The IT team resolved the issue by updating server configurations. No data loss occurred."}, 0.5, 250, "codestral-2508")
Expected output:
"On April 10th, a 2-hour system outage disrupted order processing, which was resolved by updating server configurations without any data loss."
Example 4: Demo case 4
Inputs:
| prompt | data | temperature | max_tokens | model |
|---|---|---|---|---|
| Summarize the following in 1 sentence: | Test data | 0.5 | 250 | codestral-2508 |
Excel formula:
=AI_ASK("Summarize the following in 1 sentence:", {"Test data"}, 0.5, 250, "codestral-2508")
Expected output:
"The data to analyze is labeled as 'Test data.'"
Python Code
import requests
import json
def ai_ask(prompt, api_key, data=None, temperature=0.5, max_tokens=250, model='codestral-2508', api_url='https://api.mistral.ai/v1/chat/completions'):
    """
    Generate a text response using an AI model based on a prompt and optional tabular data.

    This example function is provided as-is without any representation of accuracy.

    Args:
        prompt (str): The question, instruction, or task for the AI model.
        api_key (str): API key for authentication.
        data (list[list], optional): 2D list (Excel range) to provide additional context. Default is None.
        temperature (float, optional): Controls randomness/creativity (0.0 to 2.0). Default is 0.5.
        max_tokens (int, optional): Maximum number of tokens to generate (5 to 5000). Default is 250.
        model (str, optional): Model ID to use. Default is 'codestral-2508'.
        api_url (str, optional): OpenAI-compatible API endpoint URL. Default is 'https://api.mistral.ai/v1/chat/completions'.

    Returns:
        str: The AI-generated response, or an error message if the request fails or parameters are invalid.
    """
    if not api_key:
        return "You must include an API key to use this function. Sign up for a free API key at https://aistudio.google.com/, https://console.mistral.ai/, or other providers and add your own api_key. You may use any OpenAI compatible API, just update the api_url parameter."
    if not isinstance(temperature, (float, int)) or not (0 <= float(temperature) <= 2):
        return "Error: temperature must be a float between 0 and 2 (inclusive)"
    if not isinstance(max_tokens, (float, int)) or not (5 <= float(max_tokens) <= 5000):
        return "Error: max_tokens must be a number between 5 and 5000 (inclusive)"

    # Construct the message, appending the serialized data if provided
    message = prompt
    if data is not None:
        data_str = json.dumps(data, indent=2)
        message += f"\n\nData to analyze:\n{data_str}"

    # Prepare the API request payload
    payload = {
        "messages": [{"role": "user", "content": message}],
        "temperature": float(temperature),
        "model": model,
        "max_tokens": int(max_tokens)
    }
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
        "Accept": "application/json"
    }

    # Make the API request; network and parsing errors are returned as strings
    try:
        response = requests.post(api_url, headers=headers, json=payload, timeout=60)
        if response.status_code == 429:
            return "You have hit the rate limit for the API. Please try again later."
        response.raise_for_status()
        response_data = response.json()
        return response_data["choices"][0]["message"]["content"]
    except Exception as e:
        return f"Error: {str(e)}"
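The final parsing step expects the standard chat completions response shape: a choices array whose first element holds an assistant message. A sketch with an illustrative response body (the content string is made up):

```python
# Illustrative chat completions response body, as returned by response.json()
response_data = {
    "choices": [
        {"message": {"role": "assistant", "content": "Hello from the model."}}
    ]
}

# The function returns only the assistant's text, discarding metadata
content = response_data["choices"][0]["message"]["content"]
print(content)  # → Hello from the model.
```

Any deviation from this shape (for example, an error object instead of choices) raises a KeyError or IndexError inside the try block, which the function converts into an "Error: ..." string for display in the cell.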