AI LLM endpoint params Google object.

type
enum<string>
required

The type of the AI LLM endpoint params object for Google. This parameter is required.

Available options:
google_params
Example:

"google_params"

temperature
number | null

The temperature is used for sampling during response generation, which occurs after top-P and top-K filtering are applied. Temperature controls the degree of randomness in token selection.

Required range: 0 <= x <= 2
Example:

0
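To make the temperature parameter concrete, here is a minimal, hypothetical sketch (not part of this API) of how temperature typically reshapes a token distribution: logits are divided by the temperature before the softmax, so low values concentrate probability on the top token and high values flatten the distribution.

```python
import math

def apply_temperature(logits, temperature):
    # Illustrative only: scale logits by 1/temperature, then softmax.
    # Temperature near 0 approaches greedy decoding; higher values
    # flatten the distribution and increase randomness.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

cold = apply_temperature([2.0, 1.0, 0.5], 0.2)
hot = apply_temperature([2.0, 1.0, 0.5], 2.0)
# The low-temperature distribution puts more mass on the top token.
assert cold[0] > hot[0]
```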

top_p
number | null

Top-P changes how the model selects tokens for output. Tokens are selected from the most probable to the least probable (see also top-K) until the sum of their probabilities equals the top-P value.

Required range: 0.1 <= x <= 2
Example:

1
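The selection rule described above can be sketched as follows. This is a hypothetical illustration of top-P (nucleus) filtering, not code from this API: tokens are taken in descending probability order until their cumulative probability reaches the top-P value.

```python
def top_p_filter(probs, top_p):
    # Illustrative only: sort token indices by descending probability,
    # then keep tokens until the cumulative probability reaches top_p.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    return kept

# With probabilities [0.6, 0.3, 0.1] and top_p=0.8, the first two
# tokens (cumulative 0.9 >= 0.8) form the candidate pool.
assert top_p_filter([0.6, 0.3, 0.1], 0.8) == [0, 1]
```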

top_k
number | null

Top-K changes how the model selects tokens for output. A top-K of 1 means the next selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a higher top-K means that the next token is selected from among the K most probable tokens by using temperature.

Required range: 0.1 <= x <= 2
Example:

1
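Putting the fields above together, here is a minimal sketch of building this params object as a plain dictionary. The field names, the required `"google_params"` type value, and the ranges come from the descriptions above; the helper function itself is hypothetical and not part of any client library.

```python
def make_google_params(temperature=None, top_p=None, top_k=None):
    # Hypothetical helper: validates each optional field against the
    # documented range, then returns the params object as a dict.
    if temperature is not None and not 0 <= temperature <= 2:
        raise ValueError("temperature must satisfy 0 <= x <= 2")
    if top_p is not None and not 0.1 <= top_p <= 2:
        raise ValueError("top_p must satisfy 0.1 <= x <= 2")
    if top_k is not None and not 0.1 <= top_k <= 2:
        raise ValueError("top_k must satisfy 0.1 <= x <= 2")
    return {
        "type": "google_params",  # required discriminator value
        "temperature": temperature,
        "top_p": top_p,
        "top_k": top_k,
    }

params = make_google_params(temperature=0, top_p=1, top_k=1)
```

The nullable types in the schema (`number | null`) are mirrored here by leaving unset fields as `None`.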