Updated in 2026.10
To start using an Amazon Bedrock model with Cognigy.AI features, follow these steps:
- Add a Model
- Apply the Model
Add Models
You can add a model using one of the following interfaces:
Add Models via GUI
You can add a model provided by Amazon Bedrock to Cognigy.AI in Build > LLM. To add the model, you will need the following parameters:
Standard Model
| Parameter | Description |
|---|---|
| Access Key ID | Enter the Access Key ID. Log in to the AWS Management Console, go to the IAM dashboard, select Users, and choose the IAM user. Navigate to the Security credentials tab, and under Access keys, create a new access key if one hasn't been created. Copy the Access Key ID provided after creation. |
| Secret Access Key | Enter the Secret Access Key. After creating the access key, you'll be prompted to download a file containing the Access Key ID and the Secret Access Key. Alternatively, you can retrieve the Secret Access Key by navigating to the IAM dashboard, selecting the user, going to the Security credentials tab, and clicking Show next to the Access Key ID to reveal and copy the Secret Access Key. |
| Region | Enter the AWS region where your model is located, for example, us-east-1 for the US East (N. Virginia) region. |
| Location | Select the routing behavior of your inference calls. Select one of the following options:<br>- Region — the data sent in the request to the LLM stays within the AWS region you provided in Region. This is the default behavior.<br>- Geo — the data sent in the request can be routed within a geographic boundary. This option allows for higher request throughput while keeping the benefits of a geographic boundary, for example, data residency for compliance with regulations. Enter the geographic boundary in the Geo field, for example, us, eu, or apac. The AWS region must be located within the geographic boundary entered in this parameter.<br>- Global — the data sent in the request can be routed worldwide. This option allows for the highest request throughput and best performance for cases with no data residency constraints.<br>For more information, read the Regional availability article. |
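To illustrate what these parameters feed into, the following Python sketch assembles a request body for the Amazon Bedrock Converse API, which Chat-type models support. The model ID, credentials, and prompt are placeholders, and this is an illustration of the Converse request shape rather than Cognigy.AI's internal implementation; actually sending the request would require an AWS SDK such as boto3 (bedrock-runtime client, converse() operation).

```python
# Sketch: the GUI parameters above map onto an Amazon Bedrock
# Converse API call. All values below are placeholders.

def build_converse_request(model_id: str, user_text: str, max_tokens: int = 512) -> dict:
    """Assemble a Converse API request body for the given model."""
    return {
        "modelId": model_id,
        "messages": [
            {"role": "user", "content": [{"text": user_text}]},
        ],
        "inferenceConfig": {"maxTokens": max_tokens, "temperature": 0.7},
    }

# These correspond to the table fields (placeholder values):
connection = {
    "aws_access_key_id": "AKIA...",      # Access Key ID
    "aws_secret_access_key": "...",      # Secret Access Key
    "region_name": "us-east-1",          # Region
}

request = build_converse_request(
    "anthropic.claude-3-5-sonnet-20240620-v1:0",  # example model ID
    "Hello from Cognigy.AI",
)
```

A real call would pass `connection` to the SDK client constructor and `request` to the converse operation; the body shown here is only the portion the table's parameters determine.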
Custom Model
| Parameter | Description |
|---|---|
| Model Type | Select the Chat type for models that support the Converse API. Note that the model will only work if your AWS administrator gives you access to this model. |
| Model Name | Enter the ID of the model that you want to use as a custom model. To find model IDs, refer to the Amazon Bedrock documentation. |
| Access Key ID | Enter the Access Key ID. Log in to the AWS Management Console, go to the IAM dashboard, select Users, and choose the IAM user. Navigate to the Security credentials tab, and under Access keys, create a new access key if one hasn't been created. Copy the Access Key ID provided after creation. |
| Secret Access Key | Enter the Secret Access Key. After creating the access key, you'll be prompted to download a file containing the Access Key ID and the Secret Access Key. Alternatively, you can retrieve the Secret Access Key by navigating to the IAM dashboard, selecting the user, going to the Security credentials tab, and clicking Show next to the Access Key ID to reveal and copy the Secret Access Key. |
| Region | Enter the AWS region where your model is located, for example, us-east-1 for the US East (N. Virginia) region. |
| Location | Select one of the following routing behaviors for your inference calls:<br>- Region — the data sent in the request to the LLM stays within the AWS region you provided in Region. This is the default behavior.<br>- Geo — the data sent in the request can be routed within a geographic boundary. This option allows for higher request throughput while keeping the benefits of a geographic boundary, for example, data residency for compliance with regulations. Enter the geographic boundary in the Geo field, for example, us, eu, or apac. The AWS region must be located within the geographic boundary entered in this parameter.<br>- Global — the data sent in the request can be routed worldwide. This option allows for the highest request throughput and best performance for cases with no data residency constraints.<br>For more information, read the Regional availability article. |
Apply the changes, then click Test to check that the connection is set up correctly.
Add Models via the API
You can add either a standard or custom model using the Cognigy.AI API POST /v2.0/largelanguagemodels request.
Then, test the connection for the created model via the Cognigy.AI API POST /v2.0/largelanguagemodels//test request.
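The API route above can be sketched as follows. The base URL, API-key header, and payload field names are assumptions for illustration; check the Cognigy.AI API reference for the exact request schema. The request is constructed but deliberately not sent.

```python
# Sketch: adding a Bedrock model through the Cognigy.AI API.
# The endpoint path comes from the docs above; the base URL, header,
# and payload field names are hypothetical placeholders.
import json
import urllib.request

BASE_URL = "https://api-trial.cognigy.ai"  # placeholder base URL
API_KEY = "your-cognigy-api-key"           # placeholder credential

payload = {
    # Hypothetical field names mirroring the Bedrock parameters:
    "name": "Bedrock Claude",
    "provider": "amazonBedrock",
    "connection": {
        "accessKeyId": "AKIA...",
        "secretAccessKey": "...",
        "region": "us-east-1",
    },
}

req = urllib.request.Request(
    url=f"{BASE_URL}/v2.0/largelanguagemodels",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "X-API-Key": API_KEY},
    method="POST",
)
# urllib.request.urlopen(req) would send the request; omitted here.
```

A matching test call would then POST to the /test route for the model ID returned in the response.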
Apply the Model
To apply a model, follow these steps:
- In the left-side menu of the Project, go to Manage > Settings.
- Go to the section that matches your use case:
- Generative AI Settings. In the Generative AI Settings section, activate Enable Generative AI Features. This setting is toggled on by default if you have previously set up Generative AI credentials.
- Knowledge AI Settings. Use this section if you need to add a model for Knowledge AI. Select a model for the Knowledge Search and Answer Extraction features. Refer to the list of standard models to find the models that support these features.
- Navigate to the desired feature and select a model from the list. If no model is available for the selected feature, the system automatically selects None. Save the changes.