The DeepSeek API has recently undergone a significant upgrade, introducing a suite of new features designed to unlock greater potential for developers and users alike. These updates focus on enhancing flexibility, control, and integration capabilities, making DeepSeek a more versatile and powerful tool.
The latest DeepSeek API update includes enhancements to existing interfaces and the addition of entirely new functionalities. These features aim to provide developers with more granular control over model outputs and enable seamless integration with external tools and environments.
All of these new features are supported by both the `deepseek-chat` and `deepseek-coder` models.
The /chat/completions Interface

The /chat/completions interface receives several powerful upgrades that further refine and optimize its output.
The new JSON Output feature within the DeepSeek API allows developers to force the Large Language Model to return responses in JSON format. This enables automated data processing and simplifies the parsing of model outputs. This JSON output is particularly useful when working with structured data.
To use JSON Output:
- Set the `response_format` parameter to `{ 'type': 'json_object' }`.
- Set `max_tokens` to a sufficiently large value to prevent truncation of the JSON string.

The example below shows how to format the JSON response containing question and answer pairs.
For detailed usage instructions, refer to the JSON Output Guide.
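The steps above can be sketched as a request body. This is a minimal illustration, not an official snippet: the system prompt wording and the question/answer schema are assumptions, while `model`, `messages`, `response_format`, and `max_tokens` are the fields named in this section.

```python
import json

# Sketch of a /chat/completions request body that forces JSON output.
# The prompt text and the question/answer key names are illustrative.
system_prompt = (
    "The user will give you an exam question. "
    "Output the question and the answer as a JSON object "
    "with the keys 'question' and 'answer'."
)

payload = {
    "model": "deepseek-chat",
    "messages": [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Which is the highest mountain in the world?"},
    ],
    # Force the model to emit a valid JSON object.
    "response_format": {"type": "json_object"},
    # Use a generous limit so the JSON string is not truncated mid-object.
    "max_tokens": 1024,
}

print(json.dumps(payload, indent=2))
```

Because the response is guaranteed to be a JSON object, the client can parse it directly with `json.loads` instead of scraping free-form text.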
The DeepSeek API now supports Function Calling, enabling the model to interact with external tools and APIs so you can build more dynamic, tool-aware applications.
The image below showcases the successful integration of deepseek-coder into LobeChat:
Example Image of Function Calling Integration
The Function Calling process unfolds as follows:
Example Image of the Function Calling process itself
For detailed usage instructions, refer to the Function Calling Guide.
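As a rough sketch of the process shown above, a request can advertise a callable tool using the OpenAI-compatible `tools` schema. The `get_weather` tool and its parameters here are hypothetical, invented purely for illustration:

```python
import json

# Hypothetical tool definition: the model does not call get_weather itself;
# it returns a tool_calls entry, your code runs the function, and you send
# the result back in a follow-up message with role "tool".
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

payload = {
    "model": "deepseek-chat",
    "messages": [{"role": "user", "content": "How's the weather in Hangzhou?"}],
    "tools": tools,
}

print(json.dumps(payload, indent=2))
```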
Dialogue Prefix Completion allows you to specify a prefix for the assistant's last message, guiding the model to continue from that point. This feature permits greater flexibility in the responses the LLM outputs. It is also very useful when the output has been truncated due to length limits.
How to use Dialogue Prefix Completion:
- Set `base_url` to https://api.deepseek.com/beta to enable the Beta functionality.
- Make sure the last message in the `messages` list has the role "assistant", and set `"prefix": True` on that message.

Example Image of Dialogue Prefix Completion
For detailed usage instructions, refer to the Dialogue Prefix Completion Guide.
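The two steps above can be sketched as follows; the user prompt and the prefix text are illustrative choices, while `"prefix": True` on the final assistant message is the switch described in this section:

```python
import json

# Sketch of a Dialogue Prefix Completion request (Beta).
payload = {
    "model": "deepseek-chat",
    "messages": [
        {"role": "user", "content": "Write quick sort in Python."},
        # The model continues generating from this exact prefix.
        {"role": "assistant", "content": "def quick_sort(arr):", "prefix": True},
    ],
    "max_tokens": 1024,
}

# This body targets the Beta endpoint: https://api.deepseek.com/beta/chat/completions
print(json.dumps(payload, indent=2))
```

The same mechanism helps resume a response that was cut off: pass the truncated text back as the assistant prefix and let the model continue from where it stopped.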
To accommodate scenarios requiring longer text outputs, the `max_tokens` limit has been increased to 8K in the Beta API.

To utilize the 8K maximum output:
- Set `base_url` to https://api.deepseek.com/beta to enable the Beta functionality.
- The default `max_tokens` is 4096; in Beta you can raise it up to 8192.

The /completions Interface: Unlocking Continuation Scenarios

The DeepSeek API introduces a new /completions interface featuring FIM completion.
The DeepSeek API now includes a FIM (Fill-In-The-Middle) Completion API, compatible with OpenAI's FIM Completion API. This new feature allows models to fill in gaps between user-defined prefixes and suffixes, enabling use cases like story completion, code generation, and contextual content creation. This API charges the same rate as Dialogue Completion.
To use the FIM Completion interface, set `base_url` to https://api.deepseek.com/beta to access the Beta features.
Example Image of FIM Completion in Use
For detailed usage instructions, refer to the FIM Completion Guide.
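A minimal sketch of a FIM request body, assuming the OpenAI-style `prompt`/`suffix` fields this section says the interface is compatible with; the code fragment being completed is an invented example:

```python
import json

# Sketch of a Beta /completions (FIM) request: the model fills in the
# body of fib() between the given prefix and suffix.
payload = {
    "model": "deepseek-coder",
    "prompt": "def fib(n):\n",
    "suffix": "    return fib(n - 1) + fib(n - 2)\n",
    "max_tokens": 128,
}

# This body targets the Beta endpoint: https://api.deepseek.com/beta/completions
print(json.dumps(payload, indent=2))
```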
The Beta interfaces are available to all users by setting `base_url` to https://api.deepseek.com/beta. Keep in mind that Beta interfaces are unstable.
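One way to keep the Beta switch in a single place is a small helper that builds an authenticated request against the Beta base URL. This is a stdlib sketch, not official client code; the helper name and the placeholder key are assumptions, and the request is built but deliberately never sent:

```python
import json
import urllib.request

BETA_BASE_URL = "https://api.deepseek.com/beta"


def build_beta_request(path: str, payload: dict, api_key: str) -> urllib.request.Request:
    """Build (but do not send) an authenticated POST to a Beta endpoint.

    Hypothetical helper: it only concatenates the Beta base URL with an
    endpoint path and attaches the standard Bearer-token header.
    """
    return urllib.request.Request(
        url=BETA_BASE_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )


req = build_beta_request(
    "/chat/completions",
    {"model": "deepseek-chat", "messages": [{"role": "user", "content": "Hi"}]},
    api_key="YOUR_API_KEY",  # placeholder; substitute your real key
)
print(req.full_url)
```

Sending the request (e.g. with `urllib.request.urlopen`) is left out here, since Beta endpoints may change and real calls require a valid key.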
This latest DeepSeek API upgrade equips developers with a powerful set of tools to enhance their applications and explore new possibilities in AI-driven innovation. By continuously iterating and improving its service, DeepSeek is positioning itself as a leading platform for AI development and deployment.