OpenAI's official website published an article yesterday:
Fine-tuning for GPT-3.5 Turbo is now available, and fine-tuning for GPT-4 will arrive this fall. This update lets developers customize the model to better fit their use cases and run these customized models at scale. Early tests show that a fine-tuned GPT-3.5 Turbo can match or even surpass the capabilities of base GPT-4 on certain narrow tasks. As with all our APIs, data sent to and retrieved from the fine-tuning API is owned by the customer, and neither OpenAI nor any other organization will use it to train other models.
The main advantages include:
- Improved steerability: Fine-tuning allows companies to make the model follow instructions better, such as keeping outputs concise or always responding in a given language. For example, developers can use fine-tuning to ensure that the model always responds in German when prompted to use that language.
- Reliable output formatting: Fine-tuning improves the model's ability to respond in a consistent format, which is critical for applications that require specific response formats, such as code completion or composing API calls. Developers can use fine-tuning to more reliably convert user prompts into high-quality JSON snippets that work with their own systems.
- Custom tone: Fine-tuning is a good way to refine the qualitative feel of model output, such as its tone, so it better matches the voice of an enterprise brand. Businesses with a recognizable brand voice can use fine-tuning to make the model's outputs more consistent with that tone.
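To make the output-formatting point concrete, here is a sketch of what a single chat-format training example for teaching consistent JSON output might look like. The task, field values, and system prompt are illustrative assumptions, not from the announcement; only the overall messages structure follows the chat fine-tuning format.

```python
import json

# A hypothetical training example: the assistant reply is strict JSON, so the
# fine-tuned model learns to answer this kind of prompt with JSON only.
example = {
    "messages": [
        {"role": "system",
         "content": "Extract the order details and reply with JSON only."},
        {"role": "user",
         "content": "I'd like two large coffees and a bagel."},
        {"role": "assistant",
         "content": json.dumps({
             "items": [
                 {"name": "coffee", "size": "large", "quantity": 2},
                 {"name": "bagel", "quantity": 1},
             ]
         })},
    ]
}

# Each line of the training file is one such example serialized as JSON.
line = json.dumps(example, ensure_ascii=False)
print(line)
```

A training file is simply many such lines, one example per line (JSONL).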
Getting started is quite straightforward:
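The basic workflow can be sketched as follows, assuming the `openai` Python SDK as it looked at the time of the announcement (the v0.x interface). File names, the example content, and the placeholder model name are illustrative.

```python
import json

# 1. Prepare training data as a JSONL file of chat-format examples.
#    The single example here is purely illustrative.
examples = [
    {"messages": [
        {"role": "system",
         "content": "You are a helpful assistant that replies in German."},
        {"role": "user", "content": "Hello!"},
        {"role": "assistant", "content": "Hallo! Wie kann ich Ihnen helfen?"},
    ]},
]

def write_jsonl(path, rows):
    # Write one JSON object per line (the JSONL format the API expects).
    with open(path, "w", encoding="utf-8") as f:
        for row in rows:
            f.write(json.dumps(row, ensure_ascii=False) + "\n")

write_jsonl("training_data.jsonl", examples)

# 2. Upload the file and start a fine-tuning job (requires an API key):
#    import openai
#    uploaded = openai.File.create(
#        file=open("training_data.jsonl", "rb"), purpose="fine-tune")
#    job = openai.FineTuningJob.create(
#        training_file=uploaded.id, model="gpt-3.5-turbo")
#
# 3. When the job finishes, call the custom model by its "ft:"-prefixed name:
#    openai.ChatCompletion.create(model="ft:gpt-3.5-turbo:...", messages=[...])
```

The live API calls are shown as comments since they need a valid key and a completed job; the data-preparation step runs as-is.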
The price details are as follows:
Everyone can start using it now~