
OpenAI Offers New Fine-Tuning, Customization Options 

Fine-tuning plays a vital role in the construction of valuable AI tools. Refining a pre-trained model on a smaller, targeted dataset can vastly deepen the model's grasp of specialized content, letting users adapt its existing knowledge to a specific task.
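
For OpenAI's fine-tuning API specifically, that targeted dataset is a JSONL file in which each line is one example conversation for the model to imitate. A minimal, invented illustration (wrapped here for readability; each record occupies a single line in the actual file):

    {"messages": [
      {"role": "system", "content": "You are a support agent for a telecom carrier."},
      {"role": "user", "content": "How do I reset my router?"},
      {"role": "assistant", "content": "Hold the reset button for ten seconds, then wait for the lights to stabilize."}
    ]}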

Although this process can take time, it is often three times more cost-effective than training a model from scratch. This value is precisely why OpenAI recently announced an expansion of its Custom Models Program as well as a variety of new features for its fine-tuning API.

New Features in the Self-Serve Fine-Tuning API

OpenAI originally launched the self-serve fine-tuning API for GPT-3.5 Turbo in August 2023, and the response from the AI community has been overwhelmingly positive. OpenAI reports that thousands of organizations have used the API to train hundreds of thousands of models for tasks such as generating code in a particular programming language, summarizing text in a specific format, or crafting personalized content based on user behavior.
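
As a rough sketch of how the self-serve API is driven from the openai Python SDK (the training file name is hypothetical; the upload and job-creation calls are the documented ones):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Upload the JSONL training data described above (hypothetical filename).
    training_file = client.files.create(
        file=open("support_examples.jsonl", "rb"),
        purpose="fine-tune",
    )

    # Start a fine-tuning job against GPT-3.5 Turbo.
    job = client.fine_tuning.jobs.create(
        training_file=training_file.id,
        model="gpt-3.5-turbo",
    )
    print(job.id, job.status)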

The job matching and hiring platform Indeed is one of the major success stories to come out of that August 2023 launch. To match job seekers with relevant open positions, Indeed sends users personalized recommendations. By fine-tuning GPT-3.5 Turbo to generate more accurate explanations of recommended jobs, the company was able to reduce the number of tokens in its prompts by 80%, which let it scale from fewer than one million messages to job seekers per month to around 20 million.

The new fine-tuning API features below build on this success and aim to improve functionality for future users; a short code sketch showing several of them together follows the list:

  • Epoch-based Checkpoint Creation: Automatically produce one full fine-tuned model checkpoint during each training epoch, which reduces the need for subsequent retraining, especially in cases of overfitting.
  • Comparative Playground: A new side-by-side Playground UI for comparing model quality and performance, allowing human evaluation of the outputs of multiple models or fine-tune snapshots against a single prompt.
  • Third-party Integration: Support for integrations with third-party platforms (starting with Weights & Biases this week), letting developers share detailed fine-tuning data with the rest of their stack.
  • Comprehensive Validation Metrics: The ability to compute metrics like loss and accuracy over the entire validation dataset instead of a sampled batch, providing better insight into model quality.
  • Hyperparameter Configuration: The ability to configure available hyperparameters from the Dashboard (rather than only through the API or SDK).
  • Fine-Tuning Dashboard Improvements: A revamped dashboard that includes the ability to configure hyperparameters, view more detailed training metrics, and rerun jobs from previous configurations.
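
A sketch of several of these features used together from the openai Python SDK (the file ID, W&B project name, and hyperparameter values are placeholders, not recommendations):

    from openai import OpenAI

    client = OpenAI()

    # Configure hyperparameters explicitly and stream training metrics to
    # Weights & Biases (a W&B account must first be linked in the dashboard).
    job = client.fine_tuning.jobs.create(
        training_file="file-abc123",  # ID of a previously uploaded JSONL file
        model="gpt-3.5-turbo",
        hyperparameters={"n_epochs": 3, "learning_rate_multiplier": 2},
        integrations=[{"type": "wandb", "wandb": {"project": "ft-experiments"}}],
    )

    # Once training has run, one checkpoint exists per epoch, so an earlier
    # epoch can be deployed if the final one overfits (in practice you would
    # poll for job completion before listing).
    for ckpt in client.fine_tuning.jobs.checkpoints.list(job.id).data:
        print(ckpt.step_number, ckpt.fine_tuned_model_checkpoint)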

Based on past success, OpenAI believes these new features will give developers more granular control over their fine-tuning efforts.

Assisted Fine-Tuning and Custom-Trained Models

OpenAI is also building on its DevDay announcement from November 2023 by improving the Custom Models Program. One of the major changes is the unveiling of assisted fine-tuning, a means of leveraging techniques beyond the fine-tuning API, such as additional hyperparameters and various parameter-efficient fine-tuning (PEFT) methods at larger scale.
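
OpenAI has not published exactly which PEFT methods assisted fine-tuning uses, but the general idea is well illustrated with open-source tooling. Below is a minimal LoRA sketch using Hugging Face's peft library on a small open model; the base weights stay frozen and only low-rank adapter matrices are trained:

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    # A small open model stands in for the far larger models OpenAI tunes.
    base = AutoModelForCausalLM.from_pretrained("gpt2")

    # Learn low-rank updates to the attention projections; r and alpha are
    # common defaults, not OpenAI's settings.
    config = LoraConfig(
        r=8,
        lora_alpha=16,
        lora_dropout=0.05,
        target_modules=["c_attn"],
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(base, config)

    # Typically reports well under 1% of parameters as trainable.
    model.print_trainable_parameters()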

SK Telecom is an example of this service being used to full potential. The telecommunications operator serves over 30 million subscribers in South Korea and wanted a custom AI model that could act as an expert in telecommunications customer service.

By working with OpenAI to fine-tune GPT-4 to focus on telecom-related conversations in Korean, SK Telecom saw a 35% increase in conversation summarization quality and a 33% increase in intent recognition accuracy. Customer satisfaction scores also rose from 3.6 to 4.5 out of 5 when the fine-tuned model was compared with generalized GPT-4.

OpenAI is also introducing the ability to build custom models for companies that require deeply fine-tuned models for domain-specific knowledge. The organization's work with legal AI company Harvey is proof of this feature's worth. Legal work involves reading large volumes of dense documents, and Harvey wanted to use LLMs to synthesize information from those documents and present it to lawyers for review. Because many laws are complex and context-dependent, Harvey worked with OpenAI to build a custom-trained model that could incorporate new knowledge and reasoning methods into the base model.

Working with OpenAI, Harvey added the equivalent of 10 billion tokens' worth of data to custom-train this case-law model. With the depth of context necessary to make informed legal judgments, the resulting model achieved an 83% increase in factual responses.

AI tools were never meant to be one-size-fits-all solutions. Customizability lies at the heart of this technology's usefulness, and OpenAI's work on fine-tuning and custom-trained models will help expand what organizations can already get out of these tools.

EnterpriseAI