Transformative Paraphrasing

Paraphrasing helps simplify complex technical language and make it readable for a broader audience. This case study explores QuData's practical application of paraphrasing to improve content performance in game development, keeping users captivated and engaged.

Business Challenge

In the IT industry, paraphrasing plays a vital role by simplifying complex technical jargon for wider comprehension. Its primary importance lies in making dense technical information more readable for the general audience, aiding in easier understanding without overwhelming people with intricate technical details.

At QuData, we've harnessed paraphrasing's versatility, leveraging it for multiple purposes, including building expansive chatbot training datasets. By paraphrasing user requests, our model learned to recognize intents, significantly improving smartbot performance.

Given our extensive experience, the QuData team was asked to apply text rewriting for a gamedev company. We needed to adapt paraphrasing to tackle an important challenge in casual game development: transforming complex technical game descriptions into user-friendly content. The challenge lay in maintaining technical precision while crafting engaging narratives, enhancing search engine visibility, and appealing to both potential players and algorithms.

Solution Overview

Our objective was to seamlessly paraphrase game descriptions, infusing them with targeted keywords, while striking a harmonious balance between technical accuracy and captivating storytelling. This challenge necessitated the creation of a comprehensive solution that would enhance the appeal of game descriptions, ensuring a wider audience reach and an improved online presence.

Our team aimed not only to increase user interest, but also to optimize the client's game descriptions for better indexing by search engines, bringing the true gems of the gaming industry to more users.

The QuData team developed a system that operates according to the following scheme:

The model takes as input an arbitrary text describing the game, together with a list of relevant keywords to be used in the context of our task.

The output of the model is a paraphrased text that preserves the style and logical meaning of the original while incorporating the keywords supplied as input.
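To make this more concrete, below is a minimal sketch of how such an input could be assembled into a single prompt. The template wording and the build_prompt helper are illustrative assumptions, not the exact format used in production.

    # Hypothetical prompt template; the actual instruction wording used in production may differ.
    PROMPT_TEMPLATE = (
        "Rewrite the following game description so that it stays factually accurate, "
        "reads naturally, and includes every keyword from the list.\n\n"
        "Keywords: {keywords}\n\n"
        "Original description:\n{description}\n\n"
        "Rewritten description:"
    )

    def build_prompt(description: str, keywords: list[str]) -> str:
        """Combine a raw game description and target keywords into a single model input."""
        return PROMPT_TEMPLATE.format(keywords=", ".join(keywords), description=description)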

Technical Details

The initial open-source solutions that could be easily deployed locally did not produce good results. The gaming domain is relatively niche, and models such as Guanaco, Llama-1, and Llama-2 tended to hallucinate when used out of the box. GPT-3.5 from OpenAI, accessed via the API, showed satisfactory results, but relying on the API would impose ongoing financial costs on the system. We therefore decided to implement similar functionality by further training an open-source large language model (LLM). Llama-2 with 7 billion parameters was chosen as the base model, offering the best trade-off between output quality and the computing resources required to run it.

An important step was data preparation. To fine-tune such a model, we needed high-quality training pairs: the input being a game description plus keywords, and the output a paraphrased description that incorporates those keywords. The game descriptions were easy to obtain, since the gamedev company maintains an extensive catalogue of games with existing descriptions. Identifying keywords was also straightforward: given their specificity, we used a fixed list of approximately the following words: “free”, “casual”, “family-friendly”, “interesting”, “best”. The only difficulty lay in obtaining high-quality output texts, which we solved by generating them with GPT-3.5 from OpenAI via the API.
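A simplified sketch of this generation step is shown below, assuming the openai Python client. The model name, system prompt, and sampling parameters are illustrative, and build_prompt refers to the hypothetical helper from the earlier sketch.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    KEYWORDS = ["free", "casual", "family-friendly", "interesting", "best"]

    def generate_reference_output(description: str) -> str:
        """Ask GPT-3.5 to produce a high-quality paraphrase used as a training target."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "You rewrite game descriptions in a clear, engaging style."},
                {"role": "user", "content": build_prompt(description, KEYWORDS)},
            ],
            temperature=0.7,
        )
        return response.choices[0].message.content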

Having collected approximately 500 high-quality pairs of input and output data, we moved on to fine-tuning Llama-2. The main difficulty is that large language models occupy a great deal of memory: at float32 precision, the full 7-billion-parameter model requires roughly 28 GB of video random access memory (VRAM) just to store its weights (7 billion parameters × 4 bytes each). Training such a large model at that precision without expensive equipment is impractical. Fortunately, several tools can reduce the VRAM required for training to about 5-6 GB.
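As a rough illustration, the collected pairs can be arranged into a single text field suitable for supervised fine-tuning. The field names, the template, and the pairs variable below are assumptions for illustration rather than our exact preprocessing code.

    from datasets import Dataset

    def to_training_text(example: dict) -> dict:
        """Merge prompt and reference answer into one text field for causal-LM fine-tuning."""
        prompt = build_prompt(example["description"], example["keywords"])  # hypothetical helper from the earlier sketch
        return {"text": prompt + " " + example["paraphrase"]}

    # pairs is assumed to be the list of ~500 dicts holding a description, the keywords,
    # and the GPT-3.5-generated paraphrase used as the training target.
    train_dataset = Dataset.from_list(pairs).map(to_training_text)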

One such tool is QLoRA (Quantized Low-Rank Adapters), used together with the BitsAndBytes library, which makes it convenient to control the numerical precision of computations. The base model's weights are quantized down to 4-bit precision, and the segments needed at each step are dequantized to float16 during computation, while only small low-rank adapter layers are actually trained. The Accelerate library made it possible to parallelize the model's operation, improving its overall speed, and the TRL (Transformer Reinforcement Learning) library provided broad and convenient functionality for the training itself.
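Putting these pieces together, a typical QLoRA fine-tuning setup might look like the sketch below. This is not our exact training script: the hyperparameters are illustrative, and argument names may differ slightly between versions of the transformers, peft, and trl libraries.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TrainingArguments
    from peft import LoraConfig
    from trl import SFTTrainer

    base_model = "meta-llama/Llama-2-7b-hf"

    # Load the base model in 4-bit (NF4) with float16 compute, as QLoRA prescribes.
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
        bnb_4bit_use_double_quant=True,
    )
    model = AutoModelForCausalLM.from_pretrained(base_model, quantization_config=bnb_config, device_map="auto")
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    tokenizer.pad_token = tokenizer.eos_token

    # Train only small low-rank adapters on top of the frozen, quantized weights.
    lora_config = LoraConfig(
        r=16,
        lora_alpha=32,
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],
        task_type="CAUSAL_LM",
    )

    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=train_dataset,  # the ~500 formatted pairs from the previous sketch
        peft_config=lora_config,
        dataset_text_field="text",
        max_seq_length=1024,
        args=TrainingArguments(
            output_dir="llama2-paraphraser",
            per_device_train_batch_size=1,
            gradient_accumulation_steps=8,
            num_train_epochs=3,
            learning_rate=2e-4,
            fp16=True,
            logging_steps=10,
        ),
    )
    trainer.train()

With gradient accumulation and a small per-device batch size, a setup of this kind keeps peak VRAM usage in the 5-6 GB range mentioned above.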

Technology Stack

GPT

Llama 2