Implementing the Self-Refine Technique Using Large Language Models (LLMs)

By byrn


This tutorial demonstrates how to implement the Self-Refine technique using Large Language Models (LLMs) with Mirascope, a powerful framework for building structured prompt workflows. Self-Refine is a prompt engineering strategy where the model evaluates its own output, generates feedback, and iteratively improves its response based on that feedback. This refinement loop can be repeated multiple times to progressively enhance the quality and accuracy of the final answer.

The Self-Refine approach is particularly effective for tasks involving reasoning, code generation, and content creation, where incremental improvements lead to significantly better results.
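Before introducing Mirascope, it helps to see the bare loop in plain Python. The sketch below uses the standard OpenAI client directly (it assumes the openai package is installed and OPENAI_API_KEY is set, both covered in the next two sections); the prompts are illustrative placeholders, not the exact templates used later in this tutorial.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str) -> str:
    # Single chat-completion call; gpt-4o-mini matches the model used below.
    out = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return out.choices[0].message.content


def self_refine_sketch(query: str, depth: int = 1) -> str:
    response = ask(query)  # 1. generate an initial answer
    for _ in range(depth):
        # 2. the model critiques its own output
        feedback = ask(
            f"Query:\n{query}\n\nResponse:\n{response}\n\n"
            "Note what is correct and incorrect in the response."
        )
        # 3. the model revises its answer using that feedback
        response = ask(
            f"Query:\n{query}\n\nPrevious response:\n{response}\n\n"
            f"Feedback:\n{feedback}\n\nWrite an improved response."
        )
    return response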

Installing the Dependencies

!pip install "mirascope[openai]"

OpenAI API Key

To get an OpenAI API key, visit https://platform.openai.com/settings/organization/api-keys and generate a new key. If you’re a new user, you may need to add billing details and make a minimum payment of $5 to activate API access.

import os
from getpass import getpass
os.environ["OPENAI_API_KEY"] = getpass('Enter OpenAI API Key: ')

Basic Self-Refine Implementation

We begin by implementing the Self-Refine technique using Mirascope’s @openai.call and @prompt_template decorators. The process starts with generating an initial response to a user query. This response is then evaluated by the model itself, which provides constructive feedback. Finally, the model uses this feedback to generate an improved response. The self_refine function allows us to repeat this refinement process for a specified number of iterations, enhancing the quality of the output with each cycle.

from mirascope.core import openai, prompt_template
from mirascope.core.openai import OpenAICallResponse


@openai.call(model="gpt-4o-mini")
def call(query: str) -> str:
    # The returned string becomes the user prompt for the initial answer.
    return query


@openai.call(model="gpt-4o-mini")
@prompt_template(
    """
    Here is a query and a response to the query. Give feedback about the answer,
    noting what was correct and incorrect.
    Query:
    {query}
    Response:
    {response}
    """
)
def evaluate_response(query: str, response: OpenAICallResponse): ...


@openai.call(model="gpt-4o-mini")
@prompt_template(
    """
    For this query:
    {query}
    The following response was given:
    {response}
    Here is some feedback about the response:
    {feedback}

    Consider the feedback to generate a new response to the query.
    """
)
def generate_new_response(
    query: str, response: OpenAICallResponse
) -> openai.OpenAIDynamicConfig:
    # Generate feedback first, then inject it into the prompt as a computed field.
    feedback = evaluate_response(query, response)
    return {"computed_fields": {"feedback": feedback}}


def self_refine(query: str, depth: int) -> str:
    response = call(query)  # initial answer
    for _ in range(depth):
        response = generate_new_response(query, response)  # critique + revise
    return response.content


query = "A train travels 120 km at a certain speed. If the speed had been 20 km/h faster, it would have taken 30 minutes less to cover the same distance. What was the original speed of the train?"

print(self_refine(query, 1))

Enhanced Self-Refine with Response Model

In this enhanced version, we define a structured response model, MathSolution, using Pydantic to capture both the solution steps and the final numerical answer. The enhanced_generate_new_response function refines the output by incorporating model-generated feedback and formatting the improved response into a well-defined schema. This approach ensures clarity, consistency, and better downstream usability of the refined answer, especially for tasks like mathematical problem-solving.

from pydantic import BaseModel, Field


class MathSolution(BaseModel):
    steps: list[str] = Field(..., description="The steps taken to solve the problem")
    final_answer: float = Field(..., description="The final numerical answer")


@openai.call(model="gpt-4o-mini", response_model=MathSolution)
@prompt_template(
    """
    For this query:
    {query}
    The following response was given:
    {response}
    Here is some feedback about the response:
    {feedback}

    Consider the feedback to generate a new response to the query.
    Provide the solution steps and the final numerical answer.
    """
)
def enhanced_generate_new_response(
    query: str, response: OpenAICallResponse
) -> openai.OpenAIDynamicConfig:
    # As before, generate feedback and inject it into the prompt template.
    feedback = evaluate_response(query, response)
    return {"computed_fields": {"feedback": feedback}}


def enhanced_self_refine(query: str, depth: int) -> MathSolution:
    response = call(query)
    for _ in range(depth):  # depth must be at least 1
        solution = enhanced_generate_new_response(query, response)
        # Serialize the structured solution so it can serve as the
        # "previous response" in the next refinement pass.
        response = f"Steps: {solution.steps}\nFinal Answer: {solution.final_answer}"
    return solution


# Example usage
result = enhanced_self_refine(query, 1)
print(result)
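Because result is a validated MathSolution instance rather than raw text, its fields can be consumed directly downstream. For example:

# Each field is typed, so the steps and answer can be used programmatically.
for i, step in enumerate(result.steps, start=1):
    print(f"Step {i}: {step}")
print(f"Final answer: {result.final_answer}")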

The Enhanced Self-Refine technique proved effective in accurately solving the given mathematical problem:

“A train travels 120 km at a certain speed. If the speed had been 20 km/h faster, it would have taken 30 minutes less to cover the same distance. What was the original speed of the train?”
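For reference, the answer can be checked by hand: let v be the original speed in km/h. Saving 30 minutes means 120/v − 120/(v + 20) = 1/2. Multiplying both sides by 2v(v + 20) gives v² + 20v − 4800 = 0, which factors as (v − 60)(v + 80) = 0, so v = 60 km/h (the negative root is discarded).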

Through a single iteration of refinement, the model delivered a logically sound, step-by-step derivation leading to the correct answer of 60 km/h. This illustrates several key benefits of the Self-Refine approach:

  • Improved accuracy through iterative feedback-driven enhancement.
  • Clearer reasoning steps, including variable setup, equation formulation, and quadratic solution application.
  • Greater transparency, making it easier for users to understand and trust the solution.

In broader applications, this technique holds strong promise for tasks that demand accuracy, structure, and iterative improvement—ranging from technical problem solving to creative and professional writing. However, implementers should remain mindful of the trade-offs in computational cost and fine-tune the depth and feedback prompts to match their specific use case.
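One practical way to manage that cost is to stop refining as soon as the critique no longer finds problems. The variant below is an illustrative sketch, not part of the original tutorial: it reuses the call, evaluate_response, and generate_new_response functions defined earlier, with a deliberately naive keyword check on the feedback as a hypothetical stopping heuristic (a production version would want a more robust signal, such as a structured verdict from the evaluator).

def self_refine_early_stop(query: str, max_depth: int = 3) -> str:
    # Refine until the critique sounds satisfied or max_depth is reached.
    response = call(query)
    for _ in range(max_depth):
        feedback = evaluate_response(query, response)
        if "incorrect" not in feedback.content.lower():
            break  # naive heuristic: the critique reports nothing incorrect
        # Note: generate_new_response re-runs the evaluation internally,
        # so each loop iteration costs one extra critique call.
        response = generate_new_response(query, response)
    return response.content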


