Creating a GenAI-Powered Excel Add-In - Part 03 - Backend Integration
Welcome to Part 3 of our tutorial series on developing a GenAI-powered Excel Add-In to dynamically create charts from your data. If you haven't already, make sure to read the previous tutorial to ensure you're ready to proceed with this installment.
In this segment, we focus on constructing the backend and linking it to our Vue.js frontend through a REST API.
What We'll Use
- FastAPI and Python for the backend.
- OpenAI’s GPT Model to interpret user intent and suggest appropriate charts.
Prerequisites
Before we begin, you should:
- Complete the frontend setup from the last tutorial.
- Obtain an OpenAI API key from the OpenAI platform.
- Have Python 3.10 or higher installed with pip.
Setup
Prepare your environment:
- Create a new directory called `api`.
- Navigate (`cd`) into the `api` directory via your terminal.
- Create a `requirements.txt` file with the following contents:

```
fastapi[all]
openai
instructor
```
Set up a virtual environment:
```bash
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```
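Note: on Windows, the activation command is `venv\Scripts\activate` instead of `source venv/bin/activate`. Also, the OpenAI client we use later reads its credentials from the `OPENAI_API_KEY` environment variable by default, so this is a good moment to set it (placeholder value shown):

```bash
# Replace the placeholder with your actual key from the OpenAI dashboard
export OPENAI_API_KEY="sk-your-key-here"
```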
Building the FastAPI Server
We aim to develop a backend that offers an endpoint for the frontend to access. This endpoint will process the user's intent and selected data, returning the necessary chart metadata.
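To make this concrete, here is roughly what the payloads will look like, based on the data models we define below (field values are illustrative). The request body sent by the frontend:

```json
{
  "intention": "show how profit develops over the year",
  "data": [["Month", "Profit"], ["Jan", 100], ["Feb", 120]]
}
```

And the response returned by the backend:

```json
{
  "title": "Profit Over the Year",
  "x_axis_label": "Month",
  "y_axis_label": "Profit",
  "chart_type": "Line",
  "has_trendline": true
}
```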
Step-by-Step Guide:
Initialize your server:
- Create `main.py` and import the necessary modules.
- Set up CORS to allow cross-origin requests for local testing.

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel, Field
from enum import Enum
from openai import OpenAI
import instructor

app = FastAPI()

origins = ["*"]

app.add_middleware(
    CORSMiddleware,
    allow_origins=origins,
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
```
Define data models:
Define data models to specify what information the API receives and how it responds. We use Pydantic `BaseModel`s as our data structures because they integrate seamlessly with FastAPI and with the Instructor library for our LLM interactions.

```python
class ChartType(str, Enum):
    Line = "Line"
    Doughnut = "Doughnut"
    ColumnClustered = "ColumnClustered"
    Waterfall = "Waterfall"
    XYScatter = "XYScatter"


class ChartData(BaseModel):
    title: str = Field(..., description="The title of the chart")
    x_axis_label: str = Field(..., description="The label for the x-axis")
    y_axis_label: str = Field(..., description="The label for the y-axis")
    chart_type: ChartType = Field(..., description="The type of chart")
    has_trendline: bool = Field(..., description="Whether the chart should have a trendline")


class ChartInputData(BaseModel):
    intention: str
    data: list
```
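As a quick sanity check (outside the server code), you can instantiate the models directly; Pydantic validates field types and enum values for you. A minimal sketch:

```python
# Hypothetical standalone check, assuming the models above are importable
sample_input = ChartInputData(
    intention="show profit per month",
    data=[["Jan", 100], ["Feb", 120]],
)
sample_chart = ChartData(
    title="Profit per Month",
    x_axis_label="Month",
    y_axis_label="Profit",
    chart_type=ChartType.Line,
    has_trendline=True,
)
print(sample_chart.model_dump())  # use .dict() if you are on Pydantic v1
```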
Create the API endpoint:
Implement a POST endpoint that maps incoming requests to `ChartInputData` and returns a `ChartData`.

```python
@app.post('/graph-data')
async def get_graph_data(chart_input_data: ChartInputData) -> ChartData:
    return call_llm(chart_input_data)
```
Integrate the LLM:
Define a function to call the LLM. This function initializes an OpenAI client and patches it with the Instructor library. Instructor configures the LLM to output data in a structured format, specifically our `ChartData` model, rather than the usual free text. Because we pass `ChartData`, with its detailed field descriptions, as the `response_model`, the output of the `client.chat.completions.create` call is mapped directly onto a `ChartData` instance, which we can then return.

```python
def call_llm(input: ChartInputData) -> ChartData:
    try:
        # Patch the OpenAI client so responses are parsed into Pydantic models
        client = instructor.from_openai(OpenAI())
        llm_response_model = client.chat.completions.create(
            model="gpt-3.5-turbo",
            response_model=ChartData,
            messages=[
                {"role": "system", "content": "You are a talented data scientist."},
                # Note the f-string: the user's intention and data are interpolated
                {"role": "user", "content": f"I need a chart that effectively represents my data based on {input.intention}. Here is the data: {input.data}."},
            ],
        )
    except Exception as e:
        print(f"An error occurred: {e}")
        # Fall back to a placeholder so the frontend still receives a valid ChartData
        llm_response_model = ChartData(
            title="Error",
            x_axis_label="Error",
            y_axis_label="Error",
            chart_type=ChartType.Line,
            has_trendline=True,
        )
    return llm_response_model
```
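For intuition, the structured result for a profit-trend request might look roughly like this (illustrative values only; the actual output depends on the model):

```python
ChartData(
    title="Monthly Profit Trend",   # free-text fields generated by the LLM
    x_axis_label="Month",
    y_axis_label="Profit",
    chart_type=ChartType.Line,      # constrained to the five ChartType values
    has_trendline=True,
)
```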
Launch the server:
- Start your FastAPI server using Uvicorn with hot reload enabled.
```bash
uvicorn main:app --reload
```
- Verify your API by navigating to `http://localhost:8000/docs` in your browser and checking out the auto-generated OpenAPI documentation.
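You can also smoke-test the endpoint directly from the terminal with a sample payload (the exact response values will vary, since they come from the LLM):

```bash
curl -X POST http://localhost:8000/graph-data \
  -H "Content-Type: application/json" \
  -d '{"intention": "show monthly profit trend", "data": [["Jan", 10], ["Feb", 20]]}'
```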
Integrating with the Frontend
Since the frontend was configured in the previous tutorial to dynamically generate the graph based on the `chartData` variable, updating this variable with the API call's output will seamlessly integrate the changes.
To make the API call in Vue.js, we will use the browser's built-in `fetch` function.
Modify the Code
In the `App.vue` file within the frontend folder, directly below the `chartData` variable, create a function that calls our API and assigns the returned result to `chartData`.
```javascript
let chartData = {...}

const fetchData = async (data) => {
  const requestParams = {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ intention: intention.value, data: data }),
  };
  await fetch('http://localhost:8000/graph-data', requestParams)
    .then(response => response.json())
    .then(data => chartData = data);
};
```
Within our `createChart` function, we can now invoke `fetchData`:
```javascript
const range = context.workbook.getSelectedRange();
range.load("valuesAsJsonLocal");
await context.sync();

// Reduce the cell objects to their plain values before sending them to the API
const data_input = range.valuesAsJsonLocal.map(item => item.map(subItem => subItem.basicValue));
await fetchData(data_input);

const sheet = context.workbook.worksheets.getActiveWorksheet();
```
This code retrieves the range of cells selected by the user and loads it into the context. After the raw cell objects are reduced to their plain values, the data is passed to the `fetchData` function, which updates the `chartData` object with the API's response; that object is then used to generate the plot.
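To see why the nested `map` calls are needed: `valuesAsJsonLocal` returns a 2D array of cell objects rather than plain values. For a small selection it looks roughly like this (a sketch; the exact fields depend on your Office.js version):

```javascript
// Approximate shape of range.valuesAsJsonLocal for a 2x2 selection
[
  [{ type: "string", basicValue: "Month" }, { type: "string", basicValue: "Profit" }],
  [{ type: "string", basicValue: "Jan" },   { type: "double", basicValue: 100 }],
]
// After the map calls, data_input is simply:
// [["Month", "Profit"], ["Jan", 100]]
```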
Testing Your Setup
Refresh your Excel add-in, input your intention, highlight some data, and watch as the LLM suggests and generates a graph based on your specifications. If your intention was about showing potential profits, the LLM returns a Line chart; if it described the data as being some percentage of a larger whole, the LLM returns a Doughnut chart.
Looking Ahead
Stay tuned for the next tutorial where we will dockerize and deploy our application to the cloud.