Welcome to the Tax AI Assistant, an AI-driven web application that ingests the tax data you provide along with a question and gives you valuable advice! 🚀 The project will be developed incrementally, with documentation for the underlying technologies and techniques. Keep reading to learn about the individual parts that make up the app. ✏️
The frontend of the application is developed with Vue 3 (Composition API) + Vite, the default build tool for Vue applications.
To build and run the frontend dev server, run:
cd frontend
npm install
npm run dev
The backend is developed with Django + Django REST Framework on Python 3.13.1. To set up the backend, first create and activate a Python virtual environment (see the example below), then use the requirements file to install the necessary dependencies:
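For example, on Linux or macOS the environment can be created and activated as follows (the directory name venv is an arbitrary choice; on Windows the activation script is venv\Scripts\activate):

python -m venv venv
source venv/bin/activate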
# Using PyPI
pip install -r requirements.txt
Create a local .env file to store environment variables. The file is excluded from version control (see .gitignore). Use .env.template as a template. The variables declared in .env include:
DEBUG=on
SECRET_KEY=my_django_secret_key
OPENAI_API_KEY=my_openai_key
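The backend code shown later reads these variables through an env(...) helper. Assuming django-environ is the library behind that helper (an assumption, since the README does not name it), the settings module might load the file roughly like this:

# sketch only, assuming django-environ; paths and defaults are illustrative
import environ

env = environ.Env(DEBUG=(bool, False))  # cast DEBUG to a bool, default False
environ.Env.read_env()                  # read variables from the local .env file

DEBUG = env('DEBUG')
SECRET_KEY = env('SECRET_KEY')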
The backend API endpoints are documented below:
| Method | Endpoint | Description |
|---|---|---|
| POST | api/advice/generate | Generates an AI response based on user data and prompt |
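Once the backend is running, the endpoint can be exercised directly. The sketch below assumes the backend listens on http://localhost:8000 and uses illustrative payload field names, since the README does not document the expected request body; only the response keys (success, detail, answer) come from the view shown later:

import requests

# Hypothetical payload: the actual field names expected by generate_advice
# are not documented here, so treat these as placeholders.
payload = {
    "user_id": "demo-session",
    "prompt": "Which expenses can I deduct for a home office?",
}

# Backend URL and port are assumptions; adjust to your setup.
response = requests.post("http://localhost:8000/api/advice/generate", json=payload)
print(response.status_code)
print(response.json().get("answer"))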
To integrate AI into the backend and generate content based on prompts, the OpenAI Python API library is used. The advice backend app is responsible for handling AI-related API calls. To generate tax filing advice, the generate_advice view is called to communicate with the OpenAI API:
# in backend/advice/views.py
# ...
client = OpenAI(api_key=env('OPENAI_API_KEY'))

@api_view(['POST'])
def generate_advice(request):
    # ...
    # Get or create the chat history for this user
    chat, _ = ChatHistory.objects.get_or_create(user_id=user_id)
    _create_or_update_history(chat,
                              role="user",
                              content=user_content)
    # API call to OpenAI
    try:
        chat_completion = client.chat.completions.create(
            model='gpt-4o',
            messages=chat.messages,
            temperature=0.3
        )
        # ...
        # Store the new answer and save the model
        _create_or_update_history(chat,
                                  role="assistant",
                                  content=chat_completion.choices[0].message.content)
        print(chat.messages)
        chat.save()
        # ...
To give the AI model context for follow-up questions, a ChatHistory Django model is defined. Currently, the app supports session-based users, so ChatHistory has a minimal implementation (a sketch is shown below).
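Based on how the view uses the model (get_or_create(user_id=...), an appendable chat.messages list, and chat.save()), a minimal sketch of ChatHistory might look like the following; the field types and options are assumptions, not the actual implementation:

# backend/advice/models.py (illustrative sketch)
from django.db import models

class ChatHistory(models.Model):
    # Session-based identifier for the current user
    user_id = models.CharField(max_length=255, unique=True)
    # List of {"role": ..., "content": ...} dicts sent to the OpenAI API
    messages = models.JSONField(default=list, blank=True)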
Every new user prompt to the model, along with the model's response, is added to the history and included in the next OpenAI call (see utils.py):
def _create_or_update_history(chat, role, content):
    # Define model behavior
    if not chat.messages:
        with open('./AIDeveloperSettings/setting1.txt', 'r') as f:
            setting_content = f.read().replace('\n', ' ')
        chat.messages.append(
            {
                "role": "developer",
                "content": setting_content
            }
        )
    # Add new message
    chat.messages.append(
        {
            "role": role,
            "content": content
        }
    )
The model's behavior specification, sent with the "developer" role, is defined in AIDeveloperSettings.
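After one question and answer, chat.messages therefore holds entries shaped like the following (the content strings are purely illustrative):

[
    {"role": "developer", "content": "<contents of setting1.txt>"},
    {"role": "user", "content": "Which expenses can I deduct for a home office?"},
    {"role": "assistant", "content": "Based on the data you provided, ..."},
]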
On success, the AI-generated answer is included in the Response:
# in backend/advice/views.py
# ...
@api_view(['POST'])
def generate_advice(request):
    # ...
    return Response(
        {
            'success': True,
            'detail': 'Form was submitted successfully!',
            'answer': chat_completion.choices[0].message.content
        },
        status=status.HTTP_200_OK
    )
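The earlier snippet wraps the OpenAI call in a try block, but the corresponding error branch is not shown in this README. A hypothetical sketch (exception type, message, and status code are assumptions) could look like this:

    except Exception as exc:
        # Hypothetical error branch; the actual handling is not shown here
        return Response(
            {
                'success': False,
                'detail': f'Failed to generate advice: {exc}',
            },
            status=status.HTTP_500_INTERNAL_SERVER_ERROR
        )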
❗ IMPORTANT: Create a .env file in the backend with the necessary secret variables (see Local .env file)
Build and start the containers using docker-compose.yml:
docker-compose up
Open the frontend container's URL (by default http://localhost:5173) to use the application.
To ensure smooth Continuous Integration for the application, GitHub Actions workflows are used. In .github/workflows there are separate workflows for the backend and frontend parts. The workflows run on push and pull request events to main.
The backend workflow handles the backend build for the specified versions, uses secrets stored in GitHub's Actions Secrets and Variables to set the necessary environment variables, installs dependencies, and finally runs the specified tests in the Django testing environment.
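For reference, a test run by such a workflow might look like the sketch below; the test class, URL, and expected behavior are assumptions, since the actual test suite is not reproduced here:

# e.g. backend/advice/tests.py (hypothetical example)
from rest_framework import status
from rest_framework.test import APITestCase


class GenerateAdviceTests(APITestCase):
    def test_get_is_not_allowed(self):
        # generate_advice is declared with @api_view(['POST']), so GET should be rejected
        response = self.client.get('/api/advice/generate')  # URL assumed from the endpoint table
        self.assertEqual(response.status_code, status.HTTP_405_METHOD_NOT_ALLOWED)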
The frontend workflow handles the frontend build for the specified versions, installs dependencies, and runs the specified tests with the Vitest environment.