Build Your Own Chat with PDF App with Bolt.new & RAG (Retrieval Augmented Generation)



Tutorial · Apr 3, 2025
Have you ever wanted to build your very own Chat with PDF app—where you can upload any PDF and simply chat with its contents in natural language? Imagine easily querying a document and getting instant answers, just like magic!
In this guide, we’ll show you how to create this app without writing a single line of code. By using BuildShip to power Retrieval Augmented Generation (RAG) and Bolt for the frontend, we can bring this project to life in no time.
Getting Started with BuildShip
Step 1: Select a RAG Template
Head over to BuildShip’s Templates Library and search for “RAG.” You’ll see multiple options, including one built on Supabase. For the simplest approach, we’ll go with the “RAG using BuildShip” template.
Clone this template into your project, and it will generate two workflows:

Store Vector Embeddings Workflow (Ingestion Workflow)
RAG Query Handling Workflow
Let’s configure the first workflow to process and store PDF embeddings.
Step 2: Configure the Store Vector Embeddings Workflow
This workflow is responsible for:
Uploading PDFs
Extracting text using Mistral AI OCR
Splitting text into chunks (see the sketch after this list)
Generating vector embeddings
Storing everything in the BuildShip database
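To make the chunking step concrete, here’s a minimal sketch of the standard approach: fixed-size chunks with overlap, so neighboring chunks share some context. The sizes are illustrative assumptions, not BuildShip’s actual defaults.

```ts
// Minimal sketch of fixed-size chunking with overlap. chunkSize and
// overlap are illustrative assumptions, not BuildShip's actual defaults.
function chunkText(text: string, chunkSize = 1000, overlap = 200): string[] {
  const chunks: string[] = [];
  // Step forward by (chunkSize - overlap) so consecutive chunks share
  // `overlap` characters of context.
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
  }
  return chunks;
}

// A 2,500-character document yields 4 overlapping chunks,
// starting at offsets 0, 800, 1600, and 2400.
console.log(chunkText("x".repeat(2500)).length); // 4
```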

Configuring the OCR Node
Since text extraction from PDFs can be tricky, we use Mistral AI’s OCR API. You can create a free API key via the Mistral AI console and add it to BuildShip as a secret.
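BuildShip’s node wraps the API call for you, but for the curious, here’s a rough sketch of the equivalent direct request — the endpoint and model name are assumptions based on Mistral’s docs at the time of writing, so check their docs for the current names:

```ts
// Rough sketch of a direct OCR request to Mistral (BuildShip's node does
// the equivalent for you). Endpoint and model name are assumptions based
// on Mistral's docs at the time of writing.
const res = await fetch("https://api.mistral.ai/v1/ocr", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.MISTRAL_API_KEY}`,
  },
  body: JSON.stringify({
    model: "mistral-ocr-latest",
    document: { type: "document_url", document_url: "https://example.com/guide.pdf" },
  }),
});
const ocr = await res.json();
// The response carries the extracted text per page (as markdown).
```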

Setting Up OpenAI Embeddings
We use OpenAI’s text-embedding model to generate vector embeddings. You’ll need to provide your OpenAI API key, which you can get from OpenAI’s platform.
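Under the hood, the node makes a call like the following for each chunk. The model name text-embedding-3-small is an assumption; use whichever model your node is configured with.

```ts
// Sketch of the embedding call made for each chunk, via OpenAI's
// embeddings endpoint. The model name is an assumption; match your node.
const res = await fetch("https://api.openai.com/v1/embeddings", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "text-embedding-3-small",
    input: "Check-in is at 3 PM and the WiFi password is on the fridge.",
  }),
});
const { data } = await res.json();
const embedding: number[] = data[0].embedding; // 1536 floats for this model
```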

Step 3: Test the Workflow
Once configured, test the workflow by uploading a sample PDF (e.g., an Airbnb guide PDF). The run will:
Upload the PDF
Store metadata
Extract text
Generate embeddings
Save embeddings as vector fields
Once the test run is successful, ship the workflow live!
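Shipping exposes the workflow as a REST endpoint you can call from anywhere. Here’s a hypothetical sketch of invoking it — the URL and the “pdf” field name are placeholders, so copy the real values from BuildShip once you ship:

```ts
// Hypothetical call to the shipped ingestion workflow. The URL and the
// "pdf" field name are placeholders; copy the real values from BuildShip.
async function uploadPdf(file: File) {
  const form = new FormData();
  form.append("pdf", file);
  const res = await fetch(
    "https://YOUR-PROJECT.buildship.run/store-vector-embeddings",
    { method: "POST", body: form }
  );
  return res.json(); // e.g. metadata for the stored file and its chunks
}
```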

Building the Chat with PDF Feature
Now that our PDFs are stored with embeddings, let’s switch to the second workflow, RAG Query Handling. This workflow handles:
Receiving user queries
Generating embeddings for queries
Performing a vector search for relevant chunks (sketched after this list)
Passing results to OpenAI’s GPT for a response
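The “vector search” step is just nearest-neighbor lookup over the stored embeddings. BuildShip’s vector index does this at scale; the toy version below just shows the idea using cosine similarity:

```ts
// Toy version of the retrieval step: rank stored chunks by cosine
// similarity to the query embedding and keep the top k. BuildShip's
// vector index does this at scale; this just shows the idea.
type Chunk = { text: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function topK(queryEmbedding: number[], chunks: Chunk[], k = 4): Chunk[] {
  return chunks
    .map((chunk) => ({ chunk, score: cosine(queryEmbedding, chunk.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((scored) => scored.chunk);
}
// The texts of the top chunks are stuffed into GPT's prompt as context.
```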
Step 4: Set Up the Query Workflow
Configure Nodes
Ensure the embedding generation node is using the correct OpenAI key.
Set up the vector query node to search the stored chunks.
Creating a Vector Index
Before running a query, BuildShip requires a vector index. If this is your first time running the workflow, it will fail. Simply click “Create Index” in the test panel and wait for it to process.

Step 5: Test the Chat Feature
Try asking a question related to your PDF, such as “Does it mention anything about WiFi?” If everything is working correctly, GPT should return an accurate answer based on the PDF’s contents.

Building the Chat UI with Bolt
Now that our backend is set up, let’s create the frontend using Bolt.
Step 6: Generate a UI
In Bolt, define your chat app’s features:
Upload PDF Button (connects to the Store Vector Embeddings workflow)
Chat Interface (connects to the RAG Query Handling workflow)
Use BuildShip’s AI Handoff prompt to generate a Bolt-compatible frontend. Copy the provided prompt from the Connect tab in BuildShip and paste it into Bolt.
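The generated frontend boils down to two fetch calls against your shipped workflow endpoints. Here’s a hypothetical sketch of the chat side — the endpoint URL, the “query” field, and the “answer” field are placeholders the AI Handoff prompt fills in with your real values:

```ts
// Hypothetical chat handler along the lines of what Bolt generates. The
// endpoint URL, "query" field, and "answer" field are placeholders; use
// the real values from your Connect tab and your workflow's output node.
async function askPdf(question: string): Promise<string> {
  const res = await fetch("https://YOUR-PROJECT.buildship.run/rag-query", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: question }),
  });
  const data = await res.json();
  return data.answer;
}

// Usage (inside an async context):
// const reply = await askPdf("Does it mention anything about WiFi?");
```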
Step 7: Fix Input Mapping Issues
During testing, if the PDF upload fails, make sure the input fields match what BuildShip expects (a sketch of the corrected inputs follows this list):
Files Collection Name → PDF Files
Chunks Collection Name → PDF File Chunks
Embedding Field Name → Embedding
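In practice, that means your request or node configuration should carry these exact string values. The key names below merely paraphrase the input labels, so match the exact keys shown in BuildShip’s Connect tab:

```ts
// Sketch of the corrected inputs. The three string values are what the
// template expects; the key names here paraphrase the input labels, so
// match the exact keys shown in BuildShip's Connect tab.
const workflowInputs = {
  filesCollectionName: "PDF Files",
  chunksCollectionName: "PDF File Chunks",
  embeddingFieldName: "Embedding",
};
```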

Once adjusted, retest and deploy the app!
Conclusion
And that’s it! With BuildShip handling the backend and Bolt generating the frontend, you now have a fully functional Chat with PDF app—without writing a single line of code.
Try it out, experiment with different PDFs, and see how AI-powered retrieval makes document interaction effortless!
Need help? Drop a comment or check out BuildShip’s community for support. 🚀