Blog

Generative Fullstack: Frontend UI with Vercel v0 and Backend with BuildShip

Tutorial

·

May 29, 2024

We've seen ChatGPT generating entire blogs and articles, MidJourney crafting stunning graphics, and GitHub Copilot writing full codebases—all from simple plain English prompts. Owing to this, the tech community began speculating about the advent of AI-powered UI generators capable of designing entire frontends.

While tools like MidJourney could create beautiful graphic images of UIs, designers still needed to convert these into Figma designs and then develop them.

Now with Vercel's v0, the game has changed dramatically. v0 isn’t just an image generator; it’s a sophisticated UI generator that builds your frontend and provides the code for any necessary tweaks and adjustments. It even helps you create custom UIs based on any reference image, perfect for adhering to your brand guidelines or existing aesthetics.

With the frontend covered, what about the backend? Imagine a tool that could handle backend development with the same ease, boosting productivity fivefold by generating everything from your use case prompts.

Well, you’re in luck. BuildShip is here for all your backend needs. With BuildShip, you can create powerful APIs, manage backend tasks, connect to your favorite tools or databases, and build robust automations for your app effortlessly.

Now you can build your complete app with AI: the generative UI capabilities of v0 by Vercel for the frontend, and the generative backend of BuildShip for backend APIs and cloud jobs. In this blog, we'll explore the "how" by building a fun and interactive "Guess the Prompt" game using the power of AI, immersing you in the era of true low-code development. Furthermore, we'll also include a remix link to another popular v0-powered game of ours, "Who's the Murderer?"

Let’s start!

What is v0?

v0 is a tool designed to help developers quickly build the first iteration of their product. Whether you're prototyping a new idea or laying the foundation for your app, v0 streamlines the process by generating production-ready code using open-source tools like React, Tailwind CSS, and Shadcn UI. From design to deployment, v0 simplifies the journey, enabling you to focus on refining your product and taking it to the next level.

Laying out the App Structure

We'll be creating a "Guess the Prompt" game where we'll use v0, the AI-powered UI generation tool, to create the game's user interface without writing a single line of code. For the backend logic, we'll use BuildShip, a low-code visual backend builder, to handle the game's functionality with ease.

The objective of the game is simple: players are presented with an AI-generated image and must guess the prompt that was used to create it. The game will then calculate a similarity score based on how close the player's guess is to the original prompt. The closer the guess, the higher the score!

While this is a low-code tutorial, it's important to note that we will still be writing a bit of code.

Vercel's v0 does an excellent job generating UIs from a given problem, but it's not perfect yet. Sometimes, as in this blog, a little human input is still required.

But don't worry—we'll explain each part as we go along. With that out of the way, let's get started!

Generating the Frontend UI with v0

To create the game's user interface, we’re going to use the capabilities of v0. By providing v0 with a description or an image of the desired UI, it generates the necessary code for us. We'll create three main screens for our game:

1. Initial Screen: Displays the AI-generated image and a text input for the player to enter their guess.

2. Loading Screen: Shows a loading indicator while the similarity score is being calculated.

3. Score Screen: Reveals the original prompt, the player's guess, and the similarity score.

By using v0, we can quickly generate the UI components without writing any code, saving us significant development time.
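Before generating anything, it can help to pin down the screen flow in types. Below is a minimal TypeScript sketch of the three screens as a simple state machine; the names (`GameScreen`, `imageUrl`, `hints`, the transition helpers) are our own assumptions for illustration, not anything v0 generates.

```typescript
// Hypothetical model of the game's three screens as a discriminated union.
type GameScreen =
  | { kind: "initial"; imageUrl: string; hints: string[] }
  | { kind: "loading" }
  | { kind: "score"; originalPrompt: string; guess: string; similarity: number };

// Submitting a guess moves from the initial screen to the loading screen.
function submitGuess(screen: GameScreen): GameScreen {
  if (screen.kind !== "initial") return screen;
  return { kind: "loading" };
}

// A score response moves from loading to the score screen.
function scoreArrived(
  screen: GameScreen,
  originalPrompt: string,
  guess: string,
  similarity: number
): GameScreen {
  if (screen.kind !== "loading") return screen;
  return { kind: "score", originalPrompt, guess, similarity };
}
```

Modeling the flow this way makes it easy to render exactly one screen at a time from a single piece of state.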

Step 1: Generating the UI with v0

While v0 does a great job generating UIs from a prompt, it excels even more when given a reference image.

For the "Initial Screen," we'll provide v0 with a reference image and hit Enter. Once processed, v0 offers us three options. Comparing them against our reference image above, Option 2, which features a two-column layout, is the closest match, so we'll proceed with that.

Now, we can enhance the selected UI by entering a prompt to modify and tweak it. Here, we're instructing v0 to make the text element full width and correct its positioning.

Here is the final result for our "Initial Screen".

Step 2: Setting up a Next.js project

Go to your v0 dashboard, click on "Code", and copy the command to add the component to your Next.js project.

Next, open a terminal and run the command below:

npx create-next-app@latest

Fill in the prerequisite details such as the project name, whether you want to use TypeScript, etc. You can leave all options at their defaults by just hitting Enter.

After the project has been set up, change into its directory:

cd guess-the-prompt-game

Then start the dev server:

npm run dev

When you go to localhost:3000, you'll see your project up and running.

Next, let's open up our project in the code editor. For this example, we'll be using Visual Studio Code. Navigate to the "guess-the-prompt-game" folder and open it.

Then copy the command from your v0 dashboard and paste it into the terminal in VS Code.

It will prompt us to enter a name for the component. Let's go with "Initial Screen" as we decided earlier. Hit Enter and it will inject our component. If you go to the components folder in the sidebar, you'll see an "Initial Screen" component.

Now, in order to render this screen on your already running Next.js server, we first need to call it in the page.tsx file, which you'll find in the app folder in VS Code. Let's delete the current boilerplate content and render just the Initial Screen component. Make sure you import it in the import statements at the top.

Now, if you go back and reload, you’ll see your initial screen being rendered.

Step 3: Generating all the other screens

To generate the other screens, let's go back to the v0 dashboard and click "New Generation" in the top navbar.

Once again, we'll give it a reference image, this time a loading card, and v0 will offer three close options to choose from.

This time, Option C seems the closest match:

But we can make it better by prompting v0 to adjust the positioning.

And here is the desired result.

Let's get it into our code editor by going to "Canvas" and copying the command.

Let's go to the terminal and paste the command. It will again prompt you to enter the component name; let's call it "Loading Screen".

Let's go to the page.tsx file and introduce this component so that it renders on our already running server, as before.

Now, all that's left is our third screen, the Score Screen. Let's repeat the steps from before: click New Generation, upload a reference image, and tweak the result to match the original image.

Again, copy the command and paste it into the terminal.

Call the component "Score Screen", import it in the page.tsx file, and with that we've generated all our screens without writing a single line of code.

Generating the Backend with BuildShip

With the UI in place, we'll shift our focus to the game's backend logic. This is where BuildShip comes into play. BuildShip is a low-code visual backend builder that allows us to create powerful backend functionality using a drag-and-drop interface.

Step 4: Building out the Backend Workflows

We'll create two main workflows in BuildShip:

1. Get Random Image: This workflow generates a random prompt using GPT-4, creates an AI-generated image based on that prompt, and returns the image URL and hints to the frontend.

2. Calculate Score: When the player submits their guess, this workflow generates an image based on the guessed prompt, calculates the similarity score between the original prompt and the guess using the Jaro-Winkler distance algorithm, and returns the score and the generated image.

First, let's focus on the first screen. To do that, we'll comment out the other two screens.

To build our first workflow, we'll create a brand-new workflow. Let's call it "Get Random Image".

For starters, let's add the REST API Trigger node. Our frontend application will call our BuildShip workflow via this REST API endpoint.

Let's specify the path as "get-random-image" and leave the method as GET.

Next, we'll create a Text Generator node that will prompt our image generator with unique, random text prompts, which in turn will create images for us.

Then, we connect it to our OpenAI Image Generator node, making sure to connect the "Input Text" field to the Text Generator node's output from above.

Now we want to upload this image to the cloud storage that comes with every BuildShip project.

In this node, we'll save each image under a unique name by appending a date variable to the saved image name.
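As a rough illustration of that naming step, here is a hypothetical TypeScript helper; the `uniqueImageName` function and its exact format are our own sketch of the idea, not BuildShip's actual node code.

```typescript
// Hypothetical helper mirroring the BuildShip step: append a timestamp so each
// saved image gets a unique object name in storage.
function uniqueImageName(base: string): string {
  return `${base}-${Date.now()}.png`;
}
```

Timestamps keep successive uploads from overwriting each other under the same name.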

And now, finally, we want to return some values to our client frontend: the text prompt from the Text Generator node, the generated image, and some hints.

For this, we'll go back to BuildShip and add a Return node. We'll return each node's output under its corresponding key, whereas for the hint key, we'll extract all the words from the prompt, shuffle them, and use three of them.
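The hint logic described above can be sketched in TypeScript like so; `makeHints` is a hypothetical helper using a Fisher-Yates shuffle, not the literal node script.

```typescript
// Sketch of the hint step: split the prompt into words, shuffle them
// (Fisher-Yates), and keep up to three as hints.
function makeHints(prompt: string, count = 3): string[] {
  const words = prompt.split(/\s+/).filter(Boolean);
  for (let i = words.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [words[i], words[j]] = [words[j], words[i]];
  }
  return words.slice(0, count);
}
```

Shuffling before slicing means players get three random words from the prompt rather than always its opening words.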

Moving ahead, we need to ship our workflow and then test it to confirm everything works as expected.

Step 5: Integrating the Frontend and Backend

To connect our v0-generated UI with the BuildShip workflows, we'll make API calls from the frontend to trigger the appropriate workflows. We'll use the `fetch` function to send requests and handle the responses accordingly.

To do this, let's ask ChatGPT the following:

Put in the page.tsx code and the initial-screen.tsx component along with the prompt.

ChatGPT will generate the code for you. Just copy the code and paste it into the respective files.

Next, we need to copy our API endpoint from the REST API screen and paste it into our useEffect() function.

And now go back and reload your server and voila! Your images are being generated.
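Since the workflow's response shape isn't enforced on the client, it's worth validating what the fetch call returns before putting it into state. The `RandomImageResponse` interface below is an assumption inferred from the Return node we built (prompt, image, hints); the real keys may differ.

```typescript
// Assumed shape of the get-random-image response; adjust to match your
// workflow's actual Return node keys.
interface RandomImageResponse {
  prompt: string;
  imageUrl: string;
  hints: string[];
}

// Validate the parsed JSON from fetch(endpoint) before using it in state.
function parseRandomImageResponse(data: unknown): RandomImageResponse {
  const d = data as Partial<RandomImageResponse>;
  if (
    typeof d.prompt !== "string" ||
    typeof d.imageUrl !== "string" ||
    !Array.isArray(d.hints)
  ) {
    throw new Error("Unexpected response shape from get-random-image");
  }
  return { prompt: d.prompt, imageUrl: d.imageUrl, hints: d.hints };
}
```

A guard like this turns a silently broken UI into an explicit error when the backend and frontend drift apart.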

Now, we need another workflow that calculates the similarity score by comparing the prompt a user submits against the original prompt.

Let's create a new workflow and add a REST API Trigger with the method set to POST. Set the path as /get-score.

Next, we'll add the OpenAI Image Generator node, whose input will be the value that our user guessed.

We'll store this image in BuildShip storage and name it uniquely by appending the date to it.

Now, to calculate the score, we'll use the Jaro-Winkler distance algorithm, which measures the similarity between two strings. In BuildShip, we'll reuse this algorithm to check the similarity between the two prompts.

For this, let's add the "Calculate Similarity" node. Text 1 will be the original prompt and Text 2 will be the guessed prompt; both values can be extracted from the incoming request body. This node returns a number that will be used to display the similarity score on the frontend.
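For the curious, here is what Jaro-Winkler similarity looks like in TypeScript. This is a standard textbook implementation rather than BuildShip's exact node code, but it illustrates the number the Calculate Similarity node returns: a value between 0 (no similarity) and 1 (identical strings).

```typescript
// Jaro-Winkler similarity: Jaro score plus a bonus for a shared prefix.
function jaroWinkler(s1: string, s2: string): number {
  if (s1 === s2) return 1;
  const len1 = s1.length;
  const len2 = s2.length;
  if (len1 === 0 || len2 === 0) return 0;

  // Characters match if equal and within this sliding window of each other.
  const matchDistance = Math.max(Math.floor(Math.max(len1, len2) / 2) - 1, 0);
  const s1Matches = new Array<boolean>(len1).fill(false);
  const s2Matches = new Array<boolean>(len2).fill(false);
  let matches = 0;
  for (let i = 0; i < len1; i++) {
    const start = Math.max(0, i - matchDistance);
    const end = Math.min(i + matchDistance + 1, len2);
    for (let j = start; j < end; j++) {
      if (s2Matches[j] || s1[i] !== s2[j]) continue;
      s1Matches[i] = s2Matches[j] = true;
      matches++;
      break;
    }
  }
  if (matches === 0) return 0;

  // Half the number of matched characters that appear in a different order.
  let transposed = 0;
  let k = 0;
  for (let i = 0; i < len1; i++) {
    if (!s1Matches[i]) continue;
    while (!s2Matches[k]) k++;
    if (s1[i] !== s2[k]) transposed++;
    k++;
  }
  const jaro =
    (matches / len1 + matches / len2 + (matches - transposed / 2) / matches) / 3;

  // Winkler bonus: reward a common prefix of up to 4 characters.
  let prefix = 0;
  while (prefix < Math.min(4, len1, len2) && s1[prefix] === s2[prefix]) prefix++;
  return jaro + prefix * 0.1 * (1 - jaro);
}
```

The prefix bonus is what makes Jaro-Winkler forgiving of guesses that start the same way as the original prompt but diverge later.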

To wind things up, add a Return node that returns the generated image and the similarity score.

Ship and test the workflow, and then we'll go back to integrating it with our frontend.

Now, go back to ChatGPT again and fill in the new prompt as below:

This time, we'll paste in the Loading Screen component file along with the initial-screen.tsx and page.tsx files too. ChatGPT will give you an output that you can paste back into your respective component files.

Now, when you go back and reload the app, you can enter your guess for a generated image and see your similarity score.

For a full step-by-step tutorial, we recommend checking out the video below:

How to Use Vercel's v0 Latest Key Features in Low-Code Style

Here's a list of v0's newest features that combined with BuildShip can help you start creating your next app idea in full low-code.

  • Design to fit: All generated assets are responsive, allowing you to scale up and down to meet the requirements of all your viewports and devices.

  • Generate UI with a text prompt: As we learned, you can create your frontend UI using a basic text prompt and connect it with BuildShip's backend API.

  • Interactions: v0 by Vercel does not produce static frontends. Tweak your prompts to include micro-interactions, hover effects, and more!

  • Integrate images: Images are the soul of any frontend. Easily spin up the most creative images using the best AI models from BuildShip's nodes and have them rendered on your frontend.

Who's the Murderer Game

To further demonstrate the capabilities of BuildShip with v0 by Vercel, we've created another game called "Who's the Murderer?".

This project is designed to simplify game development by breaking it into two main components: a Gameplay API that generates game assets and a Game UI Frontend that can be created and deployed using v0 by Vercel or any other platform of your choice.

To get started, you’ll need accounts and API keys for three platforms: Replicate for generating images, ElevenLabs for creating audio, and OpenAI for story text generation. Once you’ve gathered these prerequisites, you’re ready to dive into building your game.

The first step is creating the API backend. Begin by connecting your API keys for Replicate, ElevenLabs, and OpenAI in the “Keys” section of the nodes.

This is a one-time setup, so if you’ve already completed it, you can move on to testing your workflow. Use the test panel with a sample input like "Harry Potter" to ensure everything works properly. After confirming the workflow, head over to the “Connector” tab and click Connect API. Finally, ship your backend by clicking Ship, making it ready to use.

Next, you’ll create the frontend for your game. Start by copying the code snippet from the Usage tab in the API Connector along with the sample output from your test panel. These will serve as inputs for your UI. Then, visit v0.dev and prompt it to generate your game’s user interface. Once the UI is created, test it thoroughly and make edits if necessary. After finalizing your design, deploy it to Vercel and share your game with friends!

For a visual guide, check out this quick demo to see the process in action.

Conclusion

By combining the power of v0 for UI generation and BuildShip for backend logic, we created a fully functional full-stack "Guess the Prompt" game with minimal coding effort. This demonstrates the potential of AI-assisted development tools in the application development process.

The game serves as a fun and engaging way to showcase the capabilities of AI in generating images and calculating similarity scores. It also highlights how low-code and no-code tools like v0 and BuildShip can significantly reduce development time and empower developers to focus on the core functionality of their applications.

Furthermore, we also created another, more complex game called "Who's the Murderer?". Challenge yourself to see how close you can get to the original prompts and enjoy the experience of playing a game built with the power of AI.

We've seen ChatGPT generating entire blogs and articles, MidJourney crafting stunning graphics, and GitHub Copilot writing full codebases—all from simple plain English prompts. Owing to this, the tech community began speculating about the advent of AI-powered UI generators capable of designing entire frontends.

While tools like MidJourney could create beautiful graphic images of UIs, designers still needed to convert these into Figma designs and then develop them.

Now with Vercel's v0, the game has changed dramatically. v0 isn’t just an image generator; it’s a sophisticated UI generator that builds your frontend and provides the code for any necessary tweaks and adjustments. It even helps you create custom UIs based on any reference image, perfect for adhering to your brand guidelines or existing aesthetics.

With the frontend covered, what about the backend? Imagine a tool that could handle backend development with the same ease, boosting productivity fivefold by generating everything from your use case prompts.

Well, you’re in luck. BuildShip is here for all your backend needs. With BuildShip, you can create powerful APIs, manage backend tasks, connect to your favorite tools or databases, and build robust automations for your app effortlessly.

Now you can build your complete app with AI using Generative UI capabilities of v0 by Vercel for generating frontend apps and generative backend with BuildShip for backend APIs, and cloud jobs. In this blog, we’ll explore the “how” by building a fun and interactive "Guess the Prompt" game using the power of AI and get you immersed in the era of true low-code development.  Furthermore, we'll also include a remix link to another of our v0 powered popular game called "Who's the Murderer?"

Let’s start!

What is v0?

v0 is a tool designed to help developers quickly build the first iteration of their product. Whether you're prototyping a new idea or laying the foundation for your app, v0 streamlines the process by generating production-ready code using open-source tools like React, Tailwind CSS, and Shadcn UI. From design to deployment, v0 simplifies the journey, enabling you to focus on refining your product and taking it to the next level.

Laying out the App Structure

We’ll be creating a “Guess the Prompt Game” where we’ll use v0, the AI-powered UI generation tool, to create the game's user interface without writing a single line of code. For the backend logic, we'll use Buildship, a low-code visual backend builder, to handle the game's functionality easily.

The objective of the game is simple: players are presented with an AI-generated image and must guess the prompt that was used to create it. The game will then calculate a similarity score based on how close the player's guess is to the original prompt. The closer the guess, the higher the score!

While this is a low-code tutorial, it's important to note that we will still be writing a bit of code.

Vercel's V0 does an excellent job generating UIs based on a given problem, but it's not perfect yet. Sometimes, as in this blog, a little human input is still required.

But don't worry—we'll explain each part as we go along. With that out of the way, let's get started!

Generating the Frontend UI with v0

To create the game's user interface, we’re going to use the capabilities of v0. By providing v0 with a description or an image of the desired UI, it generates the necessary code for us. We'll create three main screens for our game:

1. Initial Screen: Displays the AI-generated image and a text input for the player to enter their guess.

2. Loading Screen: Shows a loading indicator while the similarity score is being calculated.

3. Score Screen: Reveals the original prompt, the player's guess, and the similarity score.

By using V0, we can quickly generate the UI components without writing any code, saving us significant development time.

Step 1: Generating the UI with v0

While V0 does a great job generating UIs from a prompt, it excels even more when given a reference image.

For the "Initial Screen," we'll provide v0 with a reference image and hit Enter. Once processed, v0 offers us three options. We select the closest match to our reference image, which features a two-column layout. Comparing our reference screen for "Initial Screen" from above, Option 2 is the UI that looks like the closest match and we will proceed with that.

Now, we can enhance the selected UI by entering a prompt to modify and tweak it. Here, we're instructing V0 to adjust the text element to full width and correct its positioning.

Here is the final result that for our “Initial Screen”.

Set 2: Setting up a NextJS project

Go to your V0 dashboard, click on “Code” and copy the copy the command to add to your Nextjs project.

Next, open terminal and copy in the command below:

npx create-next-app@latest

Fill in the prerequisite details such as project names, whether you want to use Typescript, etc. You can leave all options to default by just hitting Enter.

After the project has been setup, you can change directories by typing

cd guess-the-prompt-game

And then

npm run dev

And when you go to localhost:3000 it will have your project up and running.

Next, let’s open up our project in the code editor. For this example, we’d be using Visual Code. Navigate to the “guess-the-prompt-game" folder and open it.

Then copy in the command from your v0 dashboard and paste it in your terminal in Visual code.

It will prompt us to enter the name of the component. Let’s go with “Initial screen” as we decided earlier. Hit Enter and it will inject our component. If you go to the components folder in the sidebar, you can see an “Initial Screen” component.

Now in order to render this screen on your already running NextJS server, first we need to call it in the pages.tsx file. You can find it in the app folder in Visual Studio. Let’s delete the current boilerplate content and just render the Initial screen component as so. Make sure that you import it in the starting starements.

Now, if you go back and reload, you’ll see your initial screen being rendered.

Step 3: Generating all the other screens

To generate the other screens, let’s go back to v0 dashboard and click on “New Generation” at the top navbar.

Once again, we’ll give it a reference image which is a card loading image and v0 will pop out 3 close options for us to choose from.

This time option C seems the closes match:

But we can make it better by prompting v0 to adjust the position as so.

And here is the desired result.

Let’s get it in our code editor but going to “Canvas” and copy the command.

Let’s go in terminal and paste the command. It will again prompt you to enter the component name. Let’s call it “Loading Screen”.

Let’s go in the pages.tsx file and introduce this component so that it renders in our already running server as before.

Now, all’s left is our third screen which is the Score Screen. Let’s repeat the steps from before by clicking on New Generation, uploading a reference image and tweaking the result to fit the original image.

Again copy the command and paste it in terminal.

Call the component “Score screen”, import the component in the pages.tsx file and with that we’ve generated all our screens without writing a single line of code.

Generating Backend with BuildShip:

With the UI in place, we'll shift our focus to the game's backend logic. This is where BuildShip comes into play. BuildShip is a low-code visual backend builder that allows us to create powerful backend functionality using a drag-and-drop interface.

Step 4: Building out the Backend Workflows

We'll create two main workflows in Buildship:

1. Get Random Image: This workflow generates a random prompt using GPT-4, creates an AI-generated image based on that prompt, and returns the image URL and hints to the frontend.

2. Calculate Score: When the player submits their guess, this workflow generates an image based on the guessed prompt, calculates the similarity score between the original prompt and the guess using the Jaro-Winkler distance algorithm, and returns the score and the generated image.

First let’s focus on the first screen. To do that, we’ll comment out the other two screens.

To build out our first workflow, we’ll create brand new workflow. Let’s call it “Get Random Image”.

For starters, let’s add in the REST API Trigger node. Our fronted application will call our BuildShip workflow via this REST API endpoint.

Let’s specify the path as “get-random-image” and leave the method to GET.

Next we’ll create a text generator node that will be used to prompt our image generator with unique, random text prompts which in turn will create images for us.

Then, we connect it up with our OpenAI Image Generator node and make sure to connect the “Input Text” field with the “Text Generator” node output from above.

Now we want to upload this image to cloud storage that comes in with every BuildShip project.

In this node, we’ll just specify to save each name with a unique name by appending a date variable to the saved image name.

And now finally, we want to return some values back to our client frontend. These values include the text prompt from the text generator node, the generated image and some hints.

For this we’ll go back to BuildShip and add in a return node. We’ll return the output’s of the respective node corresponding for their key value whereas for the hint key, we’ll extract all the words from the prompt, shuffle them and then use 3 of them.

Moving ahead, we need to ship our workflow and then test to see whether everything is working properly as expected.

Step 5: Integrating the Frontend and Backend

To connect our v0-generated UI with the Buildship workflows, we'll make API calls from the frontend to trigger the appropriate workflows. We'll use the `fetch` function to send requests and handle the responses accordingly.

To do this let’s ask ChatGPT the following:

Put in the pages.tsx code and the initial-screen.tsx screen along with the prompt.

ChatGPT will generate the code for you. Just copy in the code and paste it for the respective pages.

Next we need to copy in our API Endpoint from the REST API screen and paste it in our useEffect() function.

And now go back and reload your server and voila! Your images are being generated.

Now, we’d need another workflow that calculate the similarity score once a user submits his prompt that can be compared to the original prompt.

Let’s create new workflow and add a REST API Trigger with the method set to POST. Set the path as /get-score.

Next we will add the openAI Image generator node whose input will be the value that our user guessed.

We’ll store this image in the BuildShip storage and name it uniquely by appending the date to it.

Now to calculate the score, we’ll use the Jaro- Winkler Distance algorithm that will be used to calculate the similarity between two strings. So in BuildShip we’ll reusing this algorithm to check the similarity between the two prompts.

For this let’s add the “Calculate Similarity Node”. Text 1 will be the original prompt and Text 2 will be the guessed prompt and both these values can be extracted from the incoming Body request. This node will return a number which will be used to depict the similarity score on the frontend.

So to wind things up, add a return node that returns the generated image and the similarity score.

Ship and test the workflow and then we go back to integrating it with our frontend.

Now, again go back to chatGPT and fill in the new prompt as below:

And this time we’ll page in the loading screen component file along with the initalscreen.tsx and pages.tsx files too. ChatGPT will give you a output that you can paste back in your respective component pages.

Now when you go back and reload the app, you can put in your prompt for a generated image and see your similarity score.

For a full set-by-step tutorial, we’d recommend you to check out the video below:

How to Use Vercel's v0 Latest Key Features in Low-Code Style

Here's a list of v0's newest features that combined with BuildShip can help you start creating your next app idea in full low-code.

  • Design to fit: All assets generated are responsive, allowing you to upscale and downscale to fulfil all your viewports and devices requirements.

  • Generate UI with text prompt: As we learned you can create out your frontend UI using a basic text prompt and connect it with BuildShip's backend API.

  • Interactions: v0 by vercel does not produce static frontend. Tweak your prompts to include micro interactions, hover effects and more!

  • Integrate images: The soul of any frontend is images. Easily spin out the most creative images using the best AI models from BuildShip's nodes and have it rendered on your frontend.

Who's the Murderer Game

To demonstrate further the capabilities of BuildShip with v0 by Vercel, we've created another game called "Who's the Murderer?".

This project is designed to simplify game development by breaking it into two main components: a Gameplay API that generates game assets and a Game UI Frontend that can be created and deployed using v0 by Vercel or any other platform of your choice.

To get started, you’ll need accounts and API keys for three platforms: Replicate for generating images, ElevenLabs for creating audio, and OpenAI for story text generation. Once you’ve gathered these prerequisites, you’re ready to dive into building your game.

The first step is creating the API backend. Begin by connecting your API keys for Replicate, ElevenLabs, and OpenAI in the “Keys” section of the nodes.

This is a one-time setup, so if you’ve already completed it, you can move on to testing your workflow. Use the test panel with a sample input like "Harry Potter" to ensure everything works properly. After confirming the workflow, head over to the “Connector” tab and click Connect API. Finally, ship your backend by clicking Ship, making it ready to use.

Next, you’ll create the frontend for your game. Start by copying the code snippet from the Usage tab in the API Connector along with the sample output from your test panel. These will serve as inputs for your UI. Then, visit v0.dev and prompt it to generate your game’s user interface. Once the UI is created, test it thoroughly and make edits if necessary. After finalizing your design, deploy it to Vercel and share your game with friends!

For a visual guide, check out this quick demo to see the process in action.

Conclusion:

By combining the power of v0 for UI generation and BuildShip for backend logic, we created a fully functional full-stack "Guess the Prompt" game with minimal coding effort. This demonstrates the potential of AI-assisted development tools in the application development process.

The game serves as a fun and engaging way to showcase the capabilities of AI in generating images and calculating similarity scores. It also highlights how low-code and no-code tools like v0 and BuildShip can significantly reduce development time and empower developers to focus on the core functionality of their applications.

Furthermore, we also created another more complex game called as "Who's the Murderer?". Challenge yourself to see how close you can get to the original prompts and enjoy the experience of playing a game built with the power of AI.


Laying out the App Structure

We’ll be creating a “Guess the Prompt” game, using v0, the AI-powered UI generation tool, to create the game's user interface without writing a single line of code. For the backend logic, we'll use BuildShip, a low-code visual backend builder, to handle the game's functionality with ease.

The objective of the game is simple: players are presented with an AI-generated image and must guess the prompt that was used to create it. The game will then calculate a similarity score based on how close the player's guess is to the original prompt. The closer the guess, the higher the score!

While this is a low-code tutorial, it's important to note that we will still be writing a bit of code.

Vercel's v0 does an excellent job generating UIs for a given problem, but it's not perfect yet. Sometimes, as in this blog, a little human input is still required.

But don't worry—we'll explain each part as we go along. With that out of the way, let's get started!

Generating the Frontend UI with v0

To create the game's user interface, we’re going to use the capabilities of v0. By providing v0 with a description or an image of the desired UI, it generates the necessary code for us. We'll create three main screens for our game:

1. Initial Screen: Displays the AI-generated image and a text input for the player to enter their guess.

2. Loading Screen: Shows a loading indicator while the similarity score is being calculated.

3. Score Screen: Reveals the original prompt, the player's guess, and the similarity score.
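To make the flow between these screens concrete, here's a minimal TypeScript sketch of how they could transition. The state and event names here are our own, purely illustrative; the actual v0-generated components don't dictate any particular naming.

```typescript
// Illustrative screen-state machine for the game's three screens
type Screen = "initial" | "loading" | "score";
type GameEvent = "submitGuess" | "scoreReady";

function nextScreen(current: Screen, event: GameEvent): Screen {
  if (current === "initial" && event === "submitGuess") return "loading"; // guess sent, wait for score
  if (current === "loading" && event === "scoreReady") return "score"; // score arrived, reveal it
  return current; // any other combination leaves the screen unchanged
}
```

In the React code this maps naturally onto a single piece of state that decides which of the three components to render.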

By using v0, we can quickly generate the UI components without writing any code, saving significant development time.

Step 1: Generating the UI with v0

While v0 does a great job generating UIs from a prompt, it excels even more when given a reference image.

For the "Initial Screen," we'll provide v0 with a reference image and hit Enter. Once processed, v0 offers three options. Comparing them against our reference image for the "Initial Screen" above, Option 2, which features a two-column layout, is the closest match, so we'll proceed with that.

Now we can enhance the selected UI by entering a prompt to modify and tweak it. Here, we're instructing v0 to make the text element full width and correct its positioning.

Here is the final result for our “Initial Screen”.

Step 2: Setting up a Next.js project

Go to your v0 dashboard, click on “Code”, and copy the command for adding the component to your Next.js project.

Next, open a terminal and run the command below:

npx create-next-app@latest

Fill in the prerequisite details such as the project name, whether you want to use TypeScript, and so on. You can accept the defaults by simply hitting Enter.

After the project has been set up, change into its directory by typing

cd guess-the-prompt-game

And then

npm run dev

Now, when you go to localhost:3000, you'll see your project up and running.

Next, let’s open the project in a code editor. For this example, we’ll use Visual Studio Code. Navigate to the “guess-the-prompt-game" folder and open it.

Then copy the command from your v0 dashboard and paste it into the terminal in VS Code.

It will prompt us to enter the name of the component. Let’s go with “Initial Screen” as we decided earlier. Hit Enter, and it will inject our component. If you go to the components folder in the sidebar, you can see an “Initial Screen” component.

Now, in order to render this screen on your already running Next.js server, we first need to call it in the page.tsx file, which you can find in the app folder. Let’s delete the current boilerplate content and render just the Initial Screen component. Make sure you add it to the import statements at the top.
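For reference, the wired-up page file (app/page.tsx in a default create-next-app setup) can look something like this. The component path and name are assumptions based on what v0 injected into the components folder:

```tsx
// app/page.tsx: render only the v0-generated Initial Screen component
import InitialScreen from "@/components/initial-screen";

export default function Home() {
  return <InitialScreen />;
}
```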

Now, if you go back and reload, you’ll see your initial screen being rendered.

Step 3: Generating all the other screens

To generate the other screens, let’s go back to the v0 dashboard and click on “New Generation” in the top navbar.

Once again, we’ll give it a reference image, this time of a loading card, and v0 will offer three close options to choose from.

This time, Option C seems the closest match:

But we can make it better by prompting v0 to adjust the positioning, like so.

And here is the desired result.

Let’s get it into our code editor by going to “Canvas” and copying the command.

Paste the command into the terminal. It will again prompt you to enter the component name. Let’s call it “Loading Screen”.

Then go to the page.tsx file and introduce this component so that it renders on our already running server, as before.

Now, all that’s left is our third screen, the Score Screen. Let’s repeat the steps from before: click on New Generation, upload a reference image, and tweak the result to fit the original image.

Again, copy the command and paste it into the terminal.

Call the component “Score Screen” and import it in the page.tsx file. With that, we’ve generated all our screens without writing a single line of code.

Generating the Backend with BuildShip

With the UI in place, we'll shift our focus to the game's backend logic. This is where BuildShip comes into play. BuildShip is a low-code visual backend builder that allows us to create powerful backend functionality using a drag-and-drop interface.

Step 4: Building out the Backend Workflows

We'll create two main workflows in BuildShip:

1. Get Random Image: This workflow generates a random prompt using GPT-4, creates an AI-generated image based on that prompt, and returns the image URL and hints to the frontend.

2. Calculate Score: When the player submits their guess, this workflow generates an image based on the guessed prompt, calculates the similarity score between the original prompt and the guess using the Jaro-Winkler distance algorithm, and returns the score and the generated image.

First, let’s focus on the first screen. To do that, we’ll comment out the other two screens.

To build our first workflow, we’ll create a brand-new workflow. Let’s call it “Get Random Image”.

For starters, let’s add the REST API Trigger node. Our frontend application will call our BuildShip workflow via this REST API endpoint.

Let’s specify the path as “get-random-image” and leave the method set to GET.

Next, we’ll create a Text Generator node that will prompt our image generator with unique, random text prompts, which in turn will create images for us.

Then we connect it to our OpenAI Image Generator node, making sure to connect the “Input Text” field to the output of the Text Generator node above.

Now we want to upload this image to the cloud storage that comes with every BuildShip project.

In this node, we’ll give each image a unique filename by appending a date variable to the saved image's name.

Finally, we want to return some values to our client frontend. These values include the text prompt from the Text Generator node, the generated image, and some hints.

For this, we’ll go back to BuildShip and add a Return node. We’ll return the outputs of the respective nodes as their corresponding key values; for the hint key, we’ll extract all the words from the prompt, shuffle them, and use three of them.
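In plain TypeScript, that hint logic could look something like the sketch below. The function name and the Fisher-Yates shuffle are our own; this is an illustration of what the Return node's script does, not BuildShip's actual code.

```typescript
// Split the prompt into words, shuffle them, and keep a few as hints.
function pickHints(prompt: string, count = 3): string[] {
  const words = prompt.split(/\s+/).filter((w) => w.length > 0);
  // Fisher-Yates shuffle on a copy, so the original word order is untouched
  const shuffled = [...words];
  for (let i = shuffled.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
  }
  return shuffled.slice(0, count);
}
```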

Moving ahead, we need to ship our workflow and then test it to make sure everything is working as expected.

Step 5: Integrating the Frontend and Backend

To connect our v0-generated UI with the Buildship workflows, we'll make API calls from the frontend to trigger the appropriate workflows. We'll use the `fetch` function to send requests and handle the responses accordingly.

To do this, let’s ask ChatGPT the following:

Paste in the page.tsx code and the initial-screen.tsx code along with the prompt.

ChatGPT will generate the code for you. Just copy the code and paste it into the respective files.

Next, we need to copy our API endpoint from the REST API screen and paste it into our useEffect() function.
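Under the hood, that useEffect call boils down to a small typed fetch helper like the one below. The response field names (prompt, imageUrl, hints) are assumptions here, since the keys are whatever you defined in the workflow's Return node:

```typescript
// Shape of the Get Random Image workflow's response (field names assumed)
interface RandomImageResponse {
  prompt: string;
  imageUrl: string;
  hints: string[];
}

// Call the BuildShip GET endpoint and parse its JSON response
async function getRandomImage(endpoint: string): Promise<RandomImageResponse> {
  const res = await fetch(endpoint);
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return (await res.json()) as RandomImageResponse;
}
```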

Now go back, reload your server, and voila! Your images are being generated.

Next, we need another workflow that calculates the similarity score once a user submits their guess, comparing it to the original prompt.

Let’s create a new workflow and add a REST API Trigger with the method set to POST. Set the path as /get-score.

Next, we'll add the OpenAI Image Generator node, whose input will be the value the user guessed.

We’ll store this image in BuildShip storage and name it uniquely by appending the date to it.

Now, to calculate the score, we’ll use the Jaro-Winkler distance algorithm, which measures the similarity between two strings. In BuildShip, we’ll reuse this algorithm to check the similarity between the two prompts.

For this, let’s add the “Calculate Similarity” node. Text 1 will be the original prompt and Text 2 the guessed prompt; both values can be extracted from the incoming request body. This node returns a number that will be used to display the similarity score on the frontend.
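For the curious, here is what the Jaro-Winkler calculation behind that node looks like, sketched in TypeScript. This is our own reference implementation for illustration, not BuildShip's internal code; identical strings score 1, completely different strings score 0, and a shared prefix boosts the score.

```typescript
// Jaro similarity: counts matching characters within a sliding window
// and penalizes transpositions.
function jaro(s1: string, s2: string): number {
  if (s1 === s2) return 1;
  const len1 = s1.length, len2 = s2.length;
  if (len1 === 0 || len2 === 0) return 0;
  const window = Math.max(Math.floor(Math.max(len1, len2) / 2) - 1, 0);
  const match1: boolean[] = new Array(len1).fill(false);
  const match2: boolean[] = new Array(len2).fill(false);
  let matches = 0;
  for (let i = 0; i < len1; i++) {
    const lo = Math.max(0, i - window);
    const hi = Math.min(len2 - 1, i + window);
    for (let j = lo; j <= hi; j++) {
      if (!match2[j] && s1[i] === s2[j]) {
        match1[i] = match2[j] = true;
        matches++;
        break;
      }
    }
  }
  if (matches === 0) return 0;
  // Count transpositions: matched characters that appear in a different order
  let t = 0, k = 0;
  for (let i = 0; i < len1; i++) {
    if (match1[i]) {
      while (!match2[k]) k++;
      if (s1[i] !== s2[k]) t++;
      k++;
    }
  }
  t /= 2;
  return (matches / len1 + matches / len2 + (matches - t) / matches) / 3;
}

// Jaro-Winkler: boosts the Jaro score for a common prefix of up to 4 chars
function jaroWinkler(s1: string, s2: string, p = 0.1): number {
  const j = jaro(s1, s2);
  let prefix = 0;
  while (prefix < 4 && prefix < s1.length && prefix < s2.length && s1[prefix] === s2[prefix]) {
    prefix++;
  }
  return j + prefix * p * (1 - j);
}
```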

To wind things up, add a Return node that returns the generated image and the similarity score.

Ship and test the workflow, and then we'll go back to integrating it with our frontend.

Now, go back to ChatGPT and fill in the new prompt as below:

This time, we’ll paste in the loading screen component file along with the initial-screen.tsx and page.tsx files too. ChatGPT will give you an output that you can paste back into your respective component files.

Now, when you go back and reload the app, you can enter your guess for a generated image and see your similarity score.
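The frontend side of that submit flow can be sketched as a small typed POST helper. The endpoint path comes from the workflow above, but the body and response field names are assumptions, since BuildShip request and return keys are user-defined:

```typescript
// Shape of the Calculate Score workflow's response (field names assumed)
interface ScoreResponse {
  imageUrl: string;
  score: number;
}

// POST the original prompt and the player's guess to the /get-score endpoint
async function submitGuess(
  endpoint: string,
  originalPrompt: string,
  guess: string
): Promise<ScoreResponse> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ originalPrompt, guess }),
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return (await res.json()) as ScoreResponse;
}
```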

For a full step-by-step tutorial, we recommend checking out the video below:

How to Use Vercel's v0 Latest Key Features in Low-Code Style

Here's a list of v0's newest features that, combined with BuildShip, can help you start creating your next app idea fully in low-code.

  • Design to fit: All generated assets are responsive, allowing you to scale up and down to fit all your viewport and device requirements.

  • Generate UI with a text prompt: As we've learned, you can create your frontend UI from a basic text prompt and connect it to BuildShip's backend API.

  • Interactions: v0 by Vercel does not produce static frontends. Tweak your prompts to include micro-interactions, hover effects, and more!

  • Integrate images: Images are the soul of any frontend. Easily generate the most creative images using the best AI models via BuildShip's nodes and have them rendered on your frontend.

Who's the Murderer Game

To further demonstrate the capabilities of BuildShip with v0 by Vercel, we've created another game called "Who's the Murderer?".

This project is designed to simplify game development by breaking it into two main components: a Gameplay API that generates game assets and a Game UI Frontend that can be created and deployed using v0 by Vercel or any other platform of your choice.

To get started, you’ll need accounts and API keys for three platforms: Replicate for generating images, ElevenLabs for creating audio, and OpenAI for story text generation. Once you’ve gathered these prerequisites, you’re ready to dive into building your game.

The first step is creating the API backend. Begin by connecting your API keys for Replicate, ElevenLabs, and OpenAI in the “Keys” section of the nodes.

This is a one-time setup, so if you’ve already completed it, you can move on to testing your workflow. Use the test panel with a sample input like "Harry Potter" to ensure everything works properly. After confirming the workflow, head over to the “Connector” tab and click Connect API. Finally, ship your backend by clicking Ship, making it ready to use.

Next, you’ll create the frontend for your game. Start by copying the code snippet from the Usage tab in the API Connector along with the sample output from your test panel. These will serve as inputs for your UI. Then, visit v0.dev and prompt it to generate your game’s user interface. Once the UI is created, test it thoroughly and make edits if necessary. After finalizing your design, deploy it to Vercel and share your game with friends!

For a visual guide, check out this quick demo to see the process in action.

Conclusion

By combining the power of v0 for UI generation and BuildShip for backend logic, we created a fully functional full-stack "Guess the Prompt" game with minimal coding effort. This demonstrates the potential of AI-assisted development tools in the application development process.

The game serves as a fun and engaging way to showcase the capabilities of AI in generating images and calculating similarity scores. It also highlights how low-code and no-code tools like v0 and BuildShip can significantly reduce development time and empower developers to focus on the core functionality of their applications.

Furthermore, we also created another, more complex game called "Who's the Murderer?". Challenge yourself to see how close you can get to the original prompts and enjoy playing a game built with the power of AI.

Start building your
BIGGEST ideas
in the *simplest* of ways.
