Integrate Gemini and Hugging Face to automate workflows with a scalable backend
Connect Gemini and Hugging Face nodes in your workflow. Integrate with any tool or database and ship powerful backend logic and APIs instantly - no code required!
Getting Started
How To Connect Gemini and Hugging Face
Popular Templates With Gemini and Hugging Face
Explore our popular Gemini and Hugging Face templates below. Click. Remix. Ship!
Node stack
Supported Triggers & Actions
Gemini NODES
Count Tokens in Prompt
When using long prompts, it might be useful to count tokens before sending any content to the model.
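For orientation, here is a minimal sketch of the kind of call this node makes, assuming the v1beta `countTokens` endpoint, a `gemini-pro` model name, and a `GEMINI_API_KEY` environment variable (the helper name is just for illustration, not the node's actual implementation):

```typescript
// Count tokens for a prompt via the Generative Language API (v1beta).
// Model name and GEMINI_API_KEY are placeholders.
const countTokens = async (prompt: string): Promise<number> => {
  const res = await fetch(
    `https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:countTokens?key=${process.env.GEMINI_API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] }),
    },
  );
  const data = await res.json();
  return data.totalTokens; // e.g. 42
};
```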
Gemini Text Generator
Make an API call to the Generative Language Model endpoint
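A hedged sketch of that call, assuming the v1beta `generateContent` endpoint and a `gemini-pro` model (swap in the model your project uses; `generateText` is an illustrative helper name):

```typescript
// Generate text from a prompt via the generateContent endpoint (v1beta).
const generateText = async (prompt: string): Promise<string> => {
  const res = await fetch(
    `https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=${process.env.GEMINI_API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] }),
    },
  );
  const data = await res.json();
  // Return the first candidate's first text part.
  return data.candidates[0].content.parts[0].text;
};
```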
Generate Embedding
Generate embeddings from text input and represent text (words, sentences, and blocks of text) in a vectorized form using Gemini AI
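A minimal sketch of the underlying call, assuming the `embedding-001` embedding model and the v1beta `embedContent` endpoint (model name and helper name are assumptions for illustration):

```typescript
// Embed a piece of text via the embedContent endpoint (v1beta).
const embedText = async (text: string): Promise<number[]> => {
  const res = await fetch(
    `https://generativelanguage.googleapis.com/v1beta/models/embedding-001:embedContent?key=${process.env.GEMINI_API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: "models/embedding-001",
        content: { parts: [{ text }] },
      }),
    },
  );
  const data = await res.json();
  return data.embedding.values; // vector of floats, ready for similarity search
};
```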
Multimodal
Use Google's Gemini AI to generate text from text-only or text-and-image input. [Full documentation](https://cloud.google.com/vertex-ai/docs/generative-ai/start/quickstarts/quickstart-multimodal).
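For text-and-image input, the image is typically passed inline as base64 data alongside the text part. A sketch assuming a vision-capable Gemini model (here `gemini-pro-vision`) and a JPEG image; names are placeholders:

```typescript
// Text + image prompt: the image is passed inline as base64 data.
const describeImage = async (prompt: string, imageBase64: string) => {
  const res = await fetch(
    `https://generativelanguage.googleapis.com/v1beta/models/gemini-pro-vision:generateContent?key=${process.env.GEMINI_API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        contents: [{
          parts: [
            { text: prompt },
            { inline_data: { mime_type: "image/jpeg", data: imageBase64 } },
          ],
        }],
      }),
    },
  );
  const data = await res.json();
  return data.candidates[0].content.parts[0].text;
};
```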
Stream Response
Generates a stream of response text using Google's Generative AI for a given prompt
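A sketch of streaming via the v1beta `streamGenerateContent` endpoint with `alt=sse`, reading chunks as they arrive (model name and helper name are assumptions):

```typescript
// Stream partial responses as server-sent events via streamGenerateContent.
const streamResponse = async (prompt: string, onChunk: (s: string) => void) => {
  const res = await fetch(
    `https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:streamGenerateContent?alt=sse&key=${process.env.GEMINI_API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] }),
    },
  );
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // Each SSE event carries a partial GenerateContentResponse as JSON.
    onChunk(decoder.decode(value));
  }
};
```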
Hugging Face NODES
Caption Image
Generate a caption for an image using Hugging Face's [Salesforce/blip-image-captioning-large](https://huggingface.co/Salesforce/blip-image-captioning-large) model for image captioning, pretrained on the COCO dataset - base architecture (with ViT large backbone).
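Under the hood this maps to a Hugging Face Inference API call: send the raw image bytes and read back a `generated_text` field. A minimal sketch, assuming an `HF_TOKEN` environment variable and an illustrative helper name:

```typescript
// Send raw image bytes to the Hugging Face Inference API; the response is a
// JSON array like [{ generated_text: "a photo of ..." }].
const captionImage = async (imageBytes: ArrayBuffer): Promise<string> => {
  const res = await fetch(
    "https://api-inference.huggingface.co/models/Salesforce/blip-image-captioning-large",
    {
      method: "POST",
      headers: { Authorization: `Bearer ${process.env.HF_TOKEN}` },
      body: imageBytes,
    },
  );
  const data = await res.json();
  return data[0].generated_text;
};
```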
Image Classification
Get classification labels for your image using Hugging Face's [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) model, a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The model was then fine-tuned on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.
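The same Inference API pattern applies; here the response is a ranked list of labels with confidence scores (a sketch, `HF_TOKEN` and the helper name are placeholders):

```typescript
// Classify an image; the response is a list of { label, score } pairs
// sorted by confidence.
const classifyImage = async (imageBytes: ArrayBuffer) => {
  const res = await fetch(
    "https://api-inference.huggingface.co/models/google/vit-base-patch16-224",
    {
      method: "POST",
      headers: { Authorization: `Bearer ${process.env.HF_TOKEN}` },
      body: imageBytes,
    },
  );
  return (await res.json()) as { label: string; score: number }[];
};
```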
Text Summarization
Summarize long text using Hugging Face's [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) model, a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.
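A sketch of the equivalent Inference API call; the `parameters` values shown (e.g. `max_length`) are illustrative, not the node's defaults:

```typescript
// Summarize text; the response is an array like [{ summary_text: "..." }].
const summarize = async (text: string): Promise<string> => {
  const res = await fetch(
    "https://api-inference.huggingface.co/models/facebook/bart-large-cnn",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.HF_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ inputs: text, parameters: { max_length: 130 } }),
    },
  );
  const data = await res.json();
  return data[0].summary_text;
};
```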
Text-To-Image
Generate an image from text using Hugging Face's [openskyml/dalle-3-xl](https://huggingface.co/openskyml/dalle-3-xl) model, a test model very similar to DALL·E 3.
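For text-to-image models on the Inference API, the prompt goes in as JSON and the response body is the generated image itself. A minimal sketch with placeholder names:

```typescript
// Generate an image from a prompt; the response body is the image bytes.
const textToImage = async (prompt: string): Promise<ArrayBuffer> => {
  const res = await fetch(
    "https://api-inference.huggingface.co/models/openskyml/dalle-3-xl",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.HF_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ inputs: prompt }),
    },
  );
  return res.arrayBuffer(); // image bytes, ready to store or upload
};
```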
Text-To-Music
Generate music from text using Hugging Face's [facebook/musicgen-small](https://huggingface.co/facebook/musicgen-small) model capable of generating high-quality music samples conditioned on text descriptions or audio prompts.
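The call shape mirrors the other Inference API nodes, except the response is audio bytes; the exact audio format depends on the model's output. A sketch with placeholder names:

```typescript
// Generate audio from a text description; the response body is audio bytes.
const textToMusic = async (description: string): Promise<ArrayBuffer> => {
  const res = await fetch(
    "https://api-inference.huggingface.co/models/facebook/musicgen-small",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.HF_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ inputs: description }),
    },
  );
  return res.arrayBuffer(); // raw audio bytes (format depends on the model)
};
```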
Blog posts & Tutorials
Recommended Reads
Below are recommended blogs that will help you on your journey
Support
Need Help?
Here are some helpful resources to get you "unstuck"
💬
Join BuildShip Community ->
An active and large community of no-code / low-code builders. Ask questions, share feedback, showcase your project and connect with other BuildShip enthusiasts.
🙋
Hire a BuildShip Expert ->
Need personalized help to build your product fast? Browse and hire from a range of independent freelancers, agencies and builders - all well versed with BuildShip.
🛟
Send a Support Request ->
Got a specific question about your workflows / project, or want to report a bug? Send us a request using the "Support" button directly from your BuildShip Dashboard.
⭐️
Feature Request ->
Something missing in BuildShip for you? Share on the #FeatureRequest channel on Discord. Also browse and cast your votes on other feature requests.