Getting Started with RubyLLM

Welcome to RubyLLM! This guide will get you up and running quickly. We’ll cover installing the gem, minimal configuration, and making your first chat, image, and embedding requests.

Table of contents

  1. Installation
  2. Minimal Configuration
  3. Your First Chat
  4. Generating an Image
  5. Creating an Embedding
  6. What’s Next?

After reading this guide, you will know:

  • How to install RubyLLM.
  • How to perform minimal configuration.
  • How to start a simple chat conversation.
  • How to generate an image.
  • How to create a text embedding.

Installation

Add RubyLLM to your Gemfile:

gem 'ruby_llm'

Then run bundle install.

(For full details, see the Installation Guide.)
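
To confirm the gem is available, you can require it from IRB or a short script. This is a minimal check, assuming the gem exposes the usual RubyLLM::VERSION constant:

require 'ruby_llm'

# Prints the installed gem version (assumes the conventional VERSION constant)
puts RubyLLM::VERSION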

Minimal Configuration

RubyLLM needs API keys for the AI providers you want to use. Configure them once, typically when your application starts.

# config/initializers/ruby_llm.rb (in Rails) or at the start of your script
require 'ruby_llm'

RubyLLM.configure do |config|
  # Add keys ONLY for the providers you intend to use.
  # Using environment variables is highly recommended.
  config.openai_api_key = ENV.fetch('OPENAI_API_KEY', nil)
  # config.anthropic_api_key = ENV.fetch('ANTHROPIC_API_KEY', nil)
end

You only need to configure keys for the providers you actually plan to use. See the Configuration Guide for all options, including setting defaults and connecting to custom endpoints.
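
As a small illustration of those options, the same block can also set a default chat model. Treat the option and model names below as assumptions to verify in the Configuration Guide:

RubyLLM.configure do |config|
  config.openai_api_key = ENV.fetch('OPENAI_API_KEY', nil)

  # Assumed option name: the model used when RubyLLM.chat is called without one
  config.default_model = 'gpt-4.1-nano'
end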

Your First Chat

Interact with language models using RubyLLM.chat.

# Create a chat instance (uses the configured default model)
chat = RubyLLM.chat

# Ask a question
response = chat.ask "What is Ruby on Rails?"

# The response is a RubyLLM::Message object
puts response.content
# => "Ruby on Rails, often shortened to Rails, is a server-side web application..."

RubyLLM handles the conversation history automatically, so follow-up questions can build on earlier answers, as shown below. See the Chatting with AI Models Guide for more details.
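
For example, a follow-up question on the same chat instance reuses the accumulated context (the question and output below are purely illustrative):

# Ask a follow-up; the earlier question and answer are sent along automatically
followup = chat.ask "How does it compare to Sinatra?"
puts followup.content
# => "While Rails is a full-stack framework, Sinatra is a lightweight DSL..."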

Generating an Image

Generate images using models like DALL-E 3 via RubyLLM.paint.

# Generate an image (uses the default image model)
image = RubyLLM.paint("A photorealistic red panda coding Ruby")

# Access the image URL (or Base64 data depending on provider)
if image.url
  puts image.url
  # => "https://oaidalleapiprodscus.blob.core.windows.net/..."
else
  puts "Image data received (Base64)."
end

# Save the image locally
image.save("red_panda.png")
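
A specific model or image size can also be requested when generating. The keyword arguments below (model: and size:) are assumptions to verify for your provider:

# Pass an explicit model and size (keyword names assumed; check your provider's options)
image = RubyLLM.paint(
  "A photorealistic red panda coding Ruby",
  model: "dall-e-3",
  size: "1792x1024"
)
image.save("red_panda_wide.png")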

Learn more in the Image Generation Guide.

Creating an Embedding

Create numerical vector representations of text using RubyLLM.embed.

# Create an embedding (uses the default embedding model)
embedding = RubyLLM.embed("Ruby is optimized for programmer happiness.")

# Access the vector (an array of floats)
vector = embedding.vectors
puts "Vector dimension: #{vector.length}" # e.g., 1536

# Access metadata
puts "Model used: #{embedding.model}"

Explore further in the Embeddings Guide.

What’s Next?

You’ve covered the basics! Now you’re ready to explore RubyLLM’s features in more detail: