# RubyLLM

A delightful Ruby way to work with AI through a unified interface to OpenAI, Anthropic, Google, and DeepSeek.

🤺 Battle tested at 💬 Chat with Work
## The problem with AI libraries
Every AI provider comes with its own client library, its own response format, its own conventions for streaming, and its own way of handling errors. Want to use multiple providers? Prepare to juggle incompatible APIs and bloated dependencies.
RubyLLM fixes all that. One beautiful API for everything. One consistent format. Minimal dependencies: just Faraday and Zeitwerk. Because working with AI should be a joy, not a chore.
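That consistency extends to failures: instead of rescuing a different exception class per provider SDK, you can rescue RubyLLM's own errors. A minimal sketch, assuming `RubyLLM::Error` as the base error class:

```ruby
chat = RubyLLM.chat

begin
  chat.ask "What's the best way to learn Ruby?"
rescue RubyLLM::Error => e
  # The same rescue handles failures from any configured provider
  warn "Request failed: #{e.message}"
end
```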
## Features
- 💬 Chat with OpenAI, Anthropic, Gemini, and DeepSeek models
- 👁️ Vision and Audio understanding
- 📄 PDF analysis for working with documents
- 🖼️ Image generation with DALL-E and other providers
- 📊 Embeddings for vector search and semantic analysis
- 🔧 Tools that let AI use your Ruby code
- 🚂 Rails integration to persist chats and messages with ActiveRecord
- 🌊 Streaming responses with proper Ruby patterns
## What makes it great
```ruby
# Just ask questions
chat = RubyLLM.chat
chat.ask "What's the best way to learn Ruby?"

# Analyze images
chat.ask "What's in this image?", with: { image: "ruby_conf.jpg" }

# Analyze audio recordings
chat.ask "Describe this meeting", with: { audio: "meeting.wav" }

# Analyze documents
chat.ask "Summarize this document", with: { pdf: "contract.pdf" }

# Generate images
RubyLLM.paint "a sunset over mountains in watercolor style"

# Create vector embeddings
RubyLLM.embed "Ruby is elegant and expressive"

# Let AI use your code
class Calculator < RubyLLM::Tool
  description "Performs calculations"
  param :expression, type: :string, desc: "Math expression to evaluate"

  def execute(expression:)
    # NOTE: eval keeps the demo short; don't eval untrusted input in production
    eval(expression).to_s
  end
end

chat.with_tool(Calculator).ask "What's 123 * 456?"
```
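When the model decides it needs your tool, RubyLLM calls `execute` with the arguments the model supplies and feeds the return value back into the conversation, so the model can use the result in its answer.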
## Quick start
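First, add the gem to your project (it ships as `ruby_llm` on RubyGems):

```ruby
# Gemfile
gem 'ruby_llm'
```

Then run `bundle install` and you're ready to go: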
```ruby
require 'ruby_llm'

# Configure your API keys
RubyLLM.configure do |config|
  config.openai_api_key = ENV.fetch('OPENAI_API_KEY')
  config.anthropic_api_key = ENV.fetch('ANTHROPIC_API_KEY')
end

# Start chatting
chat = RubyLLM.chat
response = chat.ask "What's the best way to learn Ruby?"

# Generate images
image = RubyLLM.paint "a sunset over mountains"
puts image.url

# Analyze PDF documents with Claude
claude_chat = RubyLLM.chat(model: 'claude-3-7-sonnet-20250219')
claude_chat.ask "Summarize this document", with: { pdf: "contract.pdf" }
```
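Embeddings follow the same pattern. A minimal sketch, assuming the `vectors` accessor on the returned embedding object:

```ruby
embedding = RubyLLM.embed "Ruby is elegant and expressive"
vector = embedding.vectors # an array of floats, ready for your vector store
```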
## Have great conversations
```ruby
# Start a chat with the default model (GPT-4o-mini)
chat = RubyLLM.chat

# Or specify what you want
chat = RubyLLM.chat(model: 'claude-3-7-sonnet-20250219')

# Simple questions just work
chat.ask "What's the difference between attr_reader and attr_accessor?"

# Multi-turn conversations are seamless
chat.ask "Could you give me an example?"

# Stream responses in real-time
chat.ask "Tell me a story about a Ruby programmer" do |chunk|
  print chunk.content
end

# Need a different model mid-conversation? No problem
chat.with_model('gemini-2.0-flash').ask "What's your favorite algorithm?"
```
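You can also shape a conversation before asking anything. A minimal sketch, assuming the chainable `with_instructions` and `with_temperature` helpers:

```ruby
chat = RubyLLM.chat(model: 'gpt-4o-mini')
              .with_instructions("You are a concise Ruby mentor")
              .with_temperature(0.3) # lower temperature for steadier answers

chat.ask "Explain blocks vs procs in two sentences"
```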
## Rails integration that makes sense
```ruby
# app/models/chat.rb
class Chat < ApplicationRecord
  acts_as_chat

  # Works great with Turbo
  broadcasts_to ->(chat) { "chat_#{chat.id}" }
end

# app/models/message.rb
class Message < ApplicationRecord
  acts_as_message
end

# app/models/tool_call.rb
class ToolCall < ApplicationRecord
  acts_as_tool_call
end

# In your controller
chat = Chat.create!(model_id: "gpt-4o-mini")
chat.ask("What's your favorite Ruby gem?") do |chunk|
  Turbo::StreamsChannel.broadcast_append_to(
    chat,
    target: "response",
    partial: "messages/chunk",
    locals: { chunk: chunk }
  )
end

# That's it - chat history is automatically saved
```
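The `acts_as_*` models need backing tables. A minimal migration sketch, assuming the column names from the persistence guide (verify against the current docs before running):

```ruby
class CreateChatTables < ActiveRecord::Migration[7.1]
  def change
    create_table :chats do |t|
      t.string :model_id
      t.timestamps
    end

    create_table :messages do |t|
      t.references :chat, null: false, foreign_key: true
      t.string :role
      t.text :content
      t.string :model_id
      t.integer :input_tokens
      t.integer :output_tokens
      t.references :tool_call
      t.timestamps
    end

    create_table :tool_calls do |t|
      t.references :message, null: false, foreign_key: true
      t.string :tool_call_id, null: false
      t.string :name, null: false
      t.jsonb :arguments, default: {} # use :json on non-Postgres databases
      t.timestamps
    end
  end
end
```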
## Learn more

Check out the guides at https://rubyllm.com for deeper dives into chat, tools, embeddings, error handling, and the Rails integration.
## License

Released under the MIT License.