Redisse

Redisse is a Redis-backed Ruby library for creating Server-Sent Events, publishing them from your application, and serving them to your clients.

Features

Rationale

Redisse’s design comes from these requirements:

Redirect endpoint

The simplest way to fulfill that last point is to actually load and run your code inside the Redisse server. Unfortunately, since the server is EventMachine-based, if your method takes a while to return the channels, all the other connected clients will be blocked too. You will also end up with some duplication between your Rack config and your Redisse server config.
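
To make the problem concrete, here is a hedged sketch of the kind of channels block that would cause trouble inside the EventMachine server; the Subscription model and the session lookup are hypothetical, only the Redisse.channels API comes from this README:

require 'redisse'

Redisse.channels do |env|
  # Hypothetical blocking lookup (e.g. an ActiveRecord query): inside the
  # EventMachine-based server this call stalls the reactor, and with it
  # every other connected client, until the query returns.
  user_id = env['rack.session'][:user_id]
  Subscription.where(user_id: user_id).pluck(:channel)
end

Run behind the redirect endpoint described below, the same block executes in an ordinary Rack worker, where a slow query only affects the current request.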

Another way, if you use nginx, is to provide an endpoint in your main application that uses the X-Accel-Redirect header to redirect the request to the Redisse server, which is then free from your blocking code. The channels are passed via the redirect URL instead. See the section on nginx below for more details.

Installation

Add this line to your application's Gemfile:

gem 'redisse', '~> 0.4.0'

Usage

Configure Redisse (e.g. in config/initializers/redisse.rb):

require 'redisse'

Redisse.channels do |env|
  %w[ global ]
end
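
Since the block receives the Rack env, channels can vary per request. A small sketch, assuming (purely for this example) that clients pass a topic query parameter:

require 'redisse'
require 'rack'

Redisse.channels do |env|
  # Everyone subscribes to the global channel; a ?topic=... query
  # parameter (an assumption made for this example) adds a specific one.
  request  = Rack::Request.new(env)
  channels = %w[ global ]
  channels << "topic:#{request.params['topic']}" if request.params['topic']
  channels
end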

Use the endpoint in your main application (in config.ru or your router):

# config.ru Rack
map "/events" do
  run Redisse.redirect_endpoint
end

# config/routes.rb Rails
get "/events" => Redisse.redirect_endpoint

Run the server:

$ bundle exec redisse --stdout --verbose

Get ready to receive events (with HTTPie or cURL):

$ http localhost:8080/events Accept:text/event-stream --stream
$ curl -N localhost:8080/events -H 'Accept: text/event-stream'

Send a Server-Sent Event:

Redisse.publish('global', success: "It's working!")
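
In practice the publish call usually lives in your application code. Below is a minimal, hypothetical publish endpoint written as a bare Rack app; the /publish route and parameter names are assumptions chosen to match the test further down, not something Redisse provides:

# config.ru -- hypothetical publish endpoint, not provided by Redisse
require 'redisse'
require 'rack'

map "/publish" do
  run lambda { |env|
    params = Rack::Request.new(env).params
    # Forward the posted message to the requested channel,
    # mirroring the Redisse.publish call shown above.
    Redisse.publish(params['channel'], message: params['message'])
    [204, {}, []]
  }
end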

Testing

In your Rack application's specs or tests, use Redisse.test_mode!:

describe "SSE" do
  before do
    Redisse.test_mode!
  end

  it "should send a Server-Sent Event" do
    post '/publish', channel: 'global', message: 'Hello'
    expect(Redisse.published.size).to be == 1
  end
end

See the example app specs.

Behind nginx

When running behind nginx as a reverse proxy, you should disable buffering (proxy_buffering off) and close the connection to the server when the client disconnects (proxy_ignore_client_abort on) to preserve resources (otherwise connections to Redis will be kept alive longer than necessary).

Rather than proxying the SSE requests directly to the SSE server, take advantage of the redirect endpoint: let your Rack application determine the channels, then have the request served by the SSE server via an internal redirect (X-Accel-Redirect) to an internal location.
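
Conceptually, the redirect endpoint answers with a header instead of a body, and nginx re-serves the request from an internal location that proxies to the SSE server. The sketch below only illustrates that idea; it is not Redisse's implementation, and the /internal/events location and the channel parameter name are made up:

# Illustration only -- Redisse.redirect_endpoint does this for you.
require 'rack'

run lambda { |env|
  channels = %w[ global ]  # determined by your application
  query    = channels.map { |c| "channel=#{Rack::Utils.escape(c)}" }.join('&')
  # nginx intercepts X-Accel-Redirect and serves the request from the
  # named internal location, which proxies to the Redisse server.
  [200, { 'X-Accel-Redirect' => "/internal/events?#{query}" }, []]
}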

In that case, if you have a large number of channels with long names, the internal redirect URL gets long and you might need to increase proxy_buffer_size from its default in the nginx location block for your Rack application. For example, 8k allows roughly 200 channels with UUIDs as names, which is quite a lot.

You can check the nginx conf of the example for all the details.

Contributing

  1. Fork it

  2. Create your feature branch (git checkout -b my-new-feature)

  3. Commit your changes (git commit -am 'Add some feature')

  4. Push to the branch (git push origin my-new-feature)

  5. Create a new Pull Request