Introducing ourray.app: debugging is better together

Ray is one of the most loved debugging tools in the Laravel and PHP community, and increasingly beyond. Thousands of developers use it daily to debug their applications. But it's always been a private, local experience.
During a recent hackathon we started wondering: what if debugging wasn't a solo activity? What if you could share your ray() calls with everyone?
So we built ourray.app. A shared, cloud-based Ray instance for the entire developer community. One dashboard, everyone's dumps, all in real-time. And it's completely free.
Getting started
Install the package:
composer require spatie/our-ray
Then prefix your ray() calls with our():
our()->ray('Hello from my codebase!');
Your debugging output gets broadcast to ourray.app where anyone watching will see it pop up in real-time. Same API you're already used to, just a bit more... public.
The full API works
Colors work as you'd expect:
our()->ray('All systems go')->green();
our()->ray('Hmm, suspicious')->orange();
our()->ray('Something is definitely wrong')->red();
Labels:
our()->ray($user)->label('Current user');
our()->ray($order->total)->label('Order total')->green();
And yes, even confetti works. When you call our()->ray()->confetti(), every person watching ourray.app sees confetti rain down on their screen. You can really brighten someone's day with that one.
You can send anything to ourray.app that you'd normally send to Ray. Models, queries, exceptions, collections, you name it. Just keep in mind: everyone can see it. We trust you'll keep it classy.

When is this actually useful?
Besides the entertainment value of watching a worldwide stream of developers debugging their code (it's oddly meditative, like a fireplace video for programmers), there are some real scenarios where this shines:
Pair debugging across time zones. Your colleague in Tokyo can watch your ray() calls live while you debug an issue. No screen sharing, no video call, just raw debugging output streaming in.
Teaching and mentoring. Show a junior developer how you trace through a problem. They watch your ray() calls appear one by one as you step through the code.
Conference talks. Project ourray.app on the big screen and live-debug in front of your audience. No screen-sharing lag, no resolution issues.
Hackathons. Your whole team sees every ray() call from every machine. When someone finds a bug, everyone sees it. Maximum chaos, maximum collaboration.
Open source debugging. Instead of pasting log output into a GitHub issue, tell the maintainer to watch ourray.app while you reproduce the problem.
Or just vibes. Sometimes you want to feel connected to other developers. Watch the stream. See someone debug a payment integration at 2am. Silently wish them well.
The OurRay Refinement Display

We're also launching our first physical product: the OurRay Refinement Display. A Raspberry Pi connected to a screen with a custom-built controller, dedicated to streaming ourray.app in real-time.
What's in the box:
- Play/pause button with LED
- Clear button
- Color filter buttons for orange and red entries
- A rotary encoder for scrolling through the stream
Put it on your desk and watch an endless stream of mysterious debugging data from developers you've never met. You don't know what they're working on. You don't know why that variable is null. But it's your job to watch it now.
Contact us at lumon@there-there.app for more info.
The technical setup
The architecture behind ourray.app turned into something genuinely interesting. Let me walk you through how all the pieces fit together.
How the our-ray package hooks into Ray
The main driver behind all of this is the new spatie/our-ray package. When you install it, a helpers.php file gets autoloaded via Composer. It registers a callback on Ray::$afterSendCallbacks (a public static array of callbacks that Ray invokes after every ray() call), initializes a CloudClient pointing at https://ourray.app/api, and defines the our() helper function.
The OurRay class itself is tiny:
class OurRay
{
    public function ray(...$args)
    {
        $instance = ray();

        CloudState::enable($instance->uuid);

        if (count($args)) {
            return $instance->send(...$args);
        }

        return $instance;
    }
}
When you call our()->ray('something'), it creates a normal Ray instance, marks its UUID as "cloud-enabled" in CloudState, and sends your arguments. Ray does its thing locally as usual. Then the afterSend callback fires, checks if this UUID is cloud-enabled, and passes the payload to the CloudClient.
The CloudClient buffers payloads and sends them in batches of 5 via cURL. A register_shutdown_function flushes any remaining items when the PHP process ends.
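The buffering behaves roughly like this. Note this is an illustrative sketch in JavaScript, not the real PHP client; only the batch size of 5 and the shutdown flush come from the description above, and the names (PayloadBuffer, flush) are invented:

```javascript
// Sketch of the CloudClient buffering described above (the real client is PHP).
class PayloadBuffer {
  constructor(send, batchSize = 5) {
    this.send = send;          // function that ships one batch to ourray.app
    this.batchSize = batchSize;
    this.items = [];
  }

  add(payload) {
    this.items.push(payload);
    if (this.items.length >= this.batchSize) {
      this.flush();
    }
  }

  // Called when a batch is full, and once more at process shutdown
  // (the PHP client uses register_shutdown_function for that last flush).
  flush() {
    if (this.items.length === 0) return;

    const batch = this.items.splice(0, this.items.length);

    try {
      this.send(batch);
    } catch (e) {
      // Swallow errors: ourray.app being down shouldn't break local Ray.
    }
  }
}
```
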
Every interaction with the cloud is wrapped in try-catch blocks. If ourray.app goes down, Ray keeps working locally like nothing happened.
The Cloudflare Worker at the edge
We expect a lot of traffic on the incoming payload endpoint. Potentially thousands of developers sending ray() calls at the same time. We wanted to keep our Laravel application focused on serving the ourray.app webpage, not on processing all those incoming payloads.
To handle that, we use a Cloudflare Worker. It's a small piece of code that runs on Cloudflare's edge network, close to wherever the developer is in the world. All incoming payloads hit this Worker first, not our Laravel app.
The Worker handles three things:
Rate limiting with Durable Objects. Cloudflare Durable Objects are small stateful instances that live at the edge. We create one per IP address, and it keeps a counter of how many requests that IP has made in the current 30-second window. After 20 requests you get throttled. No database lookups needed, it's all in-memory.
Payload truncation. Anything over 65KB gets trimmed.
Fire-and-forget fan-out. After rate limiting and truncation, the Worker needs to do two things with each payload: store it in ClickHouse (our database) for when someone opens the dashboard later, and push it to Laravel Reverb (our WebSocket server) so anyone currently watching sees it appear instantly. It fires off both requests in parallel using ctx.waitUntil() and immediately returns 200 OK. The PHP process gets its response back fast while the async work continues.
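Put together, the Worker's request path looks roughly like this sketch. Only the 20-requests-per-30-seconds limit and the 65KB cap come from the description above; the per-IP state really lives in a Durable Object rather than a plain Map, the ClickHouse and Reverb calls are stubbed, and names like env.CLICKHOUSE_URL are invented for illustration:

```javascript
// Sketch of the Worker's request path: rate limit, truncate, fan out.
const LIMIT = 20;            // requests allowed per window
const WINDOW_MS = 30_000;    // 30-second window
const MAX_BYTES = 65 * 1024; // payloads over 65KB get trimmed

const windows = new Map();   // ip -> { start, count } (really a Durable Object per IP)

function allowRequest(ip, now = Date.now()) {
  const w = windows.get(ip) ?? { start: now, count: 0 };
  if (now - w.start >= WINDOW_MS) {
    w.start = now;  // start a fresh window
    w.count = 0;
  }
  w.count += 1;
  windows.set(ip, w);
  return w.count <= LIMIT;
}

function truncatePayload(body, maxBytes = MAX_BYTES) {
  const bytes = new TextEncoder().encode(body);
  if (bytes.length <= maxBytes) return body;
  // Trim to the byte budget; a real implementation would take care
  // not to split a multi-byte character here.
  return new TextDecoder().decode(bytes.slice(0, maxBytes));
}

// The handler itself: respond immediately, let the fan-out finish async.
async function handleRequest(request, env, ctx) {
  const ip = request.headers.get('CF-Connecting-IP') ?? 'unknown';
  if (!allowRequest(ip)) {
    return new Response('Too Many Requests', { status: 429 });
  }

  const body = truncatePayload(await request.text());

  // Fire-and-forget: both writes run in parallel after the 200 goes out.
  ctx.waitUntil(Promise.all([
    fetch(env.CLICKHOUSE_URL, { method: 'POST', body }), // storage
    fetch(env.REVERB_URL, { method: 'POST', body }),     // live viewers
  ]));

  return new Response('OK', { status: 200 });
}
```

ctx.waitUntil() is what keeps the Worker alive for the background fetches after the response has already been returned.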

Laravel Reverb for real-time WebSockets
We use Laravel Reverb for the WebSocket layer. The interesting part here is how we get the payloads into Reverb. The Cloudflare Worker doesn't use a Laravel SDK or anything like that. It talks directly to Reverb's batch_events HTTP endpoint using the Pusher protocol.
That means the Worker has to construct the HMAC-SHA256 authentication signature by hand. First we build the batch of events to send:
const batch = requests.map((data) => ({
  name: 'App\\Events\\PayloadReceived',
  channel: 'payloads',
  data: JSON.stringify({ request: data }),
}));
const body = JSON.stringify({ batch });
Then we compute an MD5 hash of the body and construct the string to sign from the HTTP method, path, and sorted query parameters:
const timestamp = Math.floor(Date.now() / 1000).toString();
const bodyMd5 = toHex(await crypto.subtle.digest('MD5', encoder.encode(body)));
const path = `/apps/${env.REVERB_APP_ID}/batch_events`;
const queryParams = [
  `auth_key=${env.REVERB_APP_KEY}`,
  `auth_timestamp=${timestamp}`,
  `auth_version=1.0`,
  `body_md5=${bodyMd5}`,
].join('&');
const stringToSign = `POST\n${path}\n${queryParams}`;
Finally we sign it with the Reverb app secret using the Web Crypto API and append the signature to the request:
const key = await crypto.subtle.importKey(
  'raw',
  encoder.encode(env.REVERB_APP_SECRET),
  { name: 'HMAC', hash: 'SHA-256' },
  false,
  ['sign'],
);
const signature = toHex(await crypto.subtle.sign('HMAC', key, encoder.encode(stringToSign)));
await fetch(`${env.REVERB_URL}${path}?${queryParams}&auth_signature=${signature}`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body,
});
Since Reverb speaks the Pusher protocol, this just works.
On the frontend, we use Laravel Echo to listen for these events. Since a single ray() call can result in multiple messages (one for the payload content, another for the color, another for the label, or even confetti), we need to do some matching and merging on the client side to assemble the final stream.
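That client-side assembly could be sketched like this. All names here are illustrative (the real frontend is React); the point is just that messages sharing a uuid get merged into one entry:

```javascript
// Merge incoming messages into display entries, keyed by the uuid of the
// originating ray() call. Names are invented for illustration.
const entries = new Map(); // uuid -> { content, color, label, confetti }

function applyMessage(message) {
  const entry = entries.get(message.uuid) ?? {};

  switch (message.type) {
    case 'log':      entry.content = message.value; break;
    case 'color':    entry.color = message.value; break;
    case 'label':    entry.label = message.value; break;
    case 'confetti': entry.confetti = true; break;
  }

  entries.set(message.uuid, entry);
  return entry;
}
```

With Echo, each incoming event would simply be fed into a function like applyMessage as it arrives on the channel, and the UI re-renders the affected entry.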
ClickHouse for high-volume storage
We needed a database that can handle a constant stream of inserts from developers all over the world. A traditional MySQL or PostgreSQL setup would work, but those are row-oriented databases optimized for transactional workloads. Our use case is different: we write a lot, we read in bulk, and we never update individual rows.
That's exactly what ClickHouse is built for. It's a column-oriented database designed for analytical workloads and high-volume inserts. Instead of storing data row by row, it stores each column separately. This makes compression much more effective (all values of the same type are next to each other) and bulk reads very fast.
Here's our schema:
CREATE TABLE ray_payloads
(
    uuid String CODEC(ZSTD(1)),
    type LowCardinality(String),
    content String CODEC(ZSTD(3)),
    origin_file String CODEC(ZSTD(1)),
    origin_line_number UInt32 CODEC(Delta, ZSTD(1)),
    origin_hostname String CODEC(ZSTD(1)),
    created_at DateTime64(3, 'UTC') DEFAULT now64(3) CODEC(DoubleDelta, ZSTD(1))
)
ENGINE = MergeTree()
PARTITION BY toStartOfHour(created_at)
ORDER BY (created_at, uuid)
TTL created_at + INTERVAL 7 DAY
SETTINGS ttl_only_drop_parts = 1;
We use ZSTD to compress the data. LowCardinality on the type column tells ClickHouse it only has a handful of distinct values, so it can store them as small integers internally instead of repeating the same strings over and over.
The table is partitioned by hour, meaning ClickHouse groups all data from the same hour into its own chunk. Combined with a 7-day TTL and ttl_only_drop_parts, cleanup is very efficient: instead of scanning and deleting individual rows, it just drops entire hourly chunks. Old data disappears automatically. If your dump was important enough to keep, it probably shouldn't have been on ourray.app.
Completely vibe coded
While we decided on the tech stack (Laravel, Cloudflare Workers, ClickHouse, Reverb), we didn't write a single line of code ourselves. We described what we wanted and AI wrote all of it.
The Cloudflare Worker with Durable Objects. The HMAC-SHA256 authentication for Reverb (we didn't even know you could write to Reverb outside a Laravel context). The ClickHouse migration with all its compression settings. The WebSocket bridge to the Raspberry Pi. Even the entire frontend. We honestly don't fully understand how the React code works, but it works.
Our designer did some final tweaks to the UI because AI is good but not that good.
Final notes

We had a blast during the hackathon building the OurRay Refinement Display and ourray.app itself. If you want to give it a spin, head over to ourray.app.
Or if you prefer to keep your debugging private, use the coupon code D3AD5402 for 20% off at myray.app.
United we debug, divided we dd.