Solution: Eyevinn Open Analytics
The Eyevinn Player Analytics Specification (EPAS) is an open-source framework and specification for tracking events from video and audio players. It is a modular framework where you can pick and choose the modules you need.
This tutorial is based on the use case where you have a streaming solution up and running and want to gather analytics. We will use Open Source Cloud as well as open source components to achieve this.
Figure 1: Eyevinn Open Analytics
This solution is based on the following open source projects made available as services or components:
- Player Analytics Specification - describes EPAS (Eyevinn Player Analytics Specification), an open specification that defines a standard for implementing analytics in any video/audio player.
- Player Analytics Client SDK Web - describes the client component available for web development.
- Player Analytics Client SDK Android - describes the client component available for Android development.
- Player Analytics Client SDK Swift - describes the client component available for iOS, iPadOS, watchOS, tvOS and macOS.
- Player Analytics Eventsink - describes the eventsink service that receives the data from the players and pushes it onto a processing queue.
- SmoothMQ Service - an open source drop-in replacement for SQS.
- Player Analytics Worker - describes the worker service that picks up events written to the processing queue and writes them to a database.
- ClickHouse Service - a fast and resource-efficient real-time data warehouse and open-source database.
- If you have not already done so, sign up for an OSC account; you can create one on osaas.io.
- OSC command line tool installed. You need version 0.14.2 or higher.
- AWS CLI to create a queue:

% brew install awscli
We have to implement a client (Web, Android or iOS) and connect it to a Player Analytics Eventsink that writes the events to a queue handled by SmoothMQ. This queue is processed by the Player Analytics Worker, which persists the data in a database, here served by ClickHouse.
Note
Throughout this guide it is important that you keep track of your secrets and the values assigned to them. We will use the Secret Names and Secret Values shown in the table below.
Service | Secret Name | Secret Value | Description |
---|---|---|---|
Eventsink | - | mysink | The name of the Player Analytics Eventsink instance |
SmoothMQ | - | myqueuename | The name of the message queue |
SmoothMQ | mqaccesskey | mymqaccessvalue | The Access Key for the queue |
SmoothMQ | mqsecretkey | mymqsecretvalue | The Secret Key for the queue |
ClickHouse | - | myclickdbinstance | The name of the ClickHouse DB instance |
ClickHouse | myclickdb | myclickdbname | The name of your database |
ClickHouse | clickdbuserkey | myclickdbuservalue | The admin user for your DB |
ClickHouse | clickdbsecretkey | myclickdbsecretvalue | The password used for your user |
Worker | - | myworkername | The name of the Player Analytics Worker instance |

Table 1: Secrets used in this guide (Steps 2 & 3)
Feel free to fill out the name of your keys and your secret values in the table for easy access throughout this guide.
When we refer to these secret keys or secret values we will use a <...> notation.

Example: <mysecretusername> refers to the secret key that corresponds to mysecretusername in Table 1; likewise, <myuser> refers to the secret value that corresponds to myuser in Table 1.
The SDKs come with informative READMEs that describe how to incorporate them in your app. The Android, Swift and web SDKs all aim to be so easy to implement that you only initialize them and the SDK does the rest. You can find the client setups here for Web, Android and iOS.
Further information regarding the different events, and the EPAS specification describing the event flow, can be found in the Player Analytics Specification, which is useful for all clients.
SmoothMQ is a drop-in replacement for SQS and is used by the Player Analytics Eventsink as a messaging queue.
You will need your OSC Personal Access Token since we are going to use the OSC CLI. You can find it by clicking on the Settings link at the lower left, which displays your Account page. To the right of the Account tab is a tab called { } API; when you click there you will see your Personal Access Token. Click the copy symbol.
In the terminal type this:
% export OSC_ACCESS_TOKEN=<Paste your OSC Personal Access Token here>
Now your OSC Personal Access Token is saved for this terminal session.
Create the two SmoothMQ service secrets in the OSC web user interface.
Click the Create message-queue + button and fill in the dialog that follows with your values: Name = Table1.<myqueuename>, AccessKey = Table1.<mqaccesskey>, and SecretKey = Table1.<mqsecretkey>
Figure 2. SmoothMQ parameters
And click the Create button.
If you click on the tab-link "My message-queues (n)" you can see all running SmoothMQ instances. Locate your SmoothMQ and click on the copy-symbol to the right of the URL.
Figure 3. SmoothMQ-URL for your instances
You will need this SmoothMQ-URL below.
Now that the SmoothMQ instance is up and running, we can create the queue that we want to use, and also configure the eventsink module that receives the data from the players and pushes it onto the processing queue.
NOTE: The --endpoint-url is the SmoothMQ-URL from Figure 3 above.
% export AWS_ACCESS_KEY_ID=<mymqaccessvalue>
% export AWS_SECRET_ACCESS_KEY=<mymqsecretvalue>
% aws sqs create-queue --queue-name=events --region='eu-west-1' --endpoint-url=<SmoothMQ-URL>
We have now created a queue named events on your SmoothMQ instance.
You will get an output from the command above, like this:
{
"QueueUrl": "https://sqs.us-east-1.amazonaws.com/1/events"
}
Figure 4: SmoothMQ QueueURL
This is the QueueURL for your queue. Take note of it as you will use it later.
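You can also verify that the queue exists by listing the queues on your SmoothMQ instance with the standard AWS CLI list-queues command; a quick sketch:

% aws sqs list-queues --endpoint-url=<SmoothMQ-URL from Figure 3> --region eu-west-1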
Navigate to the Player Analytics Eventsink in the OSC web console. Press the Create eventsink + button.
Fill out the dialog and press Create.
Name = Table1.<mysink>, SqsQueueUrl = <SmoothMQ QueueUrl from Figure 4>, AwsAccessKeyId = Table1.<mymqaccessvalue>, AwsSecretAccessKey = Table1.<mymqsecretvalue>, SqsEndpoint = <SmoothMQ-URL for your instance from Figure 3>
Figure 5: Player Analytics Eventsink parameters
And click the Create button.
If you click on the tab-link "My eventsinks (n)" you can see all running Eventsink-instances. Locate your Eventsink-instance and click on the copy-symbol to the right of the URL.
Figure 6. Eventsink-URL for your instance
We can test the eventsink using curl. The URL is the Eventsink-URL from Figure 6.
% curl -X POST --json '{ "event": "init", "sessionId": "3", "timestamp": 1740411580982, "playhead": -1, "duration": -1 }' <Eventsink-URL from Figure 6>
{"sessionId":"3","heartbeatInterval":5000}
We can now use the AWS CLI to check that the events were placed in the queue, using the following command:
aws sqs receive-message --queue-url=<SmoothMQ QueueURL from figure 4> --endpoint-url <SmoothMQ-URL for your instance from Figure 3> --region eu-west-1
It can look like this:
% aws sqs receive-message --queue-url=https://sqs.us-east-1.amazonaws.com/1/events --endpoint-url https://eyevinnlab-myqueuename.poundifdef-smoothmq.auto.prod.osaas.io --region eu-west-1
If the call is successful you will get an answer like below:
{
"Messages": [
{
"MessageId": "1917361731521220608",
"ReceiptHandle": "1917361731521220608",
"MD5OfBody": "56e41ee2399ef83003d1d230e8d11212",
"Body": "{\"event\":\"init\",\"sessionId\":\"3\",\"timestamp\":1740411580982,\"playhead\":-1,\"duration\":-1}",
"MessageAttributes": {
"Event": {
"StringValue": "init",
"DataType": "String"
},
"Time": {
"StringValue": "1740411580982",
"DataType": "String"
}
}
}
]
}
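Note that receive-message only hides a message for the visibility timeout; it stays on the queue until it is deleted. If you do not want your hand-crafted test events to end up in the database once the worker is running, you can delete them with the standard delete-message command, one call per ReceiptHandle from the output above; a sketch:

% aws sqs delete-message --queue-url=<SmoothMQ QueueURL from Figure 4> --receipt-handle 1917361731521220608 --endpoint-url <SmoothMQ-URL from Figure 3> --region eu-west-1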
ClickHouse is a fast, open-source columnar database. We will set up a database instance for your Player Analytics. Go to ClickHouse and create the ClickHouse secrets from Table 1.
Click on Create clickhouse-server + and fill out the dialog by choosing a secret for each field: Name = Table1.<myclickdbinstance>, Db = Table1.<myclickdb>, User = Table1.<clickdbuserkey>, and Password = Table1.<clickdbsecretkey>
Figure 7. ClickHouse Server parameters
Then press Create.
This will create a ClickHouse DB server instance called <Table1.myclickdbinstance> with one database called <Table1.myclickdbname>. You can verify this by clicking on the three dots on the card for your DB server and choosing Open application. This will open a query dialog.
Provide your <Table1.myclickdbuservalue> and <Table1.myclickdbsecretvalue> in the upper right-hand corner, and enter the query:
select * from system.databases
And your view ought to be similar to the picture below.
Figure 8. ClickHouse web query interface
You can get the URL to your ClickHouse Server either from the view above or by clicking the copy symbol on the card for your ClickHouse Server instance:

Figure 9. URL to your ClickHouse Server instance
We have now verified that the database is up and running.
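You can also reach the server from the command line through ClickHouse's HTTP interface; a sketch using the example values from Table 1 (substitute your own user, password and the instance URL from Figure 9):

% curl -u myclickdbuservalue:myclickdbsecretvalue 'https://eyevinnlab-myclickdbinstance.clickhouse-clickhouse.auto.prod.osaas.io/' --data-binary 'SHOW DATABASES'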
The Player Analytics Worker is the worker module that processes the data from the queue and stores it in a database.
Go to Player Analytics Worker and click on the "Create worker +" button.

Figure 10: Player Analytics Worker setup dialog
Field name | Description |
---|---|
Name | The name you want for your instance |
ClickHouseUrl | URL to your ClickHouse instance, with the credentials embedded as https://<ClickHouseUser>:<ClickHousePassword>@<host>, e.g. https://myclickdbuservalue:myclickdbsecretvalue@eyevinnlab-myclickdbinstance.clickhouse-clickhouse.auto.prod.osaas.io |
SqsQueueUrl | SmoothMQ QueueURL from Figure 4 |
AwsAccessKeyId | The AccessKey of the SmoothMQ instance |
AwsSecretAccessKey | The SecretKey of the SmoothMQ instance |
SqsEndpoint | SmoothMQ-URL for your instance from Figure 3 |

Table 2: Explanation of fields
After you have pressed Create you should wait a few minutes to let the worker start and create your database. You can see in the log when it is ready.
One easy way to test the full flow is to make a small sample app on either of the platforms and play and pause a movie; you will then see your events appearing in the ClickHouse database.
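You can also check from the terminal that events are arriving, by counting rows per event type over the HTTP interface; a sketch assuming the epas_default table that the Grafana queries later in this guide read from:

% curl -u myclickdbuservalue:myclickdbsecretvalue 'https://eyevinnlab-myclickdbinstance.clickhouse-clickhouse.auto.prod.osaas.io/' --data-binary 'SELECT event, count(*) FROM epas_default GROUP BY event'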
It is easy to enable ClickHouse to answer natural-language questions from Claude Desktop thanks to the ClickHouse MCP Server project.
Install Claude Desktop and log in.
Run brew install uv since ClickHouse MCP needs this fast Python package resolver.
Then you need to edit the claude_desktop_config.json file. You can find it at:

~/Library/Application Support/Claude/claude_desktop_config.json
If the file does not exist you can create it (see the sketch after the listing) and add the following:
{
"mcpServers": {
"Open Analytics": {
"command": "uv",
"args": [
"run",
"--with",
"mcp-clickhouse",
"--python",
"3.13",
"mcp-clickhouse"
],
"env": {
"CLICKHOUSE_HOST": "eyevinnlab-myclickdbinstance.clickhouse-clickhouse.auto.prod.osaas.io",
"CLICKHOUSE_PORT": "443",
"CLICKHOUSE_USER": "myclickdbuservalue",
"CLICKHOUSE_PASSWORD": "myclickdbsecretvalue",
"CLICKHOUSE_SECURE": "true",
"CLICKHOUSE_VERIFY": "true",
"CLICKHOUSE_CONNECT_TIMEOUT": "30",
"CLICKHOUSE_SEND_RECEIVE_TIMEOUT": "30"
}
}
}
}
Figure 11. claude_desktop_config.json with our example values
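If the file does not exist yet, a sketch for creating it from the terminal on macOS (open -e opens the file in TextEdit so you can paste in the configuration above):

% mkdir -p ~/Library/Application\ Support/Claude
% touch ~/Library/Application\ Support/Claude/claude_desktop_config.json
% open -e ~/Library/Application\ Support/Claude/claude_desktop_config.json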
NOTE: Change the values to your own as explained below:
Parameter | Your value |
---|---|
CLICKHOUSE_HOST | the URL to your ClickHouse Server instance from Figure 9 |
CLICKHOUSE_USER | your <myclickdbuservalue> |
CLICKHOUSE_PASSWORD | your <myclickdbsecretvalue> |
Now you can use Claude Desktop to ask questions about your Open Analytics data.

Figure 12. Claude Desktop using your ClickHouse data.
After configuring ClickHouse queries with MCP, enhance your analytics by visualizing data in Grafana. Follow these steps to connect Grafana to your ClickHouse instance and build dashboards.
[Video Player/Client] → [Eyevinn SDK] → [Eventsink] → [SmoothMQ Queue] → [Worker] → [ClickHouse DB] → [MCP Server] → [Grafana Dashboard]
Before starting this part of the guide, you should have:
- An existing ClickHouse instance with analytics data (this is already set up as part of the analytics worker)
- ClickHouse connection details:
  - Endpoint URL: https://<your-clickhouse-endpoint>/play
  - Database name (typically epas_default)
  - <ClickHouse-username> and <ClickHouse-password>
Option A: Create Grafana on OSC (Recommended)

- Launch Grafana
  - Go to OSC UI → Web Services → Grafana → Create Grafana
  - Name: grafana
  - Plugins to Preinstall: clickhouse-datasource
  - Click Create and wait until status is running.



- Log in to Grafana
  - Open the provided Grafana URL
  - Default credentials: Username: admin, Password: admin
  - Set a new password when prompted.
Option B: Run Grafana Locally with Docker
docker run -d \
-p 3000:3000 \
--name=grafana \
grafana/grafana:latest
- Access: http://localhost:3000/
- Default credentials: admin/admin
- Set a new password.
- Install the ClickHouse plugin:
  - In Grafana: ⚙️ Plugins → search ClickHouse → Install → Enable
  - If needed: docker restart grafana (see the sketch below for preinstalling the plugin instead)
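The plugin can also be preinstalled when the container is first created, using Grafana's GF_INSTALL_PLUGINS environment variable; a sketch assuming the official plugin id grafana-clickhouse-datasource:

docker run -d \
  -p 3000:3000 \
  --name=grafana \
  -e GF_INSTALL_PLUGINS=grafana-clickhouse-datasource \
  grafana/grafana:latest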
- In the Grafana sidebar: Configuration (⚙️) → Data Sources → Add data source
- Select ClickHouse (or the Altinity plugin).
- Enter the connection details:
  - URL: https://<your-clickhouse-endpoint>/play
  - Default database: epas_default
  - Basic Auth: Enable
  - User: <ClickHouse-username>
  - Password: <ClickHouse-password>
- Click Save & Test and confirm "Data source is working".


- In the Grafana sidebar: + → Import
- Upload /sample-grafana-dashboard/Clickhouse-1745399875287.json or paste its JSON
- Or import directly using the raw JSON from this link (see the API sketch below for a command-line alternative)
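Dashboards can also be imported through Grafana's HTTP API. A sketch for a local Grafana instance, assuming the exported dashboard JSON must be wrapped in the envelope the /api/dashboards/db endpoint expects (jq does the wrapping here; substitute your own admin password):

% jq '{dashboard: ., overwrite: true}' Clickhouse-1745399875287.json \
    | curl -X POST http://admin:<your-password>@localhost:3000/api/dashboards/db \
        -H 'Content-Type: application/json' --data-binary @-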
- Event Frequency:

SELECT toStartOfHour(timestamp) AS time, event, count(*) AS count_of_events FROM epas_default WHERE $__timeFilter(timestamp) GROUP BY time, event ORDER BY time

- Top Content Titles:

SELECT JSONExtractString(payload, 'contentTitle') AS title, count(*) AS plays FROM epas_default WHERE event = 'metadata' AND $__timeFilter(timestamp) GROUP BY title ORDER BY plays DESC LIMIT 10

- Playback Errors:

SELECT toStartOfMinute(timestamp) AS time, JSONExtractString(payload, 'reason') AS error_reason, count(*) AS count_errors FROM epas_default WHERE event = 'stopped' AND JSONExtractString(payload, 'reason') = 'error' AND $__timeFilter(timestamp) GROUP BY time, error_reason ORDER BY time
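Note that $__timeFilter() is a Grafana macro that is expanded before the query reaches ClickHouse, so these queries only run as-is inside Grafana. To try one directly against the database, drop the macro; a sketch over the HTTP interface using the Top Content Titles query:

% curl -u myclickdbuservalue:myclickdbsecretvalue 'https://eyevinnlab-myclickdbinstance.clickhouse-clickhouse.auto.prod.osaas.io/' --data-binary "SELECT JSONExtractString(payload, 'contentTitle') AS title, count(*) AS plays FROM epas_default WHERE event = 'metadata' GROUP BY title ORDER BY plays DESC LIMIT 10"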


