How do I run Sources & Kafka locally for development - cloudigrade/cloudigrade GitHub Wiki

These instructions let you run Kafka, Zookeeper, and Sources locally on your development machine. This setup is not for running the Sources UI, but for interacting with Sources via its API. It is particularly useful for inspecting the Sources logs, the API requests, and the Kafka messages coming in.

For these instructions, we'll have Sources and Kafka extracted under ~/deployment, but any other directory works just as well.

Setup Kafka

We will use Apache Kafka for this setup.

Download Kafka

$ mkdir ~/deployment
$ cd ~/deployment
$ curl https://downloads.apache.org/kafka/3.2.3/kafka-3.2.3-src.tgz -o kafka-3.2.3-src.tgz
$ tar -xf kafka-3.2.3-src.tgz
$ ln -s kafka-3.2.3-src kafka
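
A quick sanity check that the extraction and symlink worked (paths follow the layout above):

$ ls ~/deployment/kafka/bin | head    # should list helper scripts such as kafka-server-start.sh and zookeeper-server-start.sh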

Configuring Kafka

For the Kafka server, we'll keep the defaults: localhost:2181 for Zookeeper and localhost:9092 for the Kafka broker itself. We will configure this via the ~/deployment/kafka/config/server.properties file:

listeners=PLAINTEXT://localhost:9092
advertised.listeners=PLAINTEXT://localhost:9092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT

zookeeper.connect=localhost:2181

The localhost:9092 address above must match the Sources environment setup (QUEUE_HOST and QUEUE_PORT) described below.

Setup Sources

Reference URL: https://github.com/RedHatInsights/sources-api-go

While Ephemeral, Stage, and Prod access Sources via the default Clowder port 8000, we'll set up the local environment to use port 4000.

Querying any collection of the locally running Sources over HTTP can be done as follows (using httpie and jq):

$ http localhost:4000/api/sources/v3/${COLLECTION}/ | jq

Clone Sources

$ cd ~/deployment
$ git clone [email protected]:RedHatInsights/sources-api-go.git

Dependencies

  • PostgreSQL 10 or later
  • Create a new database named sources_api_go_development
  • Set the database owner to root
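
A sketch of that database setup using the stock PostgreSQL client tools (the postgres superuser and the changeme password here are placeholders, not values from the repo):

$ psql -U postgres -c "CREATE ROLE root WITH LOGIN PASSWORD 'changeme';"
$ psql -U postgres -c "CREATE DATABASE sources_api_go_development OWNER root;"

Whatever password you pick here is what ${PSQL_ROOT_USER_PASSWORD} should hold in the environment setup below.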

Setup Environment for Sources

# Kafka related setup
export QUEUE_HOST=localhost
export QUEUE_PORT=9092

# Postgres related setup
export DATABASE_HOST="localhost"
export DATABASE_PORT="5432"
export DATABASE_USER="root"
export DATABASE_PASSWORD="${PSQL_ROOT_USER_PASSWORD}"
export DATABASE_NAME="sources_api_go_development"

# Redis related setup
export REDIS_CACHE_HOST="localhost"
export REDIS_CACHE_PORT="6379"

# Sources related setup
export BYPASS_TENANCY=true
export BYPASS_RBAC=true
export PATH_PREFIX=/api
export APP_NAME="sources"
export PORT=4000
export METRICS_PORT=0

# Cloudigrade related setup
export KAFKA_SERVER_HOST=${QUEUE_HOST}
export KAFKA_SERVER_PORT=${QUEUE_PORT}
export SOURCES_API_BASE_URL="http://localhost:${PORT}"
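
Rather than re-typing these exports in every new terminal, one option is to save them to a file (the ~/deployment/sources.env name is arbitrary, not something the repo expects) and source it wherever needed:

$ # after saving the exports above to ~/deployment/sources.env
$ source ~/deployment/sources.env
$ echo "${SOURCES_API_BASE_URL}"    # quick check: should print http://localhost:4000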

Download dependencies and run sources in-memory:

$ cd ~/deployment/sources-api-go
$ make setup
$ make inlinerun

Running the whole lot (Zookeeper, Kafka and Sources):

Start Zookeeper

You can run the following in a separate terminal:

$ cd ~/deployment/kafka
$ ./gradlew jar -PscalaVersion=2.13.8    # takes a while the first time, ~6 minutes
$ bin/zookeeper-server-start.sh config/zookeeper.properties 2>&1 | tee zookeeper.log
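
Once Zookeeper is up, you can poke it with one of its four-letter-word commands (srvr is in the default 4lw whitelist; nc is netcat):

$ echo srvr | nc localhost 2181    # prints version and connection stats if Zookeeper is healthy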

Start Kafka

In a separate terminal:

$ cd ~/deployment/kafka
$ bin/kafka-server-start.sh config/server.properties 2>&1 | tee kafka.log
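
To confirm the broker is actually accepting connections, Kafka ships a small utility for exactly that:

$ cd ~/deployment/kafka
$ bin/kafka-broker-api-versions.sh --bootstrap-server localhost:9092

If the broker is up, this prints the API versions it supports; a connection error here means the listeners configuration above needs another look.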

Setup the topics of interest

This is a one-time thing:

$ cd ~/deployment/kafka
$ bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 20 --topic platform.topological-inventory.operations-satellite
$ bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 20 --topic platform.sources.event-stream
$ bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 20 --topic platform.sources.status
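
You can verify the topics were created, and inspect their partitioning:

$ bin/kafka-topics.sh --list --bootstrap-server localhost:9092
$ bin/kafka-topics.sh --describe --bootstrap-server localhost:9092 --topic platform.sources.event-stream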

Start Sources

As before, in a separate terminal, download dependencies (if you haven't already) and run Sources in-memory:

$ cd ~/deployment/sources-api-go
$ make setup
$ make inlinerun
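
With BYPASS_TENANCY=true set as above, a quick smoke test is to hit one of the collections; source_types should be populated by the Sources seed data:

$ http localhost:4000/api/sources/v3/source_types/ | jq '.meta'

A JSON response here means Sources is up and talking to Postgres.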

Creating a source and application

We can specify the account_number or org_id to the sources-api via the x-rh-sources-account-number or x-rh-sources-org-id headers.

Example creating an Azure source:

$ export ACCOUNT_NUMBER="1234567"
$ export SOURCE_NAME="lighthouse-rh-dir1"
$ export AZURE_SUBSCRIPTION_ID="c2b810d4-e83c-4df5-a728-f1301dd78561"
$ cat - <<!END! | http localhost:4000/api/sources/v3.1/bulk_create "x-rh-sources-account-number: ${ACCOUNT_NUMBER}"
{
  "sources":[
    {
      "name":"${SOURCE_NAME}",
      "source_type_name":"azure"
    }
  ],
  "authentications":[
    {
      "resource_type":"application",
      "resource_name":"cloud-meter",
      "username":"${AZURE_SUBSCRIPTION_ID}",
      "authtype":"lighthouse_subscription_id"
    }
  ],
  "applications":[
    {
      "source_name":"${SOURCE_NAME}",
      "application_type_name":"cloud-meter"
    }
  ]
}
!END!
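
To watch the messages Sources emits as you create resources, run a console consumer against the event stream in another terminal:

$ cd ~/deployment/kafka
$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
    --topic platform.sources.event-stream --from-beginning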

Then querying the relevant collections:

$ http localhost:4000/api/sources/v3/sources/ "x-rh-sources-account-number: ${ACCOUNT_NUMBER}" | jq
$ http localhost:4000/api/sources/v3/applications/ "x-rh-sources-account-number: ${ACCOUNT_NUMBER}" | jq
$ http localhost:4000/api/sources/v3/source_types/ "x-rh-sources-account-number: ${ACCOUNT_NUMBER}" | jq
$ http localhost:4000/api/sources/v3/application_types/ "x-rh-sources-account-number: ${ACCOUNT_NUMBER}" | jq