# 20250211 - devops - setting up heartbeat for python exe - cywongg/2025 GitHub Wiki
Below is a broad outline of how you might implement heartbeat monitoring for a Python executable and then push the results to Elasticsearch. There are many ways to approach this problem; the best solution depends on your infrastructure, your Python application's design (whether it has an HTTP endpoint or not), and your existing tooling with the Elastic Stack. Here are some common strategies.
## Option 1: Use Elastic Heartbeat to Check an HTTP Endpoint

### When to use this approach
If your Python application offers (or can offer) an HTTP endpoint for health checks, you can leverage [Elastic Heartbeat](https://www.elastic.co/guide/en/beats/heartbeat/current/heartbeat-overview.html). Heartbeat periodically sends requests (HTTP, ICMP, or TCP checks) to your application and sends the results to Elasticsearch.
### Steps
1. **Expose a Health Check Endpoint in Your Python App**

   If you're running a long-lived Python process (e.g., a web server), expose a simple route like `GET /health` that returns a 200 status if the service is healthy. For a minimal example using Flask:

   ```python
   from flask import Flask, jsonify

   app = Flask(__name__)

   @app.route("/health", methods=["GET"])
   def health():
       return jsonify({"status": "OK"}), 200

   if __name__ == "__main__":
       app.run(host="0.0.0.0", port=8080)
   ```
2. **Install and Configure Heartbeat**

   - [Download Heartbeat](https://www.elastic.co/downloads/beats/heartbeat).
   - Create or edit your Heartbeat configuration file (often `heartbeat.yml`), and define a monitor:

     ```yaml
     heartbeat.monitors:
       - type: http
         name: "Python Health Check"
         schedule: '@every 10s'   # check every 10 seconds
         urls: ["http://my-python-app-url:8080/health"]
         check.request:
           method: GET
         check.response:
           status: 200

     output.elasticsearch:
       hosts: ["localhost:9200"]
     ```

   - Launch Heartbeat and verify that documents are streaming into Elasticsearch.
3. **Configure Alerts or Visualizations in Kibana**

   Using Kibana, you can create alerts that notify you if the Heartbeat monitor fails, or build dashboards to visualize uptime over time.
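If Flask isn't available in your environment, the `/health` route from step 1 can also be sketched with only the standard library. This is a minimal sketch, not a production server; the `__main__` block binds to an OS-assigned free port and issues one request against itself just to demonstrate the expected response:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            # Same JSON payload and status code as the Flask example
            body = json.dumps({"status": "OK"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging; remove to restore default logs

if __name__ == "__main__":
    # Port 0 asks the OS for a free port; use a fixed port (e.g., 8080) in practice
    server = HTTPServer(("127.0.0.1", 0), HealthHandler)
    port = server.server_address[1]
    threading.Thread(target=server.serve_forever, daemon=True).start()
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/health") as resp:
        print(resp.status, json.load(resp)["status"])  # 200 OK
    server.shutdown()
```

For a real deployment you would call `serve_forever()` on a fixed host/port instead of the self-check shown here.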
## Option 2: Send a Custom Heartbeat Log and Collect with Filebeat/Logstash

### When to use this approach
If your Python exe isn’t a network server (no open HTTP port) or you prefer to handle heartbeats via logs, you can manually output a heartbeat message every N seconds and ingest those logs into Elasticsearch with Filebeat or Logstash.
### Steps
1. **Add Heartbeat Logging in Your Python Script**

   Every N seconds (via a simple loop or scheduled job), log a message like "HEARTBEAT: OK", or include more detailed info (timestamp, version, memory usage, etc.):

   ```python
   import time
   import logging
   from datetime import datetime

   logging.basicConfig(
       filename="heartbeat.log",
       level=logging.INFO,
       format="%(asctime)s %(message)s",
   )

   if __name__ == "__main__":
       while True:
           logging.info("HEARTBEAT: OK - timestamp=%s", datetime.utcnow())
           time.sleep(10)
   ```
2. **Install and Configure Filebeat**

   - [Download Filebeat](https://www.elastic.co/downloads/beats/filebeat).
   - Create or edit `filebeat.yml` to watch your heartbeat log:

     ```yaml
     filebeat.inputs:
       - type: log   # newer Filebeat versions prefer the "filestream" input type
         enabled: true
         paths:
           - /path/to/heartbeat.log

     output.elasticsearch:
       hosts: ["localhost:9200"]
     ```
3. **Launch Filebeat**

   Filebeat will track new lines in `heartbeat.log` and forward them to Elasticsearch.
4. **Set Up Visualization or Alerts in Kibana**

   For instance, you can create a Kibana watcher that fires an alert if you haven't seen a heartbeat in the last few minutes.
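The condition such a watcher evaluates ("no heartbeat in the last few minutes") can be sketched locally as well, by parsing the timestamp of the most recent log line. This sketch assumes the `%(asctime)s %(message)s` format from the script above, where `asctime` defaults to `YYYY-MM-DD HH:MM:SS,mmm` (the first 23 characters of each line); the two-minute threshold is an arbitrary example:

```python
from datetime import datetime, timedelta

def heartbeat_is_stale(last_log_line: str, now: datetime, max_age: timedelta) -> bool:
    # The default logging asctime occupies the first 23 characters,
    # e.g. "2025-02-11 10:04:55,000"
    stamp = datetime.strptime(last_log_line[:23], "%Y-%m-%d %H:%M:%S,%f")
    return now - stamp > max_age

now = datetime(2025, 2, 11, 10, 5, 0)
fresh = "2025-02-11 10:04:55,000 HEARTBEAT: OK"
stale = "2025-02-11 09:50:00,000 HEARTBEAT: OK"
print(heartbeat_is_stale(fresh, now, timedelta(minutes=2)))  # False
print(heartbeat_is_stale(stale, now, timedelta(minutes=2)))  # True
```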
## Option 3: Use a Python Script That Directly Writes to Elasticsearch

### When to use this approach
If you prefer a purely Python-based solution without additional Beats or if you want more control over what data is stored and when it’s sent.
### Steps
1. **Install the Elasticsearch Python Client**

   ```shell
   pip install elasticsearch
   ```
2. **Create a Heartbeat Script**

   This script checks whether a particular Python process is running (using [psutil](https://github.com/giampaolo/psutil) or a custom health check) and then directly writes a document to Elasticsearch:

   ```python
   import time
   from datetime import datetime

   import psutil
   from elasticsearch import Elasticsearch

   # Recent client versions require a full URL including the scheme
   es = Elasticsearch("http://localhost:9200")

   def check_process_and_write():
       # e.g., check for "python.exe" or some specific name
       process_exists = any(
           proc.name() == "python.exe" for proc in psutil.process_iter()
       )
       doc = {
           "timestamp": datetime.utcnow().isoformat(),
           "process_exists": process_exists,
       }
       # send to Elasticsearch
       es.index(index="heartbeat-index", document=doc)

   if __name__ == "__main__":
       while True:
           check_process_and_write()
           time.sleep(10)
   ```
3. **Visualize or Alert**

   As always, you can use Kibana to view the indexed documents, build dashboards, or set up alerting.
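On top of this index, the alert condition is simply "zero heartbeat documents in the last N minutes". A hedged sketch of that check, phrased as an Elasticsearch date-math range query (the `now-2m` window is an arbitrary example; the index name `heartbeat-index` mirrors the script above):

```python
def missing_heartbeat_query(window: str = "now-2m") -> dict:
    # Matches documents whose timestamp falls inside the recent window
    return {"range": {"timestamp": {"gte": window}}}

def heartbeat_missing(recent_count: int) -> bool:
    # Zero matching documents in the window means the heartbeat went silent
    return recent_count == 0

# Usage with the elasticsearch client (not run here):
#   resp = es.count(index="heartbeat-index", query=missing_heartbeat_query())
#   if heartbeat_missing(resp["count"]):
#       ...trigger your alert...
print(missing_heartbeat_query())
```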
## Option 4: Metricbeat (for System/Process Monitoring)

If you care not only about the "heartbeat" but also CPU usage, memory usage, etc., you could install Metricbeat. Metricbeat has modules for system metrics and can track running processes. You can then filter or alert in Kibana if your specific process is not running.
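A `metricbeat.yml` for this might look roughly like the sketch below. The module and metricset names follow Metricbeat's standard system module; the `processes` regex is an assumption you would adapt to your executable's name:

```yaml
metricbeat.modules:
  - module: system
    metricsets: ["cpu", "memory", "process"]
    period: 10s
    processes: ["python.*"]   # assumption: filter to your Python exe

output.elasticsearch:
  hosts: ["localhost:9200"]
```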
## Choosing the Right Approach

- **Heartbeat (Elastic)** is the simplest if your Python application is accessible via TCP/HTTP/ICMP.
- **Filebeat** (with custom "heartbeat" logs) is helpful if you have no open ports or want to keep the overhead in your Python code minimal.
- **Custom Python scripts** using the Elasticsearch client give you full control and let you check process presence or implement other advanced custom logic.
- **Metricbeat** or other system-level watchers are a good fit if you want system/process resource metrics in addition to simple liveness checks.
Any of these approaches will give you (1) a heartbeat signal indicating your Python application is alive, and (2) data ingested into Elasticsearch so you can build dashboards and alerts in Kibana.