I’ve been doing some testing here, and I believe I now have everything you need to run WebSocket functionality using your own indexer.
In this reply, I will:
- Instruct you on how to turn your node into an indexer
- Highlight key considerations of this setup
- Provide a Docker Compose example for running a development/testing Elasticsearch instance
Since you already have a blockchain node running, I’ll assume you have basic Docker knowledge and understand what docker-compose
is. If any of this sounds unfamiliar, feel free to let me know.
Klever Blockchain Perspective
From the blockchain side, the setup is relatively straightforward.
In your node’s config files, you’ll find a file named external.yaml
— this is where the configuration for external services is defined.
To turn a node into an indexer, all you need to do is:

- Enable and configure the Elasticsearch connector in the `external.yaml` file.
- Once it is correctly configured to connect to your Elasticsearch instance, start your node.
- On startup, you should see the message: “Node is running with a valid indexer”.

This confirms that your node is now acting as an indexer.
That’s all it takes from the blockchain’s point of view. 
Important Note
We do not recommend running a node as both a validator and an indexer.
If you already operate a validator, you should run a secondary observer node dedicated to indexing.
Why It Can Be Resource-Intensive
The most resource-intensive part is Elasticsearch itself. Running it requires a considerable amount of RAM, and setting up a production-grade cluster may exceed what you need for development or testing.
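As a rough guide, Elasticsearch's own recommendation is to give the JVM heap at most half of the machine's RAM, and to keep it below ~32 GB so compressed object pointers stay enabled. A small sketch of that rule (the helper name is mine, not part of any tooling):

```python
def recommended_heap_gb(total_ram_gb: int) -> int:
    """Suggest a JVM heap size (in GB) for Elasticsearch.

    Follows the usual guidance: at most half of system RAM,
    capped below 32 GB so compressed oops remain in effect.
    """
    return min(total_ram_gb // 2, 31)

# Example: on a 16 GB machine this suggests an 8 GB heap (-Xms8g -Xmx8g).
```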
To simplify things, here is a Docker Compose setup you can use for a local Elasticsearch instance in a development environment.
Folder Structure
You should arrange the files in your folder with the following structure:
```
your-folder/
├── docker-compose.yml
└── elasticsearch
    ├── elasticsearch.yml
    └── jvm.options
```
docker-compose.yml
```yaml
version: '3'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.2.3
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=klever-cluster
      - cluster.routing.allocation.disk.threshold_enabled=false
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - xpack.security.http.ssl.enabled=false
      - network.host=0.0.0.0
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
      - ./elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./elasticsearch/jvm.options:/usr/share/elasticsearch/config/jvm.options.d/jvm.options
    ports:
      - 9200:9200
volumes:
  data01:
    driver: local
```
You will also need the following configuration files:
jvm.options
```
-Xms4g
-Xmx4g
```
elasticsearch.yml
```yaml
cluster.name: klever-cluster
node.name: klever-es-node-1
bootstrap.memory_lock: true
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers: X-Requested-With,X-Auth-Token,Content-Type,Content-Length
xpack.security.enabled: false
xpack.license.self_generated.type: basic
```
File Overview
- `docker-compose.yml` – Defines and runs the Elasticsearch container.
- `elasticsearch/elasticsearch.yml` – Main configuration file for the Elasticsearch node.
- `elasticsearch/jvm.options` – JVM memory settings and other runtime options.
Make sure your paths match this structure exactly so that Docker can correctly mount the configuration files into the container.
Run Elasticsearch
Once everything is created, execute in the root folder:
```
docker-compose up -d
```
This will expose Elasticsearch on port 9200.
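Before pointing your node at the instance, you can confirm it is healthy by querying the standard `_cluster/health` endpoint. Below is an illustrative Python sketch (the helper names are mine); note that on a single-node development cluster a `yellow` status is normal, since replica shards have nowhere to be allocated:

```python
import json
from urllib.request import urlopen

def is_usable(health: dict) -> bool:
    """A cluster reporting green or yellow can serve the indexer;
    red means primary shards are missing."""
    return health.get("status") in ("green", "yellow")

def check_cluster(url: str = "http://localhost:9200") -> bool:
    """Fetch /_cluster/health and decide whether the node can use it."""
    with urlopen(f"{url}/_cluster/health") as resp:
        return is_usable(json.load(resp))
```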
Configure external.yaml
To connect to it, the default `external.yaml` should be enough for you:
```yaml
elasticSearchConnector:
  enabled: true
  indexerCacheSize: 100
  url: http://localhost:9200
  useKibana: true
  username:
  password:
  enabledIndexes:
    - transactions
    - blocks
    - accounts
    - accountshistory
    - assets
    - proposals
    - marketplaces
    - network-parameters
    - rating
    - epoch
    - accountskda
    - peersaccounts
    - marketplaceorders
    - itos
    - kdapools
    - logs
    - scdeploys
```
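If you want to sanity-check the connector section before restarting your node, a small validation pass can catch the common mistakes (connector left disabled, malformed URL, empty index list). This is only a sketch of mine; the field names simply mirror the snippet above:

```python
from urllib.parse import urlparse

def validate_connector(cfg: dict) -> list[str]:
    """Return a list of problems found in an elasticSearchConnector config."""
    problems = []
    if not cfg.get("enabled"):
        problems.append("connector is disabled")
    parsed = urlparse(cfg.get("url", ""))
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        problems.append("url must be an http(s) address like http://localhost:9200")
    if not cfg.get("enabledIndexes"):
        problems.append("enabledIndexes is empty; nothing would be indexed")
    return problems
```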
Run `docker ps` to confirm that Elasticsearch is running.

Once it is, start your node. If everything is correct, you should see:
Node is running with a valid indexer
That's it!
You can now connect to your node via WebSocket and subscribe to topics.
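As a minimal client-side sketch, here is how a subscription could look using the third-party `websockets` package. The endpoint URL, topic names, and message shape below are assumptions on my part — check them against your node's actual WebSocket interface before relying on this:

```python
import asyncio
import json

def subscribe_message(topics: list[str]) -> str:
    """Build a JSON subscribe payload (shape is assumed, not official)."""
    return json.dumps({"type": "subscribe", "topics": topics})

async def listen(url: str = "ws://localhost:8080/ws") -> None:
    # The URL and port are placeholders; use your node's real WS endpoint.
    import websockets  # third-party: pip install websockets

    async with websockets.connect(url) as ws:
        await ws.send(subscribe_message(["transactions", "blocks"]))
        async for raw in ws:
            print(json.loads(raw))

# To run it: asyncio.run(listen())
```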
Final Note
I understand this setup might seem like a lot just to enable WebSocket support. During this process, I’ve identified ways we can improve this experience, and I’ve added a task to the team backlog to decouple WebSocket functionality from the indexer requirement. I can’t promise a timeline, as our current focus is on the KVM launch on mainnet.
Although you’re likely interested only in the WebSocket feature, I think it’s helpful to know that once Elasticsearch is connected, it starts indexing blockchain data. If your application ever needs fast and direct access to on-chain data, this can become a powerful tool. In fact, the official public APIs are powered by this same indexing engine.
I hope this guide helps you achieve what you need. If anything is unclear or you run into any issues, feel free to reach out — I’ll be glad to help!