Issue with WebSocket Connection to wss://websocket.mainnet.klever.finance

Hey there!

We’ve hit a snag trying to get the WebSocket connection working with wss://websocket.mainnet.klever.finance for real-time transaction tracking on KleverChain Mainnet. Our bot (using the ws library) keeps trying to connect but just can’t make it happen—check out the logs:

[2025-04-27 21:14:31.248] Attempting to connect to WebSocket: wss://websocket.mainnet.klever.finance

We’re subscribing to the `transaction` topic for the address `klv1pmqq2xkjtx8duxu8em3chqhnu7czwd4hzghef038s5pw5y29zf2qdlkszz`, but no luck. Is this the right URL, or has it changed? If it’s not working, could you share the current WebSocket URL or suggest another way to track transactions in real-time? 
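For reference, here is a trimmed-down version of our connection code (the subscribe payload shape is our best guess from the old docs, so it may be wrong):

```javascript
// Trimmed-down reproduction of our bot's connection logic.
// The subscribe payload shape is our guess from the old docs.
const TOPIC = 'transaction';
const ADDRESS = 'klv1pmqq2xkjtx8duxu8em3chqhnu7czwd4hzghef038s5pw5y29zf2qdlkszz';

// Build the message we send once the socket opens.
function subscribeMessage(topic, address) {
  return JSON.stringify({ type: 'subscribe', topic, address });
}

function connect(url) {
  const WebSocket = require('ws'); // npm install ws
  const ws = new WebSocket(url);
  ws.on('open', () => ws.send(subscribeMessage(TOPIC, ADDRESS)));
  ws.on('message', (data) => console.log('event:', data.toString()));
  ws.on('error', (err) => console.error('WebSocket error:', err.message));
  return ws;
}

// connect('wss://websocket.mainnet.klever.finance');
```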

Would really appreciate your help! 😊

Hello @Sovkosov_Ignat !

Before we investigate further, could you please try again, but this time using .org instead of .finance?

We are currently in the process of migrating all URLs from .finance to .org, and this might be the cause of the issue.

If the problem persists, please let me know so I can assist you further.


I tried the WebSocket URL wss://websocket.mainnet.klever.org as suggested, but I’m getting a “getaddrinfo ENOTFOUND websocket.mainnet.klever.org” error, meaning the domain cannot be resolved.


Sorry for the delayed response — I was gathering additional information to provide you with all the necessary details.

It turns out that the public WebSocket URL has been discontinued.
WebSocket support is still active and receiving updates, but from now on, you’ll need to run your own indexer node to use this feature.
The documentation has already been updated to reflect the removal of the public URL.

Please note that running an indexer node can be resource-intensive, as it relies on Elasticsearch.
If real-time data is not a strict requirement, I recommend using a polling strategy — periodically fetching data every X seconds, which is much lighter on resources.
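As a rough sketch of the polling approach in Node (the API host and the transaction-list endpoint below are assumptions on my part, so please check them against the current API docs):

```javascript
// Rough polling sketch (Node 18+, global fetch). The API host and the
// transaction-list endpoint are assumptions; verify them in the API docs.
const API = 'https://api.mainnet.klever.org';
const ADDRESS = 'klv1pmqq2xkjtx8duxu8em3chqhnu7czwd4hzghef038s5pw5y29zf2qdlkszz';

// Keep only transactions whose hash we have not seen before.
function newTransactions(seen, txs) {
  return txs.filter((tx) => !seen.has(tx.hash));
}

async function pollOnce(seen) {
  const res = await fetch(`${API}/v1.0/transaction/list?address=${ADDRESS}`);
  const body = await res.json();
  const fresh = newTransactions(seen, body.data?.transactions ?? []);
  for (const tx of fresh) seen.add(tx.hash);
  return fresh;
}

// Poll every 10 seconds:
// const seen = new Set();
// setInterval(() => pollOnce(seen).then(console.log).catch(console.error), 10_000);
```

Deduplicating by hash keeps each transaction from being reported twice when consecutive polls overlap.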

However, if real-time behavior is essential for your use case, I’d be happy to help you with the configuration and setup of your own indexer node.

Let me know what works best for you! :rocket:

Hey Nicolas! @Nicollas
Thank you for the update!

We already have a full node running and would greatly appreciate any assistance you can provide with setting up the indexer. Real-time data is important for our use case, so we’re interested in proceeding with the configuration.

Looking forward to your guidance!

I’ve been doing some testing here, and I believe I now have everything you need to run WebSocket functionality using your own indexer.

In this reply, I will:

  • Instruct you on how to turn your node into an indexer
  • Highlight key considerations of this setup
  • Provide a Docker Compose example for running a development/testing Elasticsearch instance

Since you already have a blockchain node running, I’ll assume you have basic Docker knowledge and understand what docker-compose is. If any of this sounds unfamiliar, feel free to let me know.

:pushpin: Klever Blockchain Perspective

From the blockchain side, the setup is relatively straightforward.
In your node’s config files, you’ll find a file named external.yaml — this is where the configuration for external services is defined.

To turn a node into an indexer, all you need to do is:

  1. Enable and configure the Elasticsearch connector in the external.yaml file.

  2. Once correctly configured to connect to your Elasticsearch instance, start your node.

  3. On startup, you should see the message:

    “Node is running with a valid indexer”
    This confirms that your node is now acting as an indexer.

That’s all it takes from the blockchain’s point of view. :white_check_mark:


:warning: Important Note

We do not recommend running a node as both a validator and an indexer.
If you already operate a validator, you should run a secondary observer node dedicated to indexing.


:bulb: Why It Can Be Resource-Intensive

The most resource-intensive part is Elasticsearch itself. Running it requires a considerable amount of RAM, and setting up a production-grade cluster may exceed what you need for development or testing.

To simplify things, here is a Docker Compose setup you can use for a local Elasticsearch instance in a development environment.

:open_file_folder: Folder Structure

You should arrange the files in your folder with the following structure:

your-folder/
├── docker-compose.yml
└── elasticsearch
    ├── elasticsearch.yml
    └── jvm.options

:clipboard: docker-compose.yml

version: '3'

services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.2.3
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=klever-cluster
      - cluster.routing.allocation.disk.threshold_enabled=false
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - xpack.security.http.ssl.enabled=false
      - network.host=0.0.0.0
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
      - ./elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./elasticsearch/jvm.options:/usr/share/elasticsearch/config/jvm.options.d/jvm.options
    ports:
      - 9200:9200
volumes:
  data01:
    driver: local

You will also need two configuration files:

:page_with_curl:jvm.options

-Xms4g
-Xmx4g

:page_with_curl:elasticsearch.yml

cluster.name: klever-cluster
node.name: klever-es-node-1
bootstrap.memory_lock: true
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers: X-Requested-With,X-Auth-Token,Content-Type,Content-Length
xpack.security.enabled: false
xpack.license.self_generated.type: basic

:file_folder: File Overview

  • docker-compose.yml – Defines and runs the Elasticsearch container.
  • elasticsearch/elasticsearch.yml – Main configuration file for the Elasticsearch node.
  • elasticsearch/jvm.options – JVM memory settings and other runtime options.

Make sure your paths match this structure exactly so that Docker can correctly mount the configuration files into the container.

:balance_scale: Run Elasticsearch

Once everything is in place, run the following from the root folder:

docker-compose up -d

This will expose Elasticsearch on port 9200.

:wrench: Configure external.yaml

To connect your node to it, the default external.yaml should be enough:

elasticSearchConnector:
  enabled: true
  indexerCacheSize: 100
  url: http://localhost:9200
  useKibana: true
  username:
  password:
  enabledIndexes:
    - transactions
    - blocks
    - accounts
    - accountshistory
    - assets
    - proposals
    - marketplaces
    - network-parameters
    - rating
    - epoch
    - accountskda
    - peersaccounts
    - marketplaceorders
    - itos
    - kdapools
    - logs
    - scdeploys

Run docker ps to confirm that Elasticsearch is running.

Once it is, start your node. If everything is correct, you should see:

Node is running with a valid indexer

That’s it! :rocket:
You can now connect to your node via WebSocket and subscribe to topics.
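For example, with the `ws` package in Node (the local URL and the subscribe payload shape here are illustrative, so adjust them to your node’s actual WebSocket configuration):

```javascript
// Minimal subscriber for your own indexer node (uses the `ws` package).
// The URL and payload shape are illustrative, not a confirmed protocol.
function watchAddress(url, topic, address) {
  const WebSocket = require('ws'); // npm install ws
  const ws = new WebSocket(url);
  ws.on('open', () =>
    ws.send(JSON.stringify({ type: 'subscribe', topic, address }))
  );
  ws.on('message', (data) => console.log('event:', data.toString()));
  ws.on('error', (err) => console.error('WebSocket error:', err.message));
  return ws;
}

// watchAddress('ws://localhost:8080', 'transaction', 'klv1...');
```

The commented-out call shows the intended usage; point it at the host and port where your node exposes its WebSocket endpoint.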

:shopping_bags: Final Note

I understand this setup might seem like a lot just to enable WebSocket support. During this process, I’ve identified ways we can improve this experience, and I’ve added a task to the team backlog to decouple WebSocket functionality from the indexer requirement. I can’t promise a timeline, as our current focus is on the KVM launch on mainnet.

Although you’re likely interested only in the WebSocket feature, I think it’s helpful to know that once Elasticsearch is connected, it starts indexing blockchain data. If your application ever needs fast and direct access to on-chain data, this can become a powerful tool. In fact, the official public APIs are powered by this same indexing engine.

I hope this guide helps you achieve what you need. If anything is unclear or you run into any issues, feel free to reach out — I’ll be glad to help!
