Fast Deployment of the Test Network
This document lists the steps to deploy a network of four nodes (a master, a query node, and two operator nodes).
A detailed description of every step is available in the Session II Deployment document.
Requirements
- 2 machines - either physical or virtual.
- Machine A - deployed with Master, Query, Operator and a remote CLI.
- Machine B - deployed with the second Operator.
- Docker
- Makefile
Prepare Machine(s)
- On both machines - Clone EdgeLake
```
cd $HOME
git clone https://github.com/EdgeLake/docker-compose
```
- Make sure ports are open and accessible
Default Ports
| Node Type | TCP   | REST  | Message Broker (Optional) |
|-----------|-------|-------|---------------------------|
| Master    | 32048 | 32049 |                           |
| Operator  | 32148 | 32149 | 32150                     |
| Query     | 32348 | 32349 |                           |
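A quick way to verify the ports in the table above are reachable from the other machine is bash's built-in `/dev/tcp` redirection, so no extra tools are needed. This is a minimal sketch; the `HOST` value is a placeholder for your peer machine's address:

```shell
# Check whether the default EdgeLake ports accept TCP connections.
# HOST is a placeholder - substitute the address of the machine being tested.
HOST=127.0.0.1
for port in 32048 32049 32148 32149 32150 32348 32349; do
  if (echo > "/dev/tcp/${HOST}/${port}") 2>/dev/null; then
    echo "port ${port}: open"
  else
    echo "port ${port}: closed"
  fi
done
```

Run it from each machine against the other; any port reported `closed` that a node needs must be opened in the firewall or cloud security group before deployment.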
Master Node
- cd into docker-compose directory
- Update the params in docker_makefile/edgelake_master.env
- Key Params:
- NODE_NAME
- COMPANY_NAME
- Start Node
make up EDGELAKE_TYPE=master
Validate Master Node
- View node logs - validate that the following services are enabled: TCP, REST, and Blockchain sync
```
make logs EDGELAKE_TYPE=master
```

Expected output:

```
EL edgelake-master +>

Process         Status       Details
---------------|------------|---------------------------------------------------------------------------|
TCP            |Running     |Listening on: 45.79.74.39:32048, Threads Pool: 6                           |
REST           |Running     |Listening on: 45.79.74.39:32049, Threads Pool: 5, Timeout: 20, SSL: False  |
Operator       |Not declared|                                                                           |
Blockchain Sync|Running     |Sync every 30 seconds with master using: 127.0.0.1:32048                   |
Scheduler      |Running     |Schedulers IDs in use: [0 (system)] [1 (user)]                             |
Blobs Archiver |Not declared|                                                                           |
MQTT           |Not declared|                                                                           |
Message Broker |Not declared|No active connection                                                       |
SMTP           |Not declared|                                                                           |
Streamer       |Not declared|                                                                           |
Query Pool     |Running     |Threads Pool: 3                                                            |
Kafka Consumer |Not declared|                                                                           |
gRPC           |Not declared|                                                                           |
```
- Attach into master node
make attach EDGELAKE_TYPE=master
- Execute `test node` to validate basic node configuration:

```
EL edgelake-master +> test node

Test TCP
[************************************************************]
Test REST
[************************************************************]

Test                                      Status
-----------------------------------------|-----------------------------------------------------------------------|
Metadata Version                          |02a3d84c0017bbaea01a19780734d801                                       |
Metadata Test                             |Pass                                                                   |
TCP test using 45.79.74.39:32048          |[From Node 45.79.74.39:32048] edgelake-master@45.79.74.39:32048 running|
REST test using http://45.79.74.39:32049  |edgelake-master@45.79.74.39:32048 running                              |
```

Note: `test node` validates the IP and port used by the AnyLog protocol (Test TCP) and by the REST protocol (Test REST). The REST IP and port are offered by an EdgeLake service to communicate with third-party applications via REST. If the REST port is not open to the outside world (and **binding** in the EdgeLake node configuration is set to **False**), the test will fail. To test the connection manually, open a new terminal and run:

```
curl -X GET {INTERNAL_IP}:{REST_PORT}
```

Example:

```
root@alog-edgelake-node:~# curl -X GET 45.79.74.39:32049 -w "\n"
edgelake-master@45.79.74.39:32048 running
```
- Detach from the CLI with `ctrl-d`
Note: The TCP IP and port (45.79.74.39:32048 in the example) serve as the Network Identifier, which is referenced by all member nodes assigned to this (test) network. This IP and port are assigned to the attribute called LEDGER_CONN on each peer node.
Operator Node(s)
The following configuration steps can be used for each deployed operator.
- cd into docker-compose directory
- Update the params in docker_makefile/edgelake_operator.env
- Key Params:
- NODE_NAME - each operator should have a unique value
- COMPANY_NAME
- LEDGER_CONN - should be set to the TCP connection of the Master Node (45.79.74.39:32048 in the Master Node deployment example above)
- CLUSTER_NAME - each operator should have a unique cluster name
- DEFAULT_DBMS - should be the same on both operators
- ENABLE_MQTT - the default configuration can accept data from a third-party broker that's already running; setting ENABLE_MQTT to true makes data from that broker flow in automatically
- MSG_DBMS - should be set to the same value as DEFAULT_DBMS
- Note: to deploy multiple operators on the same machine, make sure each operator is configured with unique port values
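Putting the key parameters together, a fragment of docker_makefile/edgelake_operator.env for the first operator might look like the following. All values here are illustrative placeholders, not defaults from the repository; keep the rest of the file unchanged:

```shell
# Illustrative values only - adjust to your deployment.
NODE_NAME=edgelake-operator1        # must be unique per operator
COMPANY_NAME=MyCompany
LEDGER_CONN=45.79.74.39:32048       # TCP address of the Master Node
CLUSTER_NAME=operator1-cluster      # must be unique per operator
DEFAULT_DBMS=test                   # identical on both operators
ENABLE_MQTT=false                   # set to true to ingest from an already-running broker
MSG_DBMS=test                       # same value as DEFAULT_DBMS
```

For the second operator, change NODE_NAME and CLUSTER_NAME (and, if both operators share one machine, the port values) while keeping LEDGER_CONN and DEFAULT_DBMS the same.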
- Start Node
make up EDGELAKE_TYPE=operator
Validate Operator Node
- View node logs - validate that the following services are enabled: TCP, REST, Operator, and Blockchain sync
```
make logs EDGELAKE_TYPE=operator
```

Expected output:

```
EL edgelake-operator +>

Process         Status       Details
---------------|------------|---------------------------------------------------------------------------|
TCP            |Running     |Listening on: 35.225.182.15:32148, Threads Pool: 6                         |
REST           |Running     |Listening on: 35.225.182.15:32149, Threads Pool: 5, Timeout: 20, SSL: False|
Operator       |Running     |Cluster Member: True, Using Master: 127.0.0.1:32048, Threads Pool: {A2}    |
Blockchain Sync|Running     |Sync every 30 seconds with master using: 127.0.0.1:32048                   |
Scheduler      |Running     |Schedulers IDs in use: [0 (system)] [1 (user)]                             |
Blobs Archiver |Running     |                                                                           |
MQTT           |Running     |                                                                           |
Message Broker |Not declared|No active connection                                                       |
SMTP           |Not declared|                                                                           |
Streamer       |Running     |Default streaming thresholds are 60 seconds and 10,240 bytes               |
Query Pool     |Running     |Threads Pool: 3                                                            |
Kafka Consumer |Not declared|                                                                           |
gRPC           |Not declared|                                                                           |
```
- Attach into operator node
make attach EDGELAKE_TYPE=operator
- Execute `test network` to validate that you can communicate with the nodes in the network:

```
EL edgelake-operator +> test network

Test Network
[****************************************************************]

Address              Node Type Node Name         Status
---------------------|---------|-----------------|------|
35.225.182.15:32148  |operator |edgelake-operator|  +   |
45.79.74.39:32048    |master   |edgelake-master  |  +   |
```
- Detach from the CLI with `ctrl-d`
Query Node(s)
- cd into docker-compose directory
- Update the params in docker_makefile/edgelake_query.env
- Key Params:
- NODE_NAME - each query node should have a unique value
- COMPANY_NAME
- LEDGER_CONN - should be set to the TCP connection of the Master Node
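As with the operators, a fragment of docker_makefile/edgelake_query.env might look like the following. The values are illustrative placeholders; keep the rest of the file's defaults:

```shell
# Illustrative values only - adjust to your deployment.
NODE_NAME=edgelake-query            # must be unique per query node
COMPANY_NAME=MyCompany
LEDGER_CONN=45.79.74.39:32048       # TCP address of the Master Node
```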
- Start Node
make up EDGELAKE_TYPE=query
Validate Query Node
- View node logs - validate that the following services are enabled: TCP, REST, and Blockchain sync
```
make logs EDGELAKE_TYPE=query
```

Expected output:

```
EL edgelake-query +>

Process         Status       Details
---------------|------------|---------------------------------------------------------------------------|
TCP            |Running     |Listening on: 23.239.12.151:32348, Threads Pool: 6                         |
REST           |Running     |Listening on: 23.239.12.151:32349, Threads Pool: 5, Timeout: 20, SSL: False|
Operator       |Not declared|                                                                           |
Blockchain Sync|Running     |Sync every 30 seconds with master using: 127.0.0.1:32048                   |
Scheduler      |Running     |Schedulers IDs in use: [0 (system)] [1 (user)]                             |
Blobs Archiver |Not declared|                                                                           |
MQTT           |Not declared|                                                                           |
Message Broker |Not declared|No active connection                                                       |
SMTP           |Not declared|                                                                           |
Streamer       |Not declared|                                                                           |
Query Pool     |Running     |Threads Pool: 3                                                            |
Kafka Consumer |Not declared|                                                                           |
gRPC           |Not declared|                                                                           |
```
- Attach into query node
make attach EDGELAKE_TYPE=query
- Execute `test network` to validate that you can communicate with the nodes in the network:

```
EL edgelake-query +> test network

Test Network
[****************************************************************]

Address              Node Type Node Name         Status
---------------------|---------|-----------------|------|
35.225.182.15:32148  |operator |edgelake-operator|  +   |
45.79.74.39:32048    |master   |edgelake-master  |  +   |
23.239.12.151:32348  |query    |edgelake-query   |  +   |
```
- Detach from the CLI with `ctrl-d`
Commands & Queries
- Operator Commands
- To see the data streams on a node
get streaming
- View the list of tables
get virtual tables
- View columns in a table - Replace [dbms name] with the name given to DEFAULT_DBMS in the config file.
get columns where dbms=[dbms name] and table = rand_data
- View data distribution (for each table)
get data nodes
Sample Queries - Replace [dbms name] with the name given to DEFAULT_DBMS in the config file.
- Get Row Count
run client () sql [dbms name] format=table "select count(*) from rand_data"
- View timestamp and value
run client () sql [dbms name] format=table "select timestamp, value from rand_data"
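The same queries can also be issued from outside the CLI over the query node's REST interface, where the node command travels in a `command` HTTP header (the AnyLog-style REST convention that EdgeLake follows). The sketch below only constructs and prints the curl invocation; the address and database name are placeholders taken from the examples above:

```shell
# Placeholders: set QUERY_REST to your query node's REST IP:port,
# and DBMS to the value of DEFAULT_DBMS in your config file.
QUERY_REST="23.239.12.151:32349"
DBMS="test"
CMD="sql ${DBMS} format=table \"select count(*) from rand_data\""
# Print the curl command to run against a live query node:
printf 'curl -X GET http://%s -H "command: %s" -H "User-Agent: AnyLog/1.23"\n' \
  "$QUERY_REST" "$CMD"
```

Running the printed command against a reachable query node should return the same row count as the CLI query.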