Prerequisites

This installation assumes two or three servers are available for BellaDati. The hardware configuration of all servers follows the BellaDati system requirements, and all servers run Windows.

BellaDati is installed on server No1; server No2 will be used as an additional cluster node. The PostgreSQL database engine is installed either on one of these servers or on a separate server No3.

 

Code Block
titleExample for the Cluster
Server No1:  IP 172.31.11.239	names: belladati-main, EC2AMAZ-MF3ICV0
Server No2:  IP 172.31.8.248	names: node1, EC2AMAZ-CJSA0TO
Server No3: not used in this example, the Database is installed locally with the BellaDati instance
 
All servers are in an internal private network.

The installation process consists of five blocks:

Table of Contents
maxLevel2
minLevel2

1. Preparation of the Environment

  • Add admin account to both servers (belladati-main & node1)

Info

 Control Panel > User Accounts > Manage Accounts

  • Set up the PATH to the GlassFish bin directories (and to the JRE on the belladati-main server, and to the pgsql\bin directory on the database server (PostgreSQL))
Info

Control Panel > System and Security > System > Advanced system setting > Environment Variables > System Variables > Path
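As a quick cross-check, the snippet below (a minimal sketch, assuming Python is available on the server; the paths shown are hypothetical examples) verifies that a directory appears in a Windows-style PATH value:

```python
def is_on_path(directory, path_value, sep=";"):
    """Return True if `directory` is listed in a Windows-style PATH string."""
    entries = [p.strip().rstrip("\\/").lower() for p in path_value.split(sep) if p.strip()]
    return directory.rstrip("\\/").lower() in entries

# Hypothetical PATH value; on a real server use os.environ["PATH"] instead
path = r"C:\Windows;C:\Program Files\BellaDati\glassfish5\glassfish\bin"
print(is_on_path(r"C:\Program Files\BellaDati\glassfish5\glassfish\bin", path))  # True
```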

  • Install JAVA on node1
Info

It is recommended to use the same version of Java on all servers in the cluster environment.

  • Set up hosts tables on both servers
Info

The hosts file is located at C:\Windows\System32\drivers\etc\hosts

Code Block
titleHosts file example for node1
127.0.0.1 node1
172.31.8.248 node1
172.31.8.248 EC2AMAZ-CJSA0TO
172.31.11.239 belladati-main
172.31.11.239 EC2AMAZ-MF3ICV0
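The hosts entries can be sanity-checked by parsing the file content; the sketch below (illustrative, assuming Python is available) mimics the typical first-match-wins lookup of the resolver:

```python
def parse_hosts(text):
    """Build a {name: ip} map from hosts-file text (first matching entry wins)."""
    mapping = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blank lines
        if not line:
            continue
        ip, *names = line.split()
        for name in names:
            mapping.setdefault(name, ip)  # keep the first entry, like the resolver
    return mapping

example = """127.0.0.1 node1
172.31.8.248 node1
172.31.8.248 EC2AMAZ-CJSA0TO
172.31.11.239 belladati-main
172.31.11.239 EC2AMAZ-MF3ICV0"""

print(parse_hosts(example)["belladati-main"])  # 172.31.11.239
```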
  • Set up the Windows Firewall on both server nodes (the firewalls need to allow connections between all servers and installed components in the cluster)
Info

Go to Control Panel > System and Security > Windows Firewall > Advanced settings > Inbound Rules

If you are in an internal network, enable "All traffic".

  • Enable access to the Database server from all servers

The cluster environment will utilize the already running database (belladati_db, user belladati_dbuser). Access to the database must be enabled from both nodes (belladati-main & node1).

Info

Check / edit the files pg_hba.conf and postgresql.conf to enable connections from both nodes:

  • pg_hba.conf: connections from both servers to the BellaDati database need to be enabled

Code Block
titlepg_hba hosts example
host     all            all             0.0.0.0/0               md5

Note: because we are in an internal private network, we have allowed access to all databases for all users from all IPs.

  • postgresql.conf: listen_addresses defines the IP addresses to listen on

Code Block
titleListen_addresses example
listen_addresses = '*'
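To make explicit what the 0.0.0.0/0 entry in the pg_hba example admits, here is a quick check with Python's ipaddress module (illustrative only; the 172.31.0.0/16 subnet is an assumption based on the example IPs and should be verified against your network):

```python
import ipaddress

# 0.0.0.0/0 from the pg_hba example matches every IPv4 address
allow_all = ipaddress.ip_network("0.0.0.0/0")
print(ipaddress.ip_address("172.31.8.248") in allow_all)   # True

# A tighter rule for the example network (assumed subnet 172.31.0.0/16)
subnet = ipaddress.ip_network("172.31.0.0/16")
print(ipaddress.ip_address("172.31.11.239") in subnet)     # True
print(ipaddress.ip_address("10.0.0.1") in subnet)          # False
```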
 


 

 

Info

It is recommended to test the connection to the BellaDati database (belladati_db, user belladati_dbuser) from both nodes (belladati-main and node1).
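One portable way to test basic reachability of the database port is a plain TCP probe; a minimal sketch (assuming Python is available and that PostgreSQL listens on its default port 5432, both assumptions to verify):

```python
import socket

def tcp_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical check from a node to the database host; adjust host/port
print(tcp_reachable("belladati-main", 5432))
```

Note that this only checks that the port accepts connections; a full test with the belladati_dbuser credentials should still be done with a PostgreSQL client such as psql.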

 

  • Set up the Application Properties file

The Application Properties are set up via the application.properties file, which is located at path_to_your_BellaDati_instance\domains\domain1\applications\belladati\WEB-INF\classes\conf\application.properties

(example: C:\Program Files\BellaDati\glassfish5\glassfish\domains\domain1\applications\belladati\WEB-INF\classes\conf\application.properties)

In case there are 2 servers in the cluster (e.g. belladati-main & node1) using port 2335, add the following lines to the application.properties file:

Code Block
application.actor.server.port=2335
tapestry.clustered-sessions=true
application.servers=belladati-main:2335,node1:2335

Note: servers can be identified by name or IP.
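Before restarting, the application.servers value can be sanity-checked; the helper below is an illustrative sketch (not part of BellaDati) that parses the comma-separated host:port list:

```python
def parse_servers(value):
    """Parse 'host1:2335,host2:2335' into a list of (host, port) tuples."""
    servers = []
    for item in value.split(","):
        host, _, port = item.strip().partition(":")
        if not host or not port.isdigit():
            raise ValueError("expected host:port, got %r" % item)
        servers.append((host, int(port)))
    return servers

print(parse_servers("belladati-main:2335,node1:2335"))
# [('belladati-main', 2335), ('node1', 2335)]
```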


  • Restart the BellaDati Application Server (GlassFish)

The application properties have been modified, so a restart of the application server is needed.

 

2. Configuration of SSH Communication

The SSH communication environment for Windows is set up using the Cygwin tools described at www.cygwin.com.

  • First, download setup-x86_64.exe on both the belladati-main and node1 servers and install it following the instructions (Install from Internet, all users, local package directory; Select Packages > Net > openssh 8.0p1-2)

The C:\cygwin64 directories are created. Set up the PATH (see above) to the bin subdirectory, e.g. C:\cygwin64\bin.

  • The Cygwin64 Terminal icon is now on the server desktops; run it as administrator
  • On the command line, run:   ssh-host-config   and then   net start cygsshd
  • To see help: cygrunsrv --help; to list services installed with cygrunsrv: cygrunsrv -L

Example of testing the service status (run on both the belladati-main and node1 servers):

$ cygrunsrv --query cygsshd

Service             : cygsshd

Display name        : CYGWIN cygsshd

Current State       : Running

Controls Accepted   : Stop

Command             : /usr/sbin/sshd -D

  • TEST the SSH connection from the Windows command line (the examples use the Windows user account "glassfish/password")

examples :

ssh connection to the belladati-main server (here initiated from node1)

C:\Users\glassfish>ssh -l glassfish belladati-main

glassfish@belladati-main's password:

glassfish@EC2AMAZ-MF3ICV0 ~            user glassfish can now operate on the belladati-main server via SSH

 

ssh connection from belladati-main to the node1 server

C:\Users\glassfish>ssh -l glassfish node1

glassfish@node1's password:

glassfish@EC2AMAZ-CJSA0TO ~            user glassfish is connected via SSH to node1 and can use the server

  • TEST Java commands from the Windows command line

from belladati-main:  ssh node1 'java -version',   ssh node1 'jar'

from node1:  ssh belladati-main 'java -version',   ssh belladati-main 'jar'

An adequate response from the remote server should appear.

Example:

C:\Users\glassfish>ssh node1 'java -version'

glassfish@node1's password:

java version "1.8.0_201"

Java(TM) SE Runtime Environment (build 1.8.0_201-b09)

Java HotSpot(TM) 64-Bit Server VM (build 25.201-b09, mixed mode)
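Since all cluster nodes should run the same Java version (see block 1), it can help to compare the version strings programmatically; below is a small illustrative parser for `java -version` output (assuming Python is available; on the servers the output would be collected via ssh node1 'java -version' and compared between nodes):

```python
import re

def java_version(output):
    """Extract the quoted version from `java -version` output, e.g. '1.8.0_201'."""
    match = re.search(r'version "([^"]+)"', output)
    return match.group(1) if match else None

sample = 'java version "1.8.0_201"\nJava(TM) SE Runtime Environment (build 1.8.0_201-b09)'
print(java_version(sample))  # 1.8.0_201
```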

    • Making the installation directory on node1 (Cygwin Terminal):

glassfish@EC2AMAZ-CJSA0TO ~

$ pwd

/home/glassfish

glassfish@EC2AMAZ-CJSA0TO ~

$ mkdir /home/glassfish/glassfish

$ ls

C:   glassfish

$ cd glassfish

glassfish@EC2AMAZ-CJSA0TO ~/glassfish 


Servers belladati-main and node1 are now ready for the cluster installation procedure that follows.


3. Cluster Installation

The following steps are performed in the Windows command prompt on server belladati-main, as user glassfish (administrator), using the asadmin utility.

 

asadmin is the utility for performing administrative tasks for the GlassFish Server; the path to asadmin was set up in block 1.

(Test that the path to asadmin is set up on the belladati-main server; Command Prompt:  C:\Users\glassfish>asadmin version

RESPONSE: Version = GlassFish Server Open Source Edition 5.0.1 (build 5) Command version executed successfully.)

 

    • Creating a password alias

C:\Users\glassfish>asadmin create-password-alias password

password is the name of the alias; the password itself is the user glassfish password.

Using Notepad, create a text file C:\Users\glassfish\password.txt with one line:

AS_ADMIN_SSHPASSWORD=${ALIAS=password}

C:\Users\glassfish>asadmin list-password-aliases       lists the alias

 

    • Creating the node

An SSH-type node named "node1" is created on the machine named node1.

C:\Users\glassfish>asadmin create-node-ssh --nodehost node1 --installdir /home/glassfish/glassfish --install=true --sshuser glassfish --passwordfile c:\Users\glassfish\password.txt node1

    • Testing node1 visibility

C:\Users\glassfish>asadmin list-nodes-ssh              RESPONSE: node1 SSH node1

C:\Users\glassfish>asadmin ping-node-ssh node1         RESPONSE: Successfully made SSH connection to node node1(node1)

 

    • Creating the cluster

C:\Users\glassfish>asadmin create-cluster belladati-cluster

    • Creating an instance (named instance-node1)

A JVM instance named instance-node1 is created on node "node1" for cluster "belladati-cluster".

C:\Users\glassfish>asadmin create-instance --node node1 --cluster belladati-cluster instance-node1

Ports associated with the instance: 26666,28080,24848,28686,23700,29009,23820,23920,28181

C:\Users\glassfish>asadmin list-instances              to see the instances; RESPONSE: instance-node1 not running

 

    • Creating the second instance of the cluster

The cluster is intended to have two nodes. So far there is just one cluster-ready instance (instance-node1), on the machine called node1; the next step creates the second instance on the machine belladati-main.

C:\Users\glassfish>asadmin create-local-instance --cluster belladati-cluster instance-local-node

C:\Users\glassfish>asadmin list-instances belladati-cluster    (to see the JVM instances)

4. Cluster Setup & Start

The cluster configuration (called belladati-cluster-config) so far contains only default parameters, taken over from the so-called default-config during the creation of belladati-cluster.

The parameters/values can be viewed in the Admin Console GUI of the DAS (Domain Admin Server), under Configurations > default-config and Configurations > belladati-cluster-config.

Not all of them are suitable for running the BellaDati application in a cluster environment.

For information: the parameters used by the BellaDati installation are in Configurations > server-config.

The steps below describe how to prepare a suitable environment for the cluster, i.e. how to modify and set up Configurations > belladati-cluster-config.

 

    • Creating resource references (databases)

C:\Users\glassfish>asadmin create-resource-ref --target belladati-cluster jdbc/belladati_db

To see the resources:  asadmin list-resource-refs belladati-cluster

    • Creating application references (the BellaDati application)

C:\Users\glassfish>asadmin create-application-ref --target belladati-cluster belladati

To see the application references:  asadmin list-application-refs belladati-cluster

 

    • Modifying the JVM parameter -Xmx (heap size, memory settings)

The JVM parameter -Xmx should be modified in accordance with the recommendations described in the documentation "Installing BellaDati on GlassFish".

C:\Users\glassfish>asadmin list-jvm-options --target belladati-cluster                 lists the JVM options

C:\Users\glassfish>asadmin delete-jvm-options --target belladati-cluster -Xmx.....    deletes the default -Xmx value

C:\Users\glassfish>asadmin create-jvm-options --target belladati-cluster -Xmx.....    creates the new -Xmx value, e.g. -Xmx5120m (5120 MB)
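When comparing the current and recommended heap settings, it can be handy to normalize -Xmx values to megabytes; below is an illustrative helper (the sizing itself should follow the BellaDati documentation):

```python
def xmx_to_mb(flag):
    """Convert a JVM -Xmx flag such as '-Xmx5120m' or '-Xmx5g' to megabytes."""
    value = flag[len("-Xmx"):].lower()
    units = {"k": 1 / 1024, "m": 1, "g": 1024}
    if value and value[-1] in units:
        return int(value[:-1]) * units[value[-1]]
    return int(value) / (1024 * 1024)  # a bare number means bytes

print(xmx_to_mb("-Xmx5120m"))  # 5120
print(xmx_to_mb("-Xmx5g"))     # 5120
```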

 

 

    • Modifying the thread pools http-thread-pool and thread-pool-1 (thread pool settings)

Recommended parameters/values for the following modifications are described in the documentation "Installing BellaDati on GlassFish".

C:\Users\glassfish>asadmin list-configs belladati-cluster          lists the name of the belladati-cluster configuration

 

        • list of thread pools

C:\Users\glassfish>asadmin list belladati-cluster-config.thread-pools.*

belladati-cluster-config.thread-pools

belladati-cluster-config.thread-pools.thread-pool.admin-thread-pool

belladati-cluster-config.thread-pools.thread-pool.http-thread-pool

belladati-cluster-config.thread-pools.thread-pool.thread-pool-1

 

        • getting the pool parameters/values (e.g. for thread-pool-1)

C:\Users\glassfish>asadmin get belladati-cluster-config.thread-pools.thread-pool.thread-pool-1*
belladati-cluster-config.thread-pools.thread-pool.thread-pool-1.classname=org.glassfish.grizzly.threadpool.GrizzlyExecutorService
belladati-cluster-config.thread-pools.thread-pool.thread-pool-1.idle-thread-timeout-seconds=900
belladati-cluster-config.thread-pools.thread-pool.thread-pool-1.max-queue-size=4096
belladati-cluster-config.thread-pools.thread-pool.thread-pool-1.max-thread-pool-size=400
belladati-cluster-config.thread-pools.thread-pool.thread-pool-1.min-thread-pool-size=5
belladati-cluster-config.thread-pools.thread-pool.thread-pool-1.name=thread-pool-1

 

        • setting max-thread-pool-size to the recommended 512 for http-thread-pool and for thread-pool-1

C:\Users\glassfish>asadmin set belladati-cluster-config.thread-pools.thread-pool.http-thread-pool.max-thread-pool-size=512

C:\Users\glassfish>asadmin set belladati-cluster-config.thread-pools.thread-pool.thread-pool-1.max-thread-pool-size=512

 

 

    • RESTARTING the GlassFish application server

The asadmin subcommands above create or modify configuration items; the DAS must be restarted for the changes to take effect.

 

C:\Users\glassfish>asadmin restart-domain

C:\Users\glassfish>asadmin list-domains

domain1 running

 

C:\Users\glassfish>asadmin list-clusters

belladati-cluster not running

 

C:\Users\glassfish>asadmin list-instances belladati-cluster

instance-node1        not running

instance-local-node   not running

 

  • Cluster start

 

C:\Users\glassfish>asadmin start-cluster --verbose belladati-cluster

start-instance instance-node1

start-instance instance-local-node

The command start-instance executed successfully for: instance-node1 instance-local-node

Command start-cluster executed successfully.

 

5. BellaDati Application Settings

You can now log in to BellaDati on both nodes. Note that a new port is used for BellaDati, e.g. localhost:28080/login instead of localhost:8080/login.

Info
titleLicense

Cluster settings require a dedicated license key which needs to be applied on both nodes. Contact BellaDati support for obtaining a license key with this option enabled.

Monitoring servers in the cluster can be done via a dedicated BellaDati screen:

e.g. localhost:28080/cluster

path: Administration > Monitoring > Cluster (all nodes in the cluster are displayed)

...