Updated on 5/21/2019
Eyeglass Clustered Agent vAPP Install and Upgrade Guide




Abstract:

This guide provides a step-by-step procedure for installing the Superna Eyeglass clustered agent vAPP used by Ransomware Defender and Easy Auditor.  NOTE: Only follow the steps in each section that names the product in the section menu.

What's New

  1. Syslog forwarding of ECA logs to Eyeglass
  2. Uses a FluentD container for local logging and forwarding
  3. Cluster startup now checks the HDFS configuration before starting and provides user feedback on validations
  4. 3-, 6- or 9-node ECA cluster control and upgrade
  5. Delayed startup option for containers
  6. Per-container statistics CLI command
  7. Kafka Manager UI

Definitions

  1. ECA - Eyeglass Clustered Agent - the entire stack that runs in separate VMs outside of Eyeglass and processes CEE data

Deployment and Topology Overview

Deployment Diagram (Ransomware Defender and Easy Auditor)

This diagram shows a three-node ECA cluster.




ECA Cluster Deployment Topologies with Isilon Clusters

Considerations:

  1. Centralized ECA deployment is easier to manage and monitor.  This requires the central ECA cluster to mount the audit folder at remote locations using NFS.  Coming soon is a single-VM ECA deployment option for remote sites that collects audit data and sends it centrally for analysis and storage.  This new option will build a single-instance ECA cluster, with the remote VMs upgraded and managed centrally.

Firewall Port Requirements Ransomware Defender

Blue lines = Service Broker communication heartbeat (port 23457)

Orange lines = Isilon REST API over TLS (port 8080) and SSH

Green lines = NFS v3 over UDP to retrieve audit events

Purple lines = HDFS ports to store audit data and security events

Pink lines = HBASE query ports from Eyeglass to the ECA cluster

Red lines = support logging from the ECA to Eyeglass



Additional Firewall Ports for Easy Auditor  

Firewall Rules and Direction Table

NOTE: These rules apply to traffic into and out of the VMs. All ports must be open between the VMs; private VLANs or firewalls between the VMs are not supported.

Port | Direction | Function
16000, 16020, 2181 (TCP) | Eyeglass → ECA | HBASE
5514 (UDP) | ECA → Eyeglass | syslog
443 (TCP) | ECA → Eyeglass | TLS
443 (HTTPS) | ECA → Internet | Downloading file extension list
8020 or 585 (TCP) | ECA → Isilon | HDFS
NFS (UDP) | ECA → Isilon | NFS export mounting

Additional Ports for Easy Auditor

Port | Direction | Function
6066 (TCP) | Eyeglass → ECA | Spark job engine
4040 (TCP) | Admin browser → Eyeglass | Running jobs monitor
18080 (TCP) | Admin browser → Eyeglass | Job History UI
8081 (TCP) | Admin browser → Eyeglass | Spark Workers UI
8080 (TCP) | Admin browser → Eyeglass | Spark Master UI
16010 (TCP) | Admin browser → Eyeglass | HBase Master UI
16030 (TCP) | Admin browser → Eyeglass | HBase Regionserver UI
18080 (TCP) | Admin browser → Eyeglass | Spark History Report UI
9000 (TCP) | Admin browser → Eyeglass | Kafka UI
2013 (TCP) | Admin browser ← Eyeglass | Wiretap Feature event stream
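
To spot-check that the required ports are open before installation, netcat can be run from the relevant source VM. This is a quick verification sketch with placeholder host names; it is not part of the product tooling:

nc -vz <eca_node_ip> 2181     (from the Eyeglass VM - HBASE port)
nc -vz <eca_node_ip> 16000    (from the Eyeglass VM - HBASE port)
nc -vz <isilon_hdfs_fqdn> 8020    (from an ECA node - HDFS port)

A "succeeded" result for each port indicates the firewall path is open in the required direction.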

IP Connection and Pool Requirements for Analytics database (Ransomware Defender, Easy Auditor)


ECA Cluster Sizing and Performance Considerations

ECA cluster Size by Application (Ransomware Defender, Easy Auditor)


ECA clusters are 3 to 9 nodes depending on the applications running on the cluster and the number of events per second generated by the clusters under management.  The minimum ECA node configurations that are supported for all deployments are documented below.  NOTE: New applications or releases with features that require more resources will require the ECA cluster to expand to handle multiple clusters or new application services.


Application Configuration | Number of ECA VM Nodes Required | ESX Hosts to Split VM Workload | ECA Node VM Size | Host Hardware Configuration Requirements

Ransomware Defender only | 3 ECA node cluster | 1 host | 4 x vCPU, 16 GB RAM, 80 GB disk per node | 2-socket CPU 2.0 GHz or greater; disk IO latency (average read and write) < 20 ms

Easy Auditor only (2) | 6 ECA node cluster | 2 hosts | 4 x vCPU, 16 GB RAM, 80 GB disk per node | 2-socket CPU 2.0 GHz or greater; disk IO latency (average read and write) < 20 ms

Ransomware Defender and Easy Auditor unified deployment | 6 ECA node cluster | 2 hosts | 4 x vCPU, 16 GB RAM, 80 GB disk per node | 2-socket CPU 2.0 GHz or greater; disk IO latency (average read and write) < 20 ms

Very high IO rate clusters, Ransomware Defender and Easy Auditor unified deployment | 9 ECA node cluster | 3 hosts | 4 x vCPU, 16 GB RAM, 80 GB disk per node | 2-socket CPU 2.0 GHz or greater; disk IO latency (average read and write) < 10 ms

(1) VMware OVA and Microsoft Hyper-V VHDX appliance platforms are available.

(2) Contact support for a reduced-footprint configuration with 3 VMs.

NOTE: The OVA default sets a resource limit of 18000 MHz for the vAPP, shared by all ECA VM nodes in the cluster.  This limit can be increased if the audit event load requires more CPU processing.  Consult support before making any changes in VMware.



VMware ESX Host Compute Sizing for ECA nodes (Ransomware Defender, Easy Auditor)

Audit data processing is a real-time, compute-intensive task. Auditing workload increases with file IO, and the number of users is a good metric for estimating file IO workload. The table below is based on an assumption of 1.25 events per second per user, with a peak of 1.5 events per second, and can be used as a guideline to help determine how many events per second your environment will produce.  This will help you determine the sizing of the VMs and their placement on ESX hardware.



Number of Active Concurrent Users per Cluster (1) | ECA VMs per Physical Host Recommendation | Estimated Events Guideline
1 to 1000 | 1 host | 5,000 * 1.25 = 6,250 events per second
5000 - 10000 | 2 hosts | 10,000 * 1.25 = 12,500 events per second
> 10000 | 3 hosts | Number of users * 1.25 events per second

(1) Active TCP connection with file IO to the cluster
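
For example, a site with 8,000 active concurrent users falls into the 5000 - 10000 band (2 hosts) and would be estimated at 8,000 * 1.25 = 10,000 events per second steady state, with peaks around 8,000 * 1.5 = 12,000 events per second.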



ECA Cluster Network Bandwidth Requirements to Isilon (Ransomware Defender, Easy Auditor)

Each ECA node processes audit events and writes data to the analytics database using HDFS on the same network interface.  Therefore the combined TX and RX constitutes the peak bandwidth requirement per node.  The table below is an example calculation of the minimum bandwidth requirements per ECA cluster.

HDFS Bandwidth estimates and guidelines for Analytics database network bandwidth access to Isilon.



Product Configuration | Audit Event Rate per Second (events per second per ECA cluster; NFS reads of audit events from Isilon into the ECA cluster) | Peak Bandwidth Requirement (audit data writes in Mbps per ECA cluster; HDFS writes out of the ECA cluster)

Ransomware Defender only | 2000 events/sec | Input to ECA: 50 Mbps; output from ECA: < 150 Mbps

Unified Ransomware Defender and Easy Auditor, steady state storing events | > 4000 events/sec | Input to ECA: 125 Mbps; output from ECA: < 350 Mbps

Easy Auditor analysis reports (long-running reports) | NA | Input to ECA (HDFS reads from Isilon): 800 Mbps - 1.5 Gbps while a report runs





Eyeglass VM Pre-requisites

Eyeglass VM

  1. Eyeglass must be deployed with or upgraded to the correct compatible release for the ECA release that is being installed.

Deployment Overview

The Eyeglass appliance must be installed and configured first. The ECA cluster runs in a separate group of VMs from Eyeglass. The ECA cluster is provisioned as an audit data handler on the Isilon cluster and receives all file change notifications.




Eyeglass will be responsible for taking action against the cluster and notifying users.

  • Isilon cluster stores analytics database (this can be the same cluster that is monitored for audit events)
  • Eyeglass appliance with Ransomware Defender agent licenses or Easy Auditor Agent Licenses
  • Isilon cluster with HDFS license to store the Analytics database for Easy Auditor

Overview of steps to install and configure:

  1. Configure Access Zone for Analytics database using an Access Zone with HDFS enabled
  2. Configure SmartConnect on the Access Zone
  3. Create an Eyeglass API token for the ECA to authenticate to Eyeglass
  4. Install ECA cluster
  5. Configure ECA cluster master config
  6. Push config to all nodes from master with ECA cli
  7. Start cluster
  8. Verify cluster is up and database is created
  9. Verify Eyeglass Service heartbeat and ECA cluster nodes have registered with Eyeglass

Preparation of Analytics Database or Index  (Ransomware Defender, Easy Auditor) (Required Step)


Prepare the Isilon Cluster for HDFS


Prerequisites

  1. Must add a minimum of 3 Isilon nodes to a new IP pool and assign the pool to the access zone created for the audit database
  2. Must configure a SmartConnect zone name with an FQDN
  3. Must complete DNS delegation for the FQDN assigned to the new pool for HDFS access
  4. Must enable the HDFS protocol on the new access zone (Protocols tab in the OneFS GUI)
  5. Must have an HDFS license applied to the cluster
  6. Must configure a snapshot schedule on the access zone path below, every day at midnight with 30-day retention
  7. Optional - Create a SyncIQ policy to replicate the database to a DR site.


  1. Activate a license for HDFS. When a license is activated, the HDFS service is enabled by default.
  2. Create an “eyeglass” Access Zone with path “/ifs/data/igls/analyticsdb” for HDFS connections from the Hadoop Eyeglass compute clients (ECA), and under Available Authentication Providers select only the Local System authentication provider.
  3. Select "Create zone base directory".
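
For reference, the equivalent access zone can usually also be created from the OneFS CLI. The exact option names vary by OneFS version, so treat the following as a sketch and confirm the syntax against your cluster's CLI help before running it:

isi zone zones create eyeglass --path=/ifs/data/igls/analyticsdb --create-path
isi zone zones view eyeglass     (confirm the path and that Local System is an authentication provider)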



NOTE: Ensure that Local System provider is at the top of the list. Additional AD providers are optional and not required.

NOTE: In OneFS 8.0.1 the Local System provider must be added using the command line.  After adding, the GUI can be used to move the Local System provider to the top of the list.

isi zone zones modify eyeglass --add-auth-providers=local:system

  1. Set the HDFS root directory in eyeglass access zone that supports HDFS connections.

         Command: 

(OneFS 7.2)

isi zone zones modify access_zone_name_for_hdfs --hdfs-root-directory=path_to_hdfs_root_dir


Example:

isi zone zones modify eyeglass --hdfs-root-directory=/ifs/data/igls/analyticsdb

(OneFS 8.0)

isi hdfs settings modify --root-directory=path_to_hdfs_root_dir --zone=access_zone_name_for_hdfs

Example:

isi hdfs settings modify --root-directory=/ifs/data/igls/analyticsdb/  --zone=eyeglass



  1. Create one IP pool for HDFS access with at least 3 nodes in the pool to ensure highly available access for each ECA node. The pool will be configured with static load balancing.  This pool will be used for datanode and storage node access by the ECA cluster for the Analytics database.

Command:

(OneFS 7.2)

isi networks create pool --name subnet0:hdfspool --ranges=172.16.88.241-172.16.88.242 --ifaces 1-4:10gige-1 --access-zone eyeglass --zone hdfs-mycluster.ad1.test --sc-subnet subnet0 --static


(OneFS 8.0)

isi network pools create groupnet0.subnet0.hdfspool  --ranges=172.22.1.22-172.22.1.22 --ifaces 1-4:10gige-1  --access-zone eyeglass --sc-dns-zone hdfs-mycluster.ad1.test --alloc-method static


A virtual HDFS rack is a pool of nodes on the Isilon cluster associated with a pool of Hadoop compute clients. To configure virtual HDFS racks on the Isilon Cluster:


NOTE: ip_address_range_for_client is the IP range used by the ECA cluster VMs.

Command:

(OneFS 7.2)

isi hdfs racks create /hdfs_rack_name --client-ip-ranges=ip_address_range_for_client --ip-pools=subnet:pool


isi networks modify pool --name subnet:pool --access-zone=access_zone_name_for_hdfs


Example:

isi hdfs racks create /hdfs-iglsrack0 --client-ip-ranges=172.22.1.18-172.22.1.20  --ip-pools=subnet0:hdfspool

isi networks modify pool --name  subnet0:hdfspool --access-zone=eyeglass


(OneFS 8.0)

isi hdfs racks create /hdfs_rack_name --zone=access_zone_name_for_hdfs --client-ip-ranges=ip_address_range_for_client --ip-pools=subnet:pool


Example:

isi hdfs racks create /hdfs-iglsrack0 --client-ip-ranges=172.22.1.18-172.22.1.20 --ip-pools=subnet0:hdfspool --zone=eyeglass

isi hdfs racks list --zone=eyeglass 

Name        Client IP Ranges        IP Pools

-------------------------------------------------------------

/hdfs-rack0 172.22.1.18-172.22.1.20 subnet0:hdfspool

-------------------------------------------------------------

Total: 1



  1. Create local Hadoop user in the System access zone.  

NOTE: User ID must be eyeglasshdfs.

Command:

(OneFS 7.2)

isi auth users create --name=eyeglasshdfs --provider=local --enabled=yes --zone=system

Example:

isi auth users create --name=eyeglasshdfs --provider=local --enabled=yes --password-expires=no --zone=system

(OneFS 8.0)

isi auth users create --name=eyeglasshdfs --provider=local --enabled=yes --zone=system

Example:

isi auth users create --name=eyeglasshdfs --provider=local --enabled=yes --password-expires=no --zone=system


  1. Login via SSH to the Isilon cluster and change the ownership and permissions on the HDFS path that will be used by the Eyeglass ECA clusters:
    1. chown -R eyeglasshdfs:'Isilon Users' /ifs/data/igls/analyticsdb/
    2. chmod -R 755 /ifs/data/igls/analyticsdb/
  2. Analytics cluster setup complete.
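
Before starting the ECA cluster, the ownership and mode can be confirmed from the same SSH session; the command below is a quick check and the description only the expected shape of the output:

ls -ld /ifs/data/igls/analyticsdb
(The owner should be eyeglasshdfs and the mode drwxr-xr-x, i.e. 755.)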

Installation and Configuration ECA Cluster (Required Step)

OVA Install Prerequisites:

Configuration Item - Notes
  • 3 or 6 VMs (see the scaling section) - the OVA file deploys 3 VMs; to build a 6 node cluster, deploy the OVA twice and move the VMs into the first cluster object in vCenter
  • vSphere 5.5 or higher
  • 1 IP address on the same subnet for each node
  • Gateway
  • Network mask
  • DNS IP
  • NTP server IP
  • IP address of Eyeglass
  • API token from Eyeglass
  • Unique cluster name (lower case, no special characters)


Installation Procedure

  1. The deployment is based on three node ECA appliances.
  2. Download the Superna Eyeglass™ OVF from https://www.supernaeyeglass.com/downloads
  3. Unzip into a directory on a machine with vSphere client installed
  4. Install the OVA using the steps below with the Windows vCenter client or the HTML vCenter web interface.
  5. NOTE: IF DEPLOYING A 6 OR 9 NODE CLUSTER FOR EASY AUDITOR, THE 3 VM vAPP OVA STEPS BELOW WILL BE DONE TWICE FOR A 6 NODE AND THREE TIMES FOR A 9 NODE CLUSTER.  THIS WILL CREATE 2 OR 3 vAPPS; THE VMs FROM EACH vAPP CAN BE MOVED INTO A SINGLE COMMON vAPP OBJECT IN VCENTER, AND THE EMPTY vAPP OBJECTS CAN THEN BE REMOVED FROM VCENTER.
  6. NOTE: The ECA name on the 2nd or 3rd vAPP deployment does not need to match the first vAPP ECA name.  Once completed, the ECA name used for the first ECA cluster will be synced to all VMs defined in the node 1 ECA cluster master configuration file.
  7. vCenter Windows client example (Deploy OVF Template wizard)
  8. vCenter HTML Example
  9. Deploy from a file or URL where the OVA was saved
  10. Using the vCenter client, set the required VM settings for datastore and networking.  NOTE: Leave the setting as Fixed IP address
  11. Complete the networking sections as follows:
    1. ECA Cluster name (NOTE: must be lowercase, less than 8 characters, letters only with no special characters)
    2. IMPORTANT: The ECA cluster name cannot include _ as this will cause some services to fail
    3. All VMs are on the same subnet
    4. Enter the network mask (it will be applied to all VMs)
    5. Gateway IP
    6. DNS server (must be able to resolve igls.<your domain name here>) (use the nameserver IP address)
    7. NOTE: Agent node 1 is the master node where all ECA CLI commands are executed for cluster management
  12. vCenter Windows client example
    1. vCenter HTML Client Example

  13. Example OVA vAPP after deployment

  14. OPTIONAL: If you are deploying a 6 or 9 node ECA cluster, repeat the deployment following the instructions above and set the IP addresses on the new VMs to expand the overall cluster IP range to 6 or 9 VMs.  The ECA name can be any value since it will be synced from node 1 of the first OVA cluster that was deployed.
    1. After deployment of the 2nd or 3rd ECA, open the vAPP and rename the VMs as follows:
      1. 6 or 9 Node ECA:
        1. EyeglassClusteredAgent 1 to EyeglassClusteredAgent 4
        2. EyeglassClusteredAgent 2 to EyeglassClusteredAgent 5
        3. EyeglassClusteredAgent 3 to EyeglassClusteredAgent 6
        4. ONLY If a 9 node ECA cluster continue to rename the 3rd OVA VM's inside the vAPP
        5. EyeglassClusteredAgent 1 to EyeglassClusteredAgent 7
        6. EyeglassClusteredAgent 2 to EyeglassClusteredAgent 8
        7. EyeglassClusteredAgent 3 to EyeglassClusteredAgent 9
      2. Now drag and drop the VMs inside each of the new vAPPs into the vAPP created for the first 3 VMs deployed.  Once completed, you can delete the empty vAPPs deployed for VMs 4-9.
      3. Once done the initial vAPP will look like this (9 node ECA shown).

      4. Done
  15. After Deployment is complete Power on the vAPP
    1. Ping each ip address to make sure each node has finished booting
    2. Login via SSH to the Master Node (Node 1) using the “ecaadmin” account default password 3y3gl4ss and run the following command:
    3. ecactl components install eca
    4. During this step a passphrase for SSH between nodes is generated, press the “Enter key” to accept an empty passphrase.
    5. A prompt for the Node 2 and Node 3 passwords appears on first boot only.  Enter the same default password “3y3gl4ss” when prompted
    6. On Eyeglass Appliance: generate a unique API Token from Superna Eyeglass REST API Window. Once a token has been generated for the ECA Cluster, it can be used in that ECA’s startup command for authentication.
    7. Login to Eyeglass and go to the main menu, Eyeglass REST API menu item. Create a new API token. This token will be used in the startup file for the ECA cluster to authenticate to the Eyeglass VM and register ECA services.
  16. On the ECA cluster master node (node 1 IP)
    1. Login to that VM. From this point on, commands will only be executed on the master node.
    2. On the master node, edit the file /opt/superna/eca/eca-env-common.conf (using vim) and change the following settings to reflect your environment. Replace the variables accordingly.
    3. Set the IP address or FQDN of the Eyeglass appliance and the API token (created above), and uncomment the parameter lines before saving the file. I.e.:
      1. export EYEGLASS_LOCATION=ip_addr_of_eyeglass_appliance
      2. export EYEGLASS_API_TOKEN=Eyeglass_API_token
    4. Verify the IP addresses for the nodes in your cluster. It is important that NODE_1 be the master (i.e. the IP address of the node you’re currently logged into). NOTE: Add an additional ECA_LOCATION_NODE_X=x.x.x.x entry for each additional node in the ECA cluster, depending on the ECA cluster size. All nodes in the cluster must be listed in the file.
      1. export ECA_LOCATION_NODE_1=ip_addr_of_node_1 (set by first boot from the OVF)
      2. export ECA_LOCATION_NODE_2=ip_addr_of_node_2 (set by first boot from the OVF)
      3. export ECA_LOCATION_NODE_3=ip_addr_of_node_3 (set by first boot from the OVF)
    5. Set the HDFS path to the SmartConnect name set up in the Analytics database configuration steps. Replace the FQDN hdfs_sc_zone_name with <your domain FQDN here>.
    6. NOTE: Do not change any other value.  Whatever is entered here is created as a subdirectory of the HDFS root directory that was set earlier.
    7. export ISILON_HDFS_ROOT='hdfs://hdfs_sc_zone_name:8020/eca1'
  17. Done: Continue on to the Auditing Configuration section.  A sample completed configuration file is sketched below.
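
A completed /opt/superna/eca/eca-env-common.conf might look like the sketch below. The IP addresses shown are placeholders for illustration only; substitute your own Eyeglass address, ECA node addresses, API token and HDFS SmartConnect name.

export EYEGLASS_LOCATION=192.168.1.50
export EYEGLASS_API_TOKEN=<api_token_created_in_eyeglass>
export ECA_LOCATION_NODE_1=192.168.1.61
export ECA_LOCATION_NODE_2=192.168.1.62
export ECA_LOCATION_NODE_3=192.168.1.63
export ISILON_HDFS_ROOT='hdfs://hdfsransomware.ad3.test:8020/eca1'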

Auditing Configuration (Ransomware Defender, Easy Auditor) (Required Step)


How to Configure Turbo Audit Event Ingestion NFS Export

This option is for Isilon clusters with thousands of users connected to the cluster or very high IO rates that generate many audit events per second.

Prerequisites for all mount methods:

  1. A SmartConnect name configured in the System zone for the NFS export created on /ifs/.ifsvar/audit/logs
  2. An IP pool set to dynamic for the NFS mount used by the ECA cluster nodes, for HA NFS mounts
  3. The NFS export is a read-only mount for each ECA node
  4. Follow either the manual ECA mount method OR the automounter option

Instructions:

  1. Create a read-only export on the Isilon cluster(s) using the following syntax, replacing <ECA_IP_1>, <ECA_IP_2> and <ECA_IP_3> with the IP addresses of nodes 1, 2 and 3.  NOTE: All clusters managed by this ECA cluster require an export created for audit event processing.  NOTE: If you have built a 6 or 9 node cluster, include the IP addresses of all nodes in the export client list to balance the workload across all nodes.  A worked example with sample IP addresses is shown after the command.

      isi nfs exports create /ifs/.ifsvar/audit/logs --root-clients="<ECA_IP_1>,<ECA_IP_2>,<ECA_IP_3>" --read-only=true -f --description "Easy Auditor Audit Log Export"
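
Example (hypothetical ECA node IP addresses shown for illustration; substitute the addresses of your own ECA nodes):

      isi nfs exports create /ifs/.ifsvar/audit/logs --root-clients="172.22.1.18,172.22.1.19,172.22.1.20" --read-only=true -f --description "Easy Auditor Audit Log Export"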
  2. Configure and verify the NFS automounter with Turbo Audit (use this automounter option OR the manual mount steps, not both):
    • Requires an NFS automount capable release and the following set in eca-env-common.conf:
    • export USE_AUDIT_NFS_WATCH=true
    • Then bring the cluster down: ecactl cluster down
    • Then bring the cluster up: ecactl cluster up
  3. Wait 1-2 minutes to ensure the licensed cluster list is downloaded from Eyeglass to the ECA.  The ECA uses the licensed cluster list to automount the exports from the previous step.  Use the steps below to verify the mounts have been created.  This solution allows a single ECA cluster to manage more than one cluster using Turbo Audit data processing.
  4. NOTE: The IP address or FQDN used to add clusters to Eyeglass will be used for the NFS audit log export mount.
  5. How to check for a successful mount (NFS automount only):
    • sudo -s  (enter the ecaadmin password)
    • ls -R /opt/superna/mnt/  (the command should list files on the exported mount)
    • You should see a cluster GUID folder under the mnt directory for each cluster that is licensed
    • exit  (to return to the ecaadmin shell)
    • Run the logs command to verify the mount was successful: ecactl logs --follow audit-nfs-watch
    • Verify the output shows the mount was successful
    • If not successful, double check that the export and its client list are correct

ECA Cluster Startup Post Configuration (Ransomware Defender and Easy Auditor) (Required Step)

ecactl cluster up   (NOTE: this can take 5 minutes to complete)


The startup script does the following:

  1. Reads the config file and checks that the config data is not empty
  2. Checks if the HDFS pool SmartConnect name is resolvable
  3. Checks if the ECA can connect to Isilon using netcat on port 8020 for HDFS access
  4. Tests HDFS permissions with a read and write test, and aborts if the Analytics database IO test fails
  5. Mounts HDFS data as the "eyeglasshdfs" user and checks that the user has permissions
  6. Starts services on each node in the cluster
  7. Verifies the Analytics database is initialized correctly before continuing the boot process
  8. Verify no errors appear in the startup procedure; the cluster up steps are all logged and can be sent to support
  9. A successful startup example is shown below.
  • Configuration pushed
  • Starting services on all cluster nodes.
  • Checking HDFS connectivity
  • Starting HDFS connectivity tests...
  • Reading HDFS configuration data...
  • ********************************************************************
  • HDFS root path: hdfs://hdfsransomware.ad3.test:8020/eca1/
  • HDFS name node: hdfsransomware.ad3.test
  • HDFS port: 8020
  • ********************************************************************
  • Resolving HDFS name node....
  • Server:                192.168.1.249
  • Address:        192.168.1.249#53
  • Non-authoritative answer:
  • Name:        hdfsransomware.ad3.test
  • Address: 172.31.1.124
  • Checking connectivity between ECA and Isilon...
  • Connection to hdfsransomware.ad3.test 8020 port [tcp/intu-ec-svcdisc] succeeded!
  • ********************************************************************
  • Initiating mountable HDFS docker container...

Verifying ECA Cluster Status

  1. On the master node, run the following commands:
    1. ecactl db shell
    2. Once in the shell, execute the command: status
    3. The output should show 1 active master and 2 backup masters
  2. Type ‘exit’ to leave the shell.
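
For reference, a healthy three-node deployment typically returns status output similar to the line below (illustrative only; server counts and load values will differ in your environment):

1 active master, 2 backup masters, 3 servers, 0 dead, 2.0000 average load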

Verifying ECA containers are running

  1. Command: “ecactl containers ps”


Check cluster status and that all analytics tables exist (Ransomware Defender, Easy Auditor) (Optional Step)

  1. Command: ‘ecactl cluster status’
  2. This command verifies all containers are running on all nodes and verifies each node can mount the tables in the Analytics database.
  3. If any error conditions appear, open a support case to resolve, or retry with the steps below:
  1. ecactl cluster down
  2. ecactl cluster up
  3. Send ECA cluster startup text to support

Verify Connection to the Eyeglass appliance, check the “Manage services” icon (Required Step)

  1. Login to Eyeglass as admin user
  2. Check the status of the ECA cluster: click the ‘Manage Services’ icon and click + to expand the containers or services for each ECA node.
  3. Verify the ip addresses of the ECA nodes are listed.
  4. Verify all cluster nodes show and all docker containers show green health.
  5. NOTE: Hbase status can take 5 minutes to transition from warning to Green.
  6. Move to the Isilon Protocol Audit Configuration


Isilon Protocol Audit Configuration (Required Step)

Overview

This section configures the Isilon file auditing required to monitor user behaviour.  The audit protocol can be enabled independently on each access zone that requires monitoring.

Enable and configure Isilon protocol audit (Required Step)


  1. Enable Protocol Access Auditing.

Command:

(OneFS 7.2)

isi audit settings modify --protocol-auditing-enabled {yes | no}

Example:

isi audit settings modify --protocol-auditing-enabled=yes

(OneFS 8.0)

isi audit settings global modify --protocol-auditing-enabled {yes | no}

Example:

isi audit settings global modify --protocol-auditing-enabled=yes

  1. Select the access zones that will be audited. The audited access zones are the ones accessed by SMB/NFS clients.

Command:

(OneFS 7.2)

isi audit settings modify --audited-zones=audited_access_zone

Example:

isi audit settings modify --audited-zones=sales

(OneFS 8.0)

isi audit settings global modify --audited-zones=audited_access_zone

Example:

isi audit settings global modify --audited-zones=sales,system


  1. OneFS 7.2 or 8.0 GUI auditing configuration:
  • Click Cluster Management > Auditing
  • In the Settings area, select the Enable Configuration Change Auditing and Enable Protocol Access Auditing checkboxes.
  • In the Audited Zones area, click Add Zones.
  • In the Select Access Zones dialog box, select one or more access zones and click Add Zones (do not add the Eyeglass access zone).


Time Configuration Isilon, Eyeglass, ECA cluster (Required Step)

Overview: For accurate auditing with Ransomware Defender or Easy Auditor, time synchronization between all components is a critical step.  NTP should be used on all VMs, and all should use the same NTP source.

  1. Verify the Isilon clusters being monitored are using an NTP server.  Many Internet time sources exist, or use an internal enterprise NTP server IP address
  2. Enable NTP on all Isilon clusters
  3. On the Eyeglass VM, configure the same NTP servers used by Isilon
  4. On each ECA VM, repeat the YaST steps to configure NTP on each VM (a sketch of a typical configuration check is shown below)
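
On the SUSE-based ECA VMs, NTP is typically configured through YaST; the commands below are a sketch only, since module and tool names can differ by appliance release:

yast ntp-client     (add the same NTP server used by the Isilon clusters and Eyeglass, then save)
ntpq -p             (verify the node is synchronizing with the configured server)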

How to Fix Time Skew Error in Manage Services Icon on Eyeglass (Optional Step)

If NTP and ESX host time sync conflict, it may be necessary to disable ESX host time sync for the ECA nodes to allow them to get NTP time instead of ESX host time. This ensures that DB queries and each node have consistent time, in sync across the Eyeglass VM and ECA nodes.

How to disable VMware vSphere ESXi host time sync

For ECA:

  1. Initiate ecactl cluster down
  2. Power down ECA vApp
  3. Change VMware time sync configuration as below: 
  4. Click on Virtual Machine 
  5. Right click on ECA node1 
  6. Click Edit Settings.. 
  7. Click on Option 
  8. Click VMware Tools 
  9. Uncheck ‘Synchronize guest time with host’ 
  10. Click OK
  11. Power up vApp
  12. Initiate ecactl cluster up





NOTE: Apply this change on ALL ECA nodes.  Perform the same steps for the Eyeglass appliance if needed.

New changes may take up to 15 minutes to take effect.

In some cases, you may need to restart ntpd after cluster up.



Backup the Audit Database with SnapshotIQ (Required for Easy Auditor) (Required Step)

Use the Isilon native SnapshotIQ feature to back up the audit data.  The procedure is documented here.


Expanding ECA Cluster for Higher Performance (Easy Auditor) (optional)


The ECA cluster is based on a cluster technology for reading and writing data (Apache HBASE) and searching (Apache Spark).

Expanding the cluster increases search performance for large databases (a large database contains over 1 billion records).

NOTE: ECA clusters can be expanded to 9 nodes.


How to Check ECA node Container CPU and Memory Usage (Optional)

  1. Login to the ECA node as ecaadmin
  2. Type the CLI command below to see a real-time view of container resource utilization
  3. ecactl stats

How to expand Easy Auditor cluster size (optional)

Follow these steps to add 3 or 6 more VMs to increase analytics performance for higher event rates or long-running queries on a large database. Deploy the ECA OVA again, move the new VMs into the existing vAPP, and remove the vAPP created during the new deployment.  NOTE: The ECA name does not matter during the new OVA deployment since it will be synced from the existing ECA cluster during the cluster up procedure.


  1. Login to the master ECA node
  2.  ecactl cluster down
  3. Deploy one or two more ECA OVAs. No special config needs to be added on the newly deployed ECA OVA.
  4. vim /opt/superna/eca/eca-env-common.conf to add more node locations:
  5. export ECA_LOCATION_NODE_4=<IP>
  6. export ECA_LOCATION_NODE_5=<IP>
  7. Add entries for nodes 4 to 9, depending on the number of VMs added to the cluster.
  8. ecactl components configure-nodes
  9. ecactl cluster up
  10. This will expand HBASE and Spark containers for faster read and analytics performance
  11. Login to eyeglass and open managed services
  12. Now HBASE needs to balance the load across the cluster for improved read performance.
    1. Login to the Region Master VM, typically node 1
    2. Browse to http://x.x.x.x:16010/ and verify that each region server (6 total) is visible in the UI
    3. Verify each has assigned regions
    4. Verify requests are visible to each region server
    5. Verify the Tables section shows no regions offline and no regions in the other column
    6. Example screenshots show 6 region servers with regions and a normal-looking table view
  13. done.

    How to Enable Real-time Monitoring of ECA Cluster Performance (If directed by support)


    Use this procedure to enable the container monitor to determine if CPU GHz are set correctly for query and write-to-Isilon performance.

    1. To enable cadvisor, add the following line to eca-env-common.conf:
    2. export LAUNCH_MONITORING=true
    3. This will launch cadvisor on all cluster nodes.
    4. If you want to launch it on a single node, login to that node and execute:
    5. ecactl containers up -d cadvisor
    6. Once the cadvisor service is running, login to http://<IP OF ECA NODE>:9080 to see the web UI.
    7. Done.

    Ransomware and Easy Auditor IGLS CLI command Reference


    See Eyeglass CLI commands for Ransomware Defender and Easy Auditor


    How to Upgrade the ECA cluster For Easy Auditor and Ransomware Defender

    Preparation steps:

    1. Download the latest GA release for the ECA upgrade following the instructions here: https://support.superna.net



    1. Log in to ECA Node 1 using the "ecaadmin" credentials.
    2. Issue the following command: ecactl cluster down
    3. Wait for the procedure to complete on all involved ECA nodes.
    4. Once the above steps complete:
      1. Use WinSCP to transfer the .run file to node 1 (Master Node) in the /tmp directory
      2. cd /tmp
      3. chmod +x ecaxxxxxxx.run (xxxx is the name of the file)
      4. ./ecaxxxxxx.run (xxxx is the name of the file)
      5. Enter the password for ecaadmin when prompted
      6. Wait for the installation to complete
    5. Now bring up the cluster again: ecactl cluster up
    6. Wait until all services are started on all nodes
    7. Once completed, login to Eyeglass, open the Managed Services icon, and verify all services show green and online.  If any services show warning or inactive, wait at least 5 minutes; if the condition persists, open a support case.
    8. If the above step passes and all ECA nodes show green online, test that Security Guard runs in Ransomware Defender or RoboAudit runs in Easy Auditor.
    9. Consult the admin guide of each product to start a manual test of these features.
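
    As a quick reference, the command sequence on node 1 looks roughly like the sketch below; the actual .run file name comes from the release you downloaded:

    ecactl cluster down
    cd /tmp
    chmod +x eca<version>.run
    ./eca<version>.run     (enter the ecaadmin password when prompted)
    ecactl cluster up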




    Copyright Superna LLC