Eyeglass Clustered Agent vAPP Install and Upgrade Guide

Updated on 8/15/2019


Abstract:

This guide provides a step-by-step procedure for installing the Superna Eyeglass clustered agent vAPP used by Ransomware Defender and Easy Auditor.  NOTE: Only follow the steps in sections that name the product you are installing.

What's New

  1. Syslog forwarding of ECA logs to Eyeglass
  2. Uses FluentD container for local logging and forwarding
  3. Cluster startup now checks HDFS configuration before starting and provides user feedback on validations
  4. 3, 6 or 9 ECA node control and upgrade
  5. Delayed startup option for containers
  6. Statistics per container CLI command
  7. Kafka manager UI
  8. New in 2.5.5: Ransomware Defender does not require HDFS or a SmartConnect name pool for HDFS, but if Easy Auditor is also installed then the HDFS pool is still required.

Definitions

  1. ECA -  Eyeglass Clustered Agent - the entire stack that runs in a separate VM outside of Eyeglass that processes CEE data

Deployment and Topology Overview

Deployment Diagram (Ransomware Defender and Easy Auditor)

This diagram shows a three node ECA cluster




ECA Cluster Deployment Topologies with Isilon Clusters

Considerations:

  1. Centralized ECA deployment is easier to manage and monitor.  This requires the central ECA cluster to mount the audit folder at remote locations using NFS.  Coming soon is a single VM ECA deployment option for remote sites to collect audit data and send it centrally for analysis and storage.  This new option will build a single instance ECA cluster with remote VM's upgraded and managed centrally.

Firewall Port Requirements Ransomware Defender

Blue lines = Service broker communication heartbeat 23457

Orange Lines = Isilon REST API over TLS 8080 and SSH

Green lines = NFS UDP v3 to retrieve audit events

Purple Lines  = HDFS ports to store audit data and security events.

Pink Lines = HBASE query ports from Eyeglass to the ECA cluster.

Red lines = support logging from ECA to Eyeglass.


Unified Ransomware Defender and Easy Auditor Firewall ports



Ransomware Defender Only Deployment Firewall ports



Additional Firewall Ports for Easy Auditor  

Firewall Rules and Direction Table

NOTE: These rules apply to traffic into and out of the VM. All ports must be open between the VM's; private VLAN's or firewalls between the VM's are not supported.

Ransomware Defender Only

Port                 Direction                    Function

2181 (TCP)           Eyeglass → ECA               kafka
5514 (UDP)           ECA → Eyeglass               syslog
443 (TCP)            ECA → Eyeglass               TLS
443 (HTTPS)          ECA → Internet               Downloading file extension list
NFS (UDP)            ECA → Isilon                 NFS export mounting

Additional Ports for Easy Auditor

8020 or 585 (TCP)    ECA → Isilon                 HDFS
16000, 16020         Eyeglass → ECA               hbase
6066 (TCP)           Eyeglass → ECA               Spark job engine
4040 (TCP)           Admin browser → Eyeglass     Running jobs monitor
18080 (TCP)          Admin browser → Eyeglass     Job History UI
8081 (TCP)           Admin browser → Eyeglass     Spark Workers UI
8080 (TCP)           Admin browser → Eyeglass     Spark Master UI
16010 (TCP)          Admin browser → Eyeglass     HBase Master UI
16030 (TCP)          Admin browser → Eyeglass     HBase Regionserver UI
18080 (TCP)          Admin browser → Eyeglass     Spark History Report UI
9000 (TCP)           Admin browser → Eyeglass     Kafka UI
2013 (TCP)           Admin browser ← Eyeglass     Wiretap Feature event stream

IP Connection and Pool Requirements for Analytics database (Easy Auditor)


ECA Cluster Sizing and Performance Considerations

ECA cluster Size by Application (Ransomware Defender, Easy Auditor)


ECA clusters are 3-9 nodes depending on the applications running on the cluster and the number of events per second generated by the clusters under management.    The minimum ECA node configurations that are supported for all deployments are documented below.   NOTE: New applications or releases with features that require more resources will require the ECA cluster to expand to handle multiple clusters or new application services.


Application Configuration: Ransomware Defender only
Number of ECA VM nodes Required: 3 ECA node cluster
ESX hosts to split VM Workload: 1
ECA Node VM Size: 4 x vCPU, 16G RAM, 30G OS partition + 80G disk
Host Hardware Configuration Requirements: 2 socket CPU 2000 GHZ or greater, Disk IO latency average read and write < 20

Application Configuration: Easy Auditor only 2
Number of ECA VM nodes Required: 6 ECA node cluster
ESX hosts to split VM Workload: 2
ECA Node VM Size: 4 x vCPU, 16G RAM, 30G OS partition + 80G disk
Host Hardware Configuration Requirements: 2 socket CPU 2000 GHZ or greater, Disk IO latency average read and write < 20

Application Configuration: Ransomware Defender and Easy Auditor unified deployment
Number of ECA VM nodes Required: 6 ECA node cluster
ESX hosts to split VM Workload: 2
ECA Node VM Size: 4 x vCPU, 16G RAM, 30G OS partition + 80G disk
Host Hardware Configuration Requirements: 2 socket CPU 2000 GHZ or greater, Disk IO latency average read and write < 20

Application Configuration: Very high IO rate clusters - Ransomware Defender and Easy Auditor unified deployment
Number of ECA VM nodes Required: 9 ECA node cluster
ESX hosts to split VM Workload: 3
ECA Node VM Size: 4 x vCPU, 16G RAM, 30G OS partition + 80G disk
Host Hardware Configuration Requirements: 2 socket CPU 2000 GHZ or greater, Disk IO latency average read and write < 10

1 VMware OVA and Microsoft Hyper-V VHDX appliance platforms are available. 

2 Contact support for a reduced footprint configuration with only 3 VM's for low event rate environments.

NOTE: The OVA sets a default resource limit of 18000 MHz shared by all ECA VM nodes in the cluster.  This limit can be increased if the audit event load requires more CPU processing.  Consult support before making any changes in VMware.



VMware ESX Host Compute Sizing for ECA nodes (Ransomware Defender, Easy Auditor)

Audit data processing is a real-time, compute-intensive task. Auditing workload increases with file IO, and the number of users is a good metric for estimating file IO workload. The table below is based on an assumption of 1.25 events per second per user, with a peak of 1.5 events per second per user, and can be used as a guideline to estimate how many events per second your environment will produce.  This will help you determine VM sizing and placement on ESX hardware.


VMware or Hyper-V Host Requirements

  1. NOTE: VMware environments with DRS and SDRS should exempt the ECA vApp from dynamic relocation as a best practice.  As a real-time application with time skew requirements between VM's for processing and database operations, DRS movement of running VM's is not recommended.  For maintenance purposes it is OK to migrate VM's as needed.


Number of active concurrent Users per cluster 1    ECA VM per Physical Host Recommendation    Estimated Events Guideline

1 to 5000                                          1 Host                                     = 5,000 * 1.25 = 6,250 events per second
5000 - 10000                                       2 Hosts                                    = 10,000 * 1.25 = 12,500 events per second
> 10000                                            3 Hosts                                    = Number of users * 1.25 events/second

1  Active tcp connection with file IO to the cluster



ECA Cluster Network Bandwidth Requirements to Isilon (Ransomware Defender, Easy Auditor)

Each ECA node processes audit events and writes data to the analytics database using HDFS on the same network interface.  Therefore the combined TX and RX constitutes the peak bandwidth requirement per node.  The table below is an example calculation of the minimum bandwidth requirements per ECA VM.  

HDFS Bandwidth estimates and guidelines for Analytics database network bandwidth access to Isilon.



Product Configuration: Ransomware Defender only
Audit Event rate Per Second: 2000 events
Peak Bandwidth requirement (input, NFS reading events from Isilon to the ECA cluster): 50 Mbps into the ECA cluster
Peak Bandwidth requirement (output, HDFS writing events): < 150 Mbps out of the ECA cluster

Product Configuration: Unified Ransomware Defender and Easy Auditor - steady state storing events
Audit Event rate Per Second: > 4000 events
Peak Bandwidth requirement (input, NFS reading events from Isilon to the ECA cluster): 125 Mbps into the ECA cluster
Peak Bandwidth requirement (output, HDFS writing events): < 350 Mbps out of the ECA cluster

Product Configuration: Easy Auditor Analysis Reports (long running reports)
Audit Event rate Per Second: NA
Peak Bandwidth requirement: 800 Mbps - 1.5 Gbps read into the ECA cluster (HDFS from Isilon) while a report runs





Eyeglass VM Pre-requisites - Mandatory Step

Eyeglass License Requirements

  1. Eyeglass must be deployed with or upgraded to the correct compatible release for the ECA release that is being installed.
  2. Eyeglass Licences for Easy Auditor or Ransomware Defender must be added to Eyeglass VM.
    1. Login to Eyeglass
    2. Open Licence manager Icon
    3. Follow how to download license key instructions using the email license token provided with your purchase. 
    4. Upload the license key zip file from Step #3
    5. Web page will refresh
    6. Open License manager
    7. Select Licensed devices tab
    8. Set the license status for each product to User Licensed for clusters that should be monitored by Ransomware Defender or Easy Auditor (depending on which license keys you purchased).
    9. Set the license status for each product to Unlicensed for each cluster that should not be licensed.  This is required to ensure licenses are applied to the correct cluster and blocked from being applied to the incorrect cluster.


Deployment Overview

The Eyeglass appliance must be installed and configured first. The ECA Cluster runs in a separate group of VM’s from Eyeglass. The ECA Cluster is provisioned as an audit data handler on the Isilon cluster and receives all file change notifications.




Eyeglass will be responsible for taking action against the cluster and notifying users.

  • Isilon cluster stores analytics database (this can be the same cluster that is monitored for audit events)
  • Eyeglass appliance with Ransomware Defender agent licenses or Easy Auditor Agent Licenses
  • Isilon cluster with HDFS license to store the Analytics database for Easy Auditor only (Ransomware Defender only deployments no longer need HDFS pool as of 2.5.5 or later)

Overview of steps to install and configure Easy Auditor or Unified Ransomware Defender and Easy Auditor:

  1. Configure Access Zone for Analytics database using an Access Zone with HDFS enabled
  2. Configure SmartConnect on the Access Zone
  3. Create Eyeglass api token for ECA to authenticate to Eyeglass
  4. Install ECA cluster
  5. Configure ECA cluster master config
  6. Push config to all nodes from master with ECA cli
  7. Start cluster
  8. Verify cluster is up and database is created
  9. Verify Eyeglass Service heartbeat and ECA cluster nodes have registered with Eyeglass

Preparation of Analytics Database or Index  (Easy Auditor) (Required Step)


Prepare the Isilon Cluster for HDFS


Prerequisites

  1. Easy Auditor only
  2. Must add a minimum of 3 Isilon nodes to a new IP pool and assign the pool to the access zone created for the audit database
  3. Must configure smartconnect zone name with FQDN
  4. Must complete DNS delegation to the FQDN assigned to the new pool for HDFS access (a verification example follows this list)
  5. Must Enable HDFS protocol on the new access zone (protocols tab in OneFS gui) Easy Auditor only
  6. Must have HDFS license applied to the cluster
  7. Must configure Snapshot schedule on the access zone path below every day at midnight with 30 day retention
  8. Optional - Create SyncIQ policy to replicate the db to a DR site.
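
The DNS delegation can be checked from any ECA node before cluster startup.  This is a minimal verification sketch using the example SmartConnect zone name hdfs-mycluster.ad1.test that appears in the commands later in this section; substitute your own FQDN.  The name should resolve to an IP address in the new HDFS pool:

nslookup hdfs-mycluster.ad1.test

If the lookup fails or returns no answer, review the DNS delegation before continuing.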


  1. Activate a license for HDFS. When a license is activated, the HDFS service is enabled by default.
  2. Create an “eyeglass” Access Zone with path “/ifs/data/igls/analyticsdb” for the HDFS connections from the Eyeglass hadoop compute clients (ECA) and under Available Authentication Providers, select only the Local System authentication provider.  
  3. Select create zone base directory



NOTE: Ensure that Local System provider is at the top of the list. Additional AD providers are optional and not required.

NOTE: In OneFS 8.0.1 the Local System provider must be added using the command line.  After adding, the GUI can be used to move the Local System provider to the top of the list.

isi zone zones modify eyeglass --add-auth-providers=local:system

  4. Set the HDFS root directory in the eyeglass access zone that supports HDFS connections.

         Command: 

(OneFS 7.2)

isi zone zones modify access_zone_name_for_hdfs --hdfs-root-directory=path_to_hdfs_root_dir


Example:

isi zone zones modify eyeglass --hdfs-root-directory=/ifs/data/igls/analyticsdb

(Onefs 8.0)

isi hdfs settings modify --root-directory=path_to_hdfs_root_dir --zone=access_zone_name_for_hdfs

Example:

isi hdfs settings modify --root-directory=/ifs/data/igls/analyticsdb/  --zone=eyeglass
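
Optionally, verify the setting before continuing.  This check assumes OneFS 8.0 or later and that the isi hdfs settings view command is available in your release; it should display the root directory configured above:

isi hdfs settings view --zone=eyeglass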



  5. Create one IP pool for HDFS access with at least 3 nodes in the pool to ensure high availability access for each ECA node.  The pool will be configured with static load balancing.   This pool will be used for datanode and storage node access by the ECA cluster for the Analytics database.

Command:

(OneFS 7.2)

isi networks create pool --name subnet0:hdfspool --ranges=172.16.88.241-172.16.88.242 --ifaces 1-4:10gige-1 --access-zone eyeglass --zone hdfs-mycluster.ad1.test --sc-subnet subnet0 --static


(Onefs 8.0)

isi network pools create groupnet0.subnet0.hdfspool  --ranges=172.22.1.22-172.22.1.22 --ifaces 1-4:10gige-1  --access-zone eyeglass --sc-dns-zone hdfs-mycluster.ad1.test --alloc-method static


A virtual HDFS rack is a pool of nodes on the Isilon cluster associated with a pool of Hadoop compute clients. To configure virtual HDFS racks on the Isilon Cluster:


NOTE: The ip_address_range_for_client = the IP range used by the ECA cluster VM’s.

Command:

(OneFS 7.2)

isi hdfs racks create /hdfs_rack_name --client-ip-ranges=ip_address_range_for_client --ip-pools=subnet:pool


isi networks modify pool --name subnet:pool --access-zone=access_zone_name_for_hdfs


Example:

isi hdfs racks create /hdfs-iglsrack0 --client-ip-ranges=172.22.1.18-172.22.1.20  --ip-pools=subnet0:hdfspool

isi networks modify pool --name  subnet0:hdfspool --access-zone=eyeglass


(Onefs 8.0)

isi hdfs racks create /hdfs_rack_name --zone=access_zone_name_for_hdfs --client-ip-ranges=ip_address_range_for_client --ip-pools=subnet:pool


Example:

isi hdfs racks create /hdfs-iglsrack0 --client-ip-ranges=172.22.1.18-172.22.1.20 --ip-pools=subnet0:hdfspool --zone=eyeglass

isi hdfs racks list --zone=eyeglass 

Name        Client IP Ranges        IP Pools

-------------------------------------------------------------

/hdfs-rack0 172.22.1.18-172.22.1.20 subnet0:hdfspool

-------------------------------------------------------------

Total: 1



  6. Create a local Hadoop user in the System access zone.  

NOTE: User ID must be eyeglasshdfs.

Command:

(OneFS 7.2)

isi auth users create --name=eyeglasshdfs --provider=local --enabled=yes --zone=system

Example:

isi auth users create --name=eyeglasshdfs --provider=local --enabled=yes --password-expires=no --zone=system

(Onefs 8.0)

isi auth users create --name=eyeglasshdfs --provider=local --enabled=yes --zone=system

Example:

isi auth users create --name=eyeglasshdfs --provider=local --enabled=yes --password-expires=no --zone=system


  7. Login via SSH to the Isilon cluster to change the ownership and permissions on the HDFS path that will be used by the Eyeglass ECA cluster (a verification check follows this list). 
    1. chown -R eyeglasshdfs:'Isilon Users' /ifs/data/igls/analyticsdb/ 
    2. chmod -R 755 /ifs/data/igls/analyticsdb/
  8. Analytics Cluster setup complete.
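
The ownership change can be double checked from the same SSH session; this is a simple verification sketch using standard shell commands.  The directory should be owned by eyeglasshdfs with group 'Isilon Users' and drwxr-xr-x permissions:

ls -ld /ifs/data/igls/analyticsdb/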

Installation and Configuration ECA Cluster (Required Step)

OVA Install Prerequisites:

Configuration Items and Notes:

  • 3 or 6 VM’s (see scaling section).  The OVA file deploys 3 VM's; to build a 6 node cluster, deploy the OVA twice and move the VM's into the first Cluster object in vCenter.  See instructions below to correctly move VM's into a single vApp in vCenter.
  • vSphere 5.5 or higher
  • 1x IP address on the same subnet for each node
  • Gateway
  • Network Mask
  • DNS IP
  • NTP server IP
  • IP Address of Eyeglass
  • API token from Eyeglass
  • Unique cluster name (lower case, no special characters)


Installation Procedure of the ECA OVA

  1. The deployment is based on three node ECA appliances.
  2. Download the Superna Eyeglass™ OVF from https://www.supernaeyeglass.com/downloads
  3. Unzip into a directory on a machine with vSphere client installed
  4. Install the OVA using steps below with Windows vCenter client or html vCenter web interface. 
  5. NOTE: IF DEPLOYING A 6 OR 9 NODE CLUSTER FOR EASY AUDITOR, THE 3 VM vAPP OVA STEPS BELOW WILL BE DONE TWICE FOR A 6 NODE CLUSTER AND THREE TIMES FOR A 9 NODE CLUSTER.   THIS WILL CREATE 2 OR 3 vAPP'S; THE VM'S FROM EACH vAPP CAN BE MOVED INTO A SINGLE COMMON vAPP OBJECT IN VCENTER, AND THE EMPTY vAPP OBJECTS CAN THEN BE REMOVED IN VCENTER.
    1. Procedures
      1. For the 2nd/3rd ECA OVA deployment, power on the vApp
      2. Ping each VM IP in the cluster until it responds to ping (this allows the first boot scripts to run)
      3. Once the VM's respond to ping, drag and drop each VM from its vApp into the 1st ECA vApp deployed.  Repeat for each VM; once done the empty vApp can be deleted.
      4. Repeat for each ECA OVA deployed AFTER the first ECA OVA.
    2. NOTE:  The ECA name on the 2nd or 3rd vAPP deployment does not need to match the first vAPP ECA name.  Once completed, the ECA name used for the first ECA cluster will be synced to all VM's defined in the node 1 ECA cluster master configuration file. 
  6. Deploy the OVF template in vCenter
  7. vCenter HTML Example
  8. Deploy from a file or URL where the OVA was saved
  9. Using the vCenter client, set the required VM settings for datastore and networking.  NOTE: Leave the setting as Fixed IP address
  10. Complete the networking sections as follows:
    1. ECA Cluster name (NOTE: must be lowercase < 8 characters and no special characters, with only letters)
    2. IMPORTANT: ECA Cluster name cannot include _ as this will cause some services to fail
    3. All VM are on the same subnet
    4. Enter network mask (will be applied to all VM’s)
    5. Gateway IP
    6. DNS server (must be able to resolve the igls.<your domain name here>) (Use nameserver IP address)
    7. NOTE: Agent node 1 is the master node where all ECA CLI commands are executed for cluster management
  11. vCenter Windows client example
    1. vCenter HTML Client Example

  12. Example OVA vAPP after deployment

  13. OPTIONAL: If you are deploying a 6 or 9 node ECA cluster, repeat the deployment again following the instructions above and set the IP addresses on the new VM's to expand the overall cluster IP range to 6 or 9 VM's.  The ECA name can be any value since it will be synced from node 1 of the first OVA cluster that was deployed.
    1. After deployment of the 2nd or 3rd ECA, open the vAPP and rename the vm's as follows:
      1. 6 or 9 Node ECA:
        1. EyeglassClusteredAgent 1 to EyeglassClusteredAgent 4
        2. EyeglassClusteredAgent 2 to EyeglassClusteredAgent 5
        3. EyeglassClusteredAgent 3 to EyeglassClusteredAgent 6
        4. ONLY If a 9 node ECA cluster continue to rename the 3rd OVA VM's inside the vAPP
        5. EyeglassClusteredAgent 1 to EyeglassClusteredAgent 7
        6. EyeglassClusteredAgent 2 to EyeglassClusteredAgent 8
        7. EyeglassClusteredAgent 3 to EyeglassClusteredAgent 9
      2. Now drag and drop the vm inside each of the vAPP's into the vAPP created for the first 3 VM's deployed.  Once completed you can delete the empty vAPP deployed for VM's 4-9.
      3. Once done the initial vAPP will look like this (9 node ECA shown).

      4. Done
  14. After Deployment is complete Power on the vAPP
    1. Ping each ip address to make sure each node has finished booting
    2. Login via SSH to the Master Node (Node 1) using the “ecaadmin” account default password 3y3gl4ss and run the following command:
    3. ecactl components configure-nodes (this command sets up keyless ssh for the ecaadmin user to manage the cluster)
    4. On Eyeglass Appliance: generate a unique API Token from Superna Eyeglass REST API Window. Once a token has been generated for the ECA Cluster, it can be used in that ECA’s startup command for authentication.
    5. Login to Eyeglass, go to the main menu and open the Eyeglass REST API window. Create a new API token. This will be used in the startup file for the ECA cluster to authenticate to the Eyeglass VM and register ECA services.
  15. On ECA Cluster Master node ip 1
    1. Login to that VM. From this point on, commands will only be executed on the master node.
    2. On the master node, edit the file (using vim) /opt/superna/eca/eca-env-common.conf and change these settings to reflect your environment; replace the variables accordingly.  A worked example of the edited section follows this list.
    3. Set the IP address or FQDN of the Eyeglass appliance and the API Token (created above), and uncomment the parameter lines before saving the file, i.e.:
      1. export EYEGLASS_LOCATION=ip_addr_of_eyeglass_appliance
      2. export EYEGLASS_API_TOKEN=Eyeglass_API_token
    4. Verify the IP addresses for the nodes in your cluster. It is important that NODE_1 be the master, (i.e. the IP address of the node you’re currently logged into.) NOTE: add additional ECA_LOCATION_NODE_X=x.x.x.x  for additional node in the ECA cluster depending on ECA cluster size. All nodes in the cluster must be listed in the file.
      1. export ECA_LOCATION_NODE_1=ip_addr_of_node_1 (set by first boot from the OVF)
      2. export ECA_LOCATION_NODE_2=ip_addr_of_node_2 (set by first boot from the OVF)
      3. export ECA_LOCATION_NODE_3=ip_addr_of_node_3 (set by first boot from the OVF)
    5. Set the HDFS path to the SmartConnect name setup in the Analytics database configuration steps. Replace the FQDN hdfs_sc_zone_name with <your domain here FQDN>. 
    6. NOTE: Do not change any other value.  Whatever is entered here is created as a subdirectory of the HDFS root directory that was set earlier.
    7. export ISILON_HDFS_ROOT='hdfs://hdfs_sc_zone_name:8020/eca1'
  16. Done:  Continue on to the Auditing Configuration Section
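
The fragment below shows what the edited section of /opt/superna/eca/eca-env-common.conf typically looks like once the values above are filled in.  The IP addresses, token and SmartConnect name are placeholders only; substitute the values for your environment.

# Eyeglass connection (example values only)
export EYEGLASS_LOCATION=192.168.1.10
export EYEGLASS_API_TOKEN=<token created in the Eyeglass REST API window>
# ECA node IP addresses - node 1 is the master node; add NODE_4 to NODE_9 entries for larger clusters
export ECA_LOCATION_NODE_1=192.168.1.21
export ECA_LOCATION_NODE_2=192.168.1.22
export ECA_LOCATION_NODE_3=192.168.1.23
# HDFS root - change only the SmartConnect zone name (Easy Auditor and unified deployments)
export ISILON_HDFS_ROOT='hdfs://hdfs-mycluster.ad1.test:8020/eca1'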

Auditing Configuration (Ransomware Defender, Easy Auditor) (Required Step)


How to Configure Turbo Audit Event Ingestion NFS Export (Ransomware Defender, Easy Auditor) (Required Step)

 

This option is for Isilon clusters with 1000’s of users connected to the cluster or very high IO rates that generate a lot of audit events per second.

Prerequisites for all mount methods:

  1. A Smartconnect name configured in the system zone for the NFS export created on /ifs/.ifsvar/audit/logs
  2. IP pool set to dynamic for NFS mount used by ECA cluster nodes for HA NFS mounts
  3. NFS export is read-only mount for each ECA node.
  4. Follow either the manual ECA mount method (fstab) OR the auto mount option, but not both methods.  Auto mount is simpler since it is controlled from a central file.

Instructions to Setup FSTAB or Auto Mount:

  1. Create a read-only NFS Export on the Isilon cluster(s), using the following syntax.

  2. Note: every cluster managed by this ECA cluster will require an export created for audit event processing.

  3. Replace <ECA_IP_1>, <ECA_IP_2>, <ECA_IP_3> with the IP addresses of ECA nodes 1, 2 and 3.  NOTE: If you have built a 6 or 9 node cluster, repeat these steps for all nodes in the cluster to balance the workload across all nodes, and ensure the export lists the IP addresses of all ECA nodes.

      isi nfs exports create /ifs/.ifsvar/audit/logs --root-clients="<ECA_IP_1>,<ECA_IP_2>,<ECA_IP_3>" --clients="<ECA_IP_1>,<ECA_IP_2>,<ECA_IP_3>" --read-only=true -f --description "Easy Auditor Audit Log Export"

  4. Configure the NFS mount on each ECA node to ingest audit data from Isilon - ECA node Configuration Steps (Required)

  1. Audit events are ingested over NFS mounts on ECA nodes 2 - X. Follow the steps below to add the export to each of the VM's.
  2. What you will need to complete this step on nodes 2 - x (where x is the last node IP in the cluster):
    1. Cluster GUID and cluster name for each cluster to be indexed
    2. Cluster name as shown on top right corner after login to OneFS GUI 
  3. Change to Root user
    1. sudo -s
    2. enter ecaadmin password 3y3gl4ss
  4. Create local mount directory (repeat for each cluster) 
    1. mkdir -p /opt/superna/mnt/audit/GUID/clusternamehere/    (replace GUID and clusternamehere with correct values)
    2. Repeat these steps on Each ECA node in the cluster 
  5.  Configure automatic mounting of the NFS export on VM reboot using fstab, OR skip to the next section to use the centralized mount file with auto-mount
    1. NOTE: only use 1 method for the mount: fstab or auto mount
    2. Complete these steps on nodes 2 - X (X is the last node in the cluster, depending on the size of your ECA cluster)
    3. vim /etc/fstab  (must be root for this, check with whoami)
    4. At the end of the file the mount will be added.  To jump to the end of the file and turn on insert mode type the below key sequence
      1. Press ESC key,  Then hold Shift key and then press g then press o
      2. This will place the cursor on a blank line.  
    5. Replace the FQDN, GUID and clustername placeholders below with the correct values for your cluster (a worked example follows this procedure). NOTE: the FQDN should be a smartconnect name for a pool in the System Access Zone IP Pool. 
      1. FQDN:/ifs/.ifsvar/audit/logs /opt/superna/mnt/audit/GUID/clustername nfs nfsvers=3 0 0
      2. NOTE:  copy the mount text for use on remaining nodes
    6. Then paste the edited text into the file at the current cursor location 
    7. Press Esc key to exit insert mode
    8. press the key (colon)  :  then type wq  Enter key
    9. Save the file
  6. NFS Setup with Centralized Mount file for all Nodes with Auto-Mount Option
    1. NOTE: This option will mount on cluster up using a centralized file to control the mount.  This simplifies changing mounts on nodes and provides cluster up mount diagnostics.
    2. Configuration Steps for Auto mount
      1. vim /opt/superna/eca/eca-env-common.conf
      2. add a variable to ensure the cluster up stops if the NFS mount fails  export STOP_ON_AUTOMOUNT_FAIL=true
      3. ssh to ECA node 1 as ecaadmin user to enable auto mount and make sure it starts on OS reboot. NOTE: for each node you will be prompted for the ecaadmin password.
      4. ecactl cluster exec "sudo systemctl unmask autofs"
      5. ecactl cluster exec "sudo systemctl start autofs"
      6. Check and ensure the service is running
        1. ecactl cluster exec "sudo systemctl status autofs"
      7. FSTAB to AUTOMOUNT Upgrade  Steps for 2.5.5 or later - Recommended

        1. Remove the old /etc/fstab entry
          1. sudo sed -i '/mnt\/audit/d' /etc/fstab  (enter the ecaadmin password when prompted)
          2. Repeat on each ECA node
        2. Remove active NFS mount from each ECA node
          1. You will need cluster GUID and cluster name to complete the command below.
          2. ecactl cluster exec "sudo umount -fl /opt/superna/mnt/audit/GUID/clusternamehere/"
        3. Add ECA node 1 to the export client and root client lists.  NOTE: automount manages all ECA nodes, and as of release 2.5.5 and later all ECA nodes require access to the NFS export.
        4. NOTE: Replace x.x.x.x with ECA node 1 ip address and update the client and root lists using the command below.  Login to Isilon as a user with permissions to edit exports to run this CLI command. Use isi nfs exports list command to locate export ID for the ECA cluster mount.   Replace example ID 5 in the command below with the ID from the cluster export list.
        5. isi nfs exports modify 5 --add-clients "x.x.x.x" --add-root-clients "x.x.x.x" -f --read-only=true
      8. Add new entry to auto.nfs file on ECA node 1
        1. NOTE: the FQDN should be a smartconnect name for a pool in the System Access Zone IP Pool.  <NAME> is the cluster name collected from the section above; <GUID> is the cluster GUID from the General settings screen of OneFS.
        2. Replace the <GUID>, <NAME> and <FQDN> placeholders in the command below with the correct values (a worked example follows this procedure)
          1. echo -e "\n/opt/superna/mnt/audit/<GUID>/<NAME> --fstype=nfs,nfsvers=3,ro <FQDN>:/ifs/.ifsvar/audit/logs" >> /opt/superna/eca/data/audit-nfs/auto.nfs
  7. Test the mount
    1. FSTAB method:
      1. type mount -a (only if using fstab)
      2. This will read the fstab file and mount the export.
    2. Auto-mount method:
      1. The cluster up step will read the mount file and mount on each ECA node during cluster up.  Review cluster up log to verify successful mount.  After the cluster up is completed you can type mount to see the mount.
  8. Repeat FSTAB updates on remaining nodes.  NOTE: Not required with auto mount method.
  9. Repeat mount steps above for additional cluster that requires audit data ingestion. NOTE: The mount must exist BEFORE you start up the cluster.
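
The example entries below show what a completed fstab line and auto.nfs line look like, using the sample cluster GUID and cluster name that appear later in this guide (0050569960fcd70161594d21dd22a3c10cbe and prod-cluster-8) and a hypothetical SmartConnect name audit.ad1.test.  Substitute the values for your environment and use only the entry for the mount method you selected.

fstab entry (FSTAB method, one line per monitored cluster):

audit.ad1.test:/ifs/.ifsvar/audit/logs /opt/superna/mnt/audit/0050569960fcd70161594d21dd22a3c10cbe/prod-cluster-8 nfs nfsvers=3 0 0

auto.nfs entry (auto-mount method, appended to /opt/superna/eca/data/audit-nfs/auto.nfs on node 1):

/opt/superna/mnt/audit/0050569960fcd70161594d21dd22a3c10cbe/prod-cluster-8 --fstype=nfs,nfsvers=3,ro audit.ad1.test:/ifs/.ifsvar/audit/logs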




    ECA Cluster Startup Post Configuration (Ransomware Defender and Easy Auditor) (Required Step)


    How to Configure a Ransomware Defender Only Configuration (2.5.5 or later)

    Make this change before starting up the cluster to ensure docker containers that are not required are auto disabled on startup.

    1. Login to node 1 over ssh as ecaadmin
    2. vim /opt/superna/eca/eca-env-common.conf
    3. add a variable
      1. export RSW_ONLY_CFG=true
    4. :wq (to save the file)
    5. Continue with startup steps below


    ecactl cluster up (Note can take 5-8 minutes to complete)


    The startup script does the following Easy Auditor only:

    1. Reads config file and checks the config data is not empty
    2. Checks if auto mount is enabled and mounts clusters based on the centralized file or skips this step if not enabled.
    3. Checks if the hdfs pool Smartconnect name is resolvable
    4. Checks if eca can connect to isilon using netcat with port 8020 for HDFS access
    5. Tests HDFS permissions with a read and write test and will abort if the Analytics database IO test fails.
    6. Mounts hdfs data as "eyeglasshdfs" user and checks if user has permissions
    7. Starts services on each node in the cluster
    8. Verifies the Analytics database is initialized correctly before continuing the boot process.
    9. Verify no errors appear in the startup procedure; the cluster up steps are all logged and can be sent to support.
    10. A successful startup example is shown below.
    • Configuration pushed
    • Starting services on all cluster nodes.
    • Checking HDFS connectivity
    • Starting HDFS connectivity tests...
    • Reading HDFS configuration data...
    • ********************************************************************
    • HDFS root path: hdfs://hdfsransomware.ad3.test:8020/eca1/
    • HDFS name node: hdfsransomware.ad3.test
    • HDFS port: 8020
    • ********************************************************************
    • Resolving HDFS name node....
    • Server:                192.168.1.249
    • Address:        192.168.1.249#53
    • Non-authoritative answer:
    • Name:        hdfsransomware.ad3.test
    • Address: 172.31.1.124
    • Checking connectivity between ECA and Isilon...
    • Connection to hdfsransomware.ad3.test 8020 port [tcp/intu-ec-svcdisc] succeeded!
    • ********************************************************************
    • Initiating mountable HDFS docker container...

    Verifying ECA Cluster Status

    1. On the master node run these commands:
    1. run the following command: ecactl db shell
    2. Once in the shell execute command: status
    3. Output should show 1 active master and 2 backup masters (a sample of the expected output is shown after these steps)


    1. Type ‘exit’
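
    For reference, the status output for a healthy 3 node cluster looks similar to the line below (a representative example only; server counts and load values will differ in your environment):

    1 active master, 2 backup masters, 3 servers, 0 dead, 2.0000 average load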

    Verifying ECA containers are running

    1. Command: “ecactl containers ps”


    Check cluster status and that all analytics tables exist (Ransomware Defender, Easy Auditor) (Optional Step)

    1. Command: ‘ecactl cluster status’
    2. This command verifies all containers are running on all nodes and verifies each node can mount the tables in the Analytics database.
    3. If any error conditions appear, open a support case to resolve, or retry with the steps below:
    1. ecactl cluster down
    2. ecactl cluster up
    3. Send ECA cluster startup text to support

    Verify Connection to the Eyeglass appliance, check the “Manage services” icon (Required Step) (Ransomware Defender , Easy Auditor)

    1. Login to Eyeglass as admin user
    2. Check the status of the ECA Cluster: click the ‘Manage Services’ icon and click on + to expand the containers and services for each ECA node.
    3. Verify the IP addresses of the ECA nodes are listed.
    4. Verify all cluster nodes show and all docker containers show green health.
    5. NOTE: Hbase status can take 5 minutes to transition from warning to Green.
    6. Move to the Isilon Protocol Audit Configuration


    Isilon Protocol Audit Configuration (Required Step) (Ransomware Defender , Easy Auditor)

    Overview

    This section configures Isilon file auditing, which is required to monitor user behaviours.   Protocol auditing can be enabled independently on each Access Zone that requires monitoring.  

    Enable and configure Isilon protocol audit (Required Step) (Ransomware Defender , Easy Auditor)


    1. Enable Protocol Access Auditing.

    Command:

    (OneFS 7.2)

    isi audit settings modify --protocol-auditing-enabled {yes | no}

    Example:

    isi audit settings modify --protocol-auditing-enabled=yes

    (OneFS 8.0)

    isi audit settings global modify --protocol-auditing-enabled {yes | no}

    Example:

    isi audit settings global modify --protocol-auditing-enabled=yes

    1. Select the access zones that will be audited. The audited access zones are the zones accessed by the SMB/NFS clients.

    Command:

    (OneFS 7.2)

    isi audit settings modify --audited-zones=audited_access_zone

    Example:

    isi audit settings modify --audited-zones=sales

    (OneFS 8.0)

    isi audit settings global modify --audited-zones=audited_access_zone

    Example:

    isi audit settings global modify --audited-zones=sales,system
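
    Optionally, display the global audit settings to confirm the change.  This assumes OneFS 8.0 or later and that the view command is available in your release:

    isi audit settings global view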


    1. OneFS 7.2 or 8.0 GUI Auditing Configuration:
    • Click Cluster Management > Auditing
    • In the Settings area, select the Enable Configuration Change Auditing and Enable Protocol Access Auditing checkboxes.
    • In the Audited Zones area, click Add Zones.
    • In the Select Access Zones dialog box, select one or more access zones, and click Add Zones (do not add Eyeglass access zone).


    Time Configuration Isilon, Eyeglass, ECA cluster (Required Step)  (Ransomware Defender , Easy Auditor)

    Overview: To get accurate auditing for Ransomware Defender or Easy Auditor, time sync between all components is a critical step.   NTP should be used on all VM’s, and all components should use the same NTP source.

    1. Verify the Isilon clusters being monitored are using an NTP server.  Many Internet time sources exist, or use an internal enterprise NTP server IP address
    2. Enable NTP on all Isilon clusters
    3. On the Eyeglass VM, configure the same NTP servers used by Isilon (using YaST).
    4. On each ECA VM repeat the YaST steps above to configure NTP on each VM.

    How to Fix Time Skew Error in Manage Services Icon on Eyeglass (Optional Step) (Ransomware Defender , Easy Auditor)

    If NTP and ESX host time sync conflict, it may be necessary to disable ESX host time sync on the ECA nodes so the ECA nodes get time from NTP rather than the ESX host. This ensures that DB queries and each node have consistent time in sync across the Eyeglass VM and ECA nodes.

    How to disable VMware vSphere ESXi host time sync

    For ECA:

    1. Initiate ecactl cluster down
    2. Power down ECA vApp
    3. Change VMware time sync configuration as below: 
    4. Click on Virtual Machine 
    5. Right click on ECA node1 
    6. Click Edit Settings.. 
    7. Click on Option 
    8. Click VMware Tools 
    9. Uncheck ‘Synchronize guest time with host’ 
    10. Click OK
    11. Power up vApp
    12. Initiate ecactl cluster up





    NOTE: Apply this change on ALL ECA nodes.  Perform same steps for Eyeglass Appliance if needed

    New changes may take up to 15 mins

    [ in some cases, you may need to restart ntpd after cluster up ]
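
    A minimal sketch of that restart, assuming the ECA nodes run the standard ntpd service, uses the same cluster exec helper shown earlier in this guide (enter the ecaadmin password if prompted):

    ecactl cluster exec "sudo systemctl restart ntpd"

    ecactl cluster exec "sudo systemctl status ntpd"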



    Backup the Audit Database with SnapshotIQ (Required for Easy Auditor) (Required Step)

    Use the Isilon native SnapshotIQ feature to backup the audit data.  Procedure documented here.


    Expanding ECA Cluster for Higher Performance (Easy Auditor) (optional)


    The ECA cluster is based on a cluster technology for reading and writing data (Apache HBASE) and searching (Apache Spark).

    The ECA cluster's search performance can be expanded for large databases (a large database contains over 1 Billion records).

    NOTE:  ECA clusters can be expanded up to 9 nodes.


    How to Check ECA node Container CPU and Memory Usage (Optional)

    1. Login to the eca node as ecaadmin
    2. Type the CLI command below to see a real-time view of container resource utilization
    3. ecactl stats

    How to expand Easy Auditor cluster size (optional)

    Follow these steps to add 3 or 6 more VM’s to increase analytics performance for higher event rates or long running queries on a large database. Deploy the ECA OVA again, copy the new VM's into the existing vAPP, and remove the vAPP created during the deployment.  NOTE: The ECA name will not matter during the new OVA deployment since it will be synced from the existing ECA cluster during cluster up procedures.


    1. Login to the master ECA node
    2.  ecactl cluster down
    3. Deploy one or two more ECA OVA's. No special config needs to be added on the newly deployed ECA OVA.
    4. vim /opt/superna/eca/eca-env-common.conf to add more locations:
    5. export ECA_LOCATION_NODE_4=<IP>
    6. export ECA_LOCATION_NODE_5=<IP>
    7. Add entries for nodes 4 to 9 depending on the number of VM's added to the cluster (an example of the added lines follows this procedure).
    8. ecactl components configure-nodes
    9. ecactl cluster up
    10. This will expand HBASE and Spark containers for faster read and analytics performance
    11. Login to eyeglass and open managed services
    12. Now HBASE needs to balance the load across the cluster for improved read performance.
      1. Now login to the Region Master vm typically node 1
      2. http://x.x.x.x:16010/ verify that each region server (6 total) is visible in the UI
      3. Verify each has assigned regions
      4. Verify requests are visible to each region server
      5. Verify the tables section shows no regions offline and no regions in the other column
      6. Example screenshots of 6 region servers with regions and normal looking table view
    13. done.
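
    For a 6 node expansion, the lines added to /opt/superna/eca/eca-env-common.conf would look like the example below (IP addresses are placeholders only; the existing NODE_1 to NODE_3 entries remain unchanged):

    export ECA_LOCATION_NODE_4=192.168.1.24
    export ECA_LOCATION_NODE_5=192.168.1.25
    export ECA_LOCATION_NODE_6=192.168.1.26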

      How to Enable Real-time Monitor ECA cluster performance (If directed by support)


      Use this procedure to enable the container monitor to determine if the CPU GHz settings are sufficient for query performance and write-to-Isilon performance.

      1. To enable cadvisor, add the following line to eca-env-common.conf:
      2. export LAUNCH_MONITORING=true
      3. This will launch cadvisor on all cluster nodes.
      4. If you want to launch it on a single node, login to that node and execute:
      5. ecactl containers up -d cadvisor
      6. Once the cadvisor service is running, login to http://<IP OF ECA NODE>:9080 to see the web UI.
      7. Done.

      Ransomware and Easy Auditor IGLS CLI command Reference


      See Eyeglass CLI commands for Ransomware Defender and Easy Auditor


      How to Upgrade the ECA cluster For Easy Auditor and Ransomware Defender

      Ransomware Defender Only ECA installation Upgrades post 2.5.5 - Decommission HDFS Pool

      1. As of release 2.5.5, Ransomware Defender only installations (3 ECA nodes) no longer require HDFS or an IP pool.  After upgrading to 2.5.5 the HDFS pool can be decommissioned following these steps.
      2. NOTE: If Easy auditor is installed (6 node ECA clusters) HDFS license and IP pool is still required.  If unsure please contact support.
      3. Login to the Isilon cluster where the HDFS pool was created
      4. Delete the IP pool
      5. Remove delegation from DNS to the smartconnect name assigned to the pool for HDFS
      6. Delete the access zone (default name Eyeglass)
      7. Remove the eyeglasshdfs user from the system zone local auth provider
      8. Login to this cluster using SSH
        1. Delete the Analytics database (default location is /ifs/data/igls/analyticsdb). WARNING: only do this if this is a Ransomware Defender only installation.
      9. Done.


      Steps to upgrade:

      1. Download the latest GA Release for the ECA upgrade following the instructions here: https://support.superna.net
      2. Log in to ECA Node 1 using "ecaadmin" credentials.
      3. Issue the following command: ecactl cluster down​
      4. Please wait for the procedure to complete on all involved ECA nodes.
      5. Done!
      6. Once the above steps complete:
        1. Use WinSCP to transfer the run file to node 1 (Master Node) in the /tmp directory
        2. cd /tmp
        3. chmod +x ecaxxxxxxx.run   (where ecaxxxxxxx.run is the name of the downloaded file)
        4. ./ecaxxxxxxx.run
        5. Enter the password for ecaadmin when prompted 
        6. Wait for the installation to complete
      7. Done.
      8. Now bring up the cluster again
      9. ecactl cluster up
      10. wait until all services are started on all nodes
      11. Once completed, login to Eyeglass open the managed services icon, verify all show green and online.  If any services show warning or inactive wait at least 5 minutes, if the condition persists, open a support case.
      12. If the above step passes and all ECA nodes show green online, then verify that Security Guard runs in Ransomware Defender or RoboAudit runs in Easy Auditor.  
      13. Consult the admin guide of each product to start a manual test of these features.

      How to Migrate ECA cluster settings to a new ECA cluster deployment - To upgrade the OS to openSUSE 15.1

      To upgrade an ECA cluster OS, it is easier to migrate the settings to a new ECA cluster deployed with the new OS. Follow these steps to deploy a new ECA cluster and migrate configuration to the new ECA cluster.

      Prerequisites 

      1. The ECA cluster has a logical name shared between the nodes; when deploying the new OVA, the deployment prompts for the ECA cluster name, and this should be the same name as the previous ECA cluster.
        1. How to get the ECA cluster name
        2. Login to eca node 1 via ssh ecaadmin@x.x.x.x  and then run the command below:
        3. cat /opt/superna/eca/eca-env-common.conf | grep ECA_CLUSTER_ID
        4. Use the value returned after the = sign when deploying the new ECA cluster.
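
        The output looks similar to the example line below (the cluster name shown is a placeholder only); reuse the value after the = sign when the new OVA deployment prompts for the ECA cluster name:

        export ECA_CLUSTER_ID=prodeca1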


      1. Deploy a new OVA ECA cluster using the latest OS OVA by downloading from instructions here.
      2. Follow deployment instructions in the install guide to deploy the new OVA and use the ECA cluster name captured from the prerequisites when prompted during the installation process of the OVA.  The install guide for deploying the OVA is here.    
      3. NOTE: Use the same ip addresses as the current ECA cluster.
      4. Using winscp utility copy the following files from ECA node 1 of the existing ECA cluster, login using ecaadmin user
        1. /opt/superna/eca/eca-env-common.conf
        2. /opt/superna/eca/docker-compose.overrides.yml
        3. /opt/superna/eca/conf/common/overrides/ThreatLevels.json
        4. Get a listing of the audit folder mount path and save it to a file. Use this command to get the path to be created on the new ECA cluster for the audit mount point.  This must be executed on ECA node 2
          1. Login as ecaadmin on node 2 using ssh
          2. cd /opt/superna/mnt/audit
          3. run this command -->  find . -maxdepth 2 -type d -exec ls -ld "{}" \;
          4. The output will provide the cluster GUID and cluster name used on the mount point; if more than one cluster is managed you will have multiple paths.  Sample output: 0050569960fcd70161594d21dd22a3c10cbe/prod-cluster-8
          5. Save these paths in a file for steps needed on the new ECA vapp below.
      5. Using winscp copy the following file from ECA node 2
        1. /etc/fstab
      6. Shutdown old ECA cluster
        1. Login to node 1 as ecaadmin
        2. ecactl cluster down
        3. wait for the command to finish
        4. Using vcenter UI power off the vapp
      7. Startup new ECA cluster
        1. Using vcenter UI power on the vapp
        2. ping each ip address in the cluster until each VM responds. NOTE: Do not continue if you cannot ping each VM in the cluster.
        3. Using WinSCP, login as ecaadmin and copy the files from the steps above onto the new ECA OVA cluster
        4. On node 1 replace these files with the backup copies
          1. /opt/superna/eca/eca-env-common.conf
          2. /opt/superna/eca/docker-compose.overrides.yml
          3. /opt/superna/eca/conf/common/overrides/ThreatLevels.json
        5. On nodes 2 to 6:
          1. Open the backed up fstab file and copy the Isilon audit folder mount entry so it can be inserted into the fstab file on each node
          2. On each node complete these steps
            1. ssh ecaadmin@x.x.x.x (ip of each eca node)
            2. sudo -s (enter ecaadmin password when prompted)
            3. vim /etc/fstab
            4. Press i to enable insert mode, add a new line at the end of the file, and paste the fstab mount entry backed up in the steps above into the file.  
            5. save the file
            6. :wq
            7. mkdir -p /opt/superna/mnt/audit/<paste cluster GUID and name path here>
              1. example only /opt/superna/mnt/audit/0050569960fcd70161594d21dd22a3c10cbe/prod-cluster-8
            8. test the mount in fstab on the node
              1. NOTE: you should still be the root user from above steps
              2. type command --> mount -a
              3. if no mount error you should not see any output from this command
              4. Check mount and type --> mount [enter]
              5. review the output to make sure the mount is visible
            9. Repeat all steps above on nodes 2 through 6
        6. Startup the new ECA cluster
          1. login to eca node 1 as ecaadmin
          2. ecactl cluster up
          3. review startup messages
        7. Done.



      Copyright Superna LLC