Channel: www.firemon.com

Security Analytics Taxonomy


There’s a lot of buzz about, and interest in, security analytics. Security teams and vendors are exploring anything and everything that will accelerate threat detection and remediation, and ‘analytics’ intuitively feels like something organizations should be doing more. With the average breach taking 200+ days to detect, it sure seems like there’s room for improvement.

The challenge is that nearly every security vendor has integrated ‘analytics’ into their message, so now we’re right back where we started – how do we determine what’s what? There’s even some disagreement among industry analysts: some say analytics is the next big thing in security, while others say it’s been part of our solutions for years and is therefore nothing new.

Due to increasing interest from so many of our customers, FireMon commissioned Forrester to survey security organizations about their challenges, current data analysis tools and plans for security analytics. John Kindervag, a principal analyst with Forrester, and I discussed the findings in depth in this webinar, but the essence of the report is: 1) growing complexity has created new challenges in detecting and responding to threats; 2) existing data analysis solutions (i.e. SIEM) have major shortcomings; and 3) there’s growing interest in security analytics to reduce risk by accelerating detection and response.

In response, the security and IT management industries have integrated analytics into their messaging and materials – even if that means nothing more than relabeling an existing report section to better match a prospective customer’s keyword search.

Reviewing a definition for analytics shows how the subject can be so broadly defined and interpreted. Data Analytics is the discovery and communication of meaningful patterns in IT data to improve IT and business processes. The patterns can identify changes, differences and inconsistencies that indicate an existing incident, elevated security risk or operational inefficiencies.

Such a broad and general definition forces security teams to devote significant time and effort to exploring analytics products and technologies. Without a structured taxonomy to communicate their needs and evaluate solutions, we hear questions like “do you offer a dashboard?” Dashboards are built for a predefined list of questions and can’t surface the new and unusual. Here is an analytics taxonomy that segments solutions into two specific, actionable categories so organizations can prioritize the security analytics solutions they’re investigating.

Analytics for the Known – Analytics for the known include key performance indicators (KPIs) for infrastructure and services like Internet bandwidth, trouble ticket volume and application performance graphed over time. This information is commonly consumed through a dashboard, something we’ve had for years. And while there continue to be advancements, the business impact is largely incremental.

Analytics for the Unknown – The biggest net-new opportunity for analytics to impact the security and operational efficiency of the business is discovering the unknown – the issues that aren’t displayed in a dashboard. We’ve heard phrases like “I need answers, but I don’t know the questions to ask.” While this might seem like hyperbole, it’s a real challenge. All the questions thousands of incredibly smart security professionals have imagined over the years are embodied in today’s security systems (i.e. firewalls, anti-virus, intrusion detection, malware detection, etc.) and yet the adversary still finds a way in… We need answers, especially when the questions aren’t obvious.

The people who are able to find answers without knowing the questions are in limited supply. They need tools to navigate large volumes of data and highlight relationships, groups, outliers and changes, with previous observations as context. Any progress in identifying the characteristics, anomalies and changes that let businesses find answers to elusive security and operational incidents is potentially transformational for security teams. It needs to be fast and easy enough for our security subject matter experts (SMEs) to use without becoming data scientists.

In future posts, we’ll explore the challenges in requiring our SMEs to become data scientists to perform data discovery and exploration. We’ll also discuss the benefits of correlating security policy/configuration information with network, system and application data.

The post Security Analytics Taxonomy appeared first on FireMon.



Does the Immediate Insight ‘Remotes’ feature support interactive testing commands?


In addition to the previously available test command capabilities, the Remotes feature of Immediate Insight now supports user-defined command parameters, whose values are prompted for in the GUI when the test command is executed.

The following options are supported:

p – prompt to display (required)
h – hint to show in the field (optional)
d – default value (optional)

The full variable syntax always begins with two @ and ends with three @. Colons separate options.

@@args:p=arguments for ls:h=arguments:d=-l@@@

(No validation is supported in this version)
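For instance, a hypothetical test command that runs a directory listing with a promptable argument (the command and path here are illustrative) might be defined as:

```
ls @@args:p=arguments for ls:h=arguments:d=-l@@@ /var/log
```

When the command is executed, the GUI prompts with “arguments for ls”, pre-filled with the default value “-l”.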

Here is a simple example of a command that executes a Linux file listing and prompts for an argument when it is run:

First, the command was configured as follows:

[Screenshot: editing the command]

Second, the command is invoked by pressing the Play (run) button beside the GUI listing for the command:

[Screenshot: the command in the DataFlow listing]

Third, the dynamically generated dialog appears; the default arguments can be left alone or modified to suit. Then click OK to continue.

[Screenshot: runtime parameters dialog]

Fourth, here is the output of this example:

[Screenshot: command output]

For further details on the ‘Remotes’ feature of Immediate Insight, please consult the Immediate Insight Admin User Guide.


Rapid PCAP Analysis with Analytics


In this post I show you how one can use the built-in security analytics in Immediate Insight to help you rapidly review a large PCAP file for anomalies. Immediate Insight is security analytics for data discovery – purpose-built for incident response, investigation, and triage.

1) Collect the PCAP. Here I am using my Fortinet FortiWifi 90D’s built-in packet capture functionality – you can also use tshark or whatever your favorite packet capture methodology du jour is. Immediate Insight does not use parsers – its natural language system can take in any human-readable data (PCAP, PST files, PDF, Microsoft Office documents and spreadsheets, configuration files, IP reputation files – the list goes on).

[Screenshot: packet capture on the FortiWifi 90D]

You can even stream PCAP in real-time to Immediate Insight:

[Screenshot: streaming PCAP in real time]
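As a sketch of one way to stream packets in (assuming tshark and netcat are available; the collector address is a placeholder, and port 3000 is Immediate Insight’s default raw TCP listener):

```
# Capture on eth0 and stream per-packet summaries
# to the Immediate Insight TCP collector
tshark -i eth0 -l | nc 192.168.1.50 3000
```

The -l flag flushes tshark’s output after each packet so events arrive in near real time.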

2) Use Immediate Insight’s “Drag and Drop Import” functionality to ingest the PCAP quickly and easily, tagging the data with a unique term (I used “fwf90dpcap”). This tag will be used to search and run the built-in analytics against this specific file only (you can also, of course, review all of your data together, including this file). Be sure to review the “Import Time” settings – set it to the current time (“now”) or to the timestamps found within the file you are importing – so they match the overall purpose of your investigation. The “Drag and Drop Import” functionality is primarily designed to supplement the real-time data coming into your system, such as syslog, flat files, NetFlow/IPFIX/sFlow, and much more.

[Screenshot: Drag and Drop Import settings]

3) Search on your tag (“fwf90dpcap” in this case).

[Screenshot: search results for the tag]

The default “Search Results” page will display very much like what you’re used to with a SIEM. However, one of the main goals of Immediate Insight is to encourage exploration and the discovery of the unknowns (with support for both structured and unstructured data) – I will be focusing on some of the analytics components in this post.

Clicking on “Explore” > “Entities” will take you to Immediate Insight’s “Association Analytics” featuring a look at both frequent and infrequent entities (things like URLs and IP addresses), locations (Immediate Insight does automatic geolocation enrichment at time of data ingestion), addresses (IP addresses), networks, sources (where the data came from – i.e. syslog), and names.

Furthermore, clicking on “Analytics” > “Most Common” will take you to Immediate Insight’s “Event Clusters” functionality. This is an incredible tool for going through large datasets or finding the unknown unknowns. It takes the entirety of the dataset that you’re reviewing and finds the commonality in the data – grouping similar data into buckets so a human can rapidly determine what is happening. In the image below – you can see that a very complex event has been boiled down to its essence for the purpose of human analysis.

[Screenshot: an event cluster]

You can click on the black and white circle to highlight unusual terms – adding further ease to the review. You can also flip the data over and look at the unusual items (versus the common, by default).

This is just the tip of the iceberg regarding the capabilities of Immediate Insight from FireMon. The free, Community Edition is full-featured and supports up to 25 million events – it’s a VMware OVA that you can deploy within 10 minutes in VMware Workstation, Fusion, ESX, or vSphere. Give it a try. Come back when you’re ready to take advantage of our nearly infinite horizontal clustering capabilities for big data. Contact me if you’d like to see a demo, would like assistance working with your visibility and data ingestion use cases, or if you just plain want to geek out on this cool software.



Should I reserve memory for Immediate Insight VM?


For performance reasons, it is important to reserve RAM for the Immediate Insight VM. If you do not reserve memory, resources may be shared with other VMs on the VMware server, which will degrade search performance. The default settings for a VM do not reserve memory; for best performance, it is recommended to override this default and reserve it.

To reserve memory for your Immediate Insight VM, please follow these steps (the example is for ESXi, but a similar concept applies to VMware Workstation, etc.):

  1. Connect to the ESXi server with the vSphere Client
  2. From the Inventory, select the Immediate Insight VM from the list
  3. Make sure the VM is powered off
  4. Click ‘Edit virtual machine settings’
  5. Click ‘Resources’
  6. Click ‘Memory’
  7. Check the box for ‘Reserve all guest memory (All locked)’
  8. Click ‘OK’
  9. Power the VM back up, then proceed as normal
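If you prefer to set this outside the GUI, the same reservation can be expressed directly in the VM’s .vmx file (the values below are illustrative for a 32GB VM; verify the keys against your VMware version):

```
memSize = "32768"
sched.mem.min = "32768"
sched.mem.pin = "TRUE"
```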


How do I increase the disk size of my Immediate Insight storage?


Note: do not use VMware tools to adjust the size of the disk; this will not work. Instead, follow this process:

  1. Shut down the Immediate Insight VM by powering it down from the VMware console
  2. Add a new drive to the Immediate Insight VM using the VMware VM settings
  3. Power on the Immediate Insight VM
  4. Log in to the Immediate Insight CLI using the user ‘insight’ and the default password
    • Type expand-disk and hit return, then follow the prompts
      • The utility will merge the new drive you added in step 2 with the existing one.
      • Note: the maximum usable disk space per Immediate Insight VM is currently 2TB. If you require more than 2TB of storage, you can cluster Immediate Insight VMs together. Please consult the Admin User Guide for details on clustering.
    • Wait for the utility to finish and the prompt to return
  5. From the Immediate Insight CLI, start the services
    • Type start-all and hit return


What Hypervisor is used for Immediate Insight installation?


VMware ESXi version 5 or above is the recommended hypervisor for production deployments.

For evaluation or demonstration purposes, the following may also be used:

VMware Workstation version 8 and above

VMware Fusion version 6 and above

Installation instructions for ESXi and VMware Workstation are available in the Admin User Guide.

If you have different virtualization needs, please open a ticket with iisupport@firemon.com detailing your desired hypervisor environment.



How do I change the system name shown in the Immediate Insight GUI?


The most recent version of Immediate Insight allows you to set the system name shown in the GUI (by default the name is blank). To do this, complete the following steps.

  • From the GUI, click the gear icon near the top right corner, then select ‘User Preferences’

    [Screenshot: User Preferences menu]

  • Type the desired system name in the system name field, then click OK

    [Screenshot: entering the system name]

  • The GUI will now show your system name

    [Screenshot: system name displayed]


Centralizing Windows Logs in JSON with Security Analytics


In this post I will show how you can centralize your enterprise-wide Windows logs, at zero cost and via one agent, to Immediate Insight – security analytics for data discovery. We will output the logs in JSON (they come through far richer than with any other method I have used – a huge improvement over Snare and others). You will use Microsoft’s Windows Event Collector to centralize all of the Windows logs in your domain to one server, and NXLog Community Edition to send those logs in JSON format over TCP syslog to Immediate Insight. Immediate Insight’s Elasticsearch backend and built-in analytics make hunting, incident response, and general troubleshooting and triage extremely fast, easy, and straightforward.

Setup the Log Sending Hosts

On each log sending Windows server:

Enable the Windows Remote Management Service on the Windows servers that will be sending logs to your central Windows server.

Get to a command prompt and type “winrm quickconfig” – in this example my Windows Server 2012 R2 Standard x64 server was already setup.

[Screenshot: winrm quickconfig output]

Head to the Local Users and Groups and edit the Event Log Readers group to include the server that will be running Windows Event Collector to centralize the Windows logs (you will need to edit the Object Types to include Computers in order to find the computer successfully).

[Screenshot: Event Log Readers group membership]

Head to the Windows Firewall configuration section in Settings and go to Allowed apps – edit as shown to allow only domain-based access to your events.

[Screenshot: Windows Firewall allowed apps]

Setup Your Central Windows Log Repository

On the server that will run Windows Event Collector:

Enable the Windows Event Collector service on the Windows server that you are going to use to centralize all of your Windows logs.

Get to a command prompt and type “wecutil qc” and follow up with a “y” for yes to finish configuring the service.

[Screenshot: wecutil qc output]

Open Event Viewer on the Windows Event Collector server – go to Subscriptions and select Create Subscription…

[Screenshot: Create Subscription in Event Viewer]

Now – edit the Subscription Properties:

[Screenshot: Subscription Properties]

Give your subscription a name (mine is “Remote Windows Event Log Collection”). Set the destination log to Forwarded Events. Hit the radio button for collector initiated – then select the computers/servers it is to collect logs from (alternatively, you can also use Group Policy to set it up so each source Windows log server sends logs).

[Screenshot: selecting source computers]

Under events to collect – hit the “Select Events…” button. My settings are shown. If you have a lot of servers, you may want to set up multiple collectors (or perhaps use the aforementioned source-computer log send via GPO).

[Screenshot: Query Filter event selection]

The system warns you that you’re being a bit ambitious. :)

[Screenshot: event volume warning]

Hit the “Advanced…” button and verify that the User Account radio button is set to Machine Account. Set the Event Delivery Optimization to Minimize Latency. You should be done configuring your subscription/collector.

[Screenshot: Advanced Subscription Settings]

Check the status of your subscription as shown:

[Screenshot: subscription status]

Pop into the Forwarded Events view and you should see logs populating in from your servers.

[Screenshot: Forwarded Events view]

Setup the JSON TCP Syslog Sender

Now, install NXLog Community Edition – the default setup is fine. You will edit the nxlog.conf file located in C:\Program Files (x86)\nxlog to customize how your logs are sent. I have two tested and working conf files available for download here (you will want to edit the settings to match your environment). One is for logging in JSON format to disk and one is for sending in JSON format over TCP syslog to Immediate Insight (which is what I’m using for this blog – and it is shown below).
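For reference, a minimal nxlog.conf along those lines might look like this (a sketch, not the downloadable files; the host and port are placeholders to edit for your environment):

```
# Convert events to JSON
<Extension _json>
    Module  xm_json
</Extension>

# Read the local Windows Event Log (including Forwarded Events)
<Input eventlog>
    Module  im_msvistalog
</Input>

# Send each event as JSON over TCP syslog to Immediate Insight
<Output insight>
    Module  om_tcp
    Host    192.168.1.50
    Port    514
    Exec    to_json();
</Output>

<Route eventlog_to_insight>
    Path    eventlog => insight
</Route>
```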

[Screenshot: nxlog.conf]

Restart the service once you have successfully edited your conf file – check the NXLog log (heh) at C:\Program Files (x86)\nxlog\data\nxlog.log – you should see a message like below and start to see logs in your Immediate Insight.

[Screenshot: nxlog.log]

The logs come in nicely via JSON in Immediate Insight:

[Screenshot: JSON logs in Immediate Insight]

Now for the fun stuff – using Immediate Insight’s built-in security analytics (to bubble up interesting information in your data) and extremely fast search to explore your data with ease.

Tie it Together with Analytics

Association Analytics

This looks at frequency in your data – what is happening frequently and infrequently in several categories – along with doing automatic geolocation on all IP addresses upon data ingestion. Find outliers and explore them rapidly.

[Screenshot: Association Analytics]

Event Clusters

This takes all of your data for the given time period that you’re analyzing and places it in “buckets” based on the commonality within said data. A way for humans like yourself to use your business intelligence to analyze a large data set very quickly.

Common:

[Screenshot: common event clusters]

Unusual:

[Screenshot: unusual event clusters]

Activity and Change

You can also take a look at your data sets and compare them to other periods of time – how do they differ from the last hour, yesterday, etc. and what’s new, missing, unchanged, and what is trending up and down…highlighting the anomalous very quickly.

[Screenshot: Activity and Change]

Cohort Analysis

Guilt by association – what other data points has the data that you’re looking at interacted with in the period that you’re investigating?

[Screenshot: Cohort Analysis]

Next: Sysmon Integration for Sexier Data

Thanks for reading. In my next post – I will take a look at how we can use Windows Sysinternals’ Sysmon to make your Windows logs far sexier – logging things like process creation, hashes (SHA1, MD5, SHA256, or IMPHASH), loading of drivers and DLLs, disk/volume raw access, network connections, change in file creation time, and more! Create a force multiplier by using that information along with Immediate Insight’s security analytics…


Anatomy of an Immediate Insight Proof-of-Concept


Background

Today’s reality for IT Security and Operations teams is there are more activities to be performed than there are hours in the day. Before evaluating any product it’s helpful to understand the scope of effort and time required to evaluate a product’s value to your organization. This document describes the typical process, and timeline, for evaluating Immediate Insight. It normally takes 60 minutes for preparation and installation. After the installation and a minimal data collection period, the system is available for users to ask questions of their data and follow the non-obvious associations across data silos.

Typical Process and Timeline

[Diagram: typical process and timeline]

Preparation for the PoC (30 minutes)

  • Provision a VM; recommended base configuration – 8 vCPU, 32GB RAM, minimum 500GB storage.
  • Identify data sources, anything human readable (common sources include firewall logs, IDS logs, proxy logs, web server logs, application logs, packet captures – pcap, netflow).
    • Data already written to a central data store (i.e., SIEM)
    • Syslog data
    • Any data streamed over a port
    • JSON or XML data via http post
    • Drag and drop data

Installation process (30 minutes)

The basic steps to getting started:

  1. Download the Immediate Insight appliance
  2. Load the Virtual Appliance in VMware and start it
    • Import the file / VMware Console / Deploy OVF Template / (Thin Provision)
    • Give it 32GB of RAM and 4 or more vCPU.
      • 16GB / 4 vCPU is typically good for ~150 million records.
      • 32GB / 8 vCPU is typically good for ~500 million records.
      • 64GB / 16 vCPU is typically good for 1 billion records.
    • Add an Additional Hard Drive for the Data
    • In VMware, right-click the VM > Edit Settings > Add > New Hard Drive
    • Give it space based on the data volume you will be ingesting, typically 500GB or more.
      • 500GB is typically good for ~150 million records.
      • 1TB is typically good for ~500 million records.
      • 2TB is typically good for 1 billion records.
  3. Login, set the IP Address, and Initialize the Data Disk
    • User: insight Password: WhatsHappeningNow
    • Type install to start the installation process (note: type “dhcp” at the IP address prompt if you are going to use DHCP – not recommended)
    • Type sudo reboot once complete – so the changes take effect
  4. Start the Server (You can now login via SSH from a remote client if you prefer, or login from the console)
    • User: insight Password: WhatsHappeningNow
    • Type start to launch the app (for later: type stop to stop the app cleanly)
    • Type update to ensure the latest release is installed
  5. Login from a Browser
    • http://ip-address:3201/
    • User: admin
    • Password: admin
  6. Install evaluation license key – select Activate License from upper right cog menu
  7. Connect Some Data Sources
    • Access data from a file share
      • Mount some network shares that have log files the system can follow
      • sudo vi /etc/fstab
      • add a mount line such as:
      • //servername/sharename /media/logs cifs username=user,password=pwd 0 0
      • /media/logs is the default mount point, already configured and ready
      • mount it: ‘sudo mount -a’
    • Syslog streams
      • Configure syslog data sources with Immediate Insight IP address as a destination
      • Immediate Insight listens on port 514 by default
    • Pipe data to a TCP or UDP port
      • Immediate Insight listens on port 3000 by default
      • For data streamed on custom port, access the DataFlow interface to configure additional Immediate Insight collectors
    • Netflow sources
      • Configure netflow data source with Immediate Insight IP address as a destination
      • Immediate Insight listens on port 2055 by default.
    • Drag and drop
      • Any log file (human readable), configuration files, PCAP, PST
        If necessary, you can change any path and port settings from the DataFlow interface
  8. Confirm data is coming into the system.
    • To determine if data is coming into the system, either search for everything (leave search field blank) in the past hour, view the Dataflow collectors status screen, or use the Firehose to see the incoming data live.
  9. Download the “Situations to Watch” Pinboard from the Immediate Insight Knowledgebase – https://www.firemon.com/do-you-have-a-sample-set-of-commonly-useful-searches-that-i-can-start-with/ and upload it to Immediate Insight. The installed Pinboard will provide a sample of commonly used searches.
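As one example of the syslog step above, a Linux source can be pointed at the system with a one-line rsyslog rule (the IP address is a placeholder for your Immediate Insight VM):

```
# /etc/rsyslog.d/90-insight.conf
# Forward a copy of all messages to Immediate Insight
# (@ = UDP; use @@host:514 for TCP instead)
*.* @192.168.1.50:514
```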

Data Collection – Let the system run for 24-48 hours, collecting data.

Explore the Data

Once data is in Immediate Insight, simply ask any question of the data and results are returned very quickly. The following common exploration use cases can help users familiarize themselves with the system’s capabilities and get started extracting actionable insights from their data to improve security and operations.

Common exploration use cases:

  1. Needle finding
    • Leave the search field blank, select the last 24 hours and run a search
    • Select the most unusual to see the events that occur most infrequently in the past 24 hours.
  2. Discovery
    • Search for terms typically associated with problems (i.e. error fail* refuse* deny denied disconnect)
    • Search for addresses of critical infrastructure (Oracle, SAP, Email, etc.).
    • Combine the two and search for issues on critical infrastructure.
    • Select timeline to see event volumes over time.
    • Display entities and locations to spot anomalies
    • View the new events when compared to the previous search period.
  3. Location anomalies
    • Leave the search field blank, select the last 24 hours and run a search
    • Select location
    • Select table icon to see all locations ordered by frequency
    • Select a location to see all the associated events
  4. Data anomalies
    • Enter any search, or leave it blank, for the desired timeframe
    • Select most common
    • Select most unusual
    • The common and unusual data clusters are displayed
    • Click through any of the clusters to see a sample of event details. Click on the event detail to pivot search to isolate the cluster. Select events, location, or entities for other views into the data.
  5. New/changed
    • Leave the search field blank, select the last 24 hours and run a search
    • Select trending
    • Select compare to previous period to compare results from past 24 hours to the previous 24 hours.
    • Toggle through trending up, new, missing, etc
  6. Focused then Retrospective
    • Leave the search field blank, select the last 24 hours and run a search
    • Select timeline
    • Select a bar in the timeline to drill in
    • Pivot search on desired entity
    • Select past 7 days
  7. NOT US Locations
    • Enter NOT US in the search field, select the last 24 hours and run a search
    • Select Entities
    • Select any location, addresses, names to pivot search
    • Select + (for AND) and – (for NOT) to create more complex searches
  8. NOT denied AND CN
    • Enter NOT denied AND CN, select the last 24 hours and run a search
    • Pivot search on any internal addresses to see the associated events
    • Pivot, trending, unusual
  9. Firehose
    • Leave the search field blank, select the last 24 hours and run a search
    • Select the follow arrow and select an entity
    • The Firehose view shows a live view of all events for the selected entity as they happen
  10. Saving searches (bookmarks) and Pinning bookmarks to the board
    • Enter a search query and timeframe and run the search
    • Select add bookmark from the menu on the upper right side of the interface
    • Add guide and category to pin search
    • Pinned searches are displayed on the second page of the main search screen
  11. Using tagging as an investigative tool
    • Enter a search query
    • Hover over an event of interest and select add note and enter a tag
    • Perform the same on other events of interest and enter the same tag
    • Search for the tag to display all tagged events
    • View entities, locations, trending, and unusual to spot non-obvious common denominators
  12. PCAP analysis
    • Drag and drop PCAP file or exported CSV of PCAP, select use original time for each line to import data in the timeframe of the original event.
    • Enter a tag in the drag and drop tag field to group the data.
    • Enter search query and view most unusual to highlight the less common traces
    • Select entities to see addresses and locations for the packet captures
  13. Using reputation to map data to business importance
    • Select the Reputation menu
    • Enter an address of a critical system (i.e. Oracle server)
    • Add a key/value pair to identify the server address as part of the Oracle critical infrastructure (for example, field = critical infrastructure; value = Oracle)
    • Run a search for Oracle AND error to show all errors reported by the Oracle server.

[Diagram: Immediate Insight workflow]


Enhance Windows Anomaly Detection with Sysmon


In my last post I covered how you can centralize your Windows logs on one system, send them as JSON for full detail, and use Immediate Insight’s fast search and analytics to investigate alerts and discover the unknown. Now – let’s take it a step further and use Sysinternals’ Sysmon to capture rich details from your workstations and servers. Details like process creation/termination, file creation, driver load, network connections, and more. All of this data can then be rapidly queried, correlated, and alerted upon within Immediate Insight.

  1. Download Sysmon – I personally place all of my utilities in a folder called “C:\Util” and then I add that folder to the system’s path as shown below.

    [Screenshot: adding C:\Util to the system Path]

  2. You’re now ready to install and configure Sysmon for your environment. This component requires some planning, as Sysmon generates a ton of log data. However, the quality of that data is quite high – so you’ll want to balance the configuration between your storage retention capabilities and your security requirements. You can also choose to just load a quick “default” configuration:

    [Screenshot: running an Administrator command prompt]

    sysmon -i -accepteula -h md5 -n -l

    The above installs Sysmon as a service (so it will survive reboots) and will collect data on the loading of modules, log network connections, and record MD5 hashes for any processes created.

    [Screenshot: Sysmon installation output]

    If you want to get more in-depth: here are links to two Sysmon configuration XML files that you can review and edit on your own, conveniently pre-configured for both servers and workstations. They are excellent starting points.

  3. Create a subscription for collecting remote Sysmon events (very much like what you did in the last blog post for standard Windows logs).
    [Screenshot: Subscription Properties]
    Select the computers from which you want to collect the Sysmon data:
    [Screenshot: selecting source computers]
    Define the query filter to grab the Sysmon events:
    [Screenshot: query filter]

    Unfortunately, by default, Windows Event Collector will be unable to access the Sysmon logs in the same way as it does “standard” logs (Application, Security, System, etc.).

    [Screenshot: subscription runtime status]

    You will need to set additional permissions. For each server or workstation that will be running Sysmon and centralizing those logs you will need to add the Network Service local account and the event collector local computer account into the Event Log Readers local group:

    [Screenshot: Event Log Readers group]

    You will also need to use your domain’s (or if you lack a domain – set it locally on each machine) Group Policy Management Editor to give those same two accounts permission to “Manage auditing and security log” as shown:

    [Screenshot: Manage auditing and security log policy]

    Finally – reboot each changed machine. Hey – I didn’t say this would be completely easy. :)

  4. The end result is great looking Sysmon data coming into your Immediate Insight system in JSON. Here they are under the Microscope in Immediate Insight (a great place to examine your data in great forensic-esque detail):
    [Screenshot: the Microscope]

    Use Association Analytics to examine patterns of frequency and infrequency within your dataset – entities, geolocated locations (at time of data ingestion), IP addresses, networks, and more…rapidly investigate what is outside of the normal.
    Immediate-Insight-association-analytics

    Use Event Clusters to investigate a large dataset rapidly – Immediate Insight finds commonality in your data and puts it in “buckets” so you can find both common and unusual anomalies quickly. What data looks strange?

    Immediate-Insight-event-clusters

    Visualize your data using the Timeline. Look for outliers – spikes in traffic or strange locations (either in the U.S. or in the World).

    Immediate-Insight-time-slices

    Check Activity & Change to investigate how your data has changed since other periods of time – is traffic significantly higher than a normal Monday at this time? Significantly lower? Dig in! Find new data that wasn’t there before – is it just a new system or application coming online or is it something anomalous?

    Immediate-Insight-activity-and-change

    Take a look at the Firehose to see the data streaming in live – pause and filter to troubleshoot in real-time when there is an ongoing issue or incident:

    Immediate-Insight-firehose

Thanks for reading – I hope that you found this post useful. Sysmon is an incredibly powerful tool that makes Windows logs substantially more valuable – it is very worth the effort to deploy it where it makes sense within your organization.

The post Enhance Windows Anomaly Detection with Sysmon appeared first on FireMon.

For FireMon, the Channel Is as Important as Ever


crn_5star_PPG

It’s always exciting to hear that FireMon’s Ignite Partner Program is recognized by outside industry experts.

In building an award-winning program, the FireMon team and I have had many starts and stops along the way to success. Last year, about 20% of FireMon’s total new business was identified and closed with our channel partners.

Our channel partnerships have been central to our growth. To honor that, we have evolved our program each year to enable our partners not only to meet their own goals but also to create solutions that meet their customers’ goals.

In 2016, the evolution continues with additional focus on:

  • Sales Enablement – Targeted e-learning modules to enable Channel Sales Representatives
  • Expanded Channel Engineer certifications
  • Incentives for successfully executing Partner Business Planning
  • Continued investment in lead generation and marketing programs

The fundamentals of a successful channel relationship are great products that deliver what the market demands, a great team consisting of partners, channel managers and field sales, and a consistent record of exceeding expectations.

I am looking forward to another amazing growth year with our partners.

The post For FireMon, the Channel Is as Important as Ever appeared first on FireMon.

FireMon Gets Cisco Firepower APIs


Cisco made a big announcement yesterday about the expansion of their partner ecosystem, and FireMon is thrilled to be a part of it. As part of their ongoing commitment toward openness and integration, they have enabled us to make use of Cisco Firepower’s “write” REST APIs in upcoming versions of FireMon Security Manager and Policy Planner.

According to Cisco, the new API functionality “enables management of Firepower firewall policy from 3rd party management tools, thereby simplifying creation of consistent policies across a deployment… even when there are multiple firewall vendors in the environment.”

FireMon has worked with Cisco intermittently over the past three years as they developed these APIs—requesting specific features and advising on industry-standard best practices. We have long held that vendor-supported APIs are the only way to consistently and reliably make changes on managed devices, and it’s an honor to see this project come to fruition.

So, what does this mean for our customers? The ability to write policies is one of the many pieces of the puzzle required to automate the entire firewall change process — from design to implementation. When done successfully, these process improvements free security professionals to do real security analysis instead of getting bogged down in day-to-day administrative tasks.

Support for Firepower will be coming later this year, but the work has already begun. We’ll be announcing more about our plans for automation in the next week, and if you’re attending Cisco Live this month, stop by booth #2557 for a demo of the firewall change automation technology.

The post FireMon Gets Cisco Firepower APIs appeared first on FireMon.

Tired of fighting fires instead of doing real security work?


You aren’t alone! FireMon conducted a survey at Infosecurity Europe earlier this month, and I wish I could say the results were surprising. As a group, cyber security professionals are overworked, underutilized, and required to satisfy business and regulatory demands that are often in direct conflict. Among the security professionals we surveyed, a full 51% stated that they spent most of their days firefighting instead of doing “meaningful security work.” This is a frustrating and dangerous state of affairs that I believe is caused, at least in part, by increasing network complexity.

Organizations are putting solution after solution in place to try to find the missing piece to their security puzzle when, in fact, more technology probably isn’t the answer. This constant acquisition of solutions—and the increasing network complexity that goes along with it—is driven at least in part by regulatory compliance. Security professionals are simultaneously being asked to chase compliance—56% admitted they had added a product purely to meet compliance regulations, even though they knew it offered no other business benefit—while also compromising security posture in order to meet business demands.

In fact, 52% of the IT security pros surveyed admitted to adding access that they knew had decreased their organization’s security posture. And what about those outside regulations? 28% admitted to cheating on an audit just to pass, a figure that has gone up 6% from five years ago when the same question was posed. Something is broken here. More solutions aren’t needed—better management is.

If you’re looking to stop fighting fires and reclaim control of your network, consider the following four tips:

  • Get Visibility – You can’t manage what you don’t know is there. Having detailed visibility into firewall rules and policy effectiveness allows you to clean up outdated or redundant rules and close security gaps, lowering overall firewall complexity and level of risk.
  • Get Intelligence – By combining knowledge of the vulnerabilities in the networked environment on well-known threat entry points with real-time monitoring and vulnerability mapping, you gain the situational awareness needed to identify and remediate problematic issues before they evolve.
  • Integrate – The value of exchanging information between disparate systems cannot be overstated. The ability to share security information in real time, without restricting it to a single application, system or device, empowers you to make better decisions.
  • Automate – Change workflow automation can help your team assess the impact of any new access being provided and restrict or vet it against the corporate security policy to ensure it does not break compliance or introduce unacceptable risk.

What are your thoughts on the mounting pressures and competing objectives placed on IT security staff from inside as well as outside organizations? Tweet us @firemon and let us know if you are #overworkedinIT!

The post Tired of fighting fires instead of doing real security work? appeared first on FireMon.


Tagging, It’s Not Just for Twitter Anymore


So much of the current industry buzz is about the promise of machine learning and artificial intelligence to finally win the security battle. From what I’ve seen, the near term promise of machine learning is to scale the human layer of data analysis for security. Humans are ultimately much better at free association than any rule or algorithm. Applying machine learning and other analysis techniques to human commentary about the data can be a real win for security teams combating a more sophisticated adversary operating in a more complex infrastructure.

I believe security teams need an integrated “social” framework to capture the human assessment – a framework that enables operators to inject context directly into, and collaborate on, the machine and human data used for threat detection and incident response. In social media (Facebook, Twitter, etc.), users can contribute personal commentary and associations by simply adding a tag or note. However, our security teams and systems are in the dark ages with regard to flexibly adding commentary and institutional knowledge directly to the data.

Today’s systems are built for the known, repeatable and structured. They lack the “social framework” necessary for teams to productively engage the unknown and unstructured. For these systems, the primary information-sharing medium for unusual and suspicious activity is messaging applications like email, which disconnect assessment and commentary from the data. In fact, one CISO recently shared that she is frustrated that institutional knowledge and context for anomalous activity are difficult to leverage across the organization because they are contained in many team members’ PST files.

If security operators and analysts are able to add their personal observations directly to the data, where others can leverage them, it creates the more collaborative problem-solving environment required to counter more elusive security threats. When analysts and operators tag interesting records or data points, the system can capture that context and leverage analytics to improve the results it returns.

For tagging to be useful, it must be simple and flexible. The user shouldn’t have to operate in some rigid structure, typical of legacy systems, that limits flexibility and reduces its value. In practice, adding tags and notes needs to be as easy as it is on Facebook and Twitter – a fluid part of the user’s normal workflow to:

  • Add commentary and assessment
  • Create custom data segmentation
  • Classify unstructured data
  • Correlate structured and unstructured data

The associative and accumulative value of tags increases with use. For example, a user searches on a tag or comment, creating a correlated, working dataset defined by the tag. This enables teams to see a correlated view (across structured and unstructured data) of non-obvious threats and operational issues, such as:

  • Common addresses, users, etc.
  • Unusual locations
  • Common and uncommon events
  • Comparisons across arbitrary time periods
  • Reputations and cohorts

Tagging is an important part of Immediate Insight’s new natural-language, associative and accumulative approach to security analytics. The system is not built to structure and store the data as fields with direct details, as traditional data systems have done; instead, it’s built to track entities and their associations and to store information about their reputation (what they’ve done in the past and what other things they are associated with). This associative and accumulative approach enables the system to learn as it sees new data and enables queries that return much better insight.

Join author Jeff Barker for a further look at how you can use data tagging to enable collaboration and improve threat detection and response in our webinar Thursday, July 12 at 1:00 p.m. CDT.

The post Tagging, It’s Not Just for Twitter Anymore appeared first on FireMon.

Messy firewall rules get security professionals “grounded for life”


Hands up, whose firewall rules are a mess? Yes? Well, the good news (if it can be considered good news) is that you’re not alone, because 65% of your peers are in the same boat according to a survey carried out last month at Infosecurity Europe. In fact, 65% of the 300 security professionals surveyed said if their firewall rules were a teenager’s bedroom, their mom would be so angry she would ground them; and half of those said they would be grounded for life!

The same study also showed that 32% admitted they had inherited over half of the rules they manage from a predecessor – no wonder they are a mess! And a quarter of security professionals confessed to being afraid to turn off legacy rules. To add to the complexity, 72% of security professionals surveyed use two or more firewall vendors within their IT environments, multiplying the rule sets they must manage.

Firewall rule management can be one of those thankless tasks that is a thorn in the side of many IT managers. It takes up too much time, sort of like untangling those miscellaneous wires that have been piling up in your junk drawer.

Though organizations in general, especially IT teams, are expected to do more with fewer resources, good security management and automation can close gaps in resources while helping streamline processes and simplify tasks such as firewall rule management.

If, like the majority of IT security professionals, you’re in danger of being grounded over your messy firewall rules, here are some tips from my colleague Tim Woods on how to start tidying up your firewall policies:

Step 1: Remove technical mistakes – A primary example of a technical mistake (ineffective, incorrect and not needed) is the hidden rule, which includes redundant and shadowed rules that serve no legitimate business purpose.

Step 2: Remove unused access – Unused access rules bloat a firewall policy, causing confusion and mistakes. To determine rule usage, you need to analyze and correlate the active policy against the network traffic pattern; doing this over a sustained period will show definitively which rules are used versus unused, helping with cleanup.

Step 3: Review, refine and organize access – You need to determine whether rules are justified against a defined business requirement and weigh the need for each rule against the risk it introduces. Start with rules that employ “ANY”, as these are potentially the riskiest.
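As a quick illustration of hunting for “ANY” rules, here is a hedged sketch that greps an exported rule list. The CSV layout and rule names are made-up examples, not a FireMon export format:

```shell
# Hypothetical rule export (name,source,destination,service) - example data only.
cat > /tmp/rules.csv <<'EOF'
allow-web,10.0.0.0/24,ANY,tcp/443
legacy-all,ANY,ANY,ANY
dns-out,10.0.0.0/24,8.8.8.8,udp/53
EOF

# Flag rules containing ANY so they can be reviewed first (-i ignores case,
# -n prints the rule's line number for follow-up):
grep -in 'any' /tmp/rules.csv
```

Rules matching in every column (like the hypothetical legacy-all above) are the ones to scrutinize first.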

Step 4: Continual policy monitoring – Don’t forget that maintaining an effective, efficient and correct firewall policy is an ongoing process. Make sure you have real-time change event monitoring and alerting and real-time audit reporting to know when a violation of your security policy has occurred.

The post Messy firewall rules get security professionals “grounded for life” appeared first on FireMon.

Advanced Commands – Immediate Insight


This document highlights valuable installation and setup-related commands and other command-line interface (CLI) commands for advanced users.

Installation Script – “install”

When you first install Immediate Insight, you initiate the “install” command. This command is a collection of the scripts outlined below (along with verification and other sanity-type checks).

  1. “install-tools storage”
    • Formats the data volume and mounts it at “/data”
  2. “install-tools network”
    • Establishes network connectivity (IP, netmask, DNS, etc.)
    • Enter “dhcp” under the IP address prompt if you would like to use DHCP (not recommended for production environments).
  3. “set-timezone”
    • Sets the timezone for the machine.
  4. “set-ntp”
    • Sets the NTP servers used by the machine.
  5. “install-tools application”
    • Gets the latest updates for the Immediate Insight application.
  6. “set-ssd on”
    • If you are using SSD storage, set this to “yes” (“on”) to apply additional tuning parameters optimized for SSD storage.
  7. “install-tools database”
    • Initializes the database. This uses the “reset-datastore” command to create a fresh, new datastore (erases all previous data).
  8. “install-tools joinCluster <IP address>” (if you answer “yes” to joining a cluster)
    • Joins the Immediate Insight server to an existing cluster – or to another server to create a new cluster.

Other Commands

  • “install-tools verify” or “install verify” – runs the “install” script in a more verbose fashion – outlining everything the script is doing on-screen (instead of just logging to ~/logs/update.log). This is helpful should you be troubleshooting install issues.
  • “install-tools check” or “install check” – tells you the current state of the system (version running, configuration details, storage status, memory utilization, network details, disk volume outline, and agent details).
  • “status” – a more compressed “install-tools check” command outlining version info, the IP address, what Immediate Insight components are running, data storage utilization, cluster status (if applicable), and memory utilization.
  • “install-tools deleteServer <IP address>” – remove a system from the cluster. Be sure to do this from the system that you are keeping in the cluster.
  • “diags-search indices” – check the status of the database indexes – red is bad, yellow is OK depending on the situation (may be the cluster syncing).
  • “reset-settings -x” – factory reset.
  • “reset-settings -d” – delete the database and start fresh.
  • “reset-settings -?” – other reset options.
  • “update only” – load the latest version of application code for Immediate Insight but do not activate it.
  • “update search” – update the search engine to the latest version.
  • “revert” – roll back to the previous version of the Immediate Insight application after an update. Previous versions are stored in ~/versions.
  • “stop-all” – stop both the database and the Immediate Insight application components.
  • “reload server” – bounces the Immediate Insight application without restarting the database. The database is slow to restart (since it has to verify data integrity and build caches) versus the Immediate Insight application – so this can save time if you just need to restart Immediate Insight.
  • “sudo reboot” – reboot the server (“stop-all” should be issued prior to a shut down or restart of the system)
  • “sudo apt-get update && sudo apt-get upgrade” – update Linux with the latest OS security patches.
  • “ii” – command-line search. It returns some results – so you can verify that everything is running without having to use the browser-based console.

More Helpful Commands

  • “?” – show a list of all CLI commands.
  • “set-ssl on” – enable SSL for the browser-based console (https). Certs are stored in the ~/app/config directory.
  • “set-certs” – install newly added certs or generate a self-signed certificate.
  • “set-ip” – change the IP address (/etc/network/interfaces).
  • “set-api” – activate the API and set keys.
  • “set-agent” – configure the agent (used for the Drag & Drop and Remotes features)
  • “set-updateproxy” – enable/disable HTTPS proxy server for the “update” command to use.
  • “firewall show” – show the current local port controls.
  • “firewall clear” – disable the local firewall (not recommended for production use, but helpful when troubleshooting connectivity issues).

Please contact iisupport@firemon.com for any other inquiries.

The post Advanced Commands – Immediate Insight appeared first on FireMon.

Security Manager Changes and Firewall Log Collection – Immediate Insight


The purpose of this document is to walk the user through the integration for collecting Security Manager firewall change events and logs into Immediate Insight (note: Security Manager activity, such as configuration collection via SSH, will also be collected). Now all of the log feeds that you send to Security Manager can be easily directed from Security Manager to Immediate Insight – saving you from having to set up additional or duplicate feeds.

 

Immediate_insight_architecture

Additional Hardware Considerations

The firewall log collection adds considerable overhead to data collection on Immediate Insight. To compensate, it is recommended that you add 1GB of RAM and 500MB of storage to your existing Immediate Insight configuration. For Security Manager firewall log collection at 2,000 EPS or higher, it is also recommended that you add an additional vCPU to your existing vCPU configuration.

A Note About Future Security Manager Software Updates or Upgrades

The following files on Security Manager are modified as part of this change and do not persist across upgrades of Security Manager:

/etc/firemon/dc.conf
/etc/firemon/LogAnalyzerRegexInfo.xml

Please be aware that if you update or upgrade Security Manager (e.g. from 8.6.0 to 8.6.2), you will need to re-apply the changes to those files described in this document.

Part A – Backup Configuration / Data

It is important to backup your configuration and data ahead of making changes to your system(s). If you are running your Security Manager and/or Immediate Insight systems in VMware – we recommend taking a snapshot prior to making changes. In addition:

  • Update your Immediate Insight to the latest version and create a backup:
    • [Create snapshot in VMware – if applicable]
    • Log in to the Immediate Insight CLI and type the following (this updates Immediate Insight and creates a configuration backup):
      • update
  • Create a backup of your Security Manager configuration and database:
    • [Create snapshot in VMware – if applicable]
    • Log in to the Security Manager CLI and type the following:
      • fmos-backup

Part B – Integration Pre-Requisites

Note: this integration requires Immediate Insight version app-2016-04-07 (or newer) – from the “2016” build, which can be obtained by typing ‘update’ from the Immediate Insight CLI. Security Manager 8.5 or newer is required.

Immediate Insight and Security Manager must be configured with correct certificates so that they can securely exchange data. Here are the required steps:

  • Log in to the Immediate Insight CLI and type the following:
    • create-fmos-certs
    • From Immediate Insight, open an sftp connection to Security Manager, then copy these files over to the home directory of the account that you’re using to connect to Security Manager:
      /home/insight/app/config/certs/localhost.crt
      /home/insight/app/config/certs/localhost.key
      /home/insight/app/config/certs/rootCA.pem
  • Log in to the Security Manager CLI and type the following:
    • mkdir cert_backup
    • sudo cp /etc/pki/tls/certs/localhost.crt cert_backup/
    • sudo cp /etc/pki/tls/private/localhost.key cert_backup/
    • [make sure you’re in the directory where you copied the three certs from Immediate Insight]
    • sudo cp localhost.crt /etc/pki/tls/certs/
    • sudo cp localhost.key /etc/pki/tls/private/
    • sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/
    • sudo update-ca-trust
    • sudo vi /etc/firemon/dc.conf
    • Copy the following to the end of the dc.conf file, making sure to edit the items between < > to reflect your environment (the exact placeholder names below are illustrative):
      --DataCollector.EnableLogAnalyzer true
      --DataCollector.LogAnalyzer.AppServer.IP <Immediate Insight IP>:61614
      --DataCollector.LogAnalyzer.UseTLSConnections false
      --DataCollector.LogAnalyzer.UserName <username>
      --DataCollector.LogAnalyzer.Password <password>
      [if you want to use TLS – change 61614 to 61615 and change false to true]
    • Save the changes to dc.conf
    • sudo vi /etc/firemon/LogAnalyzerRegexInfo.xml
    • [right before the closing root tag at the end of the file, copy the following:]
      <!-- Immediate Insight -->
      <LogAnalyzerRegex>
      <Regex>.*</Regex>
      </LogAnalyzerRegex>
    • Save the changes to LogAnalyzerRegexInfo.xml
    • [reboot Security Manager]
    • sudo reboot
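For the certificate-copy step in Part B above, the transfer can also be done with a single scp command instead of an interactive sftp session. A hedged sketch; the account name and IP address are placeholders for your environment, not values from this document:

```shell
# Sketch only: push the three certs from the Immediate Insight server to the
# home directory of the Security Manager account (replace admin/192.0.2.10
# with your own login and Security Manager IP).
scp /home/insight/app/config/certs/localhost.crt \
    /home/insight/app/config/certs/localhost.key \
    /home/insight/app/config/certs/rootCA.pem \
    admin@192.0.2.10:~/
```

Either way, the files must end up in the home directory of the account you use on Security Manager before the `sudo cp` steps are run.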

Part C – Live Collection of Security Manager Logs and Events into Immediate Insight

To collect logs and change events from Security Manager, you will need to install an additional component in Immediate Insight for logs and configure a Security Manager Collector in Immediate Insight for both the logs and events.

To enable the collection of firewall logs being sent to Security Manager, log in to the Immediate Insight CLI and run the following command:

  • add-tool logCollection
    Immediate-insight-add-tool-logcollection

To set up a data collector for the change events and logs, you will need valid admin-level credentials and connectivity to Security Manager. Log in to the Immediate Insight GUI, go to DataFlow -> Collectors and click the + button on the Collectors form, then select ‘Security Manager Collector’.

  • Specify the IP Address of Security Manager
  • Provide an admin-level username and password for the Security Manager web interface GUI (not CLI)
  • Set the Data TTL for how long you want to keep the data (e.g. 30d)
  • Optionally you can assign the data to something other than the default ‘main-stream’ repository, making it easier to search Security Manager data separately from other data coming into the system.
    • In this case you must first create the repository at the CLI (e.g. ‘new-repo sm’), which will then provide sm-stream as an available selection in the GUI.
  • Give the collector a suitable name
  • The other values can normally be left default.
  • Click the ‘Add’ button when ready
    Immediate-insight-add-collector-type

Part D – Confirm Security Manager Firewall Logs and Change Events are coming into the system

Go to the DataFlow -> Collectors screen

Scroll down to your new Security Manager Collector, and check that the Status says ‘Running’ (it can take a minute or so to initially change from ‘Connecting’ to ‘Running’) – if it does not say ‘Running’, confirm your credentials, certificates, and network connectivity before proceeding.

If Status is ‘Running’, you will see Security Manager Change Events show up, as indicated by the ‘Records’ counter.

II-security-manager-collector

Note: depending on how Security Manager Configuration Change Monitoring is set up, it may take anywhere from minutes to hours to see data. Below is an example of configuring hourly change monitoring in Security Manager for a particular device managed by Security Manager (please see the Security Manager documentation for full details on how to enable and configure the ‘Change Monitoring’ feature).

II-change-monitoring

Part E – Search some sample Security Manager Change Events in Immediate Insight

Go back to the Immediate Insight Search tab, then do one of the following to search for Security Manager Change event records.

  • Specify the separate repository if you configured one in Part C (e.g. sm-stream)
    • if not, type “id action” in the search string
  • Then click Search
  • The event Search Results will look similar to below; you can then go to ‘Full Text’, ‘Metadata’, ‘Microscope’, ‘Associations’, ‘Event Clusters’, etc. (just as you would for other data collected by Immediate Insight) – see the sample screen shots below

II-search

II-search2

II-microscope

II-microscope2

Searches can also be combined with other non-Security Manager data sources, allowing for context and correlation of information from across your security-related ecosystem. (For further details on searching with Immediate Insight, please consult the User Guide.)

The post Security Manager Changes and Firewall Log Collection – Immediate Insight appeared first on FireMon.

Streaming Copy over WAN – Immediate Insight


This document outlines how you can securely forward/copy data from one Immediate Insight server to another over the WAN (or Internet). As an example, this can be used to send data from a remote site to a central Immediate Insight server in environments such as an MSSP.

Variables

These are the variables (in bold) that you will need to set based on your environment. For more details, refer to the Appendix of this document under “Variable Definition”.

localUDPtunnelPort – port on the source server that will receive the output from the streamUDP data route.

localTCPtunnelPort – port on the source server that is the endpoint of the SSH tunnel.

remoteTCPlistenerPort – port on the destination server that is the endpoint of the SSH tunnel. Set this to the TCP Socket Listener (default TCP port 3000).

remoteIP – IP address of the destination server.

Source (Local) Immediate Insight Server Configuration

Use the CLI to perform the following commands on the source (local) server to bring up the SSH tunnel. The final “echo” command should be executed within 5 minutes of the “ssh -f -C -L…” command to ensure that the tunnel stays up.

add-tool socat

ssh -f -C -L localTCPtunnelPort:localhost:remoteTCPlistenerPort insight@remoteIP sleep 300

socat UDP-LISTEN:localUDPtunnelPort,fork TCP:localhost:localTCPtunnelPort &

echo "tunnel up" | socat - UDP:localhost:localUDPtunnelPort
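To make the variable substitution concrete, here is the same sequence with hypothetical values filled in. The ports and IP address below are examples only (the listener port 3000 is the document's default; the rest are placeholders), not recommendations:

```shell
# Example values (hypothetical):
#   localUDPtunnelPort  = 5140
#   localTCPtunnelPort  = 5141
#   remoteTCPlistenerPort = 3000  (default TCP Socket Listener)
#   remoteIP = 203.0.113.10
add-tool socat
ssh -f -C -L 5141:localhost:3000 insight@203.0.113.10 sleep 300
socat UDP-LISTEN:5140,fork TCP:localhost:5141 &
echo "tunnel up" | socat - UDP:localhost:5140
```

This sketch assumes the Immediate Insight CLI and a reachable destination server, so it is shown for illustration rather than as a runnable transcript.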

Use the Immediate Insight web GUI to configure the streamUDP data route on the source (local) server. In this example, events on the TCP Socket Listener are being copied/forwarded.

Go to DataFlow > Data Routing > Data Routing & Filtering and click the “+” on the top right.

II-edit-data-route

Give the Data Route a name (here we are using “secure copy”) and match the other settings above:

  • Stage – post
  • Expires – (no configuration)
  • Match – “TCP Socket Listener” (this matches the data collector name on the remote server – edit to match your environment).
  • Match Field – source
  • Action – streamUDP

Once the above configuration is complete, click “Edit Settings”.

II-edit-UDP-stream-action

Leave the “Destination IP” value as 127.0.0.1 (representing localhost). Edit the “Destination port” to match your localUDPtunnelPort variable.

The configuration on the source (local) Immediate Insight server is complete.

Destination (Remote) Immediate Insight Server Configuration

Make sure that TCP port 22 (SSH) is open to the source (local) Immediate Insight server.

Best practice is to create a firewall policy that restricts access to SSH to the IP address of the source (local) Immediate Insight server and to send the SSH traffic through an encrypted VPN tunnel.

Immediate Insight should never be exposed directly to the Internet using the default settings.

Please contact iisupport@firemon.com for any other inquiries.

Appendix

Variable Definition

  • localTCPtunnelPort
    • System: Source (Local)
    • Configuration: CLI
    • Definition: Arbitrary (unused) local TCP port used to establish the SSH tunnel to the destination (remote).
  • localUDPtunnelPort
    • System: Source (Local)
    • Configuration: DataFlow > Data Routing > Data Routing & Filtering. Edit the “secure copy” Data Route (as defined in this document – replace with your own name as necessary).
    • Definition: The source (local) UDP port used for tunneling data out to the destination (remote). Although it is sent via UDP, it is received via TCP by the Data Router in the background.
  • remoteTCPlistenerPort
    • System: Destination (Remote)
    • Configuration: DataFlow > Collectors > Create New Collector (or use the default “TCP Socket Listener” as defined in this document). If you do not use the default, be sure you alter the “match” variable in your Data Route and make sure the port matches.
    • Definition: The port that the destination (remote) will listen on for data. A collector must exist on the destination (remote) to receive the data from the source (local).
  • remoteIP
    • System: Destination (Remote)
    • Configuration: N/A
    • Definition: IP address for the destination (remote) Immediate Insight server.

The post Streaming Copy over WAN – Immediate Insight appeared first on FireMon.
