Forcepoint User and Entity Behavior Analytics
March 23rd, 2018 by Luigi Danakos

Today I want to focus on the study of User Behavior Analytics and how companies like Forcepoint are developing solutions to help mitigate cybersecurity threats from inside your company.

While attending Tech Field Day 16* out in Austin, User Behavior Analytics took center stage during one of the presentations by Forcepoint.

I have always had a love for analytics, even more so for how you can determine actions based on trends from users. This is quite prevalent in the gaming industry and helps game developers fine-tune their products. So needless to say, hearing how you can use it to defend against cybersecurity threats was quite interesting to me.


What is User Behavior Analytics?

User Behavior Analytics is the collection of human behavior data to help identify anomalies in user activity and combat cybersecurity threats. Companies like Forcepoint then develop algorithms and statistical models to help businesses detect potential threats from within the company.

The key takeaway is that using this technology is about tracking the users’ actions and not the actions of the system.


Why does my company want to monitor me?

It wants to monitor everyone, not just you. Understand that the sooner cybersecurity threats can be detected, the less impact they have on the business.

According to Forcepoint, 69% of enterprise security executives reported an attempted theft or corruption of their data.

Let’s be clear that not all data theft or corruption is intentional. A user sends an email to the wrong person or deletes a folder without realizing what they did. Take another example: you are surfing the internet, accidentally click on a cute kitten video, and unknowingly infect your computer with malware.

There are many cases of former employees seeking revenge because they are unhappy with their previous employer. Or a salesperson accesses and downloads the client database right before quitting to join a competitor. That person’s intentions are deliberate.


How do they do it?

One way for Forcepoint and their customers to take advantage of this technology is through their User & Entity Behavior Analytics solution. This tool allows them to bring in data from a variety of sources to understand who employees are and what they are doing.

Understanding who your users are and what they normally do helps companies detect when something out of the ordinary happens.

If Bob never goes into the office late at night and suddenly starts accessing company files after 11 pm, you can identify a potential threat. Or perhaps Bob got a new position and is working different hours, or was assigned a project and is just trying to meet deadlines. Bob’s manager could go to him and say, “We noticed you started logging in and accessing sensitive data late at night.” If Bob replies that he is normally in bed by 9 pm, the company would know something was wrong.
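As a toy illustration of that baseline-then-flag idea (the data, threshold, and function name here are mine, not Forcepoint's):

```python
from statistics import mean, stdev

def flag_anomalous_logins(history_hours, new_hours, z_threshold=3.0):
    """Flag login hours that deviate sharply from a user's baseline.

    history_hours: past login hours (0-23) that define 'normal' for this user.
    new_hours: recent login hours to score against that baseline.
    """
    mu = mean(history_hours)
    sigma = stdev(history_hours) or 1.0  # guard against a zero-variance baseline
    return [h for h in new_hours if abs(h - mu) / sigma > z_threshold]

# Bob normally logs in between 8 and 10 am...
baseline = [8, 9, 9, 10, 8, 9, 9, 8, 10, 9]
# ...so an 11 pm login stands out while a 9 am login does not.
print(flag_anomalous_logins(baseline, [9, 23]))  # → [23]
```

A real product builds far richer baselines (data sources, peer groups, file sensitivity), but the core step is the same: model normal, then score deviations.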

Watch the Forcepoint presentation on User and Entity Behavior Analytics


*Please note that Forcepoint was a sponsor of an event (Tech Field Day 16) that paid for my travel accommodations to participate in the event.

Posted in Tech

April 23rd, 2015 by JFrappier

Jonathan Frappier Virtxpert

A couple of weeks ago a question was posted on the Ansible LinkedIn group stemming from an Ansible role for securing CentOS. The question: is automation the only way to ensure security? My brief, social-media-shortened response was

Completely agree. If you aren’t automating then you can’t really claim to be secure

This caused some fuss on the post, with most disagreeing with me. I stand by my answer: you cannot be secure if you are not automating. But to extend that answer, you are not necessarily secure just because you are automating. Security is not binary; it is not something you simply turn on. Security consists of many layers, not the least of which is truly understanding your company’s business, goals, requirements, processes, and people. With that understanding, you can then apply any specific security measures you need to abide by. For example, if you accept credit cards, appropriate safeguards need to be taken to ensure data is encrypted and certain elements, such as the validation number, are not stored.

Now, if you are not adhering to those requirements, there is no automation process in the world that can secure you. However, even with the most specific of run books, and with security teams, engineers, and auditors ensuring you have done everything technically possible, you cannot truly say you are secure without a means to automate the installation and configuration as you have defined them.

Another argument in the group discussion was that automation can also lead to widespread vulnerabilities by opening security holes. While this is true, my previous statement still holds: you need to have the proper security processes and details in place before you automate them. Now say, for example, something like Heartbleed comes along again. How long would it take you to patch even 10 systems by hand? What about 100, or 1,000? Much longer than it would take to leverage something to patch the systems automatically.
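To make the scale argument concrete, here is a minimal sketch; the `patch` function is a hypothetical stand-in for whatever your tooling actually runs (an Ansible playbook, a package update, etc.):

```python
from concurrent.futures import ThreadPoolExecutor

def patch(host):
    """Stand-in for the real work: connect, apply the fix, verify."""
    return f"{host}: patched"

# 1,000 hosts; by hand this is days of work and typos.
hosts = [f"web{n:03d}" for n in range(1, 1001)]

# Automated, the wall-clock time is bounded by the worker pool,
# not by how fast a human can type.
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(patch, hosts))

print(len(results))  # → 1000
```

The point is not this particular code; it is that a vetted, repeatable process applied by a machine scales, while a checklist applied by hand does not.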

Automation, configuration management, DevOps; none of these things is a panacea. However, security teams should be relying on automation, not manual efforts, to configure and secure systems.

Security needs automation, but automation does not mean you are secure


November 1st, 2014 by JFrappier


In what should be a multi-part series (unless work gets insane) I will be setting up the supporting infrastructure for my home lab.  For this lab I will be using the 8-core home lab build I wrote about in the past.  I am currently running Windows 8.1 with VMware Workstation 10.  I have two volumes set up in Windows that will be dedicated to VMs. One is a single 120GB Neutron SSD that I will use for some of the “heavier” VMs, such as the SQL server and the vRealize IaaS server.  The other is a ~1.3TB RAID0 dynamic volume built in Windows on 3x 500GB Seagate hybrid drives, which will be used for common VMs such as the domain controller I am setting up here.

I will be starting with all the VMs using NAT in VMware Workstation.  First I am setting up a Windows VM that we will use throughout the lab build.  Why am I doing this first?  Mostly, to be totally honest, because of how long patches are going to take.  You could just as easily start with your virtual ESXi boxes (which should be the next post), but alas, here you are reading this.

First, create a new virtual machine in VMware Workstation


VMware Workstation – Create a New Virtual Machine

  • On the New Virtual Machine wizard page select Custom (I prefer control over which settings I choose) and click Next
  • Select Workstation 10.0 and click Next
  • Select I will install the operating system later radio button (old habit I’m hanging onto from old Workstation and Ubuntu days) and click Next
  • Select Microsoft Windows and select the version from the pull down menu. I am using Windows Server 2012; click Next.
  • If you have set your drives up like me, click the Browse button and select your preferred Windows volume; in my case I have selected the “V” drive where my RAID0 volume is. I also have created a folder on this drive called VMs, because OCD.
  • Name your virtual machine and paste that name, with a leading backslash, into the location field after V:\VMs so the VM is created in its own folder, then click the Next button.  In my setup I am actually using vxprt-win-tmp01:
VMware Workstation VM destination folder and virtual machine name


  • I am staying with a single processor, single core – after all we don’t have unlimited resources in this home lab, click the Next button
  • I’m also sticking to 2GB of RAM (Next), NAT (Next), LSI Logic SAS (Next), SCSI (Next), and creating a new virtual disk (Next)
  • On the Specify Disk Capacity page, I typically choose to store my virtual disks as a single file; this is up to you, but I don’t like having a bunch of files in my VM folder. Also leave Allocate all disk space now unchecked to thin provision your disk, and click Next
  • Optionally you can rename your disk file, this again I prefer to have the same as my VM name, click Next and Finish. Your VM will be created, albeit with no OS yet.
  • Right click on your new VM and select settings
  • Click on CD/DVD and select the appropriate option to install Windows; in my case I have a downloaded ISO, so I have selected the Use ISO image file radio button and selected the desired ISO image.  Click OK to close the settings window.
  • Right click on your VM, go to Power and click on Start Up Guest.
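The wizard steps above ultimately boil down to a handful of entries in the VM's .vmx file. As a rough sketch, you could generate the equivalent fragment yourself; the key names below match common Workstation .vmx entries, but the values are just my lab choices:

```python
def vmx_fragment(name, mem_mb=2048, cpus=1, iso_path=None):
    """Build a minimal .vmx-style settings fragment for a Workstation VM."""
    settings = {
        "displayName": name,
        "guestOS": "windows8srv-64",       # Windows Server 2012
        "memsize": str(mem_mb),            # 2GB of RAM
        "numvcpus": str(cpus),             # single processor, single core
        "ethernet0.connectionType": "nat", # NAT networking
        "scsi0.virtualDev": "lsisas1068",  # LSI Logic SAS controller
    }
    if iso_path:  # attach the Windows install ISO
        settings["ide1:0.fileName"] = iso_path
    return "\n".join(f'{k} = "{v}"' for k, v in settings.items())

print(vmx_fragment("vxprt-win-tmp01", iso_path="V:\\ISO\\win2012.iso"))
```

This is only to show what the wizard is doing for you; in practice, letting Workstation write the file is far less error-prone.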

From here on out you’ve got a standard Windows install wizard to follow.  Once Windows is installed and you set your password, install VMware Tools by right clicking on the VM and selecting Install VMware Tools – follow that wizard, reboot and patch your Windows VM.  Next up a quick post on cloning VMs in Workstation so we can get to the fun part.

VMware Workstation Home Lab Setup Part 1 – Windows VM


June 24th, 2014 by JFrappier


I’d like to state up front I have no inside knowledge on CodeSpaces, these opinions were formed based on the information they posted at

I don’t know any of the people at CodeSpaces; however, what I do know of them drives home the point that developers are not operations people (operations people are also not developers, just to be fair), and the mindset that a developer can and should be in charge of operations decisions is wrong (no offense; go do what you are good at).

I’ve seen many companies that think that because developers are technical, they can do any technical job. This is simply not true. Developers are good at writing code; systems administrators and engineers are good at operations. Maybe it is that clash of opposing forces that has led businesses to listen to the people writing the software instead of the people trying to keep the lights on (IT is a utility, right? Wrong!).  To again be fair, there are quite a few admins/engineers who still to this day do not realize they are service providers, there to help the business run efficiently.

The first item that jumped out at me was the fact that their backup systems and production systems were stored, essentially, together. Any sysadmin worth their salt knows you need to keep your backups offsite. Well, you say, their backups were offsite, at Amazon! True, but they were not offsite from their production systems; a massive failure at Amazon would impact their ability to operate normally as well as their ability to recover. So in this case the backups might as well have been sitting on top of the server they were running on.

Second, a “cloud” provider, in this case a SaaS-type SVN/code hosting service, should have operated on multiple IaaS or PaaS providers, not just Amazon. At the very least a disaster recovery site should have been set up on some other service. Had they set it up in just another availability zone, they would have been just as easily and critically impacted.

CodeSpaces’ fate also shows the need for multifactor authentication. While I would consider their failure to place their backups with a separate provider unfortunate and a result of poor design, not having multifactor authentication in place was downright lazy. Amazon offers a virtual “fob” which generates random codes as the second-level authentication for…wait for it…FREE.  Thankfully they were not storing private keys at Amazon, so customer data was presumably not accessed, just completely and totally lost.

So, what can you learn from CodeSpaces?

– Offsite to you does not mean offsite to your data. If you are using a 100% cloud service for running your business, you need a SEPARATE vendor for backup and DR.
– Use other providers for general high availability, running your application across multiple providers, not just multiple availability zones with the same provider.
– Use two-factor authentication everywhere possible, at the very least wherever customer data and production systems are stored.
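That first rule is simple enough to encode as a sanity check in a design review. A minimal sketch (the policy shape and provider names are hypothetical, not any real tool's API):

```python
def backup_is_offsite(production_provider, backup_provider):
    """Offsite means a SEPARATE vendor, not just a separate region or AZ."""
    return backup_provider.strip().lower() != production_provider.strip().lower()

# CodeSpaces' setup: production and backups both lived at Amazon.
print(backup_is_offsite("AWS", "aws"))        # → False
# What the post argues for: a second, independent vendor.
print(backup_is_offsite("AWS", "Backblaze"))  # → True
```

Trivial as it looks, this is the check their architecture failed: one compromised account at one vendor took out both production and recovery.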

While it’s sad to see all the hard work that went into building CodeSpaces lost, as well as all the hard work of their customers, let this be a lesson to current and future startups as well as operations teams. It may also be a handy article to give to the CFO if your request for offsite backup or multifactor auth budget is denied!

What to learn from CodeSpaces and how it could have been avoided


June 11th, 2014 by JFrappier


Containers have been a hot topic of late; many are suggesting they are the next step in the virtualization evolution and spell doom for vSphere, Hyper-V, and other “traditional” virtualization platforms.  For those who are not aware of containers, a container is a packaged application that can run isolated from other applications.  It is not too dissimilar from ThinApp in a virtual desktop environment, but focused on server applications such as Apache, Tomcat, or other custom applications on top of a single operating system.

Traditional OS level virtualization generally looks something like this:


Some, such as Linux Journal, suggest it is the future and that traditional OS-level virtualization and the hypervisor are not needed.  With containers, you could remove the hypervisor layer, run your OS directly on bare metal, and run the “virtualized” applications, helping to save resources (including budget), which would look something like this:


You can read more about Linux Journal’s take on Containers here:  Containers—Not Virtual Machines—Are the Future Cloud | Linux Journal

There seems to be a small group who think, as it generally is with most technology, that traditional OS/hypervisor virtualization and containers are complementary.  Scott Lowe has a blog post here walking through the initial setup along with some thoughts on LXC:  A Brief Introduction to Linux Containers with LXC – – The weblog of an IT pro specializing in virtual…

My question is, why does it have to be Containers versus Virtualization?  Why can’t it be a marriage of the two technologies to offer the best of each, or something like this:


With this model, you can still isolate and manage resources at the OS level, which has been proven over and over again, and drop containers on top of the virtualized OS.  Not only can I still leverage hypervisor features such as high availability, but I can reduce the number of VMs needed by adding more containers to a single VM.  Scale up resources when needed for adding containers, or scale out resources when needed for redundancy or additional throughput.  What are your thoughts?
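Back-of-the-envelope math shows why packing containers into VMs appeals to me. All the numbers below are illustrative, not benchmarks:

```python
def instances_per_host(host_ram_gb, os_overhead_gb, app_ram_gb, containers_per_vm=1):
    """How many app instances fit on one host?

    containers_per_vm=1 models one app per VM; higher values model packing
    several containers into each guest OS, amortizing its overhead.
    """
    vm_ram = os_overhead_gb + containers_per_vm * app_ram_gb
    vms = int(host_ram_gb // vm_ram)
    return vms * containers_per_vm

# 128 GB host, 2 GB guest-OS overhead per VM, 1 GB per app instance:
print(instances_per_host(128, 2, 1))                       # one app per VM → 42
print(instances_per_host(128, 2, 1, containers_per_vm=6))  # six containers per VM → 96
```

More than double the instances on the same host, while every guest OS still gets hypervisor features like HA and vMotion underneath it.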

You can find out more about popular container technologies at the following sites:

Parallels: OS Virtualization Solution for Windows and Linux — Parallels Virtuozzo Containers – Parallels

Docker:  Docker – Build, Ship, and Run Any App, Anywhere


lmctfy (Google):

Containers vs Virtualization or Containers + Virtualization
