Today I want to focus on User Behavior Analytics and how companies like Forcepoint are developing solutions to help mitigate cybersecurity threats from inside your company.
While I was attending Tech Field Day 16 out in Austin, User Behavior Analytics took center stage during one of the presentations by Forcepoint.
I have always had a love for analytics, and even more so for how you can predict actions based on trends in user behavior. This is quite prevalent in the gaming industry and helps game developers fine-tune their products. So needless to say, hearing how you can use it to defend against cybersecurity threats was quite interesting to me.
User Behavior Analytics is the collection of human behavior data to identify anomalies in user activity and combat cybersecurity threats. Companies like Forcepoint develop algorithms and statistical models on top of that data to help businesses detect potential threats from within the company.
The key takeaway is that this technology tracks the actions of users, not the actions of the system. The goal is to monitor everyone, not just a single user, and the sooner a threat is detected, the less impact it has on the business.
Let’s be clear that not all data theft or corruption by users is intentional. A user sends an email to the wrong person, or deletes a folder without realizing it. Or take another example: you are surfing the internet, accidentally click on a cute kitten video, and unknowingly infect your computer with malware.
There are many cases of former employees trying to exact revenge because they are unhappy with their previous employer. Or a salesperson accesses and downloads the client database right before quitting to join a competitor. That person’s intentions are deliberate.
One way for Forcepoint and their customers to take advantage of this technology is through the User & Entity Behavior Analytics solution, which brings in data from a variety of sources to understand who employees are and what they are doing.
Understanding who your users are and what they normally do helps companies detect when something out of the ordinary happens.
If Bob never goes into the office late at night and he suddenly starts accessing company files after 11 pm, you can flag a potential threat. Or perhaps Bob took a new position and is working different hours, or was assigned a project and is just trying to meet deadlines. But if Bob’s manager mentions that they noticed him logging in and accessing sensitive data late at night, and Bob replies that he is normally in bed by 9 pm, the company knows something is wrong.
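To make the idea concrete, here is a minimal sketch of that kind of check – flagging a login hour that falls far outside a user’s historical pattern. This is not Forcepoint’s model; the baseline data, threshold, and z-score approach are all assumptions for illustration, and a real UEBA product uses far richer signals than login times.

```python
from statistics import mean, stdev

# Hypothetical history of Bob's login hours (24h clock) pulled from audit logs
baseline_hours = [8, 9, 9, 8, 10, 9, 8, 9, 10, 8, 9, 9]

def is_anomalous(login_hour, history, threshold=3.0):
    """Flag a login hour more than `threshold` standard deviations
    from the user's historical mean login hour."""
    mu = mean(history)
    sigma = stdev(history) or 1.0  # avoid divide-by-zero on a flat history
    return abs(login_hour - mu) / sigma > threshold

print(is_anomalous(23, baseline_hours))  # True  - 11 pm is far outside Bob's norm
print(is_anomalous(9, baseline_hours))   # False - a typical morning login
```

The point is simply that once you have a baseline of normal behavior, the unusual events stand out and can be surfaced for a human to investigate.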
A couple of weeks ago a question was posted in the Ansible LinkedIn group, stemming from an Ansible role for securing CentOS. The question: is automation the only way to ensure security? My brief, social-media-shortened response was:
Completely agree. If you aren’t automating then you can’t really claim to be secure
This caused some fuss on the post, with most people disagreeing with me. I stand by my answer: you cannot be secure if you are not automating. But to expand on it, you are also not necessarily secure just because you are automating. Security is not binary; it is not something you simply turn on. It consists of many layers, not the least of which is truly understanding your company’s business, goals, requirements, processes, and people. With that understanding, you can then apply whatever specific security measures you need to abide by. For example, if you accept credit cards, appropriate safeguards need to be taken to ensure data is encrypted and certain elements, such as the card validation number, are not stored.
Now, if you are not adhering to those requirements, there is no automation process in the world that can secure you. However, even with the most specific of runbooks, and with security teams, engineers, and auditors ensuring you have done everything technically possible, you cannot truly say you are secure without a means to automate the installation and configuration as you have defined them.
Another argument in the group discussion was that automation can also spread vulnerabilities widely by opening security holes. While this is true, my previous statements still hold – you need to have the proper security processes and details in place before you automate them. Now, say something like Heartbleed comes along again – how long would it take you to patch even 10 systems by hand? What about 100, or 1,000? Much longer than it would take to patch the systems automatically.
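As a rough illustration of the scale argument – and not a replacement for a proper tool like Ansible – here is a minimal Python sketch that pushes the same patch to every host in a list in parallel. The host names, yum command, and key-based SSH access are assumptions for the example.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Hypothetical inventory - in practice this comes from your CMDB or Ansible inventory
hosts = ["web01", "web02", "db01"]  # ...or a thousand more

def patch(host):
    """SSH to a host and update the vulnerable package (assumes key auth and sudo)."""
    result = subprocess.run(
        ["ssh", host, "sudo yum -y update openssl"],
        capture_output=True, text=True
    )
    return host, result.returncode

# Patch every host concurrently instead of one console session at a time
with ThreadPoolExecutor(max_workers=20) as pool:
    for host, rc in pool.map(patch, hosts):
        print(f"{host}: {'patched' if rc == 0 else 'FAILED'}")
```

Whether it is twenty lines of Python or a one-task playbook, the win is the same: the fix goes everywhere, consistently, in minutes rather than days.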
Automation, configuration management, DevOps – none of these things are a panacea. However, security teams should be relying on automation, not manual effort, to configure and secure systems.
In what should be a multi-part series (unless work gets insane), I will be setting up the supporting infrastructure for my home lab. For this lab I will be using the 8-core home lab build I wrote about in the past. I am currently running Windows 8.1 with VMware Workstation 10. I have two volumes set up in Windows dedicated to VMs: one is a single 120GB Neutron SSD that I will use for some of the “heavier” VMs such as the SQL Server and vRealize IaaS servers; the other is a ~1.3TB RAID0 dynamic volume built in Windows on 3x 500GB Seagate hybrid drives, which will be used for common VMs such as the domain controller I am setting up here.
I will be starting with all the VMs using NAT in VMware Workstation. First I am setting up a Windows VM that we will use throughout the lab build. Why am I doing this first? Mostly, to be honest, because of how long patching is going to take. You could just as easily start with your virtual ESXi boxes (which should be the next post), but alas, here you are reading this.
First, create a new virtual machine in VMware Workstation
From here on out you’ve got a standard Windows install wizard to follow. Once Windows is installed and you have set your password, install VMware Tools by right-clicking on the VM and selecting Install VMware Tools; follow that wizard, then reboot and patch your Windows VM. Next up: a quick post on cloning VMs in Workstation so we can get to the fun part.
I’d like to state up front that I have no inside knowledge of CodeSpaces; these opinions were formed based on the information they posted at http://www.codespaces.com/
I don’t know any of the people at CodeSpaces; however, what I do know of them drives home the point that developers are not operations people (operations people are also not developers, to be fair), and the mindset that a developer can and should be in charge of operations decisions is wrong (no offense – go do what you are good at).
I’ve seen many companies that think that because developers are technical, they can do any technical job. This is simply not true. Developers are good at writing code; systems administrators and engineers are good at operations. Maybe it is that clash of opposing forces that has led businesses to listen to the people writing the software instead of the people trying to keep the lights on (IT is a utility, right? Wrong!). To again be fair, there are quite a few admins and engineers who still do not realize they are service providers, there to help the business run efficiently.
The first item that jumped out at me was the fact that their backup systems and production systems were stored essentially together. Any sysadmin worth their salt knows you need to keep your backups offsite. Well, you say, their backups were offsite, at Amazon! True, but they were not offsite from their production systems. A massive failure at Amazon would impact their ability to operate normally as well as their ability to recover, so in this case the backups might as well have been sitting on top of the servers they were protecting.
Second, a “cloud” provider – in this case a SaaS-type SVN/code-hosting service – should have operated on multiple IaaS or PaaS providers, not just Amazon. At the very least, a disaster recovery site should have been set up on some other service. Had they set it up in just another availability zone, they would have been just as easily and critically impacted.
CodeSpaces’ fate also shows the need for multifactor authentication. While I would consider their failure to place backups with a separate provider unfortunate and a result of poor design, not having multifactor authentication in place was downright lazy. Amazon offers a virtual “fob” that generates random codes as a second authentication factor for…wait for it…FREE. Thankfully they were not storing private keys at Amazon, so customer data was presumably not accessed – just completely and totally lost.
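For what it’s worth, turning on that virtual MFA device can even be scripted. Here is a minimal sketch using boto3; the user name, device name, and token codes are placeholders you would supply from your own authenticator app, and you would still need to enforce MFA with an IAM policy.

```python
import boto3

iam = boto3.client("iam")

# Create a virtual MFA device; the response includes a Base32 seed / QR code
# that you load into an authenticator app.
device = iam.create_virtual_mfa_device(VirtualMFADeviceName="ops-admin-mfa")
serial = device["VirtualMFADevice"]["SerialNumber"]

# Bind the device to an IAM user with two consecutive codes from the app.
# "ops-admin" and the codes below are placeholders for this example.
iam.enable_mfa_device(
    UserName="ops-admin",
    SerialNumber=serial,
    AuthenticationCode1="123456",
    AuthenticationCode2="654321",
)
```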
So, what can you learn from CodeSpaces?
– Offsite to you does not mean offsite to your data. If you are using a 100% cloud service to run your business, you need a SEPARATE vendor for backup and DR.
– Use other providers for general high availability, running your application across multiple providers, not just multiple availability zones with the same provider.
– Use two-factor authentication everywhere possible, at the very least wherever customer data and production systems are stored.
While it’s sad to see all the hard work that went into building CodeSpaces lost, along with the hard work of their customers, let this be a lesson to current and future startups as well as operations teams. It may also be a handy article to give to the CFO if your request for an offsite backup or multifactor authentication budget is denied!
Containers have been a hot topic of late; many are suggesting they are the next step in the virtualization evolution and spell doom for vSphere, Hyper-V, and other “traditional” virtualization platforms. For those who are not aware of containers, a container is a packaged application that runs isolated from other applications. It is not too dissimilar from ThinApp in a virtual desktop environment, but focused on server applications such as Apache, Tomcat, or other custom applications on top of a single operating system.
Traditional OS level virtualization generally looks something like this:
Some, such as Linux Journal, suggest containers are the future and that traditional OS-level virtualization and the hypervisor are not needed. With containers, you could remove the hypervisor layer, run your OS directly on bare metal, and run the “virtualized” applications on top, helping to save resources (including budget), which would look something like this:
You can read more about Linux Journal’s take on Containers here: Containers—Not Virtual Machines—Are the Future Cloud | Linux Journal
There seems to be a small group who think – as is generally the case with most technology – that traditional OS/hypervisor virtualization and containers are complementary. Scott Lowe has a blog post walking through the initial setup along with some thoughts on LXC: A Brief Introduction to Linux Containers with LXC – blog.scottlowe.org
My question is, why does it have to be Containers versus Virtualization? Why can’t it be a marriage of the two technologies to offer the best of each, or something like this:
With this model, you can still isolate and manage resources at the OS level, which has been proven over and over again, while dropping containers on top of the virtualized OS. Not only can I still leverage hypervisor features such as high availability, but I can also reduce the number of VMs needed by adding more containers to a single VM. Scale up resources when needed to add containers, or scale out when needed for redundancy or additional throughput. What are your thoughts?
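As a rough sketch of what that looks like in practice, inside a single Linux VM you can stand up several LXC containers, each hosting one application, while the VM itself keeps vSphere HA, vMotion, and the rest. The container names and the stock ubuntu template below are assumptions for the example (LXC 1.x-style commands), not a recommended layout.

```python
import subprocess

# Hypothetical containers to run side by side inside one Linux VM
containers = ["web01", "web02", "app01"]

for name in containers:
    # Create each container from the stock Ubuntu template
    subprocess.run(["sudo", "lxc-create", "-n", name, "-t", "ubuntu"], check=True)
    # Start it in the background; the underlying VM still gets hypervisor HA
    subprocess.run(["sudo", "lxc-start", "-n", name, "-d"], check=True)

# List running containers to confirm they all share the single guest OS
subprocess.run(["sudo", "lxc-ls", "--fancy"], check=True)
```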
You can find out more about popular container projects at the following sites:
lmctfy (Google): https://github.com/google/lmctfy