E-book

5 Reasons You Should Rethink Your SIEM Strategy

The Future of Security Operations and Security Analytics For A Functioning Modern World


Defining SIEM Technology

SIEM technology supports threat detection, compliance, and security incident management through the collection and analysis of security events as well as a wide variety of other contextual data sources. The core capabilities are a broad scope of log event collection and management, the ability to analyze log events and other data across disparate sources, and operational capabilities such as incident management and response, dashboards and reporting.

SIEM solutions improve an organization’s ability to quickly detect attacks and data breaches, and they strengthen incident investigation and response capabilities. Realizing this value, however, requires an ongoing investment in resources for both technology operations and security event monitoring. Modern solutions enable security teams to efficiently and effectively correlate telemetry data during investigations, typically by querying extracted fields across the entire data set to identify indicator artifacts such as indicators of compromise (IOCs) and indicators of attack (IOAs).
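The correlation idea above can be sketched in a few lines of Python. This is an illustrative sketch, not any vendor's API: the field names (`src_ip`, `dst_ip`, `domain`) and the IOC values are invented, and a real system would run an equivalent query against a much larger normalized data set.

```python
# Hypothetical sketch: scan normalized events from disparate sources for
# matches against a set of known indicators of compromise (IOCs).
KNOWN_IOCS = {"185.220.101.1", "bad-domain.example"}  # invented example values

def find_ioc_hits(events):
    """Return events whose extracted fields match a known IOC."""
    hits = []
    for event in events:
        # Collect the extracted indicator fields present on this event
        fields = {event.get("src_ip"), event.get("dst_ip"), event.get("domain")}
        if fields & KNOWN_IOCS:  # any overlap with the IOC set is a hit
            hits.append(event)
    return hits

# Events from two different sources, already normalized to common field names
events = [
    {"source": "firewall", "src_ip": "10.0.0.5", "dst_ip": "185.220.101.1"},
    {"source": "dns", "src_ip": "10.0.0.7", "domain": "good.example"},
]
print(find_ioc_hits(events))
```

Because both sources share the same extracted field names, one query surfaces matches across all of them, which is the point of correlating telemetry in a single place.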

SIEM tools also support other use cases such as the reporting needs of organizations with regulatory compliance obligations, as well as those subject to internal and external audits.

Foreword

The Security Information and Event Management (SIEM) market is undergoing a radical transformation, fueled by continuously evolving infrastructure, support for a remote workforce, budget restructuring, and other business, compliance, and security drivers.

Traditional SIEMs no longer meet the growing needs of security pros who face new and emerging threats. The term SIEM was coined in 2005 by Mark Nicolett and Amrit Williams, in Gartner's SIEM report, ‘Improve IT Security with Vulnerability Management.’ In its early days, SIEM was shaped by new compliance drivers that dominated the era, like PCI or HIPAA. In more recent years, SIEM has evolved to handle the convergence of platforms while accelerating threat detection against sophisticated ransomware and malware.

With remote work, cloud adoption and other digitization initiatives accelerating over the last year, the spotlight is again on SIEM as organizations seek a wider net with more scalability and automation. The challenge this time is for users to understand how to assemble the appropriate SIEM solution.

After more than 16 years of face lifts and evolutions, the SIEM space as we know it is ripe for a revolution: a new approach and, more importantly, a new way of thinking about how to address the core need that SIEM has failed to meet for so long.

So what does this future revolution of SIEM technology look like? This ebook discusses the capabilities that distinguish the threat detection and response systems of today from previous iterations, along with actionable guidance.

Introduction

Market Overview 

Over one third of organizations have adopted and implemented a SIEM in one form or another, across varying states of deployment, configuration, maintenance and, ultimately, disarray.

The current state is that incumbent platforms are no longer sufficient to perform rapid detection at scale. The root cause is that the tools practitioners use as SIEMs today were never intended for security; they were built as general-purpose logging solutions. Put simply, they were designed to process terabytes of data, not exabytes. Security teams have found themselves forced into a “prison cell” they cannot escape, because these solutions are unable to scale and flex to the degree teams need to do their jobs effectively.

Security teams are small, understaffed, and generally not experienced in DevSecOps or software engineering. They lack the knowledge, skills, and abilities to build, operate, and maintain the reliable, fault-tolerant, and elastic data processing pipelines that high-scale monitoring requires. SIEMs are ill-suited to meet current industry demands.

Source: Dimensional Research survey

Future of Security Operations and Security Analytics

Security teams need solutions that provide dependable security analytics built with speed, scale, and flexibility in mind in order to operate efficiently and effectively in high-performing production environments. Adoption of solutions that embrace this as the new norm will become the new operating standard.

  • Everything-as-Code: Automation has become ubiquitous inside and outside the operations center. As teams mature and grow, they will embed code into every part of their workflow, including using it to drive critical security decisions.
  • Real-Time Detection: Immediate response when new activity occurs, at machine speed.
  • High-Scale Data: With no technical limit on data ingestion volume, security teams will be able to scale infinitely. Data sharing will become more democratized and made freely available, with significantly fewer restrictions and obstacles.

 

The cloud presents an opportunity for speed and scale optimization for new “systems of engagement”: the applications built to engage customers and users. These new apps are the primary interface between the customer and a business, and they are ideally suited for delivery in the cloud because they tend to:

  • Have dynamic usage characteristics, needing to scale loads up and down by orders of magnitude during short time periods.
  • Be under pressure to quickly build and iterate.
  • Be ephemeral in nature, delivering a specific user experience around an event or campaign.

The Need for an Alternate Approach

The inadequacies of SIEM technology in the modern world can quickly become overwhelming, baffling and frustrating. No longer can security teams be forced into high-scale operational roles that take valuable time away from detecting, responding to, and automating the analysis of potentially nefarious activity. Additionally, as a forcing function, teams need to write code and produce more elegant solutions for analysis, moving away from strictly defined and specialized languages.

The strong need for security teams to fill this void as quickly as possible has resulted in a rush of security products that over-market their solutions and confuse their intended audience. It is in this chaos that a call for a revolution has arisen.

When There is Too Much Hay, It is Impossible to Find the Needle

Despite the benefits, most SIEM solutions are not easy to deploy, maintain and manage. Modern SIEM tools often use a data lake structure and cloud analytics to centralize events, attempting to narrow them down to the events that need attention: effectively finding the proverbial needle in a haystack. The value and effectiveness of a SIEM is highly dependent on the sources of data it has access to, and on how well it has been architected, tuned and maintained. Over the years, the industry’s approach has been to keep piling on more hay.

The challenge with SIEM is that it often generates false positives and too many alerts, which results in alert fatigue, or apathy about alerts, which in turn leads to high-priority threats being ignored. This can cause critical incidents and even data breaches to go unnoticed far longer than ever intended. In many cases this lapse can have dire fiscal and reputational consequences.

According to the 2020 State of SecOps and Automation survey, 92% of organizations agree that automation is required to address the growing number of alerts as well as the high volume of false positives. Ponemon Institute’s research into SIEM productivity revealed that, on average, security personnel in U.S. enterprises waste approximately 25% of their time chasing false positives because security alerts or indicators of compromise (IOCs) were erroneous. The report also highlighted the need for security operations center (SOC) productivity improvements, citing that security teams must evaluate and respond to nearly 4,000 security alerts per week.

Still, 65% of organizations have only partially automated their alert processing, and 75% would need at least three additional security analysts to deal with all alerts the same day.

The 2020 Dimensional Research survey on the State of SecOps and Automation found that 70% of enterprise security teams have seen the volume of security alerts they must manage more than double in the past five years, while 83% say their security staff experiences “alert fatigue.” Teams with higher levels of automation handle today’s alert volumes more easily: 65% of teams with high levels of automation said they were able to resolve most security alerts during the same day, compared to only 34% of enterprises with low levels of automation in place.

Other Key Findings:

Security alert volumes create problems for security operations:

  • 70% have more than doubled the volume of security alerts in the past five years
  • 99% report high volumes of alerts cause problems for IT security teams
  • 56% of companies with more than 10,000 employees deal with more than 1,000 security alerts per day
  • 93% cannot address all security alerts the same day
  • 83% say their security staff experiences “alert fatigue”

Automation helps, but it is still a work in progress:

  • 65% of companies have only partially automated security alert processing
  • 65% of teams with high levels of automation resolve most security alerts the same day, compared to only 34% of those with low levels of automation
  • 92% agree automation is the best solution for dealing with large volumes of alerts
  • 75% report they would need three or more additional security analysts to address all alerts the same day

Better technology is needed to manage security alert volumes:

  • 88% have challenges with their SIEM
  • The top issue reported with existing SIEM solutions is the high number of alerts
  • 84% see many advantages in a cloud-native SIEM for cloud or hybrid environments
  • 99% would benefit from additional SIEM automation capabilities

 

Skill & Resource Gap 

There are not enough skilled experts to go around

Millions of additional workers are needed in cybersecurity, and the need for skilled professionals is projected to triple in the next two years. In the United States, one of the countries with the most skilled workers in cybersecurity, the total employed cybersecurity workforce is just over 700,000, and the number of unfilled cybersecurity jobs has grown by more than 50% since 2015 (see “Cybersecurity Supply/Demand Heat Map” and “The Cybersecurity Workforce Gap”). Gartner has stated it is one of the biggest barriers to success for security and risk managers.

Gone are the days of waiting for long software development cycles to address operational requirements. The demands of today’s security risks require detection at the speed of now. Teams operating at scale must add writing code as a critical skill for detecting and responding to suspicious activity. This ensures teams can avoid repetitive, manual tasks while improving the efficacy and sustainability of complex detections on the fly. It also includes remediating high-confidence issues and/or validating with employees that they actually performed an action that could otherwise be seen as part of a breach.

The need for coding, however, adds a new layer of complexity to organizational resourcing and sustainment requirements. Many security teams are not developers or software engineers who inherently have this skill set. This requires an upfront investment, and therefore cost, in order to achieve the long-term benefits of a more optimal approach. Over time, coding will be ingrained in education and training programs as these capabilities become fully embedded in the overall cybersecurity ecosystem.

This movement is already happening. The SOC is unbundling, powered by code and automation instead of rooms of screens filled with dashboards. Command-line is king. The makeup of detection teams is also shifting, where security engineers are tasked with designing and implementing the security platforms, while security analysts complement them by monitoring configurations, network telemetry and other system services to prevent and detect attacks. More and more the skill sets needed for work in cybersecurity are akin to that of a software security engineer and less like a typical analyst.

The good and bad behind automation

Writing code to automate processes can be a beautiful thing. When done right, a SOC can run smoothly, like a well-oiled machine. When done wrong, it can turn your entire operation inside out and upside down, bringing it to a grinding halt. Code that affects infrastructure can have serious ramifications for performance or even availability. In terms of threat monitoring, security teams could find themselves investigating the wrong things, becoming overwhelmed with false positives or, potentially worse, missing critical information, allowing attackers to go undetected for longer periods of time.

When implementing any change in production, especially with less experienced personnel, a mature governance process will mitigate the risk of problems. Maintaining a change management process ensures that the right checks and balances are in place and that key stakeholders are informed. The process can be expedited with validation testing, which evaluates changes during development to ensure they satisfy specified business requirements.

The Optimal Approach

Embrace a Data Lake Architecture

Move from the little pond to the big lake

Best practice is setting a solid foundation from the start.

Data volumes are not slowing down. Between now and 2024, the volume of data created, captured, copied and consumed worldwide will double. Embracing cloud provider services like data lakes and SaaS will make life easier. Relying on these services provides slightly less control, but with very minimal overhead in return, the trade-off is very much worth it when working with a small team.

A data lake architecture enables collecting data up and down the stack to get as much context as possible, including but not limited to cloud, network, database, host, and application data. Prioritizing the collection of logs that have security value ensures that time and resources are used efficiently.

This can be a particularly important challenge to overcome early on, as many data sources quickly become noisy or irrelevant and take up unnecessary space and cycles. In addition, organizations can ingest, parse, normalize, and analyze their security data and store it for long-term retention, creating a well-structured and scalable security data lake.

Normalizing the data and extracting indicators allows actions to be taken to respond to and recover from security incidents. Detections run continuously against streaming event data for true real-time alerting, or historically against collected and normalized data for advanced correlation. Detection-as-code provides the flexibility, testability, and repeatability teams need to build data-driven security programs that continually improve incident detection and response. By leveraging serverless stream processing and Python for alerting, security teams can operationalize a scalable and flexible platform for writing hardened detections that produce high-signal alerts against high-scale security data. This takes the guesswork out of data planning and provides serverless scale with real-time detection and alerting, improving key security metrics like Mean Time to Detect (MTTD), Mean Time to Investigate (MTTI), and Mean Time to Respond (MTTR).
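One step of that parse-and-normalize pipeline can be sketched as follows. This is a hedged illustration, assuming a JSON log line with invented field names (`ts`, `client`, `uname`, `op`); a real streaming ETL stage would apply a per-source schema and write the result to the data lake.

```python
# Illustrative sketch of one streaming ETL step: parse a raw log line,
# normalize field names to a common schema, and surface indicator fields
# (like src_ip) for later detection and correlation. Schema is invented.
import json

def normalize(raw_line):
    """Parse a raw JSON log line into a normalized event."""
    record = json.loads(raw_line)
    return {
        "event_time": record["ts"],
        "src_ip": record.get("client"),       # extracted indicator field
        "user": record.get("uname"),
        "action": record.get("op", "unknown"),
    }

raw = '{"ts": "2021-07-01T12:00:00Z", "client": "203.0.113.9", "uname": "alice", "op": "login"}'
event = normalize(raw)
print(event)
```

Once every source is mapped into the same shape, detections and investigation queries can be written once against the normalized fields rather than per-source formats.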

Go Serverless to Boost Capabilities and Reduce Cost

There are many benefits to going serverless that cannot be met with alternative approaches.

  • Elastic and Scalable: Use what you need, when you need it, at machine speed.
  • Cost Effective: Extremely high return on investment due to low license and administrative costs.
  • Ease of Use: Anyone can do it; onboarding takes minutes, not days. By eliminating infrastructure constraints, operating teams can reprioritize resources and focus on other priorities.
  • Visibility: By having more, you see more. Many legacy approaches with on-prem infrastructure have strict limits on ingestion and retention. Tier 1 security analysts can quickly uplevel to Tier 2 and beyond thanks to the eliminated overhead and added functionality an operating environment like this provides.

 

Leverage Tailored Security Analytics

Detection-as-Code

Threat detection programs that are fine-tuned for specific environments and systems are the most impactful. By treating detections as well-written code that can be tested, checked into source control, and code-reviewed by peers, teams can produce higher-quality alerts that reduce fatigue and quickly flag suspicious activity.

Common benefits and use cases for incorporating detection-as-code:

  • Build custom, flexible detections with a programming language of choice
  • Test-Driven Development (TDD)
  • Collaboration with version control systems
  • Automated workflows for reliable detections
  • Reusable code
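A minimal detection-as-code rule illustrating the list above might look like the sketch below. The event schema and function names are illustrative assumptions, not any specific product's API; the point is that the rule is plain code, so it can live in version control, be peer-reviewed, and ship with its own tests (TDD).

```python
# Hypothetical detection-as-code rule: flag console logins without MFA.
# The event fields (event_name, mfa_used, user) are invented for illustration.
def rule(event):
    """Return True when the event should generate an alert."""
    return event.get("event_name") == "ConsoleLogin" and not event.get("mfa_used")

def title(event):
    """Human-readable alert title built from event context."""
    return f"Console login without MFA by {event.get('user', 'unknown')}"

# Unit tests that can run in CI before the rule is merged (test-driven)
def test_rule():
    assert rule({"event_name": "ConsoleLogin", "mfa_used": False, "user": "bob"})
    assert not rule({"event_name": "ConsoleLogin", "mfa_used": True})
    assert not rule({"event_name": "ApiCall"})

test_rule()
print("all detection tests passed")
```

Because the rule is just a function, reusing it across log sources or composing it with shared helper code is ordinary software engineering rather than a vendor-specific rule language.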

As organizations mature their automation framework and custom detection workflows, this approach can mature into a much more evolved process. Organizations may choose to automate threat modeling by using an ontology framework.

The diagram below shows data being collected from an enterprise environment, standardized, merged, and finally transformed into a threat model using automation.

From: Automating threat modeling using an ontology framework

Detect at speed of now

According to IBM’s Cost of a Data Breach study, the average total cost of a data breach is $3.86 million globally and $8.64 million in the United States. On average it takes 280 days to identify and contain a breach. This is a heavy price to pay for not replacing legacy technology and an outdated approach.

According to a recent SANS Incident Response (IR) Survey, 14% of firms indicate that the time between compromise and detection is between 30 to 180 days. Of those that detected an intrusion, nearly 10% said it took up to 90 days to contain.

Every environment is different and, as a result, requires a different set of detections. Teams need the ability to create custom-tailored rules that can be properly tested, versioned, and programmatically managed in version control. The flexibility and robustness of full programming languages enable teams to detect both simple and advanced behaviors, in addition to fetching context, enriching alerts, and telling the whole story of what happened. Incident response experts have been implementing these best practices for over two decades, and now they have cascaded to frontline responders. Use tools that allow you to write code directly or that are heavily customizable.
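The "context fetching" and enrichment idea can be sketched like this. Everything here is hypothetical: the asset inventory, the field names, and the alert shape are invented to show how a code-based detection can tell more of the story than a bare match.

```python
# Hedged sketch: a detection that enriches its alert with context fetched
# from another source (here a stubbed, in-memory asset inventory).
ASSET_INVENTORY = {  # hypothetical lookup table; real systems query an API or DB
    "10.0.0.5": {"owner": "payments-team", "criticality": "high"},
}

def detect_and_enrich(event):
    """Return an enriched alert dict, or None if the event is benign."""
    if event.get("action") != "ssh_brute_force":
        return None
    context = ASSET_INVENTORY.get(event.get("dst_ip"), {})
    return {
        "alert": "SSH brute force detected",
        "target": event.get("dst_ip"),
        "owner": context.get("owner", "unknown"),
        "criticality": context.get("criticality", "unknown"),
    }

alert = detect_and_enrich({"action": "ssh_brute_force", "dst_ip": "10.0.0.5"})
print(alert)
```

An alert that already names the asset owner and criticality saves the responder a lookup and makes triage decisions faster.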

Automate with Confidence

Automation

Platforms such as Tines are highly complementary to modern solutions for taking action on generated alerts. Analysts and engineers should not be performing manual and repetitive tasks; prioritize the ability to easily route configurable alerts into automation that takes action.

This helps scale detection programs by pinging users, opening cases, or preventing unnecessary alerts from reaching your security team. When an alert is generated from the detection engines (or other sources), a modern solution will dispatch a notification to security teams to triage the alert in systems like PagerDuty, Slack, Jira, and Microsoft Teams.
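The dispatch step described above can be sketched as a small webhook notifier. This is an assumption-laden illustration: the URL is a placeholder, the payload shape follows the common Slack-style `{"text": ...}` pattern, and actual sending is kept behind a flag so the sketch runs without a live endpoint.

```python
# Hypothetical sketch of routing a generated alert to a chat webhook.
import json
import urllib.request

WEBHOOK_URL = "https://hooks.example.com/placeholder"  # placeholder endpoint

def build_notification(alert):
    """Format an alert as a simple chat-style JSON payload."""
    return json.dumps({"text": f"[{alert['severity']}] {alert['title']}"})

def dispatch(alert, send=False):
    """Build the payload; POST it only when send=True (needs a real endpoint)."""
    payload = build_notification(alert)
    if send:  # disabled by default so the sketch runs offline
        req = urllib.request.Request(
            WEBHOOK_URL,
            data=payload.encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)
    return payload

print(dispatch({"severity": "HIGH", "title": "Console login without MFA"}))
```

The same pattern extends naturally to opening a Jira ticket or paging an on-call rotation: the detection emits a structured alert, and the automation layer decides which destination it deserves.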

With the swivel seat removed from the equation, security teams can better streamline their process workflows. Organizations can now carry out complex tasks such as instantly mitigating vulnerabilities, and trigger automated actions at will and as needed. The use cases and examples for this critical capability are endless.

Central Composite Design (CCD)

Hybrid Environments Bring On a Fusion of Confusion

In statistics, Central Composite Design (CCD) is an optimization method for achieving a desirability function for each factor and response. Put simply, it is the consolidation of disparate data points in a precise manner that achieves optimal outcomes.

There are so many security tools and services to choose from today, and each brings its own unique value and perspective to a detection program. This is a blessing and a curse for those that depend on them. Hybrid cloud, hybrid on-prem and cloud, private and public access, and every mix in between: these heterogeneous architectures and ecosystems have now become the new norm. It has become the ultimate fusion of confusion for operators to effectively utilize and maintain.

In its 2020 CISO Effectiveness Survey, Gartner found that 78% of CISOs have 16 or more tools in their cybersecurity vendor portfolio, with 12% of CISOs having 46 or more tools. The large number of security products used by organizations drives up complexity and bottom line costs.

Security teams need a solution that allows them to forward this data into a single, scalable location and normalize it for detection and investigations, taking a monitoring-in-depth approach. The vendor consolidation movement is already underway as security teams seek to simplify their daily operations. Gartner states that by 2025, 75% of large organizations will be actively pursuing a vendor consolidation strategy, up from approximately 25% today. Security vendor consolidation is, of course, challenging. In the same survey, Gartner found that 85% of organizations currently pursuing a vendor consolidation strategy had not reduced their vendor count in the previous year. The ability to navigate this transformation will separate those that merely survive from those that thrive.

Bridge the Gap Between DevOps and SecOps

The orbiting worlds of development operations and security operations have been slowly integrating over the past decade. There are many benefits to a unified DevSecOps organizational structure, the primary one being that an organization can become more agile and embrace constant change while simultaneously embedding continuous integration, testing, and delivery. Security is injected into every process as the wheel continually turns. With a single consolidated lens through which to oversee unified operations, security teams can fully integrate data sources and orchestrate incident management and business intelligence functions. This, in turn, enables a seamless operating environment that can be much more easily automated.

Summary

Security operations teams will get the most out of the investment of their money, time and resources by implementing modernized security analytics best practices for the newly revolutionized world:

  • Go cloud-native: Scale instantly as application data, events, and logs increase
  • Maximize time to value by optimizing license, maintenance and administration costs
  • Tailor security analytics to your specific organization
  • Normalization, standardization, and THEN automation.
  • Operate and maintain a secure and compliant cloud environment

Source: Dimensional Research Survey

Recommendation: Start The Revolution Today

Panther is the core detection and response platform for modern, cloud-focused teams. It’s designed for speed, scale, and flexibility while operating as a robust security data platform.

Panther’s utilization of detections-as-code, security data lakes, and real-time alerting gives teams powerful capabilities to meet their unique needs, discover attacker behavior quickly, and answer difficult questions during a breach. Built fully on cloud-native technology, Panther will be there to meet the most demanding needs of teams.

Panther’s data infrastructure pipeline is built on the idea of “Streaming ETL (Extract, Transform, and Load),” where real-time security data is parsed, normalized, and stored in an efficient and compressed format at machine speed. This brings structure to security data and enables teams to connect the dots during an investigation by querying the extracted fields, such as common IOCs, IOAs, and other telemetry, across ALL data. The result is an extremely scalable operational environment that enables security teams to process, analyze, and retain exabytes of security data at unprecedented low cost, when they need it, which is right NOW.

Join Brad LaPorte and Panther CEO, Jack Naglieri on July 22nd for a live webinar as they discuss how to implement modernized security analytics best practices to get the most out of the investment of your money, time, and resources. 

Request a demo today.
