Next Gen Intrusion Prevention System

Six Steps to Choosing Your Next Gen Intrusion Prevention System
 
 
 
Today’s IT professionals, regardless of industry, are adopting the latest technologies to meet increasing bandwidth demands, build higher- and faster-performing networks and increase availability as cost-effectively as possible. But new technologies bring new challenges, and organizations’ IT departments must adjust to them rapidly while keeping their already stressed network infrastructures stable and secure. As such, the most common question when adopting a new technology or device is simply: How can I be sure the solution I choose will perform as expected in my network?
 
If the new deployment involves next-generation firewalls or Intrusion Prevention Systems (IPS), the decision brings added challenges. The sophisticated, high-performance network and security devices within these infrastructures require a more comprehensive approach to testing and validation than traditional testing tools can provide. Today’s devices use deep packet inspection (DPI) to examine traffic in ways that legacy testing tools were never designed to validate.

These devices, and the complex traffic they handle, demand testing with real-world application, attack and malformed traffic at line-rate speeds. Without this approach, content-aware equipment cannot be stressed thoroughly or accurately enough to determine its true capabilities. That’s why companies are turning to an objective testing approach that allows them to impose their own conditions during pre-purchase evaluations, rigorously validating device capabilities under real-world scenarios: the applications the devices must handle, actual user behavior and the attacks they expect to see. Doing this prior to deployment not only saves time and money but also ensures that the network remains resilient. IT buyers should therefore follow the six steps outlined below to make informed purchase decisions and eliminate costly post-deployment troubleshooting.
 
1. Create and prioritize specifications for products to be evaluated. As with any project, it is wise to begin with the end goal in mind. Before considering any piece of equipment, define and prioritize the company’s needs for infrastructure build-out. Otherwise, it is too easy to dive into questions of “speeds and feeds” without taking broader objectives into account. A good way to start is by asking fundamental questions: How should the infrastructure support key objectives? What are the transaction latency requirements? How important is the security of transactions in comparison to their speed? Which services are most sensitive, requiring the highest levels of security? Is application inspection necessary?
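One way to make that prioritization concrete is to record each answer as a weighted, mandatory-or-optional requirement before any vendor conversations begin. The short Python sketch below illustrates the idea; the field names, example requirements and weights are illustrative assumptions, not a standard schema.

```python
# A minimal sketch of capturing evaluation criteria as weighted requirements.
# All names, weights and thresholds here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Requirement:
    name: str          # e.g. "transaction latency under 2 ms"
    weight: int        # relative priority; higher = more important
    mandatory: bool    # a device that misses a mandatory item is disqualified

requirements = [
    Requirement("sustains 10 Gbps of mixed application traffic", weight=5, mandatory=True),
    Requirement("transaction latency under 2 ms at peak load", weight=4, mandatory=True),
    Requirement("application-level (DPI) inspection enabled", weight=4, mandatory=True),
    Requirement("granular per-application reporting", weight=2, mandatory=False),
]

# Sort so the evaluation plan addresses the highest-priority items first.
requirements.sort(key=lambda r: (r.mandatory, r.weight), reverse=True)
for r in requirements:
    print(f"{'[MUST] ' if r.mandatory else '[WANT] '}{r.name} (weight {r.weight})")
```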



2. Rethink testing around repeatable, quantitative principles. Create a plan for stressing each device under test (DUT) with real-world application, attack and malformed traffic at heavy load. Doing this is not as simple as taking the older, ad hoc approach to testing and injecting authentic traffic. Instead, the entire plan should embrace a standardized methodology and scientific approach to eliminate guesswork. That means the plan must use repeatable experiments that yield clear, quantitative results to accurately validate the capabilities of DPI-enabled devices. In the past, IT professionals lacked the precision equipment necessary to enforce consistent standards across testing processes. Today, however, they have access to testing products that generate authentic network traffic and capture precise measurements of its effects, even for complex environments.
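As one illustration of a repeatable, quantitative experiment, the sketch below pins every parameter of a run and reports a mean and spread across runs rather than a subjective pass/fail. The function names, parameters and simulated measurement are assumptions for the sketch, not any real test tool’s API.

```python
# A minimal sketch of a repeatable experiment: every parameter that could affect
# the result is pinned, so the run can be reproduced and compared quantitatively.
import random
import statistics

def run_experiment(duration_s, target_sessions, traffic_profile, seed):
    """Stand-in for driving the test tool. In practice this would configure the
    traffic generator with exactly these parameters and return the measured
    throughput in Mbps; here it just returns a simulated value."""
    return 9_500 + random.uniform(-150, 150)  # simulated run-to-run variance

def repeat(n_runs, **params):
    results = [run_experiment(**params) for _ in range(n_runs)]
    # Quantitative output: a mean and a spread, not a subjective impression.
    return statistics.mean(results), statistics.pstdev(results)

mean_mbps, stdev_mbps = repeat(
    n_runs=5,
    duration_s=300,
    target_sessions=500_000,
    traffic_profile="enterprise_mix_v1",  # assumed name for a saved traffic mix
    seed=42,                              # same seed -> same generated traffic
)
print(f"throughput: {mean_mbps:.0f} +/- {stdev_mbps:.0f} Mbps over 5 runs")
```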
3. Use standardized scores to separate pretenders from contenders. It is relatively straightforward to use standardized scoring methods to pare down a long list of candidate devices without performing comprehensive validation of each product. These scores quickly eliminate from consideration devices that clearly do not meet an organization’s needs. The result is presented as a numeric grade from 1 to 100; devices receive no score at all if they fail to pass traffic at any point or if they degrade to an unacceptable performance level. A standardized Resiliency Score takes the guesswork and subjectivity out of validation and allows administrators to quickly understand the degree to which system security will be impacted under load, attack and real-world application traffic.
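The sketch below shows the general shape of such a pass/fail-gated composite score on a 1-to-100 scale. The weights and inputs are illustrative assumptions only and are not the actual Resiliency Score formula.

```python
# A minimal sketch of a gated composite score: disqualify outright failures,
# otherwise combine weighted measurements into a 1-100 grade.
# Weights and inputs are illustrative assumptions, not a published formula.
def resiliency_score(passed_traffic, degraded_below_floor,
                     perf_ratio, attacks_blocked_ratio, stability_ratio):
    """The three ratios are fractions in [0, 1] measured under load and attack."""
    if not passed_traffic or degraded_below_floor:
        return None  # no score: the device failed to pass traffic or degraded unacceptably
    weighted = 0.4 * perf_ratio + 0.4 * attacks_blocked_ratio + 0.2 * stability_ratio
    return max(1, round(weighted * 100))

print(resiliency_score(True, False,
                       perf_ratio=0.92,
                       attacks_blocked_ratio=0.97,
                       stability_ratio=1.0))  # prints 96
```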
4. Test final contenders with individual test scenarios that mirror the production environment. True validation requires an accurate understanding of the application, network and security landscape in which devices will be operating. Review the infrastructure’s traffic mix, and the mixes of its service providers, before designing individual tests; this ensures that the testing equipment reflects the latest versions and types of application traffic that traverse the network. Generating real traffic is not enough, however. The traffic mix used must also be repeatable yet random. Randomization makes test traffic behave like real-world traffic, creating unexpected patterns that force DUTs to work harder. Achieving both properties requires testing equipment that drives traffic generation from a seeded pseudorandom number generator (PRNG): the same seed value reproduces exactly the same “random” traffic on every run.
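The following sketch illustrates the “repeatable yet random” idea: a seeded PRNG draws flows from a weighted application mix, so the sequence looks random to the DUT but can be regenerated exactly from the same seed. The application names and weights are assumptions for illustration.

```python
# A minimal sketch of repeatable-yet-random traffic selection using a seeded PRNG.
# The application mix below is an illustrative assumption, not a measured profile.
import random

TRAFFIC_MIX = {            # approximate share of flows per application
    "http": 0.35,
    "https": 0.30,
    "dns": 0.15,
    "smtp": 0.10,
    "facebook_chat": 0.10,
}

def generate_flows(seed, n_flows):
    rng = random.Random(seed)          # the seed makes the whole run repeatable
    apps = list(TRAFFIC_MIX)
    weights = list(TRAFFIC_MIX.values())
    return [rng.choices(apps, weights)[0] for _ in range(n_flows)]

run_a = generate_flows(seed=1234, n_flows=10)
run_b = generate_flows(seed=1234, n_flows=10)
assert run_a == run_b   # identical seed -> identical "random" traffic sequence
```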
 
5. Execute a layered testing progression that includes load, application traffic, security attacks and other stress vectors. This is where the scientific method comes into play. By changing only one variable at a time and testing against the parameters established earlier, this progression reveals the specific strengths and weaknesses of each product, replacing guesswork with verifiable results. The processes in this phase ensure that a DUT can adequately handle heavy load, in terms of both sessions and application throughput. If the device cannot pass these tests with traffic known to be free of attacks, it will not be able to process enough traffic once its security features are turned on or when it must also handle malformed traffic or other stress vectors.
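The sketch below outlines one way to structure such a layered progression, enabling a single additional stress vector at each stage and stopping at the first failing layer. The stage definitions and the runner are illustrative placeholders, not a specific tool’s workflow.

```python
# A minimal sketch of a layered test progression that changes one variable at a time:
# clean traffic at load, then application mixes, then attacks, then malformed traffic.
STAGES = [
    {"name": "baseline load",       "app_mix": False, "attacks": False, "malformed": False},
    {"name": "application traffic", "app_mix": True,  "attacks": False, "malformed": False},
    {"name": "attacks enabled",     "app_mix": True,  "attacks": True,  "malformed": False},
    {"name": "malformed traffic",   "app_mix": True,  "attacks": True,  "malformed": True},
]

def run_stage(stage):
    """Stand-in for driving the tester; returns True if the DUT met its targets."""
    ...
    return True

for stage in STAGES:
    if not run_stage(stage):
        # Stop at the first failing layer: later stages only add more stress,
        # so they cannot pass if this one did not.
        print(f"DUT failed at stage: {stage['name']}")
        break
else:
    print("DUT passed all stages")
```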

6. Lay the groundwork for successful purchase negotiation, deployment and maintenance. Deploying untested network and security devices creates nightmare scenarios. Untested equipment requires weeks of post-deployment troubleshooting that is frustrating and time-consuming, and it often leads to finger-pointing and costly remediation. This is particularly true when device outages, security breaches or unplanned bottlenecks affect entire infrastructures; such failures can damage an organization’s reputation. Pre-deployment testing minimizes the risk of these problems and saves hundreds of hours of staff time by eliminating surprises and guesswork. Selecting the right device is about more than finding the right make and model; it also means choosing the right amount of equipment for the infrastructure in order to meet business needs.
 
IT departments should look for information that goes far beyond the performance and security features that can be read off a data sheet. They should measure the security and stability of their IPSs under real-world conditions, not generic conditions in a lab. Another common mistake IT departments make is relying on test lab reports to make informed decisions. Labs often perform device testing in isolation, without regard to the unique environments of purchasers. Test lab reports are also often funded by device manufacturers, which inevitably raises questions of objectivity. Ultimately, IT departments choose a firewall vendor, but they never feel as though they truly understand how well the device is going to work. Will it actually recognize the difference between applications, even at a granular level, such as the difference between Facebook traffic and Facebook messaging traffic? Putting that next-gen firewall/IPS through proper context-aware testing is the only way to be confident it will perform as advertised.
If IT leaders follow these technical recommendations and avoid the common mistakes described above, they can select the right products to meet their business objectives, improve infrastructure planning and resiliency by understanding device capabilities, and save up to 50 percent on IT investments. This approach also eliminates hundreds of man-hours of post-purchase configuration and tuning, and it gives purchasers advance insight into device capabilities, enabling them to configure devices appropriately and avoid surprises and delays.
