Time-Efficient and Cost-Effective Network Hardening Using Attack Graphs

Massimiliano Albanese, Sushil Jajodia†, and Steven Noel
Center for Secure Information Systems, George Mason University
4400 University Drive, Fairfax, VA 22030
Email: {malbanes,jajodia,snoel}@gmu.edu
†The MITRE Corporation, 7515 Colshire Drive, McLean, VA 22102-7539

Abstract—Attack graph analysis has been established as a powerful tool for analyzing network vulnerability. However, previous approaches to network hardening look for exact solutions and thus do not scale. Further, hardening elements have been treated independently, which is inappropriate for real environments. For example, the cost for patching many systems may be nearly the same as for patching a single one. Or patching a vulnerability may have the same effect as blocking traffic with a firewall, while blocking a port may deny legitimate service. By failing to account for such hardening interdependencies, the resulting recommendations can be unrealistic and far from optimal. Instead, we formalize the notion of hardening strategy in terms of allowable actions, and define a cost model that takes into account the impact of interdependent hardening actions. We also introduce a near-optimal approximation algorithm that scales linearly with the size of the graphs, which we validate experimentally.

Keywords—network hardening, vulnerability analysis, attack graphs, intrusion prevention, reliability.

The work presented in this paper is supported in part by the Army Research Office under MURI award number W911NF-09-1-0525, and by the Office of Naval Research under award number N00014-12-1-0461.

I. INTRODUCTION

Attackers can leverage the complex interdependencies of network configurations and vulnerabilities to penetrate seemingly well-guarded networks. In-depth analysis of network vulnerabilities must consider attacker exploits not merely in isolation, but in combination. Attack graphs reveal such threats by enumerating potential paths that attackers can take to penetrate networks. This helps determine whether a given set of network hardening measures provides safety of given critical resources.

Attack graph analysis can be extended to automatically generate recommendations for hardening networks. One must consider combinations of network conditions to harden, which has corresponding impact on removing paths in the attack graph. Further, one can generate hardening solutions that are optimal with respect to some notion of cost. Such hardening solutions prevent the attack from succeeding, while minimizing the associated costs.

However, as we show, the general solution to optimal network hardening scales exponentially as the number of hardening options itself scales exponentially with the size of the attack graph. In applying network hardening to realistic network environments, it is crucial that the algorithms are able to scale. Progress has been made in reducing the complexity of attack graph manipulation so that it scales quadratically (linearly within defined security zones) [1]. However, previous approaches for generating hardening recommendations search for exact solutions [2], which is an intractable problem.

Another limitation of previous work is the assumption that network conditions are hardened independently. This assumption does not hold true in real network environments. Realistically, network administrators can take actions that affect vulnerabilities across the network, such as pushing patches out to many systems at once.
Further, the same hardening result may be obtained through more than one action. Overall, to provide realistic recommendations, our hardening strategy must take such factors into account.

We remove the assumption of independent hardening actions. Instead, we define a network hardening strategy as a set of allowable atomic actions that involve hardening multiple network conditions. We introduce a formal cost model that accounts for the impact of these hardening actions. This allows the definition of hardening costs that accurately reflect realistic network environments. Because computing the minimum-cost hardening solution is intractable, we introduce an approximation algorithm for optimal hardening. This algorithm finds near-optimal solutions while scaling almost linearly – for certain values of the parameters – with the size of the attack graph, which we validate experimentally. Finally, we determine the theoretical upper bound for the worst-case approximation ratio, and show that, in practice, the approximation ratio is much lower than this bound.

The paper is organized as follows. Section II discusses related work. Section III recalls some preliminary definitions, whereas Section IV provides a motivating example. Then Section V introduces the proposed cost model, and Section VI describes our approach to time-efficient and cost-effective network hardening. Finally, Section VII reports experimental results, and Section VIII gives some concluding remarks and indicates further research directions.

II. RELATED WORK

A number of tools are available for scanning network vulnerabilities, such as Nessus [3], but these only report isolated vulnerabilities. Attack graphs are constructed by analyzing the inter-dependencies between vulnerabilities and security conditions that have been identified in the target network [4], [5], [6], [7], [8], [9], [10], [11], [12], [13]. Such analysis can be either forward, starting from the initial state [8], [12], or backward from the goal state [9], [11]. Model checking was first used to analyze whether the given goal state is reachable from the initial state [9], [14], but was later used to enumerate all possible sequences of attacks between the two states [11], [15].

The explicit attack sequences produced by a model checker face a serious scalability issue, because the number of such sequences is exponential in the number of vulnerabilities multiplied by the number of hosts. To avoid such combinatorial explosion, a more compact representation of attack graphs was proposed in [4]. The monotonicity assumption underlies this representation, i.e., an attacker never relinquishes any obtained capability. This newer representation can thus keep exactly one vertex for each exploit or security condition, leading to an attack graph of polynomial size (in the total number of vulnerabilities and security conditions).

Attack graphs have also been used for correlating intrusion alerts into attack scenarios [16], [17]. Such alert correlation methods are parallel to our work, because they aim to employ the knowledge encoded in attack graphs for detecting and taking actions against actual intrusions, whereas our work aims to harden the network before any intrusion may happen.

Algorithms exist to find the set of exploits from which the goal conditions are reachable [4]. This eliminates some irrelevant exploits from further consideration because they do not contribute to reaching the goal condition. The minimal critical attack set is a minimal set of exploits in an attack graph whose removal prevents attackers from reaching any of the goal states [4], [11], [15]. The minimal critical attack set thus provides solutions to harden the network. However, these methods ignore the critical fact that consequences cannot be removed without removing the causes. The exploits in the solution usually depend on other exploits that also need to be disabled. The solution is thus not directly enforceable. Moreover, after taking into account those implied exploits, the solution is no longer minimum.

A more effective solution to automate the task of hardening a network against multi-step intrusions was proposed by Wang et al. in [2]. Unlike previous approaches, which require removing exploits, this solution focuses on initially satisfied conditions only. Initial conditions can be disabled, leading to a readily deployable solution. However, Wang et al. assumed that initial conditions can be independently disabled. Although this is true in some cases, there may exist dependencies between initial conditions, such that removing certain initial conditions may also disable additional conditions, and this might not be necessary to harden the network.

The work presented in this paper differs significantly from previous work in that we (i) drop the assumption that initial conditions can be independently disabled; (ii) introduce a formal cost model; and (iii) present an approximation algorithm that generates suboptimal solutions efficiently.
III. PRELIMINARIES

Attack graphs represent prior knowledge about vulnerabilities, their dependencies, and network connectivity. Two different representations are possible for an attack graph. First, an attack graph can explicitly enumerate all possible sequences of vulnerabilities an attacker can exploit to reach a target, i.e., all possible attack paths. However, such graphs face a combinatorial explosion in the number of attack paths. Second, with a monotonicity assumption stating an attacker never relinquishes obtained capabilities, an attack graph can record the dependencies among vulnerabilities and keep attack paths implicitly without losing any information. The resulting attack graph has no duplicate vertices and hence has a polynomial size in the number of vulnerabilities multiplied by the number of connected pairs of hosts. In this paper, we adopt the definition of attack graph presented in [17], which assumes the latter notion of attack graphs.

Definition 1 (Attack graph): Given a set of exploits E, a set of security conditions C, a require relation Rr ⊆ C × E, and an imply relation Ri ⊆ E × C, an attack graph G is the directed graph G = (E ∪ C, Rr ∪ Ri), where E ∪ C is the vertex set and Rr ∪ Ri the edge set.

We denote an exploit as a predicate v(hs, hd), indicating an exploitation of vulnerability v on the destination host hd, initiated from the source host hs. Similarly, we write v(h) for exploits involving only local host h.

A security condition is a predicate c(hs, hd) that indicates a satisfied security-related condition c involving the source host hs and the destination host hd (when a condition involves a single host, we simply write c(h)). Examples of security conditions include the existence of a vulnerability on a given host or the connectivity between two hosts. Initial conditions are a special subset of security conditions, as defined below [2].

Definition 2 (Initial conditions): Given an attack graph G = (E ∪ C, Rr ∪ Ri), initial conditions refer to the subset of conditions Ci = {c ∈ C | ∄e ∈ E s.t. (e, c) ∈ Ri}, whereas intermediate conditions refer to the subset C \ Ci.

Intermediate conditions are usually consequences of exploits and hence cannot be disabled without removing the causes. Instead, initial conditions are not created through the execution of exploits, thus they might be removed. Without loss of generality, we will explicitly model only initial conditions that can be disabled, and omit initial conditions that network administrators cannot control, such as privileges on the attacker's machine.
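As a concrete illustration of Definitions 1 and 2, the following minimal Python sketch stores the require and imply relations as mappings from exploits to condition sets, and derives the initial conditions as those not implied by any exploit. This is an illustrative encoding, not the implementation used in the paper; the tiny two-exploit example at the end is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class AttackGraph:
    """G = (E ∪ C, Rr ∪ Ri), with Rr and Ri stored per exploit (Definition 1)."""
    exploits: set[str]                                            # E
    conditions: set[str]                                          # C
    requires: dict[str, set[str]] = field(default_factory=dict)   # e -> {c : (c, e) in Rr}
    implies: dict[str, set[str]] = field(default_factory=dict)    # e -> {c : (e, c) in Ri}

    def initial_conditions(self) -> set[str]:
        """Ci: the conditions not implied by any exploit (Definition 2)."""
        implied = set().union(*self.implies.values()) if self.implies else set()
        return self.conditions - implied

    def intermediate_conditions(self) -> set[str]:
        """C minus Ci: consequences of exploits, not directly disable-able."""
        return self.conditions - self.initial_conditions()

# Tiny hypothetical two-exploit chain: c1 enables e1, which implies c2, enabling e2.
g = AttackGraph(
    exploits={"e1", "e2"},
    conditions={"c1", "c2", "c3"},
    requires={"e1": {"c1"}, "e2": {"c2"}},
    implies={"e1": {"c2"}, "e2": {"c3"}},
)
assert g.initial_conditions() == {"c1"}
assert g.intermediate_conditions() == {"c2", "c3"}
```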

IV. MOTIVATING EXAMPLE

Figure 1. An example of attack graph, including initial conditions (purple ovals), exploits (green rectangles), and intermediate conditions (blue ovals).

Figure 1 depicts an example of attack graph, with exploits appearing as rectangles and conditions as ovals. Purple ovals represent initial conditions, whereas blue ovals represent intermediate conditions. Some modeling simplifications have been made, such as combining transport-layer ftp connectivity between two hosts hs and hd, physical-layer connectivity, and the existence of the ftp daemon on host hd into a single condition ftp(hs,hd). In this example, we assume that our objective is to harden the network with respect to target condition root(2), i.e., we want to prevent the attacker from gaining root privileges on host 2. The scenario depicted in Figure 1 is relatively simple, with three hosts – denoted host 0, 1, and 2 respectively – and four types of vulnerabilities – ftp_rhosts, rsh, sshd_bof, and local_bof. However, because multiple interleaved attack paths can lead to the goal condition, an optimal solution to harden the network is still not apparent from the attack graph itself, and finding such a solution by hand may not be trivial. As an example of attack path, the attacker can first establish a trust relationship from his machine (host 0) to host 2 (condition trust(2,0)) via the exploit ftp_rhosts(0,2) on host 2, then gain user privileges on host 2 (condition user(2)) with an rsh login (exploit rsh(0,2)), and finally achieve the goal condition root(2) using a local buffer overflow attack on host 2 (exploit local_bof(2)). The following are some of the valid attack paths that can be generated using existing algorithms [4]:

- ftp_rhosts(0,2), rsh(0,2), local_bof(2)
- ftp_rhosts(0,1), rsh(0,1), ftp_rhosts(1,2), rsh(1,2), local_bof(2)
- sshd_bof(0,1), ftp_rhosts(1,2), rsh(1,2), local_bof(2)
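As an illustration of how such paths can be generated, the following Python sketch runs a naive depth-first search over a simplified encoding of the example of Figure 1. The require/imply sets below are inferred from the attack paths listed above and omit parts of the full figure (for example, sshd_bof(2,1)); the naive search merely stands in for the algorithms of [4] and also yields interleaved variants of the paths above.

```python
# Simplified encoding of Figure 1 (inferred from the attack paths above, not exhaustive).
REQUIRES = {  # exploit -> conditions it requires
    "ftp_rhosts(0,1)": {"ftp(0,1)", "user(0)"},
    "rsh(0,1)":        {"trust(1,0)", "user(0)"},
    "sshd_bof(0,1)":   {"sshd(0,1)", "user(0)"},
    "ftp_rhosts(1,2)": {"ftp(1,2)", "user(1)"},
    "rsh(1,2)":        {"trust(2,1)", "user(1)"},
    "ftp_rhosts(0,2)": {"ftp(0,2)", "user(0)"},
    "rsh(0,2)":        {"trust(2,0)", "user(0)"},
    "local_bof(2)":    {"user(2)"},
}
IMPLIES = {  # exploit -> conditions it implies
    "ftp_rhosts(0,1)": {"trust(1,0)"},
    "rsh(0,1)":        {"user(1)"},
    "sshd_bof(0,1)":   {"user(1)"},
    "ftp_rhosts(1,2)": {"trust(2,1)"},
    "rsh(1,2)":        {"user(2)"},
    "ftp_rhosts(0,2)": {"trust(2,0)"},
    "rsh(0,2)":        {"user(2)"},
    "local_bof(2)":    {"root(2)"},
}
INITIAL = frozenset({"ftp(0,1)", "ftp(1,2)", "ftp(0,2)", "sshd(0,1)", "user(0)"})

def attack_paths(goal, satisfied, path=()):
    """DFS over exploit sequences; monotonicity means conditions are only ever added."""
    if goal in satisfied:
        yield path
        return
    for e, pre in REQUIRES.items():
        gained = IMPLIES[e] - satisfied
        if pre <= satisfied and gained:            # executable, and not redundant
            yield from attack_paths(goal, satisfied | gained, path + (e,))

paths = sorted(attack_paths("root(2)", INITIAL), key=len)
print(len(paths), "paths found; shortest:", " -> ".join(paths[0]))
```

Under this simplified encoding, the shortest sequence found is the first path listed above.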
Intuitively, to prevent the goal condition from being satisfied, a solution to network hardening must break all the attack paths leading to the goal. This intuition was captured by the concept of critical set, that is, a set of exploits (and corresponding conditions) whose removal from the attack graph will invalidate all attack paths. It has also been shown that finding critical sets with the minimum cardinality is NP-hard, whereas finding a minimal critical set (that is, a critical set with no proper subset being a critical set) is polynomial. Based on the above attack paths, there are many minimal critical sets, such as {rsh(0,2), rsh(1,2)}, {ftp_rhosts(0,2), rsh(1,2)}, {ftp_rhosts(1,2), rsh(0,2)}, and so on. If any of those sets of exploits could be completely removed, all the attack paths would become invalid, and hence the target condition would be unreachable. Unfortunately, the above solution ignores the following important fact. Not all exploits are under the direct control of administrators. An exploit can only be removed by disabling its required conditions, but not all conditions can be disabled at will. Intuitively, a consequence cannot be removed without removing its causes. Some conditions are implied by other exploits. Such intermediate conditions cannot be independently disabled without removing the exploits that imply them. Only those initial conditions that are not implied by any exploit can be disabled independently of other exploits. Hence, it is important to distinguish between these two kinds of conditions, as formalized in Definition 2.

For instance, in Figure 1, exploit rsh(1,2) cannot be independently removed, because the two conditions it requires, trust(2,1) and user(1), are both intermediate conditions and cannot be independently disabled. As long as an attacker can satisfy those two conditions through other exploits (for example, ftp_rhosts(1,2) and sshd_bof(2,1)), the realization of the exploit rsh(1,2) is unavoidable. Hence, any of the above minimal critical sets, such as {rsh(0,2), rsh(1,2)}, is theoretically a sound solution, but practically not enforceable. For this reason, the approach proposed in [2] relies on initial conditions only. However, it has some limitations that we address in this paper.

The approach of [2] has no explicit cost model and assumes that each initial condition can be independently disabled. Thus, even when all possible solutions are enumerated, determining the one with the minimum cost is based either on a qualitative notion of cost or on simply counting the conditions that need to be disabled. For the attack graph of Figure 1, the algorithm in [2] returns two solutions, {ftp(0,2), ftp(1,2)} and {ftp(0,2), ftp(0,1), sshd(0,1)}¹. At this point, there is no clear procedure to decide which solution has the minimum cost, unless we make the assumption that the cost of removing each individual condition is assigned by administrators.

¹ Initial conditions that the administrators cannot control are not considered for the purpose of network hardening. In the example of Figure 1, the condition user(0), corresponding to user privileges on the attacker's machine, is ignored.

Intuitively, one may expect the solution {ftp(0,2), ftp(1,2)} to have a lower cost than {ftp(0,2), ftp(0,1), sshd(0,1)}, as fewer conditions need to be disabled. However, removing both ftp(0,2) and ftp(1,2) may only be possible if the ftp service on host 2 is shut down. This action may have a considerable cost in terms of disruption to legitimate users. In this case, the combined cost of removing the conditions {ftp(0,2), ftp(0,1), sshd(0,1)} may be lower, as it may be achieved by simply blocking all traffic from host 0.

To conclude, note that the attack graph of Figure 1 has the same hardening solutions as the simplified attack graph of Figure 2. This is possible because the algorithm in [2] traverses the graph from target conditions to initial conditions, and, relying on the monotonicity assumption, breaks all the cycles. Intuitively, from the point of view of a target condition, the attack graph can be seen as a tree rooted at the target condition and having initial conditions as the leaf nodes. In fact, each condition is implied by one or more exploits. In turn, each exploit requires one or more preconditions to be satisfied. We leverage this observation in our approach to network hardening.

Figure 2. A tree-style attack graph equivalent to the graph of Figure 1 w.r.t. target condition root(2).
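The following sketch, which is illustrative only and repeats the simplified require/imply encoding used above so that it is self-contained, checks that each of these two candidate solutions indeed makes root(2) unreachable, by recomputing forward reachability after the chosen initial conditions are disabled.

```python
EXPLOITS = {  # exploit -> (required conditions, implied conditions); simplified encoding
    "ftp_rhosts(0,1)": ({"ftp(0,1)", "user(0)"}, {"trust(1,0)"}),
    "rsh(0,1)":        ({"trust(1,0)", "user(0)"}, {"user(1)"}),
    "sshd_bof(0,1)":   ({"sshd(0,1)", "user(0)"}, {"user(1)"}),
    "ftp_rhosts(1,2)": ({"ftp(1,2)", "user(1)"}, {"trust(2,1)"}),
    "rsh(1,2)":        ({"trust(2,1)", "user(1)"}, {"user(2)"}),
    "ftp_rhosts(0,2)": ({"ftp(0,2)", "user(0)"}, {"trust(2,0)"}),
    "rsh(0,2)":        ({"trust(2,0)", "user(0)"}, {"user(2)"}),
    "local_bof(2)":    ({"user(2)"}, {"root(2)"}),
}
INITIAL = {"ftp(0,1)", "ftp(1,2)", "ftp(0,2)", "sshd(0,1)", "user(0)"}

def reachable_conditions(disabled):
    """Fixed-point forward reachability once the given initial conditions are disabled."""
    satisfied = INITIAL - disabled
    changed = True
    while changed:
        changed = False
        for pre, post in EXPLOITS.values():
            if pre <= satisfied and not post <= satisfied:
                satisfied |= post
                changed = True
    return satisfied

for solution in [{"ftp(0,2)", "ftp(1,2)"},
                 {"ftp(0,2)", "ftp(0,1)", "sshd(0,1)"}]:
    safe = "root(2)" not in reachable_conditions(solution)
    print(sorted(solution), "prevents root(2):", safe)
```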
V. COST MODEL

Disabling a set of initial conditions in order to prevent attacks on given targets may result in undesired effects, such as denial of service to legitimate users. These effects are greatly amplified when initial conditions cannot be individually disabled, but rather require actions that disable a larger number of conditions. In the following, we define a network hardening strategy as a set of atomic actions that can be taken to harden a network.

For instance, an allowable hardening action may consist in stopping ftp service on a given host. Thus, each action may have additional effects besides disabling a desired condition. Such effects must be taken into account when computing minimum-cost solutions. Previous work simply assumes that initial conditions can be individually disabled. We take a more general approach and therefore drop this assumption. For instance, in the attack graph of Figure 1, disabling ftp(1,2) might not be possible without also disabling ftp(0,2).

Definition 3 (Allowable hardening action): Given an attack graph G = (E ∪ C, Rr ∪ Ri), an allowable hardening action (or simply hardening action) A is any subset of the set Ci of initial conditions such that all the conditions in A can be jointly disabled in a single step, and no other initial condition c ∈ Ci \ A is disabled when conditions in A are disabled.

A hardening action A is said to be minimal if and only if ∄A′ ⊂ A s.t. A′ is an allowable hardening action. We use 𝒜 to denote the set of all possible hardening actions.
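As a small illustration of Definition 3, the sketch below checks whether a hardening action is minimal with respect to an explicitly given family of allowable actions. The example family is a plausible reading of the preceding discussion (for example, stopping ftp on host 2 disables ftp(0,2) and ftp(1,2) jointly), not the paper's exact model.

```python
from itertools import combinations

def is_minimal(action: frozenset, allowable: set) -> bool:
    """A is minimal iff no proper, non-empty subset of A is itself an allowable action."""
    return not any(frozenset(sub) in allowable
                   for r in range(1, len(action))
                   for sub in combinations(action, r))

allowable = {frozenset({"ftp(0,2)", "ftp(1,2)"}),   # e.g., stopping the ftp daemon on host 2
             frozenset({"sshd(0,1)"})}              # e.g., stopping sshd on host 1
print(is_minimal(frozenset({"ftp(0,2)", "ftp(1,2)"}), allowable))  # True
```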

Figure 3 depicts the same attack graph of Figure 2, but it explicitly shows the allowable hardening actions, represented as rounded rectangles. Dashed edges indicate which conditions are disabled by each action. Intuitively, a network hardening action is an atomic step that network administrators can take to harden the network (e.g., closing an ftp port). When an action A is taken, all and only the conditions in A are disabled.

Figure 3. Possible hardening actions (orange rectangles) for the attack graph of Figure 2.
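To illustrate why the cheapest hardening strategy is not necessarily the one that disables the fewest conditions, the sketch below assigns each of the three actions of Figure 3 a set of disabled conditions and a cost, and exhaustively searches for the cheapest set of actions covering one of the two solutions discussed in Section IV. The action-to-condition mapping is inferred from the discussion above, the costs are purely hypothetical, and the brute-force search merely stands in for the approximation algorithm developed later in the paper.

```python
from itertools import combinations

ACTIONS = {  # action -> initial conditions it jointly disables (assumed mapping)
    "stop_ftp(2)":   {"ftp(0,2)", "ftp(1,2)"},
    "block_host(0)": {"ftp(0,1)", "ftp(0,2)", "sshd(0,1)"},
    "stop_sshd(1)":  {"sshd(0,1)"},
}
COST = {"stop_ftp(2)": 50.0, "block_host(0)": 20.0, "stop_sshd(1)": 10.0}  # hypothetical

SOLUTIONS = [  # sets of initial conditions whose removal prevents root(2) (Section IV)
    {"ftp(0,2)", "ftp(1,2)"},
    {"ftp(0,2)", "ftp(0,1)", "sshd(0,1)"},
]

def disabled_by(strategy):
    """Union of the conditions disabled by the actions in a strategy."""
    return set().union(*(ACTIONS[a] for a in strategy)) if strategy else set()

best = None
for k in range(1, len(ACTIONS) + 1):            # exhaustive search: fine for 3 actions
    for strategy in combinations(ACTIONS, k):
        if any(s <= disabled_by(strategy) for s in SOLUTIONS):
            cost = sum(COST[a] for a in strategy)
            if best is None or cost < best[1]:
                best = (strategy, cost)

print("cheapest strategy:", best[0], "at cost", best[1])
```

With these illustrative numbers, the single action block_host(0) covers the three-condition solution and turns out cheaper than stop_ftp(2), matching the intuition discussed in Section IV.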
