Security and Game Theory: Algorithms, Deployed Systems, Lessons Learned, 1st Edition, by Milind Tambe
Product details:
ISBN-10: 1139200623
ISBN-13: 9781139200622
Author: Milind Tambe
Global threats of terrorism, drug-smuggling and other crimes have led to a significant increase in research on game theory for security. Game theory provides a sound mathematical approach to deploy limited security resources to maximize their effectiveness. A typical approach is to randomize security schedules to avoid predictability, with the randomization using artificial intelligence techniques to take into account the importance of different targets and potential adversary reactions. This book distills the forefront of this research to provide the first and only study of long-term deployed applications of game theory for security for key organizations such as the Los Angeles International Airport police and the US Federal Air Marshals Service. The author and his research group draw from their extensive experience working with security officials to intelligently allocate limited security resources to protect targets, outlining the applications of these algorithms in research and the real world.
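The core idea the book builds on — randomizing a defender's coverage while accounting for target values and an adversary who observes and best-responds — can be sketched with a tiny two-target Stackelberg security game. This is a minimal illustration only: the payoff numbers and target names are invented for this sketch and do not come from the book, and the grid search stands in for the LP/MILP solvers (e.g., DOBSS, ERASER) that the book actually describes.

```python
# Toy Stackelberg security game: one defender resource split across two
# targets. Payoff values are illustrative assumptions, not from the book.
targets = {
    "Terminal":   {"def_cov": 5, "def_unc": -10, "att_cov": -4, "att_unc": 8},
    "Checkpoint": {"def_cov": 3, "def_unc": -6,  "att_cov": -3, "att_unc": 5},
}

def expected(u_cov, u_unc, c):
    """Expected utility of a target covered with probability c."""
    return c * u_cov + (1 - c) * u_unc

def defender_value(c1):
    """Defender's payoff when 'Terminal' gets coverage c1 (and 'Checkpoint'
    gets 1 - c1), assuming the attacker observes the coverage and attacks
    the target maximizing his own expected utility. Ties are broken in the
    defender's favor, per the strong Stackelberg equilibrium convention."""
    cov = {"Terminal": c1, "Checkpoint": 1 - c1}
    att_vals = {t: expected(p["att_cov"], p["att_unc"], cov[t])
                for t, p in targets.items()}
    best = max(att_vals.values())
    # Among the attacker's optimal targets, assume the defender-preferred one.
    return max(expected(targets[t]["def_cov"], targets[t]["def_unc"], cov[t])
               for t, v in att_vals.items() if abs(v - best) < 1e-9)

# Grid search over the coverage split (a real solver would use an LP/MILP).
best_c, best_u = max(((c / 1000, defender_value(c / 1000))
                      for c in range(1001)), key=lambda p: p[1])
print(f"coverage on Terminal: {best_c:.3f}, defender utility: {best_u:.2f}")
```

With these numbers the optimum lands where the attacker is indifferent between targets (coverage ≈ 0.55 on the higher-value target): any deterministic schedule would let the attacker strike the one uncovered target, which is exactly why the deployed systems randomize.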
Table of Contents:
- 1 Introduction and Overview of Security Games
- 1.1 Introduction
- 1.2 Motivation: Security Games
- 1.2.1 Game Theory
- 1.2.2 Bayesian Stackelberg Games
- 1.2.3 Security Games
- 1.3 Overview of Part II: Applications of Security Games
- 1.3.1 ARMOR
- 1.3.2 IRIS
- 1.3.3 GUARDS
- 1.3.4 Beyond ARMOR/IRIS/GUARDS
- 1.4 Overview of Part III: Algorithmic Advances to Achieve Scale-up in Security Games
- 1.4.1 Efficient Algorithms: Overview
- 1.4.2 Efficient Algorithms: A More Detailed View
- 1.5 Part IV: Toward the Future
- 1.5.1 Overview of Research Challenges
- 1.5.2 Challenges in Evaluation of Deployed Security Systems
- 1.6 Summary
- PART I Security Experts’ Perspectives
- 2 LAX – Terror Target: The History, the Reason, the Countermeasure
- 2.1 Introduction
- 2.2 A Brief History of Significant Events
- 2.3 Terrorism and the Economic Significance of the Aviation Domain
- 2.3.1 National Perspective
- 2.3.2 Los Angeles International Airport
- 2.4 Aviation Security
- 2.5 LAX Terror History
- 2.6 RAND Study
- 2.7 Los Angeles World Airports Police
- 2.7.1 Critical Infrastructure Protection Unit (CIPU)
- 2.7.2 Vulnerability Assessment and Analysis Unit (VAAU)
- 2.7.3 Emergency Services Unit (ESU)
- 2.7.4 Canine Unit
- 2.7.5 Dignitary Protection Unit (DPU)
- 2.7.6 Intelligence Section
- 2.7.7 Security Credential Section
- 2.7.8 Airport Security Advisory Committee
- 2.8 Terrorist Operational Planning Cycle
- 2.9 CREATE Pilot Project
- 2.10 Summary
- 3 Maritime Transportation System Security and the Use of Game Theory: A Perfect Match to Address Ope
- PART II Deployed Applications
- 4 Deployed ARMOR Protection: The Application of a Game-Theoretic Model for Security at the Los Angeles International Airport
- 4.1 Introduction
- 4.2 Related Work
- 4.3 Security Domain Description
- 4.4 Approach
- 4.4.1 Bayesian Stackelberg Game
- 4.4.2 DOBSS
- 4.4.3 Bayesian Stackelberg Game for the Los Angeles International Airport
- 4.5 System Architecture
- 4.5.1 Interface
- 4.5.2 Matrix Generation and DOBSS
- 4.5.3 Mixed Strategy and Schedule Generation
- 4.6 Design Challenges
- 4.7 Experimental Results
- 4.7.1 Runtime Analysis
- 4.7.2 Evaluation of ARMOR
- 4.8 Summary
- Acknowledgments
- 5 IRIS – A Tool for Strategic Security Allocation in Transportation Networks
- 5.1 Introduction
- 5.2 Federal Air Marshal Service
- 5.3 Background
- 5.3.1 Stackelberg Games
- 5.3.2 Stackelberg Security Games
- 5.3.3 ERASER/ERASER-C
- 5.4 System Architecture
- 5.5 Major Challenges
- 5.5.1 Describing the Game
- 5.5.2 Solving the Game
- 5.6 Organizational Acceptance
- 5.7 Experimental Results
- 5.7.1 Runtime Analysis
- 5.7.2 Evaluation
- 5.8 Summary
- Acknowledgments
- 6 GUARDS: Game-Theoretic Security Allocation on a National Scale
- 6.1 Introduction
- 6.2 Background
- 6.2.1 Security Games
- 6.2.2 Assistants for Security Games
- 6.3 National Deployment Challenges
- 6.3.1 Modeling the TSA Resource Allocation Challenges
- 6.3.1.1 Defender Strategies
- 6.3.1.2 Attacker Actions
- 6.3.2 Compact Representation for Efficiency
- 6.3.2.1 Threat Modeling for TSA
- 6.3.2.2 Compact Representation
- 6.3.3 Knowledge Acquisition
- 6.4 System Architecture
- 6.5 Evaluation
- 6.5.1 Runtime Analysis
- 6.5.2 Security Policy Analysis
- 6.6 Lessons in Transitioning Research into Practice
- 6.7 Related Work and Summary
- Acknowledgments
- PART III Efficient Algorithms for Massive Security Games
- 7 Coordinating Randomized Policies for Increasing the Security of Agent Systems
- 7.1 Introduction
- 7.2 Related Work
- 7.3 Randomization with No Adversary Model
- 7.3.1 Maximal Entropy Solution
- 7.3.2 Efficient Single-Agent Randomization
- 7.4 Randomization Using a Partial Adversary Model
- 7.4.1 Exact Solution: DOBSS
- 7.4.2 Approximate Solution: ASAP
- 7.5 Experimental Results
- 7.5.1 No Adversary Model
- 7.5.2 Partial Adversary Model
- 7.6 Conclusions and Policy Implications
- Acknowledgments
- 8 Computing Optimal Randomized Resource Allocations for Massive Security Games
- 8.1 Introduction
- 8.2 Game-Theoretic Modeling of Security Games
- 8.2.1 Stackelberg Security Games
- 8.2.2 Stackelberg Equilibrium
- 8.3 Motivating Domains
- 8.4 A Compact Representation for Multiple Resources
- 8.4.1 Compact Security Game Model
- 8.4.2 Compact versus Normal Form
- 8.4.3 ERASER Solution Algorithm
- 8.5 Exploiting Payoff Structure
- 8.6 Scheduling and Resource Constraints
- 8.7 Experimental Evaluation
- 8.8 Conclusion
- Acknowledgment
- 9 Security Games with Arbitrary Schedules: A Branch-and-Price Approach
- 9.1 Introduction
- 9.2 SPARS
- 9.3 ASPEN Solution Approach and Related Work
- 9.4 ASPEN Column Generation
- 9.5 Improving Branching and Bounds
- 9.6 Experimental Results
- 9.7 Conclusions
- Acknowledgments
- PART IV Future Research
- 10 Effective Solutions for Real-World Stackelberg Games: When Agents Must Deal with Human Uncertainty
- 10.1 Introduction
- 10.2 Background
- 10.3 Robust Algorithms
- 10.3.1 BRASS
- 10.3.2 GUARD
- 10.3.3 COBRA
- 10.4 Experiments
- 10.4.1 Quality Comparison
- 10.4.2 Runtime Results
- 10.5 Summary and Related Work
- Acknowledgments
- 11 Approximation Methods for Infinite Bayesian Stackelberg Games: Modeling Distributional Payoff Uncertainty
- 11.1 Introduction
- 11.2 Related Work
- 11.3 Bayesian Security Games
- 11.3.1 Bayesian Stackelberg Equilibrium
- 11.3.2 Attacker Payoff Distributions
- 11.4 Solution Methods
- 11.4.1 Sampled Bayesian ERASER
- 11.4.2 Sampled Replicator Dynamics
- 11.4.3 Greedy Monte Carlo
- 11.4.4 Worst-Case Interval Uncertainty
- 11.4.5 Decoupled Target Sets
- 11.5 Experimental Evaluation
- 11.5.1 Experimental Setup
- 11.5.2 Attacker Response Estimation
- 11.5.3 Approximation Algorithms
- 11.6 Conclusion
- Acknowledgments
- 12 Stackelberg versus Nash in Security Games: Interchangeability, Equivalence, and Uniqueness
- 12.1 Introduction
- 12.2 Motivating Domains
- 12.3 Definitions and Notation
- 12.4 Equilibria in Security Games
- 12.4.1 Equivalence of NE and Minimax
- 12.4.2 Interchangeability of Nash Equilibria
- 12.4.3 SSE and Minimax/NE
- 12.4.4 Uniqueness in Restricted Games
- 12.5 Multiple Attacker Resources
- 12.6 Experimental Results
- 12.7 Summary and Related Work
- Acknowledgments
- 13 Evaluating Deployed Decision-Support Systems for Security: Challenges, Analysis, and Approaches
- 13.1 Introduction
- 13.2 Background: ARMOR and IRIS
- 13.3 Formulating the Problem
- 13.3.1 Abstracting the Real-World Problem
- 13.3.2 Solution Concepts and Computational Considerations
- 13.3.2.1 Potential Solution Concepts
- 13.3.2.2 Algorithmic Goals
- 13.3.3 Implementing the Solution
- 13.4 Evaluation Case Studies
- 13.4.1 Comparison with Previous Best Practices
- 13.4.2 Mathematical Sensitivity Analysis
- 13.4.3 Human Trials
- 13.4.4 Arrest Data
- 13.4.5 Qualitative Expert Evaluations
- 13.5 Goals for Security Decision-Support Systems
- 13.5.1 Security per Dollar
- 13.5.2 Threat Deterrence
- 13.6 Types of Evaluation
- 13.6.1 Model-Based/Algorithmic
- 13.6.2 Cost-Benefit Analysis
- 13.6.3 Relative Benefit
- 13.6.4 Human Behavioral Experiments
- 13.6.5 Operational Record
- 13.6.6 High-Level Evaluations
- 13.7 Related Work
- 13.8 Conclusions
- Acknowledgments
- PART V Short Bios