A critical challenge in deep learning is the vulnerability of deep learning networks to security attacks from intelligent cyber adversaries. Even innocuous perturbations to the training data can be used to manipulate the behaviour of deep networks in unintended ways. In this book, we review the latest developments in adversarial attack technologies in computer vision, natural language processing, and cybersecurity, covering multidimensional, textual and image data, sequence data, and temporal data. In turn, we assess the robustness properties of deep learning networks to produce a taxonomy of adversarial examples that characterises the security of learning systems using game theoretical adversarial deep learning algorithms. The state of the art in adversarial perturbation-based privacy protection mechanisms is also reviewed.
We propose new adversary types for game theoretical objectives in non-stationary computational learning environments. Proper quantification of the hypothesis set in the decision problems of our research leads to various functional problems, oracular problems, sampling tasks, and optimization problems. We also address the defence mechanisms currently available for deep learning models deployed in real-world environments. The learning theories used in these defence mechanisms concern data representations, feature manipulations, misclassification costs, sensitivity landscapes, distributional robustness, and complexity classes of the adversarial deep learning algorithms and their applications.
In closing, we propose future research directions in adversarial deep learning applications for resilient learning system design and review formalized learning assumptions concerning the attack surfaces and robustness characteristics of artificial intelligence applications so as to deconstruct contemporary adversarial deep learning designs. Given its scope, the book will be of interest to adversarial machine learning practitioners and adversarial artificial intelligence researchers whose work involves the design and application of adversarial deep learning.
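As a brief aside that is not drawn from the book itself, the claim above, that a small and visually innocuous perturbation can steer a deep network's prediction, can be illustrated with a minimal test-time FGSM-style sketch in PyTorch. Everything in the snippet is a placeholder chosen for illustration (the toy linear "classifier", the random input, and the assumed label), not material from the book.

# Minimal FGSM-style sketch (illustrative only, not from the book):
# an epsilon-bounded perturbation of the input is chosen to increase
# the classifier's loss, which can change the predicted class.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return x plus an epsilon-bounded adversarial perturbation."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step in the sign of the input gradient, then clip back to the valid pixel range.
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy stand-in for a trained image classifier (placeholder: 10 classes, 3x32x32 inputs).
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(1, 3, 32, 32)   # one random "image" (placeholder data)
    y = torch.tensor([3])          # its assumed label (placeholder)
    x_adv = fgsm_perturb(model, x, y)
    print("clean prediction:    ", model(x).argmax(dim=1).item())
    print("perturbed prediction:", model(x_adv).argmax(dim=1).item())

With an untrained toy model the prediction is not guaranteed to flip; the sketch only shows how little code the perturbation step itself requires against a trained classifier.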
"synopsis" may belong to another edition of this title.
Dr. Aneesh Sreevallabh Chivukula is currently an Assistant Professor in the Department of Computer Science & Information Systems at the Birla Institute of Technology and Science (BITS), Pilani, Hyderabad Campus. He has a PhD in data analytics and machine learning from the University of Technology Sydney (UTS), Australia, and a Master of Science by Research in computer science and artificial intelligence from the International Institute of Information Technology, Hyderabad, India. His research interests are in computational algorithms, adversarial learning, machine learning, deep learning, data mining, game theory, and robust optimization. He has taught subjects on advanced analytics and problem solving at UTS and teaches academic courses on computer science at BITS, Pilani. He has industry experience in engineering, R&D, and consulting at research labs and startup companies, and has developed enterprise solutions across value chains in the open-source, cloud, and big data markets.
Dr. Xinghao Yang is currently an Associate Professor at the China University of Petroleum. He has a Ph.D. in advanced analytics from the University of Technology Sydney, Sydney, NSW, Australia. His research interests include multi-view learning and adversarial machine learning, with publications in Information Fusion and Information Sciences.
Dr. Wei Liu is the Director of the Future Intelligence Research Lab and an Associate Professor in Machine Learning in the School of Computer Science at the University of Technology Sydney (UTS), Australia. He is a core member of the UTS Data Science Institute. Wei obtained his PhD in machine learning research at the University of Sydney (USyd). His current research focuses on adversarial machine learning, game theory, causal inference, multimodal learning, and natural language processing. His research papers are regularly published in CORE A*/A and Q1 (i.e., top-tier) journals and conferences, and he has received three Best Paper Awards. One of his first-authored papers received the Most Influential Paper Award at the CORE A-ranked conference PAKDD 2021. He was a nominee for the Australian NSW Premier's Prize for Early Career Researcher in 2017, and he has obtained more than $2 million in competitive government and industry research funding over the past six years.
Dr. Bo Liu is currently a Senior Lecturer at the University of Technology Sydney, Australia. His research interests include cybersecurity and privacy, location and image privacy, privacy protection in machine learning, and wireless communications and networks. He is an IEEE Senior Member and an Associate Editor of the IEEE Transactions on Broadcasting.
Dr. Wanlei Zhou received the Ph.D. degree in computer science and engineering from the Australian National University, Canberra, ACT, Australia, in 1991, and the D.Sc. degree from Deakin University, Melbourne, VIC, Australia, in 2002. He is currently a Professor and the Head of the School of Computer Science at the University of Technology Sydney. He has served as a Lecturer with the University of Electronic Science and Technology of China, a System Programmer with Hewlett Packard, Boston, MA, USA, and a Lecturer with Monash University, Melbourne, VIC, Australia, and with the National University of Singapore, Singapore. He has published over 300 papers in refereed international journals and refereed international conference proceedings. His research interests include distributed systems, network security, bioinformatics, and e-learning. Dr. Zhou has served as the General Chair/Program Committee Chair/Co-Chair of a number of international conferences, including ICA3PP, ICWL, PRDC, NSS, ICPAD, ICEUC, and HPCC.
"About this title" may belong to another edition of this title.
US$ 2.64 shipping within U.S.A.
Seller: GreatBookPrices, Columbia, MD, U.S.A.
Condition: New. Seller Inventory # 46030597-n
Quantity: Over 20 available
Seller: Grand Eagle Retail, Bensenville, IL, U.S.A.
Hardcover. Condition: new. Shipping may be from multiple locations in the US or from the UK, depending on stock availability. Seller Inventory # 9783030997717
Quantity: 1 available
Seller: Ria Christie Collections, Uxbridge, United Kingdom
Condition: New. In. Seller Inventory # ria9783030997717_new
Quantity: Over 20 available
Seller: GreatBookPrices, Columbia, MD, U.S.A.
Condition: As New. Unread book in perfect condition. Seller Inventory # 46030597
Quantity: Over 20 available
Seller: GreatBookPricesUK, Woodford Green, United Kingdom
Condition: New. Seller Inventory # 46030597-n
Quantity: Over 20 available
Seller: Revaluation Books, Exeter, United Kingdom
Hardcover. Condition: Brand New. 321 pages. 9.25x6.10x9.21 inches. In Stock. This item is printed on demand. Seller Inventory # __3030997715
Quantity: 1 available
Seller: California Books, Miami, FL, U.S.A.
Condition: New. Seller Inventory # I-9783030997717
Quantity: Over 20 available
Seller: moluna, Greven, Germany
Condition: New. This is a print-on-demand item and will be printed for you after your order is placed. Seller Inventory # 571801956
Quantity: Over 20 available
Seller: GreatBookPricesUK, Woodford Green, United Kingdom
Condition: As New. Unread book in perfect condition. Seller Inventory # 46030597
Quantity: Over 20 available
Seller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
Book. Condition: New. This item is printed on demand - it takes 3-4 days longer - new stock - 324 pp. English. Seller Inventory # 9783030997717
Quantity: 2 available