Computer Science
Permanent URI for this collection: https://uwspace.uwaterloo.ca/handle/10012/9930
This is the collection for the University of Waterloo's Cheriton School of Computer Science.
Research outputs are organized by type (e.g., Master Thesis, Article, Conference Paper).
Waterloo faculty, students, and staff can contact us or visit the UWSpace guide to learn more about depositing their research.
Browsing Computer Science by Author "Asokan, N."
Now showing 1 - 4 of 4
Item: Efficient Memory Allocator for Restricting Use-After-Free Exploitations (University of Waterloo, 2024-07-17). Wang, Ruizhe; Asokan, N.; Xu, Meng.

Attacks on heap memory, encompassing memory overflow, double and invalid free, use-after-free (UAF), and various heap-spraying techniques, are ever-increasing. Existing secure memory allocators can generally be classified as complete UAF-mitigating allocators that focus on detecting and stopping UAF attacks, type-based allocators that limit type confusion, and entropy-based allocators that provide statistical defenses against virtually all of these attack vectors. In this thesis, I introduce two novel approaches, SEMalloc and S2Malloc, for type- and entropy-based allocation, respectively. Both allocators are designed to restrict, but not fully eliminate, the attacker's abilities, using only allocation strategies. They can significantly increase the security level without introducing excessive overhead. SEMalloc proposes a new notion of thread-, context-, and flow-sensitive 'type', SemaType, to capture the semantics, and prototypes a SemaType-based allocator that aims for the best trade-off amongst the impossible trinity. In SEMalloc, only heap objects allocated from the same call site and via the same function call stack can possibly share a virtual memory address, which effectively stops type-confusion attacks and makes UAF vulnerabilities harder to exploit. S2Malloc aims to enhance UAF-attempt detection without compromising other security guarantees or introducing significant overhead. We use three innovative constructs in secure allocator design: free block canaries (FBC) to detect UAF attempts, random in-block offset (RIO) to stop the attacker from accurately overwriting the victim object, and random bag layout (RBL) to impede attackers from estimating the block size based on its address. This thesis demonstrates the importance of memory security and highlights the potential of more secure and efficient memory allocation by constraining attacker actions.
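As a rough illustration of two of the constructs named in this abstract, free block canaries (FBC) and random in-block offset (RIO), the following is a minimal Python sketch of a toy fixed-size-block bag. It is not the SEMalloc or S2Malloc implementation; the block size, canary byte, and the ToyBag class are assumptions made purely for illustration.

```python
import secrets

BLOCK_SIZE = 64   # toy block size; the real allocator's bag geometry differs
CANARY = 0xAA     # illustrative fill byte for freed blocks

class ToyBag:
    """Toy model of free block canaries (FBC) and random in-block offsets (RIO)."""

    def __init__(self, num_blocks=8):
        self.memory = bytearray(num_blocks * BLOCK_SIZE)
        self.free_blocks = list(range(num_blocks))

    def alloc(self, size):
        # Pick a random free block (entropy-based allocation) and place the
        # object at a random offset inside it (RIO), so an attacker cannot
        # predict exactly where a victim object starts.
        idx = self.free_blocks.pop(secrets.randbelow(len(self.free_blocks)))
        offset = secrets.randbelow(BLOCK_SIZE - size + 1)
        return idx, offset

    def free(self, idx):
        # FBC: fill the freed block with the canary pattern.
        start = idx * BLOCK_SIZE
        self.memory[start:start + BLOCK_SIZE] = bytes([CANARY]) * BLOCK_SIZE
        self.free_blocks.append(idx)

    def canary_intact(self, idx):
        # A modified byte in a freed block indicates a write to freed memory,
        # i.e., a likely use-after-free attempt.
        start = idx * BLOCK_SIZE
        return all(b == CANARY for b in self.memory[start:start + BLOCK_SIZE])
```

In this toy model, randomizing both the block choice and the in-block offset is what denies the attacker a predictable victim address, while checking canary_intact on freed blocks before reuse is how a UAF write attempt would be flagged.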
Item: Multimodal spoofing and adversarial examples countermeasure for speaker verification (University of Waterloo, 2022-07-26). Ramesh, Karthik; Asokan, N.

Authentication mechanisms have always been prevalent in our society, even as far back as Ancient Mesopotamia in the form of seals. Since the advent of the digital age, the need for good digital authentication techniques has soared, stemming from the widespread adoption of online platforms and digitized content. Audio-based authentication such as speaker verification has been explored as another mechanism for achieving this goal. Specifically, an audio template belonging to the authorized user is stored with the authentication system; this template is later compared with the current input voice to authenticate the current user. Audio spoofing refers to attacks used to fool the authentication system into granting access to restricted resources, and it has been shown to effectively degrade the performance of a variety of audio-authentication methods. In response, spoofing countermeasures have been developed that can detect and thwart these attacks. The advent of deep learning techniques and their use in real-life applications has also led to research on techniques for purposes ranging from exploiting weaknesses in deep learning models to stealing confidential information. One way to evade a deep learning-based audio authentication model is through adversarial attacks, which add a carefully crafted perturbation to the input to elicit a wrong inference from the model.

We first explore the performance that multimodality brings to the anti-spoofing task. We augment a unimodal spoofing countermeasure with visual information to determine whether it improves performance. Since visuals serve as an additional domain of information, we test whether the existing paradigm of unimodal spoofing countermeasures can benefit from this new information. Our results indicate that augmenting an existing unimodal countermeasure with visual information does not provide any performance benefits. Future work can explore more tightly coupled multimodal models that use objectives such as contrastive loss. We then study the vulnerability of deep learning-based multimodal speaker verification to adversarial attacks, a vulnerability that has not previously been established for multimodal speaker verification. We find that the multimodal models rely heavily on the visual modality and that attacking both modalities leads to a higher attack success rate. Future work can move on to stronger attacks that bypass both the spoofing countermeasure and speaker verification. Finally, we investigate the feasibility of a generic evasion detector that can block both adversarial and spoofing attacks. Since both attack types target speaker verification models, we add an adversarial attack detection mechanism, feature squeezing, onto the spoofing countermeasure. We find that such a detector is feasible but involves a significant reduction in the identification of genuine samples. Future work can explore adversarial training as a defense against attacks that target the complete spoofing countermeasure and speaker verification pipeline.
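To make the feature-squeezing detector described in the item above concrete, here is a minimal Python sketch that compares a verification model's score on a raw waveform against its score on a bit-depth-reduced copy and flags large discrepancies. The squeezing transform, the threshold value, and the model callable are illustrative assumptions, not the exact configuration evaluated in the thesis.

```python
import numpy as np

def squeeze_bit_depth(waveform, bits=8):
    """Reduce the effective bit depth of a waveform assumed to lie in [-1, 1].
    Bit-depth reduction is one common feature-squeezing transform."""
    levels = 2 ** bits - 1
    return np.round((waveform + 1.0) / 2.0 * levels) / levels * 2.0 - 1.0

def looks_adversarial(model, waveform, threshold=0.1):
    """Flag an input as adversarial if the (hypothetical) verification model's
    score shifts sharply once the input is squeezed; benign inputs should be
    largely unaffected by a small quantization change."""
    raw_score = model(waveform)
    squeezed_score = model(squeeze_bit_depth(waveform))
    return abs(raw_score - squeezed_score) > threshold
```

The design intuition is that an adversarial perturbation is a fragile, precisely tuned signal, so destroying fine-grained detail changes the model's output far more for adversarial inputs than for genuine ones; the abstract's caveat about misclassifying genuine samples corresponds to setting this threshold too aggressively.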
Item: On Using Embeddings for Ownership Verification of Graph Neural Networks (University of Waterloo, 2023-08-11). Waheed, Asim; Asokan, N.

Graph neural networks (GNNs) have emerged as a state-of-the-art approach to model and draw inferences from large-scale graph-structured data in application settings such as social networking. The primary goal of a GNN is to learn an embedding for each graph node in a dataset that encodes both the node features and the local graph structure around the node. Prior work has shown that GNNs are prone to model extraction attacks; model extraction attacks and defenses have been explored extensively in other, non-graph settings. While detecting or preventing model extraction appears to be difficult, deterring such attacks via effective ownership verification techniques offers a potential defense. In non-graph settings, fingerprinting models, or the data used to build them, has been shown to be a promising approach to ownership verification. We hypothesize that the embeddings generated by a GNN are useful as fingerprints. Based on this hypothesis, we present GrOVe, a state-of-the-art GNN model fingerprinting scheme that, given a target model and a suspect model, can reliably determine whether the suspect model was trained independently of the target model or is a surrogate of the target model obtained via model extraction. We show that GrOVe can distinguish between surrogate and independent models even when the independent model uses the same training dataset and architecture as the original target model. Using six benchmark datasets and three model architectures, we show that GrOVe consistently achieves low false-positive and false-negative rates. We demonstrate that GrOVe is robust against known fingerprint evasion techniques while remaining computationally efficient.

Item: Security Evaluations of GitHub's Copilot (University of Waterloo, 2023-08-11). Asare, Owura; Nagappan, Meiyappan; Asokan, N.

Code generation tools driven by artificial intelligence have recently become more popular due to advancements in deep learning and natural language processing that have increased their capabilities. The proliferation of these tools may be a double-edged sword: while they can increase developer productivity by making it easier to write code, research has shown that they can also generate insecure code. In this thesis, we perform two evaluations of one such code generation tool, GitHub's Copilot, with the aim of better understanding its strengths and weaknesses with respect to code security. In our first evaluation, we use a dataset of vulnerabilities found in real-world projects to compare Copilot's security performance to that of human developers. In the set of 150 samples we consider, we find that Copilot is not as bad as human developers but still performs unevenly across certain types of vulnerabilities. In our second evaluation, we conduct a user study that tasks participants with providing solutions, with and without Copilot assistance, to programming problems that have potentially vulnerable solutions. The main goal of the user study is to determine how the use of Copilot affects participants' security performance. In our set of participants (n=21), we find that access to Copilot accompanies a more secure solution when tackling harder problems; for the easier problem, we observe no effect of Copilot access on the security of solutions. We also capitalize on the solutions obtained from the user study by performing a preliminary evaluation of the vulnerability detection capabilities of GPT-4. We observe mixed results of high accuracies and high false-positive rates, but maintain that language models like GPT-4 remain promising avenues for accessible, static code analysis for vulnerability detection. We discuss Copilot's security performance in both evaluations with respect to different types of vulnerabilities, as well as its implications for the research, development, testing, and usage of code generation tools.
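For the preliminary GPT-4 vulnerability-detection evaluation mentioned in the last item, a query could plausibly be set up as in the sketch below, assuming the OpenAI Python client (v1 or later). The prompt wording, model identifier, and the flag_vulnerabilities function are illustrative guesses rather than the protocol actually used in the thesis.

```python
from openai import OpenAI  # assumes the openai Python package (v1+) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def flag_vulnerabilities(code_snippet: str) -> str:
    """Ask GPT-4 whether a code snippet contains security vulnerabilities."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": ("You are a code security reviewer. Report any "
                         "vulnerabilities in the given code, citing CWE IDs "
                         "where possible, or reply 'none found'.")},
            {"role": "user", "content": code_snippet},
        ],
    )
    return response.choices[0].message.content
```

Given the mixed results reported in the abstract (high accuracies alongside high false-positive rates), the output of such a detector would need manual review before being trusted.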