Omid Aramoon

Graduate Research Assistant

Hello! I am a researcher at the Maryland Cybersecurity Center at the University of Maryland. I started my Ph.D. working on security problems in hardware design, including IP watermarking, fingerprinting, and logic locking schemes, which led to several publications at prestigious EDA conferences. Later on, I became interested in the security and privacy aspects of Machine Learning (ML) and Deep Learning (DL) systems, and I studied techniques for protecting ML/DL systems against adversarial attacks such as IP infringement, backdooring attempts, and model tampering, which led to several publications at top ML conferences. My current research interests are privacy and security in deep learning systems. I am confident in my research skills, my knowledge of machine learning and deep learning concepts, and my ability to develop complex ML/DL algorithms using Python and TensorFlow. I expect to graduate in August 2022.

Download CV

Latest publications:

"Meta Federated Learning” published on DPML workshop in International Conference on Learning Representations (ICLR-21).

“Provably Accurate Memory Fault Detection Method for Deep Neural Networks” published in the Proceedings of GLSVLSI-21.

“AID: Attesting the Integrity of Deep Neural Networks” published at the 2021 58th ACM/IEEE Design Automation Conference (DAC-21).

Education

  • In Progress

    Doctor of Philosophy, Electrical and Computer Engineering, University of Maryland, College Park, MD

  • 2016-2021

    Master of Science, Electrical and Computer Engineering, University of Maryland, College Park, MD

  • 2011-2016

    Bachelor of Science, Computer Engineering, Sharif University of Technology, Tehran, Iran

Research

[ongoing project] Power Side-channel Analysis for Security of AI

Physical phenomena observed during the execution of algorithms on microelectronic devices are an inevitable side effect of their physical implementation. If such physical phenomena, known as side channels, correlate with the sensitive data processed on the device, they may leak confidential information about the system's internal state and possibly compromise its security. For instance, the amount of time and energy needed for a device to execute certain computations may leak critical information about the operations taking place inside the device. The term Side-Channel Attack (SCA) describes the class of attacks in which the unintended information leaking from the implementation of algorithms is leveraged to extract confidential information. Historically, cryptographic systems have been the primary target of side-channel attacks, and a large number of studies have proposed a variety of techniques to break both symmetric and asymmetric encryption algorithms. Recently, however, ML models, especially DNNs, have also been shown to be vulnerable to side-channel attacks. Several studies have proposed techniques to reverse-engineer certain properties of neural networks, such as their architecture or parameters, using information obtained from physical and micro-architectural side channels emitted during model execution. The security community often considers side channels a negative side effect that can significantly weaken the security of algorithms implemented on microelectronic devices. From an alternative perspective, we view side channels as part of the design space that can provide constructive functionalities. I am currently investigating applications of power side-channel analysis in detecting training-time and test-time adversarial attacks against FPGA implementations of deep learning systems.
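To make the idea of a data-dependent side channel concrete, here is a minimal Python sketch of a correlation-based analysis: it correlates a hypothetical Hamming-weight leakage model with synthetic power traces and locates the time sample where the secret-dependent value leaks. The trace data, leakage model, and leak location are all illustrative assumptions, not measurements from this project.

```python
# Toy correlation-based side-channel analysis on synthetic data.
import numpy as np

rng = np.random.default_rng(0)

def hamming_weight(x):
    # Number of set bits in each 8-bit intermediate value.
    return np.unpackbits(np.asarray(x, dtype=np.uint8)[:, None], axis=1).sum(axis=1)

# Suppose each execution processes one secret-dependent byte (the "intermediate").
n_traces, n_samples = 2000, 500
intermediates = rng.integers(0, 256, size=n_traces)

# Synthetic traces: leakage proportional to Hamming weight at one time sample, plus noise.
traces = rng.normal(0.0, 1.0, size=(n_traces, n_samples))
leak_point = 137
traces[:, leak_point] += 0.5 * hamming_weight(intermediates)

# Correlate the hypothetical leakage model with every time sample;
# a strong correlation peak indicates where (and whether) the secret value leaks.
model = hamming_weight(intermediates).astype(float)
model -= model.mean()
centered = traces - traces.mean(axis=0)
corr = (centered * model[:, None]).sum(axis=0) / (
    np.sqrt((centered ** 2).sum(axis=0)) * np.sqrt((model ** 2).sum())
)
print("strongest correlation at sample", int(np.abs(corr).argmax()))
```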

I am particularly interested in hardware security and the security of machine learning and deep learning systems.

Meta Federated Learning

Due to its distributed methodology alongside its privacy-preserving features, Federated Learning (FL) is vulnerable to training-time adversarial attacks. In this study, our focus is on backdoor attacks, in which the adversary's goal is to cause targeted misclassifications for inputs embedded with an adversarial trigger while maintaining acceptable performance on the main learning task at hand. Contemporary defenses against backdoor attacks in federated learning require direct access to each individual client's update, which is not feasible in recent FL settings where Secure Aggregation is deployed. In this study, we seek to answer the following question: is it possible to defend against backdoor attacks when secure aggregation is in place? This question has not been addressed by prior art. To this end, we propose Meta Federated Learning (Meta-FL), a novel variant of federated learning that is not only compatible with the secure aggregation protocol but also facilitates defense against backdoor attacks. We perform a systematic evaluation of Meta-FL on two classification datasets: SVHN and GTSRB. The results show that Meta-FL not only achieves better utility than classic FL, but also enhances the performance of contemporary defenses in terms of robustness against adversarial attacks.

Meta Federated Learning: My presentation at the DPML workshop at ICLR-21
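The sketch below illustrates, in a hedged way, the property the abstract above relies on: a defense that only ever touches aggregates of client updates (which is all a secure aggregation protocol exposes) rather than individual updates. The cohort partitioning and the coordinate-wise median used here are illustrative assumptions, not the exact mechanism from the paper.

```python
# Hypothetical sketch: defend on cohort-level aggregates instead of per-client updates.
import numpy as np

rng = np.random.default_rng(1)

def server_round(client_updates, cohort_size):
    """client_updates: array of shape (n_clients, n_params)."""
    n_clients = client_updates.shape[0]
    order = rng.permutation(n_clients)
    cohorts = order[: (n_clients // cohort_size) * cohort_size].reshape(-1, cohort_size)

    # Each cohort is combined exactly as a secure-aggregation protocol would expose it:
    # the server only ever sees the cohort-level aggregate, never a single update.
    cohort_aggregates = np.stack(
        [client_updates[idx].mean(axis=0) for idx in cohorts]
    )

    # The defense is applied across cohort aggregates (robust to a few poisoned cohorts).
    return np.median(cohort_aggregates, axis=0)

# Toy example: 20 honest clients pushing toward +1, 2 attackers pushing toward -10.
updates = np.concatenate(
    [rng.normal(1.0, 0.1, size=(20, 5)), rng.normal(-10.0, 0.1, size=(2, 5))]
)
print(server_round(updates, cohort_size=4))
```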

Protecting Deep Neural Networks Against Integrity Breaches

Due to their crucial role in many decision-making tasks, Deep Neural Networks (DNNs) are common targets for a large array of integrity breaches. In this paper, we propose AID, a novel methodology to Attest the Integrity of DNNs. AID generates a set of test cases called edge-points that can reveal whether a model has been compromised. AID does not require access to the parameters of the DNN and can work with restricted black-box access to the model, which makes it applicable to most real-life scenarios. Experimental results show that AID is highly effective and reliable.

AID: Attesting the Integrity of Deep Neural Networks
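A minimal sketch of black-box integrity attestation along these lines: query the deployed model with pre-computed test inputs and compare the returned labels against the expected ones. How AID actually constructs its edge-points is not shown here; the toy model, inputs, and tampering below are assumptions for illustration only.

```python
# Toy black-box attestation: recorded test cases must keep their expected labels.
import numpy as np

def attest_model(predict_fn, edge_points, expected_labels):
    """predict_fn: black-box that maps a batch of inputs to predicted labels.
    Returns True if every test case is still classified as expected."""
    predictions = predict_fn(edge_points)
    return bool(np.all(predictions == expected_labels))

# Stand-in for a deployed model: a fixed linear classifier over 2-D inputs.
weights = np.array([[1.0, -1.0], [-1.0, 1.0]])
def remote_predict(x):
    return np.argmax(x @ weights.T, axis=1)

edge_points = np.array([[0.6, 0.4], [0.4, 0.6]])      # hypothetical "edge-points"
expected = remote_predict(edge_points)                 # recorded at enrollment time

print(attest_model(remote_predict, edge_points, expected))   # True: model untouched

weights[0, 0] -= 1.5                                   # simulate tampering with the model
print(attest_model(remote_predict, edge_points, expected))   # False: tampering detected
```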

Fault Detection For Deep Neural Networks

Deep Neural Networks (DNNs) have been widely deployed in real-world systems, many of which have strict safety constraints. Soft errors on memory acceleration platforms for DNNs can degrade their inference accuracy and result in silent data corruption, which can have severe consequences in safety-critical applications. Efficient and effective techniques to detect and mitigate memory faults are therefore needed. In this paper, we propose a novel methodology to diagnose the presence of faults in the memory of DNN accelerators. Our method queries the protected DNN with a set of specially crafted test cases that can accurately reveal whether model parameters stored in the hardware are faulty. We provide a theoretical guarantee for the performance of our method and conduct systematic proof-of-concept experiments by simulating memory faults on computer vision models.

Provably Accurate Memory Fault Detection Method for Deep Neural Networks
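The proof-of-concept experiments mentioned above rely on simulating memory faults. The sketch below shows the kind of single-bit-flip fault injection that can be applied to a stored float32 parameter and its effect on a prediction; the tiny linear model and the chosen bit position are illustrative assumptions, and this is the fault simulation, not the detection method itself.

```python
# Flip one bit of a stored float32 weight, as a memory soft error would.
import numpy as np

def flip_bit(weights, flat_index, bit):
    """Return a copy of `weights` with one bit of one float32 parameter flipped."""
    raw = weights.ravel().view(np.uint32).copy()
    raw[flat_index] ^= np.uint32(1 << bit)
    return raw.view(np.float32).reshape(weights.shape)

rng = np.random.default_rng(2)
w = rng.normal(size=(4, 3)).astype(np.float32)       # stand-in for accelerator memory
x = rng.normal(size=(1, 4)).astype(np.float32)

clean = np.argmax(x @ w, axis=1)
faulty_w = flip_bit(w, flat_index=5, bit=30)         # flip a high exponent bit
faulty = np.argmax(x @ faulty_w, axis=1)

print("clean prediction:", clean, "faulty prediction:", faulty)
print("prediction changed by the fault:", bool(np.any(clean != faulty)))
```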

Watermarking Deep Neural Networks

Engineering a top-notch deep learning model is an expensive procedure that involves collecting data, hiring human resources with expertise in machine learning, and providing high computational resources. For that reason, deep learning models are considered valuable Intellectual Properties (IPs) of the model vendors. To ensure reliable commercialization of deep learning models, it is crucial to develop techniques to protect model vendors against IP infringements. One such technique that has recently shown great promise is digital watermarking. However, current watermarking approaches can embed only a very limited amount of information and are vulnerable to watermark removal attacks. In this paper, we present GradSigns, a novel watermarking framework for deep neural networks (DNNs). GradSigns embeds the owner's signature into the gradient of the cross-entropy cost function with respect to inputs to the model. Our approach has a negligible impact on the performance of the protected model and allows model vendors to remotely verify the watermark through prediction APIs.

Don't Forget to Sign the Gradients! My presentation at MLSys-21
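The verification side of this idea can be sketched in TensorFlow: compute the gradient of the cross-entropy loss with respect to the inputs and read off a bit pattern from its signs at owner-chosen coordinates. The embedding step performed during training is not shown, and the model, key inputs, and carrier coordinates below are toy assumptions rather than the framework's actual configuration.

```python
# Toy sketch: read signature bits from the sign of d(cross-entropy)/d(input).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(3),
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

def input_gradient(model, x, y):
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x))
    return tape.gradient(loss, x)

# Owner-side secrets (illustrative): key inputs, carrier coordinates, signature bits.
key_inputs = tf.random.normal((4, 8), seed=7)
key_labels = tf.constant([0, 1, 2, 0])
carrier_coords = [1, 4, 6]

grads = input_gradient(model, key_inputs, key_labels)
extracted_bits = tf.cast(
    tf.gather(tf.reduce_mean(grads, axis=0), carrier_coords) > 0, tf.int32
)
print("bits carried by the input gradient:", extracted_bits.numpy())
# Ownership is claimed if the extracted bits match the bits embedded at training time.
```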

Machine Learning For Hardware Security

Physical Unclonable Functions (PUFs) are seen as a promising alternative to traditional cryptographic algorithms for secure and lightweight device authentication in diverse IoT use cases. However, the security of PUFs is threatened by machine learning (ML) based modeling attacks, which can successfully impersonate a PUF using known challenge-response pairs (CRPs). Existing modeling methods, however, require access to an extremely large set of CRPs, which makes them unrealistic and impractical in real-world scenarios. To overcome the limited availability of CRPs from the attacker's perspective, we explore the possibility of transferring a well-tuned model trained with unlimited CRPs to a target PUF with a limited number of CRPs.

Efficient Transfer Learning Attack for Modeling Physical Unclonable Functions
Impacts of Machine Learning on Counterfeit IC Detection and Avoidance Techniques
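As a rough illustration of the transfer-learning setup, the sketch below pre-trains a small Keras model on plentiful CRPs from a simulated source PUF and then fine-tunes only the final layer on a small CRP set from a simulated target PUF. The linear "PUF" simulators, architecture, and CRP counts are stand-in assumptions, not the setup used in the paper.

```python
# Toy transfer-learning attack: pre-train on a source PUF, fine-tune on few target CRPs.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(3)
n_bits = 32

def toy_puf(seed):
    # Stand-in for a PUF: a fixed random linear threshold over the challenge bits.
    w = np.random.default_rng(seed).normal(size=n_bits)
    return lambda c: ((2.0 * c - 1.0) @ w > 0).astype(np.float32)

source_puf, target_puf = toy_puf(10), toy_puf(11)

def crps(puf, n):
    c = rng.integers(0, 2, size=(n, n_bits)).astype(np.float32)
    return c, puf(c)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(n_bits,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Stage 1: pre-train on the source PUF, for which CRPs are assumed to be plentiful.
xs, ys = crps(source_puf, 20000)
model.fit(xs, ys, epochs=3, batch_size=128, verbose=0)

# Stage 2: freeze the feature layers and adapt only the head with few target CRPs.
for layer in model.layers[:-1]:
    layer.trainable = False
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
xt, yt = crps(target_puf, 500)
model.fit(xt, yt, epochs=20, batch_size=32, verbose=0)

x_eval, y_eval = crps(target_puf, 2000)
print("target-PUF accuracy:", model.evaluate(x_eval, y_eval, verbose=0)[1])
```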

Genetic Algorithm For Designing Polymorphic Gates

Polymorphic gates are reconfigurable devices whose functionality may vary in response to changes in the execution environment, such as temperature, supply voltage, or external control signals. This feature makes them a perfect candidate for circuit watermarking. However, polymorphic gates are hard to find because they do not exhibit a traditional complementary structure. My colleagues and I proposed a genetic-algorithm-based approach to address this challenge and were able to find many previously unseen polymorphic gates.
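As a toy illustration of such a search, the sketch below evolves a tiny NAND-only netlist that should behave as AND when a control line is 0 and as OR when it is 1, scoring candidates by how many truth-table rows they match in both modes. Real polymorphic gates are discovered at the transistor level, so this gate-level toy only illustrates the genetic-algorithm loop, not the actual design space.

```python
# Toy genetic algorithm: evolve a 2-mode (AND/OR) NAND netlist controlled by one input.
import random

random.seed(0)
N_GATES, POP, GENS = 6, 200, 300

def evaluate(genome):
    """genome: list of (src1, src2) pairs; signal 0=a, 1=b, 2=ctrl, 3..=gate outputs."""
    score = 0
    for a in (0, 1):
        for b in (0, 1):
            for ctrl in (0, 1):
                sig = [a, b, ctrl]
                for s1, s2 in genome:
                    sig.append(1 - (sig[s1] & sig[s2]))      # NAND
                want = (a | b) if ctrl else (a & b)
                score += int(sig[-1] == want)
    return score                                             # maximum is 8 rows

def random_genome():
    return [(random.randrange(3 + i), random.randrange(3 + i)) for i in range(N_GATES)]

def mutate(genome):
    g = list(genome)
    i = random.randrange(N_GATES)
    g[i] = (random.randrange(3 + i), random.randrange(3 + i))
    return g

population = [random_genome() for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=evaluate, reverse=True)
    if evaluate(population[0]) == 8:
        break
    survivors = population[: POP // 4]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(POP - len(survivors))]

best = max(population, key=evaluate)
print("best score:", evaluate(best), "netlist:", best)
```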

Polymorphic Gates For IC Watermarking

My colleagues and I aimed to explore the potential of polymorphic circuits in hardware security and trust applications, which had not been comprehensively researched in previous works. One straightforward and convenient application of polymorphic gates is embedding circuit watermarks, one of the first studied hardware security problems. In this scheme, the circuit delivers correct functionality in the normal mode; when it is necessary to demonstrate the watermark, the circuit is transitioned to a special mode by activating the external control, so that the circuit changes its functionality and produces different outputs. In this case, the hidden “secret” is the hardware-level watermark, which proves ownership of the circuit and gives it legal protection against piracy, overbuilding, and counterfeiting.

Polymorphic Gate based IC Watermarking Techniques

Polymorphic Gates For IC Fingerprinting

My colleagues and I proposed a circuit fingerprinting scheme with polymorphic gates controlled by external inputs. The scheme targets SDC (Satisfiability Don't-Care) conditions that usually appear in non-trivial circuits and replaces the standard library cells holding the SDC conditions with polymorphic gates. The modified circuit delivers correct functionality, and the configurations of the polymorphic gates constitute the circuit fingerprint.

A Novel Polymorphic Gate Based Circuit Fingerprinting Technique
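The notion of a Satisfiability Don't-Care condition can be illustrated with a few lines of Python: enumerate all primary-input assignments of a small circuit and list the local input patterns that a pair of internal nets can never take. Those unreachable patterns are exactly where a replacement polymorphic gate is free to differ from the original cell. The example circuit is arbitrary and not taken from the paper.

```python
# Enumerate Satisfiability Don't-Care (SDC) conditions on two internal nets of a toy circuit.
from itertools import product

def circuit(a, b, c):
    n1 = a & b            # internal net 1
    n2 = a | b            # internal net 2
    out = n1 ^ (n2 & c)   # downstream logic consumes n1 and n2
    return n1, n2, out

seen = set()
for a, b, c in product((0, 1), repeat=3):
    n1, n2, _ = circuit(a, b, c)
    seen.add((n1, n2))

sdc = [pattern for pattern in product((0, 1), repeat=2) if pattern not in seen]
print("local patterns on (n1, n2) that never occur:", sdc)   # (1, 0): a&b=1 forces a|b=1
```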

Polymorphic Gates For Circuit Authentication

Polymorphic gates are reconfigurable electronic devices that exhibit multiple functionalities under different environments such as temperature, supply voltage, or external signals. Such gates are rare as they do not have a complementary topology and need to satisfy the input-output relationships for more than one functionality. My colleagues and I introduced the concept of partial polymorphic gates, which deliver multiple incomplete functions with non-deterministic outputs at certain input combinations. The non-deterministic output is a result of process variations, which are generally believed to be random, unclonable, and different from chip to chip. We utilize this uncertainty as a new mechanism for implementing chip IDs and propose a circuit authentication scheme based on such IDs.

A Novel Circuit Authentication Scheme Based on Partial Polymorphic Gates
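A hedged sketch of how such noisy, chip-specific IDs could be used for authentication: enroll a reference readout per chip, then accept a device only if a fresh readout is within a small Hamming distance of the enrolled ID. The bit-error rate and threshold below are illustrative assumptions, not values from the paper.

```python
# Toy enrollment/authentication of a noisy, chip-specific ID via Hamming distance.
import numpy as np

rng = np.random.default_rng(4)
ID_BITS, NOISE, THRESHOLD = 128, 0.05, 20

def read_id(chip_bits):
    """Re-reading a chip's ID flips a few bits due to measurement noise."""
    flips = rng.random(ID_BITS) < NOISE
    return chip_bits ^ flips

def authenticate(enrolled_id, fresh_readout):
    return int(np.count_nonzero(enrolled_id ^ fresh_readout)) <= THRESHOLD

chip_a = rng.integers(0, 2, ID_BITS).astype(bool)   # process variation -> per-chip bits
chip_b = rng.integers(0, 2, ID_BITS).astype(bool)

enrolled = read_id(chip_a)                          # reference readout stored at enrollment
print("genuine chip accepted:", authenticate(enrolled, read_id(chip_a)))   # expected True
print("other chip rejected:  ", authenticate(enrolled, read_id(chip_b)))   # expected False
```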

Scan-chain Based Authentication Framework for Embedded Systems

Most Internet of Things (IoT) and embedded devices are resource-constrained, making it impractical to secure them with traditional, computationally expensive crypto-based solutions. However, security and privacy are crucial in many IoT applications such as health monitoring. In this paper, we consider one of the most fundamental security problems: how to identify and authenticate an embedded device. We leverage the fact that embedded devices are designed by reusing IP cores with a reconfigurable scan network (RSN) as the standard testing facility, and propose to generate unique integrated circuit (IC) identifications (IDs) based on different configurations of the RSN. These circuit IDs not only solve the IC and device identification and authentication problems, but can also serve as a lightweight security primitive in other applications such as IC metering and IP fingerprinting.

A Reconfigurable Scan Network Based IC Identification For Embedded Devices

Publications On Machine Learning

Meta Federated Learning

Distributed and Private Machine Learning (DPML) Workshop at the International Conference on Learning Representations (ICLR-21)

Due to its distributed methodology alongside its privacy-preserving features, Federated Learning (FL) is vulnerable to training-time adversarial attacks. In this study, our focus is on backdoor attacks, in which the adversary's goal is to cause targeted misclassifications for inputs embedded with an adversarial trigger while maintaining acceptable performance on the main learning task at hand. Contemporary defenses against backdoor attacks in federated learning require direct access to each individual client's update, which is not feasible in recent FL settings where Secure Aggregation is deployed. In this study, we seek to answer the following question: is it possible to defend against backdoor attacks when secure aggregation is in place? This question has not been addressed by prior art. To this end, we propose Meta Federated Learning (Meta-FL), a novel variant of federated learning that is not only compatible with the secure aggregation protocol but also facilitates defense against backdoor attacks. We perform a systematic evaluation of Meta-FL on two classification datasets: SVHN and GTSRB. The results show that Meta-FL not only achieves better utility than classic FL, but also enhances the performance of contemporary defenses in terms of robustness against adversarial attacks.

Don’t Forget to Sign the Gradients!

Proceedings of Machine Learning and Systems 3 (MLSys-21)

Engineering a top-notch deep learning model is an expensive procedure that involves collecting data, hiring human resources with expertise in machine learning, and providing high computational resources. For that reason, deep learning models are considered valuable Intellectual Properties (IPs) of the model vendors. To ensure reliable commercialization of deep learning models, it is crucial to develop techniques to protect model vendors against IP infringements. One such technique that has recently shown great promise is digital watermarking. However, current watermarking approaches can embed only a very limited amount of information and are vulnerable to watermark removal attacks. In this paper, we present GradSigns, a novel watermarking framework for deep neural networks (DNNs). GradSigns embeds the owner's signature into the gradient of the cross-entropy cost function with respect to inputs to the model. Our approach has a negligible impact on the performance of the protected model and allows model vendors to remotely verify the watermark through prediction APIs. We evaluate GradSigns on DNNs trained for different image classification tasks using the CIFAR-10, SVHN, and YTF datasets. Experimental results show that GradSigns is robust against all known counter-watermark attacks and can embed a large amount of information into DNNs.

Provably Accurate Memory Fault Detection Method for Deep Neural Networks

Proceedings of the 2021 Great Lakes Symposium on VLSI (GLSVLSI-21)

Deep Neural Networks (DNNs) have been widely deployed in real-world systems, many of which have strict safety constraints. Soft errors on memory acceleration platforms for DNNs can degrade their inference accuracy and result in silent data corruption, which can have severe consequences in safety-critical applications. Efficient and effective techniques to detect and mitigate memory faults are therefore needed. In this paper, we propose a novel methodology to diagnose the presence of faults in the memory of DNN accelerators. Our method queries the protected DNN with a set of specially crafted test cases that can accurately reveal whether model parameters stored in the hardware are faulty. We provide a theoretical guarantee for the performance of our method and conduct systematic proof-of-concept experiments by simulating memory faults on computer vision models. Our empirical evaluations corroborate the effectiveness and efficiency of our approach. Detecting faults with our method requires only simple decision-based access to the inference capability of the DNN and does not require any additional functionality from the accelerator, which makes our method ideal for legacy systems.

AID: Attesting the Integrity of Deep Neural Networks

2021 58th ACM/IEEE Design Automation Conference (DAC-21)

Due to their crucial role in many decision-making tasks, Deep Neural Networks (DNNs) are common targets for a large array of integrity breaches. In this paper, we propose AID, a novel methodology to Attest the Integrity of DNNs. AID generates a set of test cases called edge-points that can reveal whether a model has been compromised. AID does not require access to the parameters of the DNN and can work with restricted black-box access to the model, which makes it applicable to most real-life scenarios. Experimental results show that AID is highly effective and reliable. With at most four edge-points, AID is able to detect eight representative integrity breaches, including backdoor, poisoning, and compression attacks, with zero false positives.

Trust in Machine Learning as a Service

PhD Forum paper at the System-on-Chip Conference (SOCC-20)

While MLaaS platforms have made it convenient for model vendors to deploy and monetize their products, they raise immediate security and trust concerns. In this paper, we discuss research problems that need to be addressed before reliable and secure commercialization of DNNs on MLaaS platforms becomes possible.

Do You Sign Your Model?

DMMLSys workshop at the International Conference on Machine Learning (ICML-20)

Engineering a top-notch deep neural network (DNN) is an expensive procedure that involves collecting data, hiring human resources with expertise in machine learning, and providing high computational resources. For that reason, DNNs are considered valuable Intellectual Properties (IPs) of the model vendors. To ensure reliable commercialization of these products, it is crucial to develop techniques to protect model vendors against IP infringements. One such technique that has recently shown great promise is digital watermarking. In this paper, we present GradSigns, a novel watermarking framework for DNNs. GradSigns embeds the owner's signature into the gradient of the cross-entropy cost function with respect to inputs to the model. Our approach has a negligible impact on the performance of the protected model and can verify ownership of remotely deployed models through prediction APIs. We evaluate GradSigns on DNNs trained for different image classification tasks using the CIFAR-10, SVHN, and YTF datasets, and experimentally show that, unlike existing methods, GradSigns is robust against counter-watermark attacks and can embed a large amount of information into DNNs.

Publications on Hardware Security

Independent Verification and Validation of Security-Aware CAD Tools

2021 58th ACM/IEEE Design Automation Conference (DAC-21)


A Novel Circuit Authentication Scheme based on Partial Polymorphic Gates

Asian Hardware Oriented Security and Trust Symposium (AsianHOST-21)

Polymorphic gates are reconfigurable electronic devices that exhibit multiple functionalities under different environmental conditions such as temperature, supply voltage, or external signals. Such gates are rare, as they do not have a complementary topology and need to satisfy the input-output relationships of more than one function. In this work, we introduce the concept of partial polymorphic gates, which deliver multiple incomplete functions with non-deterministic outputs at certain input combinations. The non-deterministic output is a result of process variations, which are generally believed to be random, unclonable, and different from chip to chip. We utilize this uncertainty as a new mechanism for implementing chip IDs and propose a circuit authentication scheme based on such IDs.

Impacts of Machine Learning on Counterfeit IC Detection and Avoidance Techniques

21st International Symposium on Quality Electronic Design (ISQED-20)

Globalization of the integrated circuit (IC) supply chain has made counterfeiting a major source of concern in the semiconductor industry. To address this concern, extensive efforts have been put into developing effective counterfeit detection and avoidance techniques. In recent years, machine learning (ML) algorithms have played an important role in the development and evaluation of many emerging countermeasures against counterfeiting. In this paper, we aim to investigate the impact of such algorithms on the landscape of anti-counterfeiting schemes. We provide a comprehensive review of prior work that deploys machine learning to develop or attack counterfeit detection and avoidance techniques. We also discuss future directions for the application of machine learning in anti-counterfeiting schemes.

Efficient Transfer Learning Attack for Modeling Physical Unclonable Functions

21st International Symposium on Quality Electronic Design (ISQED-20)

Physical Unclonable Functions (PUFs) are seen as a promising alternative to traditional cryptographic algorithms for secure and lightweight device authentication in diverse IoT use cases. However, the security of PUFs is threatened by machine learning (ML) based modeling attacks, which can successfully impersonate a PUF using known challenge-response pairs (CRPs). Existing modeling methods, however, require access to an extremely large set of CRPs, which makes them unrealistic and impractical in real-world scenarios. To overcome the limited availability of CRPs from the attacker's perspective, we explore the possibility of transferring a well-tuned model trained with unlimited CRPs to a target PUF with a limited number of CRPs. Experimental results show that the proposed transfer learning-based scheme achieves the same accuracy level with 64% fewer CRPs on average. We also evaluate the proposed transfer learning method with side-channel information and show that it further reduces the number of required CRPs significantly.

Balancing Testability and Security by Configurable Partial Scan Design

2018 IEEE International Test Conference in Asia (ITC-Asia-18)

Scan chain design facilitates chip testing by providing an interface for test engineers to access and control the internal states of the circuit. This feature has also been exploited to break systems such as cryptographic chips through the attack known as scan-chain side-channel analysis. From the perspective of information access, test engineers and scan chain attackers have the same goal: to observe and control the scan chain side-channel information. Consequently, all existing countermeasures have to trade off scan chain security against the testability it can provide. In this paper, we propose a novel public-private partial scan chain design that can deliver both full testability and security. The key idea is to partition the flip-flops into a public partial chain and a set of parallel private partial chains. The private partial chains are protected by means of a hardware-implemented finite state machine and an obfuscation mechanism based on a configurable physical unclonable function. We demonstrate how full testability can be achieved by the proposed public-private partial chains. We conduct security and performance analysis to show that our approach is robust against all known scan-chain-based attacks and can improve testing time and power consumption with negligible hardware overhead.

A Reconfigurable Scan Network based IC Identification for Embedded Devices

2018 Design, Automation & Test in Europe Conference & Exhibition (DATE-18)

Most Internet of Things (IoT) and embedded devices are resource-constrained, making it impractical to secure them with traditional, computationally expensive crypto-based solutions. However, security and privacy are crucial in many IoT applications such as health monitoring. In this paper, we consider one of the most fundamental security problems: how to identify and authenticate an embedded device. We leverage the fact that embedded devices are designed by reusing IP cores with a reconfigurable scan network (RSN) as the standard testing facility, and propose to generate unique integrated circuit (IC) identifications (IDs) based on different configurations of the RSN. These circuit IDs not only solve the IC and device identification and authentication problems, but can also serve as a lightweight security primitive in other applications such as IC metering and IP fingerprinting. We demonstrate through the ITC'02 benchmarks that the proposed approach can easily create from 10^7 to 10^186 unique IDs without any overhead. Finally, our method complies with the IEEE standards and thus has high practical value.

A Novel Polymorphic Gate based Circuit Fingerprinting Technique

2018 IEEE International Symposium on Circuits and Systems (ISCAS-18)

Polymorphic gates are reconfigurable devices that deliver multiple functionalities under different temperatures, supply voltages, or external inputs. Capable of working in different modes, polymorphic gates are promising candidates for embedding secret information such as fingerprints. In this paper, we report five polymorphic gates whose functionality varies in response to a specific control input and propose a circuit fingerprinting scheme based on these gates. The scheme selectively replaces standard logic cells with polymorphic gates whose functionality differs from the standard cells only under Satisfiability Don't-Care conditions. Additional dummy fingerprint bits are also introduced to enhance the fingerprint's robustness against attacks such as fingerprint removal and modification. Experimental results on ISCAS and MCNC benchmark circuits demonstrate that our scheme introduces low overhead. More specifically, the average overheads in area, speed, and power are 4.04%, 6.97%, and 4.15%, respectively, when we embed a 64-bit fingerprint that consists of 32 real fingerprint bits and 32 dummy bits. This is only half the overhead of the other known approach when it creates 32-bit fingerprints.

Polymorphic Gate based IC Watermarking Techniques

Asia and South Pacific Design Automation Conference (ASP-DAC-18)

Polymorphic gates are reconfigurable devices whose functionality may vary in response to changes in the execution environment, such as temperature, supply voltage, or external control signals. This feature makes them a perfect candidate for circuit watermarking. However, polymorphic gates are hard to find because they do not exhibit a traditional complementary structure. In this paper, we report four dual-function polymorphic gates that we have discovered using an evolutionary approach. With these gates, we propose a circuit watermarking scheme that selectively replaces certain standard logic gates with the polymorphic gates. Experimental results on ISCAS and MCNC benchmark circuits demonstrate that this scheme introduces low overhead. More specifically, the average overheads in area, speed, and power are 4.10%, 2.08%, and 1.17%, respectively, when we embed 30-bit watermark sequences. These overheads increase to 6.36%, 4.75%, and 2.08%, respectively, when 10% of the gates in the original circuits are replaced to embed watermarks of more than 300 bits.

Contact

oaramoon@umd.edu +1 (443) 968 1638
  • 1430 AVW Building,
  • Department of Electrical and Computer Engineering
  • University of Maryland, College Park, MD