Structure Discovery for Gene Expression Networks with Emerging Stochastic Hardware

Publication Medium:

Proceedings of the IEEE International Conference on Rebooting Computing (ICRC)

Pages:

In Press

Year of Publication:

2017

Abstract

Gene Expression Networks (GENs) attempt to model how genetic information stored in the DNA (Genotype) results in the synthesis of proteins and, consequently, the physical traits of an organism (Phenotype). Deciphering GENs plays an important role in a wide range of applications, from genetic studies of the origins of life to personalized healthcare. Probabilistic graphical models such as Bayesian Networks (BNs) are used to perform learning and inference of GENs from genetic data. Current techniques for learning BNs of GENs from data, which are mostly approximate in nature, involve searching and scoring multiple probabilistic graphical structures. However, while the search algorithms can be implemented efficiently in software, the same is not true for scoring: scoring probabilistic models with inherent parallelism is inefficient when performed sequentially on conventional architectures composed of deterministic devices. In this paper, we introduce a new nanoscale hardware acceleration framework that enables fast and efficient Bayesian inference operations, significantly accelerating the scoring step of BN learning for GENs using a combination of emerging stochastic devices and CMOS technology. The stochasticity of the devices is used to perform approximate inference on probabilistic networks efficiently, and the circuit framework built from these devices is designed to exploit the inherent parallelism in these models. We demonstrate approximate inference operations on a small BN and estimate performance benefits of five orders of magnitude for inference performed with this architecture over software-only approaches.
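
The sampling-based approximate inference described in the abstract can be illustrated with a minimal software sketch. The Python example below is only an illustrative software analogue of the idea, not the paper's device-level architecture: it estimates a conditional probability on a toy three-node Bayesian network by rejection sampling. The network structure, variable names, and probability values are assumptions chosen for the example, not results from the paper.

# Minimal sketch: approximate inference by rejection sampling on a toy BN
#   Rain -> Sprinkler, (Rain, Sprinkler) -> WetGrass.
# All probabilities are illustrative placeholders.
import random

P_RAIN = 0.2                                   # P(Rain = True)
P_SPRINKLER = {True: 0.01, False: 0.4}         # P(Sprinkler = True | Rain)
P_WET = {                                      # P(WetGrass = True | Sprinkler, Rain)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.0,
}

def sample_network(rng):
    """Draw one joint sample (rain, sprinkler, wet) by ancestral sampling."""
    rain = rng.random() < P_RAIN
    sprinkler = rng.random() < P_SPRINKLER[rain]
    wet = rng.random() < P_WET[(sprinkler, rain)]
    return rain, sprinkler, wet

def estimate_p_rain_given_wet(n_samples=200_000, seed=0):
    """Estimate P(Rain | WetGrass = True), rejecting samples where WetGrass is False."""
    rng = random.Random(seed)
    accepted = rain_and_wet = 0
    for _ in range(n_samples):
        rain, _, wet = sample_network(rng)
        if wet:
            accepted += 1
            rain_and_wet += rain
    return rain_and_wet / accepted if accepted else float("nan")

if __name__ == "__main__":
    print(f"P(Rain | WetGrass) ~= {estimate_p_rain_given_wet():.3f}")

Each accepted sample is an independent draw from the network; in the hardware framework the abstract describes, the random draws would come from the stochastic devices themselves and the per-node sampling would proceed in parallel rather than in this sequential loop.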