Performance, power, and confidence modeling of digital designs
This dissertation presents three modeling methodologies. The first constructs power models for system-on-chip (SoC) designs. Starting from a gate-level netlist description of the design and training data, the approach automatically selects a handful of nets to serve as model parameters; the resulting models use the cycle-by-cycle values of those nets to predict cycle-by-cycle power consumption, and are compact, fast to evaluate, and accurate. The second methodology constructs confidence models that complement the SoC power models. One confidence model is built for each power model and is evaluated in parallel with it at run time. For every power prediction made, the confidence model generates a confidence value, giving the model user an estimate of that prediction's expected error.

The third methodology constructs performance and power models for general-purpose computation on graphics processing units (GPGPU). Using K-means clustering and neural networks, the approach builds a model that predicts the scaling behavior of previously unseen kernels across a range of graphics processing unit (GPU) hardware configurations. The model predicts scalability across three GPU parameters: core frequency, memory frequency, and compute unit count.

The SoC power and confidence models were evaluated on the Rocket Core, a pipelined processor implementing the RISC-V Instruction Set Architecture (ISA), using benchmarks from the MiBench suite. Across all modules modeled and benchmarks run, the power models achieved an average cycle-by-cycle prediction error of 5.8%. For each benchmark, an average confidence value and an average predicted power value were computed; correlating the two yielded a correlation coefficient of 0.77. The GPGPU scalability models were evaluated against real hardware measurements and were shown to be accurate to within 15% for performance and 10% for power.
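The first methodology's core idea, predicting each cycle's power from the values of a few selected nets, can be sketched as follows. This is a minimal illustration only: the linear model form, the weights, and all trace values below are assumptions made for the example, not the dissertation's actual model.

```python
# Hypothetical sketch: cycle-by-cycle power prediction from a handful of
# selected nets. The real methodology chooses the nets automatically from a
# gate-level netlist using training data; here the nets, weights, and traces
# are invented for illustration, and a simple linear model is assumed.

def predict_power(net_values, weights, bias):
    """Predict one cycle's power from the values of the selected nets."""
    return bias + sum(w * v for w, v in zip(weights, net_values))

def avg_cycle_error(predicted, measured):
    """Average per-cycle relative prediction error (the abstract reports 5.8%
    for the actual models; this toy example will differ)."""
    return sum(abs(p - m) / m for p, m in zip(predicted, measured)) / len(measured)

# Invented traces: 4 cycles, 3 selected nets (values sampled per cycle).
weights, bias = [0.5, 1.2, 0.3], 2.0
nets_per_cycle = [[1, 0, 1], [0, 1, 1], [1, 1, 0], [0, 0, 0]]
measured = [2.9, 3.4, 3.8, 2.1]

predicted = [predict_power(n, weights, bias) for n in nets_per_cycle]
print([round(p, 2) for p in predicted])
print(round(avg_cycle_error(predicted, measured), 3))
```

A confidence model in the second methodology would run alongside this predictor, emitting one confidence value per predicted cycle so the user can judge how much to trust each prediction.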