Browsing by Subject "Signal processing--Digital techniques"
Now showing 1 - 8 of 8
Item Automating transformations from floating-point to fixed-point for implementing digital signal processing algorithms (2006) Han, Kyungtae; Evans, Brian L. (Brian Lawrence), 1965-

Many digital signal processing and communication algorithms are first simulated using floating-point arithmetic and later transformed into fixed-point arithmetic to reduce implementation complexity. This transformation process may take more than 50% of the design time for complex designs. In addition, wordlengths in fixed-point designs may be altered at later stages in the design cycle. Different choices of wordlengths lead to different tradeoffs between signal quality and implementation complexity. In this dissertation, I propose two methods for characterizing the tradeoffs between signal quality and implementation complexity during the transformation of digital system designs to fixed-point arithmetic and variables. The first method, a gradient-based search for single-objective optimization with sensitivity information, scales linearly with the number of variables but can become trapped in local optima. Based on wordlength design case studies for a wireless communication demodulator, adding sensitivity information reduces the search time by a factor of four and yields a design with 30% lower implementation costs. The second method, a genetic algorithm for multi-objective optimization, provides a Pareto optimal front that evolves towards the optimal tradeoff curve for signal quality vs. implementation complexity. This second method can be used to fully characterize the design space. I propose to use the wordlength reduction methods of signed right shift and truncation to reduce power consumption in a given hardware architecture. For each method, I derive the expected value of the number of gates that switch during multiplication of the inputs. I apply the signed right shift method and the truncation method to a 16-bit radix-4 modified Booth multiplier and a 16-bit Wallace multiplier. The truncation method with 8-bit operands reduces the power consumption by 56% in the Wallace multiplier and 31% in the Booth multiplier. The signed right shift method shows a 25% power reduction in the Booth multiplier, but no power reduction in the Wallace multiplier. Finally, this dissertation describes a method to automate design assistance for the transformation from floating-point to fixed-point data types. Floating-point programs are converted to fixed-point programs by a code generator. Then, the proposed wordlength search algorithms offer designers the freedom to determine data wordlengths to optimize the tradeoffs between signal quality and implementation complexity.
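As a rough illustration of the wordlength search problem described above (a minimal sketch, not the dissertation's algorithm; the FIR coefficients, test input, quality floor, and cost measure are invented), a naive greedy search over per-coefficient wordlengths might look like this:

```python
import numpy as np

def quantize(value, wordlength):
    """Round a value with |value| < 1 to a signed fixed-point grid with the given wordlength."""
    step = 2.0 ** -(wordlength - 1)
    code = np.clip(np.round(value / step), -2 ** (wordlength - 1), 2 ** (wordlength - 1) - 1)
    return code * step

def sqnr_db(reference, test):
    noise = reference - test
    return 10 * np.log10(np.sum(reference ** 2) / (np.sum(noise ** 2) + 1e-30))

def greedy_wordlength_search(coeffs, x, quality_floor_db=40.0, start_wl=16):
    """Shrink each coefficient's wordlength while the output quality stays acceptable."""
    wordlengths = [start_wl] * len(coeffs)
    reference = np.convolve(x, coeffs)
    improved = True
    while improved:
        improved = False
        for i in range(len(wordlengths)):
            if wordlengths[i] <= 2:
                continue
            trial = list(wordlengths)
            trial[i] -= 1                                   # try dropping one bit here
            quantized = [quantize(c, w) for c, w in zip(coeffs, trial)]
            if sqnr_db(reference, np.convolve(x, quantized)) >= quality_floor_db:
                wordlengths = trial                         # accept: quality still meets the floor
                improved = True
    return wordlengths, sum(wordlengths)                    # per-tap wordlengths and a toy cost

rng = np.random.default_rng(0)
h = np.array([0.1, 0.25, 0.3, 0.25, 0.1])                   # toy low-pass FIR coefficients
x = 0.3 * rng.standard_normal(1024)                         # toy test input
print(greedy_wordlength_search(h, x))
```

The dissertation's sensitivity-guided gradient search and multi-objective genetic algorithm address the weaknesses of a brute-force loop like this one, which explores only a single path through the design space and can easily stop at a local optimum.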
Item BIST-based performance characterization of mixed-signal circuits (2004-08) Yu, Hak-soo, 1966-; Abraham, Jacob A.

Item Channel equalization to achieve high bit rates in discrete multitone systems (2004) Ding, Ming; Evans, Brian L.

Multicarrier modulation (MCM) techniques such as orthogonal frequency division multiplexing (OFDM) and discrete multi-tone (DMT) modulation are attractive for high-speed data communications due to the ease with which MCM can combat channel dispersion. Beyond the general benefits of MCM, DMT modulation can also perform dynamic bit loading, which has the potential to exploit fully the available bandwidth in a slowly time-varying channel. In broadband wireline communications, DMT modulation is standardized for asymmetric digital subscriber line (ADSL) and very-high-bit-rate digital subscriber line (VDSL) modems. ADSL and VDSL standards are used by telephone companies to provide high-speed data service to residences and offices. In an ADSL receiver, an equalizer is required to compensate for the channel's dispersion in the time domain and the channel's distortion in the frequency domain of the transmitted waveform. This dissertation proposes design methods for linear equalizers to increase the bit rate of the connection. The methods are amenable to implementation on programmable fixed-point digital signal processors, which are employed in ADSL/VDSL transceivers. A conventional ADSL equalizer consists of a time-domain equalizer, a fast Fourier transform, and a frequency-domain equalizer. The time-domain equalizer (TEQ) is a finite impulse response filter that, when coupled with a discretized channel, produces an equivalent channel whose impulse response is shorter than that of the discretized channel. This channel shortening is required by the ADSL standards. In this dissertation, I first propose a linear-phase TEQ design that exploits symmetry in existing eigen-filter approaches such as the minimum mean square error (MMSE), maximum shortening signal-to-noise ratio (MSSNR), and minimum intersymbol interference (Min-ISI) equalizers. TEQs with symmetric coefficients can reach the same performance as non-symmetric ones with much lower training complexity. Second, I improve the Min-ISI design. I reformulate the cost function to make the design of long TEQs feasible. I remove the dependency on the transmission delay in order to reduce the complexity associated with delay optimization. Quantized weighting is introduced to further lower the complexity. I also propose an iterative optimization procedure for Min-ISI that completely avoids Cholesky decomposition and hence is better suited for a fixed-point implementation. Finally, I propose a dual-path TEQ structure, which designs a standard single-FIR TEQ to achieve a good bit rate over the entire transmission bandwidth and another FIR TEQ to improve the bit rate over a subset of subcarriers. The dual-path TEQ can be viewed as a special case of a complex-valued filter bank structure that delivers the best bit rate of existing DMT equalizers. However, the dual-path TEQ provides a very good tradeoff between achievable bit rate and implementation complexity on a programmable digital signal processor.
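To make the channel-shortening role of the TEQ concrete, the following minimal sketch (not code from the dissertation; the toy channel, tap count, target window, and delay are assumptions) designs an MSSNR-style shortener by maximizing the effective channel's energy inside a short window, posed as a generalized eigenproblem:

```python
import numpy as np

def convolution_matrix(h, n_taps):
    """Columns are shifted copies of h, so H @ w equals the full convolution h * w."""
    H = np.zeros((len(h) + n_taps - 1, n_taps))
    for j in range(n_taps):
        H[j:j + len(h), j] = h
    return H

def mssnr_teq(h, n_taps=8, window=4, delay=2):
    """Maximize effective-channel energy inside the [delay, delay+window) target window."""
    H = convolution_matrix(h, n_taps)
    inside = np.zeros(H.shape[0], dtype=bool)
    inside[delay:delay + window] = True
    A = H[inside].T @ H[inside]               # energy kept inside the window
    B = H[~inside].T @ H[~inside]             # residual energy outside the window
    # Solve the generalized eigenproblem A w = lambda B w by whitening B.
    L = np.linalg.cholesky(B + 1e-9 * np.eye(n_taps))
    M = np.linalg.solve(L, np.linalg.solve(L, A).T).T     # L^{-1} A L^{-T}
    _, vecs = np.linalg.eigh(M)
    w = np.linalg.solve(L.T, vecs[:, -1])     # map the dominant eigenvector back
    return w / np.linalg.norm(w)

h = np.array([0.0, 1.0, 0.7, 0.4, 0.25, 0.15, 0.08, 0.04])   # toy dispersive channel
w = mssnr_teq(h)
print(np.round(np.convolve(h, w), 3))         # energy should concentrate in a few taps
```

The dissertation's contributions build on formulations of this kind, for example by enforcing coefficient symmetry to lower training complexity, reformulating Min-ISI for long TEQs, and avoiding the Cholesky step for fixed-point implementations.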
Item Dense wavelength division multiplexing (DWDM) for optical networks (2001-08) Qiao, Jie; Chen, Ray T.

Item I/O test methods in high-speed wireline communication systems (2008-08) Dou, Qingqi; Abraham, Jacob A.

The advent of serial tera-bit telecommunication and multi-gigahertz I/O interfaces is posing challenges to the semiconductor and ATE industries. There is a gap in signal integrity testing between what has been specified in serial link standards and what can be practically tested in production. A thorough characterization and a more cost-effective test of signal integrity metrics, such as BER, jitter, and eye margin, are critical for identifying and isolating the root cause of system degradation and for binning in production. In this dissertation, measurement and testing schemes for signal integrity are explored. A solution for diagnosing jitter and predicting the range of the consequent BER is proposed. This solution is applicable to the decomposition of correlated and uncorrelated jitter in both clock and data signals. The statistical information of the jitter is estimated using TLC functions. TLC treats jitter in its original form, as a time series, resulting in good accuracy in the decomposition. Hardware results on a PLL indicate that the approach is still valid when the traditional histogram-based method fails. This approach can be implemented using only a one-shot capture instead of multiple captures to average out the uncorrelated jitter from the correlated jitter. Therefore, the TLC functions enable test time reduction in jitter decomposition compared to traditional averaging methods. Hardware measurements on stressed data signals are presented to validate the proposed technique. We have also explored low-cost, high-bandwidth techniques using Built-In Self-Test (BIST) for on-chip jitter measurement. Undersampling provides a low-cost test solution for on-chip jitter measurement. However, it suffers from sampling clock phase error and time quantization noise. The impact of these timing uncertainties on the test accuracy of the traditional single-channel technique can be alleviated by extracting the correlation between two channels that share a single reference clock. Simulation results indicate that the proposed approach can achieve better measurement accuracy and a higher degree of tolerance to sampling clock uncertainty and quantization error than does the single-channel structure, with little additional test overhead. Time-interleaved ADCs (TIADCs) provide an attractive solution to the realization of analog front ends in high-speed communication systems, such as 10GBASE-T and 10GBASE fiber. However, gain mismatch, offset mismatch, and sampling time mismatch between time-interleaved channels limit the performance of TIADCs. A low-cost test scheme is developed to measure timing mismatch using an undersampling clock. This method is applicable to an arbitrary number of channels, achieving picosecond resolution with low power consumption. Simulation results and hardware measurements on a 10 GSps TIADC are presented to validate the proposed technique.
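The undersampling idea behind these low-cost on-chip measurements can be illustrated with a small simulation (a sketch only, not the dissertation's BIST scheme; the signal frequency, sampling ratio, and waveform are invented): a slow sampling clock with a small walk-off step captures a fast repetitive waveform in equivalent time.

```python
import numpy as np

f_sig = 1.0e9                         # 1 GHz repetitive signal under test (invented)
T_sig = 1.0 / f_sig
dt = T_sig / 200                      # equivalent-time resolution: 200 points per period
T_samp = 317 * T_sig + dt             # slow sampler: ~3.15 MHz, offset by dt every sample

def waveform(t):
    """Repetitive signal under test: fundamental plus a third harmonic."""
    return np.sin(2 * np.pi * f_sig * t) + 0.3 * np.sin(6 * np.pi * f_sig * t)

t = np.arange(200) * T_samp           # one complete walk through the signal period
samples = waveform(t)                 # what the slow sampler actually captures

# Fold the slow samples back onto a single signal period (equivalent time).
eq_t = np.mod(t, T_sig)
order = np.argsort(eq_t)
reconstructed = samples[order]

# The folded record should match a direct fast sampling of one period.
direct = waveform(eq_t[order])
print("max reconstruction error:", np.max(np.abs(reconstructed - direct)))
```

Sampling-clock phase error and time quantization noise, which the dissertation's two-channel correlation scheme is designed to tolerate, would show up in this sketch as perturbations of eq_t and of the captured sample values.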
Item Mixed-signal signature analysis for systems-on-a-chip (2001-08) Roh, Jeongjin, 1966-; Abraham, Jacob A.

Item Theory of principal component filter banks with applications to multicomponent imagery (2003) Pal, Mihaela Dobre; Cheney, E. W.; Brislawn, Christopher

In the first part of the thesis we give background on the digital signal processing required throughout. We introduce the Karhunen-Loève transform and the most commonly used optimality criteria for orthonormal uniform filter banks. In the second part of the thesis the definition of principal component filter banks is given; these filter banks unify the theory of optimality of filter banks under explicitly stated criteria. We discuss the existence of principal component filter banks and present a case study pertaining to autoregressive input signals and finite impulse response filter banks. We prove a theorem on the existence of coding-gain-optimal finite impulse response filter banks. For filter banks with two channels, coding-gain-optimal filter banks are also principal component filter banks. As an application of the theory of optimal filter banks, we design two-channel principal component filter banks for remote sensing hyperspectral images. These filter banks are used to decorrelate an image, i.e., to represent the image in a more compact form. This design strategy leads to a more efficient compression of large images within the JPEG-2000 paradigm.
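As a small numerical companion to the decorrelation idea (a sketch under invented toy statistics, not the thesis's construction), the Karhunen-Loève transform of a correlated multi-band image and its transform coding gain can be computed as follows; principal component filter banks extend this kind of optimal decorrelation to the subband/filter-bank setting.

```python
import numpy as np

rng = np.random.default_rng(0)
bands, pixels = 6, 4096
# Toy multicomponent image: bands are correlated copies of a common scene plus noise.
scene = rng.standard_normal(pixels)
X = np.stack([0.9 ** b * scene + 0.2 * rng.standard_normal(pixels) for b in range(bands)])
X -= X.mean(axis=1, keepdims=True)

C = (X @ X.T) / pixels                 # inter-band covariance matrix
_, eigvecs = np.linalg.eigh(C)         # KLT basis = eigenvectors of the covariance
Y = eigvecs.T @ X                      # decorrelated principal components

def coding_gain(variances):
    """Transform coding gain: arithmetic mean over geometric mean of the variances."""
    v = np.asarray(variances)
    return v.mean() / np.exp(np.mean(np.log(v)))

print("coding gain of the raw bands:      ", round(coding_gain(X.var(axis=1)), 2))
print("coding gain after the KLT (higher):", round(coding_gain(Y.var(axis=1)), 2))
```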
Item Towards real-time HW/SW co-simulation with operating system support (2007) He, Zhengting; Mok, Aloysius Ka-Lau; Garg, Vijay K. (Vijay Kumar), 1963-

A trend in the consumer electronics market is the demand for new applications that are similar to older applications but impose more challenging, special-purpose performance requirements. In the digital signal processing (DSP) industry, this reflects a transition from general-purpose DSP design to application-specific DSP design. From the design perspective, it means that the DSP core remains unchanged but more and more hardware (HW) accelerators, DMAs, and bus architectures need to be integrated into the chip. A key in effecting this transition is the engineering capability to make sure that the design specification "matches" the application before detailed design starts. Therefore, application software (SW) needs to be developed in parallel with HW to verify the design specification at the system level. Enabling development and simulation of SW before the actual HW is available also reduces the time-to-market period, which is another important benefit. HW/SW co-simulation for design specification refinement imposes many challenging requirements on the simulation platform. The simulation components (simcoms) modeling the real HW (rhw) modules to be designed and the application SW need to be integrated to carry out the simulation at the system level. Simulation results need to be accurate. Simulation speed should allow fast design space exploration and ease debugging of complex application SW. HW and SW problems should be isolated cleanly, since HW and SW engineers often do not have enough expertise in one another's domains. The simulator should be cost-effective. These requirements often conflict with one another. For example, achieving high simulation accuracy typically requires the simulation to be carried out at a low level, which implies that the simulation speed is slow. A simulator allowing integration of simcoms and application SW for simulation is very expensive, so only very few engineers can use it. In many cases, simcoms and application SW are not constructed in the same programming language; interfacing them is not a trivial problem and often impacts the simulation speed severely. Using a single simulator requires the engineers to understand both HW and SW details, which violates the requirement of HW/SW problem isolation. The bottom line is that a single simulator cannot fulfill all of these requirements at the same time.

This dissertation describes three simulation tools for different usages. The first one models and simulates the real-time operating system (RTOS) together with the application SW. It is motivated by the fact that, with the appearance of high-performance DSPs, more and more tasks will be implemented as SW on a single DSP managed by an RTOS, and selecting the "right" RTOS before the SW is developed is very important. The tool is implemented based on SystemC and is configurable to support modeling and timed simulation of the most popular embedded RTOSes. Timing fidelity is achieved by using delay annotation. The OS timing information is derived from published benchmark data. Application timing information can be profiled or estimated from similar legacy applications. An optimized conservative approach is taken to synchronize simcoms. Compared to other research work, an important contribution of this tool is an online algorithm for predicting the timestamp of the next event based on the realistic assumption that multiple tasks execute concurrently on a processor, managed by a static or dynamic priority-driven scheduler. The simulation speed is more than 3 orders of magnitude faster than a commercial instruction set simulator (ISS) with comparable accuracy. The tool is used to assist in the generation of an initial design specification.
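A flavor of the first tool's event-driven, delay-annotated style can be given with a small sketch (the task set, timing numbers, and scheduler here are invented, and this is plain Python rather than the SystemC implementation the abstract describes). Rather than stepping cycle by cycle, the simulator predicts the next event, a task release or a completion, and jumps simulated time directly to it:

```python
import heapq

def simulate(tasks, horizon):
    """tasks: dicts with 'name', 'period', 'wcet', 'prio' (lower number = higher priority)."""
    releases = [(0.0, t['prio'], i) for i, t in enumerate(tasks)]   # (release_time, prio, idx)
    heapq.heapify(releases)
    ready, now, log = [], 0.0, []            # ready holds (prio, release_time, idx, remaining)
    while now < horizon:
        next_release = releases[0][0]
        if not ready:
            now = next_release                # idle: jump straight to the next release
        else:
            prio, rel, i, rem = heapq.heappop(ready)
            finish = now + rem                # predicted completion if it runs undisturbed
            if finish <= next_release:        # jump directly to the completion event
                now = finish
                log.append((tasks[i]['name'], rel, finish))
            else:                             # a new release arrives before completion
                heapq.heappush(ready, (prio, rel, i, rem - (next_release - now)))
                now = next_release
        while releases[0][0] <= now:          # admit every task released by the new time
            t_rel, prio, i = heapq.heappop(releases)
            heapq.heappush(ready, (prio, t_rel, i, tasks[i]['wcet']))
            heapq.heappush(releases, (t_rel + tasks[i]['period'], prio, i))
    return log

tasks = [{'name': 'codec_task', 'period': 10.0, 'wcet': 3.0, 'prio': 0},
         {'name': 'ui_task',    'period': 25.0, 'wcet': 7.0, 'prio': 1}]
for name, released, finished in simulate(tasks, horizon=50.0):
    print(f"{name}: released at {released:5.1f}, finished at {finished:5.1f}")
```

In the tool itself, the delay annotations that stand in for the 'wcet' values come from published RTOS benchmark data and from profiling of similar legacy applications, as the abstract notes.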
The second tool is a system data flow simulator (SDFS), used by the HW engineers to refine the HW specifications. It models the application by a parameter-driven conditional data flow graph (CDFG) at the transaction level and the HW by a configurable HW graph at the cycle-accurate level. SDFS takes the application CDFG and HW graph as the input and carries out the simulation to capture detailed HW activities such as bus arbitration. It only requires the HW engineers to understand the application at the CDFG level. To carry out the system simulation at such a low level, many commercial simulators need to couple an ISS for the application SW with an RTL simulator for the simcoms, which is typically 6 orders of magnitude slower than the rhw speed. The simulation error of SDFS is within 5% in most cases and the worst-case error is within 13%, which is comparable to the ISS+RTL approach, but the simulation speed is only 4 orders of magnitude slower than the rhw speed. Compared to other similar research work that also models the system at the CDFG level, SDFS can achieve higher simulation accuracy because of the following advantages: 1) it does not need a fixed application trace as input and thus is flexible enough to cover many simulation scenarios; 2) it does not assume a fixed cost for each functional block and thus is able to estimate the system performance under actual execution conditions; and 3) it is able to model the pipelined architecture common in modern DSPs. The proposed simulator is cost-effective since it is implemented in the SystemC language and can be executed on most PCs and workstations.

The third tool is a real-time simulation platform (RTSP) implemented on legacy DSPs. To the best of our knowledge, this is the first simulator that truly enables the application SW to be developed in parallel with the HW by offering the same SW development environment as if the rhw were available. To simulate the behavior of a rhw module, a corresponding simcom is constructed running on a legacy DSP. The success of this simulation strategy hinges on a novel way of applying the concept of Real-Time Virtual Machines to simulation. Each legacy DSP employs a two-level scheduler to enforce that each simcom carries out the simulation at a proportional speed relative to the rhw, so that any job that would finish at time t on the rhw will finish no later than t + Δ, where Δ is a constant bound. Such a feature eliminates expensive synchronization between the simcoms. RTSP is proven to perform simulations faithfully and is also shown experimentally to be effective for real industry applications. For a rhw whose timing behavior can be accurately modeled by the SW behavior model, the simulation error is shown to be less than 5%. For very complicated rhw whose timing cannot be accurately captured by the behavior model, the simulation accuracy was shown to be excellent for the average case. The simulation speed is quite fast: for the selected audio and video applications, simulation is only 10X and 30X slower than rhw execution.

The RTSP platform is practically zero-cost since legacy EVM boards can be reused for the purpose of simulation. RTSP and SDFS can be used to complement each other. RTSP carries out the simulation at a higher level than SDFS and usually cannot capture activities on buses at every cycle. The information collected from SDFS determines the appropriate rate settings for the simcoms to compensate for resource competition. RTSP allows SW engineers to optimize the algorithms and suggest improvements to the HW architecture. Suggested changes are fed to SDFS for refining the design specification.
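The RTSP scheduling idea can be sketched roughly as follows (an illustration under assumptions: the share values, slowdown factors, job streams, and the reading of the proportional-speed guarantee as "host completion time stays within a small constant of the proportionally scaled rhw completion time" are all invented here, not taken from the dissertation):

```python
import itertools

def make_simcom(name, share, slowdown, job_lengths):
    """share: fraction of the legacy DSP; slowdown: host time per simulated rhw time unit."""
    return {'name': name, 'share': share, 'slowdown': slowdown, 'rhw_time': 0.0,
            'jobs': list(itertools.accumulate(job_lengths)),    # rhw completion times
            'finished': []}                                     # (rhw_finish, host_finish)

def run(simcoms, quantum=0.5):
    """Round-robin quantum scheduler: each simcom advances by its share every quantum."""
    host = 0.0
    while any(s['jobs'] for s in simcoms):
        for s in simcoms:
            s['rhw_time'] += quantum * s['share'] / s['slowdown']
            while s['jobs'] and s['jobs'][0] <= s['rhw_time']:
                s['finished'].append((s['jobs'].pop(0), host + quantum))
        host += quantum
    return simcoms

simcoms = run([make_simcom('dma_model',   share=0.25, slowdown=5.0, job_lengths=[2.0] * 30),
               make_simcom('accel_model', share=0.75, slowdown=8.0, job_lengths=[1.0] * 60)])
for s in simcoms:
    pace = s['slowdown'] / s['share']        # host time needed per rhw time unit at this share
    worst_lag = max(host - pace * rhw for rhw, host in s['finished'])
    print(f"{s['name']}: worst lag past the proportional pace = {worst_lag:.2f} "
          f"(bounded by one quantum = 0.50)")
```

In RTSP itself, the two-level scheduler on each legacy DSP enforces such rates natively, which is what lets the simcoms run without expensive cross-simulator synchronization, as the abstract describes.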