T1 Final Exam Study Guide

1. Vector Spaces:
   a. Definition of a vector space over a field (R or C).
   b. A set of vectors with two associated operations: addition and scalar multiplication.
   c. Closed under both operations.
   d. Has an additive identity (the zero vector) and a scalar multiplicative identity.

2. Signal Representation and Signal Spaces:
   a. Useful to extend the concept above to any collection of elements, whether finite or infinite.
   b. Can define signal spaces for continuous and discrete functions.
   c. Signals are points in the appropriate signal space. Understand the significance of this representation.
   d. Examples of signal spaces and their bases (a short sketch follows this list).
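For items 2c and 2d, a minimal Python/NumPy sketch (the language, the length N = 4 and the sample values are my own choices, not from the notes): a finite-length discrete signal is treated as a vector in R^4 and expanded in the standard orthonormal basis using inner products.

    import numpy as np

    # A finite-length discrete signal viewed as a point (vector) in R^4.
    x = np.array([3.0, -1.0, 2.0, 0.5])
    N = len(x)

    # Standard orthonormal basis of R^4: e_k[n] = 1 if n == k else 0.
    E = np.eye(N)

    # Expansion coefficients via inner products: c_k = <x, e_k>.
    coeffs = np.array([np.dot(x, E[k]) for k in range(N)])

    # Reconstruct the signal from the basis expansion: x = sum_k c_k e_k.
    x_rec = sum(coeffs[k] * E[k] for k in range(N))

    print(coeffs)                 # for this basis the coefficients are just the samples
    print(np.allclose(x, x_rec))  # True: the expansion recovers the signal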
3. Inner product spaces:
   a. An inner product space is a space that has an inner product defined on it.
   b. The inner product is a function from the space to the set of real numbers R (or complex numbers C). Be aware of the definition of the inner product for various spaces.
   c. You should be aware of the various properties of the inner product (e.g. the Cauchy-Schwarz inequality, orthogonality, the interpretation of the inner product as the cosine of the angle between two elements, …).
   d. The norm of an element v of the space is a function from the space to the non-negative real numbers R+ = [0, ∞[, with certain useful properties. Be aware of the definitions and properties of norms (a numerical sketch follows this list).
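A minimal numerical sketch of items 3b to 3d, using the Euclidean inner product on R^4 (the two vectors are arbitrary examples of mine): it checks the Cauchy-Schwarz inequality and recovers the angle between the vectors from the normalised inner product.

    import numpy as np

    u = np.array([1.0, 2.0, -1.0, 3.0])
    v = np.array([2.0, 0.0, 1.0, -1.0])

    # Inner product <u, v> and the induced norm ||u|| = sqrt(<u, u>).
    ip = np.dot(u, v)
    norm_u = np.sqrt(np.dot(u, u))
    norm_v = np.sqrt(np.dot(v, v))

    # Cauchy-Schwarz inequality: |<u, v>| <= ||u|| * ||v||.
    print(abs(ip) <= norm_u * norm_v)      # True

    # Angle interpretation: cos(theta) = <u, v> / (||u|| ||v||).
    cos_theta = ip / (norm_u * norm_v)
    print(np.degrees(np.arccos(cos_theta)))  # angle between u and v in degrees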
4. Linear, time-invariant systems and operators:
   a. Definition of linearity.
   b. Given an operator, can you determine whether it is linear, time-invariant, linear and time-invariant (LTI), or not LTI? (A numerical check is sketched after this list.)
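As a sketch of item 4b (the two operators and the test signals are illustrative choices of mine): the code checks linearity and time-invariance numerically for T1{x}[n] = 2x[n] + x[n-1] and T2{x}[n] = x[n]^2. A numerical counterexample can rule a property out, but agreement on a few random signals does not prove it in general.

    import numpy as np

    def T1(x):
        # y[n] = 2 x[n] + x[n-1]   (taking x[-1] = 0)
        return 2 * x + np.concatenate(([0.0], x[:-1]))

    def T2(x):
        # y[n] = x[n]^2
        return x ** 2

    def shift(x, k):
        # Delay by k samples, zero-padding at the start.
        return np.concatenate((np.zeros(k), x[:-k])) if k > 0 else x

    rng = np.random.default_rng(0)
    x1, x2 = rng.standard_normal(16), rng.standard_normal(16)
    a, b, k = 2.0, -3.0, 4

    for name, T in [("T1", T1), ("T2", T2)]:
        linear = np.allclose(T(a * x1 + b * x2), a * T(x1) + b * T(x2))
        time_inv = np.allclose(T(shift(x1, k)), shift(T(x1), k))
        print(name, "linear:", linear, " time-invariant:", time_inv)

Here T1 reports linear and time-invariant, while T2 reports time-invariant but not linear, so only T1 is LTI.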
5. Sampling:
   a. The interpretation of sampling.
   b. The mathematics of sampling (you should be very familiar with these by now). Sampling leads to a spectrum that is periodic, where each period is the superposition of shifted copies of the original spectrum.
   c. Aliasing: the superposition interpretation above explains what aliasing is (a short demonstration follows this list).
   d. The Nyquist theorem.
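A small demonstration of items 5b to 5d (sampling rate and frequencies chosen by me): at fs = 100 Hz, a 10 Hz cosine and a 110 Hz cosine give exactly the same samples, because the 110 Hz component lies above the Nyquist frequency fs/2 = 50 Hz and aliases onto 10 Hz.

    import numpy as np

    fs = 100.0                    # sampling rate (Hz); Nyquist frequency fs/2 = 50 Hz
    n = np.arange(32)             # sample indices
    t = n / fs                    # sampling instants

    x_low = np.cos(2 * np.pi * 10.0 * t)     # 10 Hz, below the Nyquist frequency
    x_high = np.cos(2 * np.pi * 110.0 * t)   # 110 Hz = 10 Hz + fs, above the Nyquist frequency

    # The two sets of samples are identical: after sampling, the 110 Hz tone is
    # indistinguishable from the 10 Hz tone (spectrum replication / aliasing).
    print(np.allclose(x_low, x_high))        # True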
6. Reconstruction:
   a. Reconstruction involves interpolation of the sampled signal to give the continuous-time original.
   b. The repeated spectra that arise in the sampling process can be viewed as potential solutions to the reconstruction problem.
   c. Therefore, the reconstruction problem can be seen as choosing one of these solutions.
   d. Reconstruction kernels, ideal reconstruction: How does reconstruction work? Can you write the reconstructed form of a signal in terms of the interpolation kernel? (A sketch of ideal sinc interpolation follows this list.)
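A sketch of item 6d, assuming ideal (sinc) reconstruction of a band-limited sinusoid from its samples, x_hat(t) = sum_n x[n] sinc((t - nT)/T); the signal, rate and evaluation grid are my own choices.

    import numpy as np

    fs = 8.0                     # sampling rate (Hz); T = 1/fs
    T = 1.0 / fs
    f0 = 1.0                     # a 1 Hz sinusoid, well below the 4 Hz Nyquist frequency

    n = np.arange(-64, 64)       # a long block of samples so truncation effects are small
    x_n = np.sin(2 * np.pi * f0 * n * T)

    # Dense time grid on which to evaluate the reconstruction.
    t = np.linspace(-2.0, 2.0, 1001)

    # Ideal reconstruction: a sum of shifted sinc kernels weighted by the samples.
    # np.sinc(u) = sin(pi u) / (pi u), so sinc((t - nT)/T) is the ideal low-pass kernel.
    x_hat = sum(x_n[k] * np.sinc((t - n[k] * T) / T) for k in range(len(n)))

    # Compare with the original continuous-time signal on the dense grid.
    x_true = np.sin(2 * np.pi * f0 * t)
    print(np.max(np.abs(x_hat - x_true)))    # small; limited only by truncating the sum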
7. The FT, DTFT, FS, and DFT:
   a. Definitions of each expansion.
   b. Where does each apply? (For instance, the FT is the most general and applies to continuous-time signals; we require the signal to have finite energy.)
   c. Relationships between the Fourier Transform, Fourier Series, Discrete-Time Fourier Transform, and Discrete Fourier Transform?
   d. The role of the FT and DTFT in understanding sampling and reconstruction (see the diagram in the notes).
   e. The formulation of these transforms in terms of the bases of the appropriate spaces.
   f. The orthonormality property. Expansion of elements of the signal space in terms of the basis elements using the inner product.
   g. Parseval's theorem (a numerical check follows this list).
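A numerical check of items 7f and 7g using the DFT (the test signal is random and the choice of NumPy is mine; note that NumPy's unnormalised DFT convention puts a factor 1/N into Parseval's relation), plus a check that distinct Fourier basis vectors are orthogonal.

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.standard_normal(64)

    X = np.fft.fft(x)                              # DFT coefficients

    # Parseval's theorem (NumPy convention): sum |x[n]|^2 = (1/N) sum |X[k]|^2.
    energy_time = np.sum(np.abs(x) ** 2)
    energy_freq = np.sum(np.abs(X) ** 2) / len(x)
    print(np.allclose(energy_time, energy_freq))   # True

    # Orthogonality of the Fourier basis vectors e_k[n] = exp(j 2 pi k n / N).
    N = 8
    n = np.arange(N)
    e3 = np.exp(2j * np.pi * 3 * n / N)
    e5 = np.exp(2j * np.pi * 5 * n / N)
    print(np.isclose(np.vdot(e3, e5), 0.0))        # True: distinct basis vectors are orthogonal
    print(np.isclose(np.vdot(e3, e3), N))          # <e_k, e_k> = N (orthogonal, not orthonormal)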
8. Convolution and polynomial multiplication:
   a. Formulation of the convolution.
   b. How to calculate a convolution.
   c. Convolution: linear vs circular convolution.
   d. Relationship between the time and frequency domains.

9. The z-transform:
   a. Sampling, delay units.
   b. Power series expansion. What is the z-transform? (A mathematical manipulation tool.)
   c. Can you write the z-transform of a discrete sequence and of a discrete operator?

10. Relationship between the z-transform and the Fourier Transform. Radius of convergence.

11. Manipulating the z-transform:
   a. What is the transfer function of an LTI filter?
   b. Can you factorise the transfer function?
   c. Can you find the poles and zeros? (A short sketch follows this list.)
   d. Under what condition is the filter stable?
   e. What is BIBO stability?
   f. What is the stability triangle for a second-order system?
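A sketch of items 11a to 11f for an arbitrary second-order example of mine, H(z) = (1 + 2z^-1 + z^-2) / (1 - 0.9z^-1 + 0.2z^-2): find the poles and zeros, check BIBO stability (all poles strictly inside the unit circle for a causal filter), and confirm the result with the stability triangle conditions on the denominator coefficients.

    import numpy as np

    # H(z) = B(z)/A(z) with B(z) = 1 + 2 z^-1 + z^-2 and A(z) = 1 - 0.9 z^-1 + 0.2 z^-2.
    b = np.array([1.0, 2.0, 1.0])      # numerator coefficients
    a = np.array([1.0, -0.9, 0.2])     # denominator coefficients

    # Zeros and poles: roots of the numerator and denominator
    # (as polynomials in z, after multiplying H(z) through by z^2).
    zeros = np.roots(b)                # a double zero at z = -1
    poles = np.roots(a)                # poles at z = 0.5 and z = 0.4

    # BIBO stability for a causal filter: every pole strictly inside the unit circle.
    stable = np.all(np.abs(poles) < 1.0)
    print("zeros:", zeros, "poles:", poles, "stable:", stable)

    # Stability triangle for the second-order denominator 1 + a1 z^-1 + a2 z^-2:
    # |a2| < 1  and  |a1| < 1 + a2.
    a1, a2 = a[1], a[2]
    print(abs(a2) < 1 and abs(a1) < 1 + a2)   # True, consistent with the pole locations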
12. Filter properties:
   a. Group delay.
   b. Linear phase, minimum phase, all-pass: what are the properties of these filters?
   c. Can you write a filter as the product of a minimum-phase and an all-pass transfer function?

13. Filters:
   a. FIR, IIR.
   b. Can you derive the transfer function of the filters?
   c. Can you find their impulse response given a filter transfer function?
   d. Remember the relationship between the impulse response and the transfer function for first- and second-order sections. The filter impulse response can be found by factorisation and partial fractions.
   e. What is the filter gain at DC and at the Nyquist frequency?

14. Filter implementation and structures:
   a. Direct Form, Canonical Form, Cascade, Parallel.
   b. Lattice filters.
   c. Given a transfer function, can you find the relevant implementation?
   d. Given an implementation, can you find the transfer function?

15. Filter Design:
   a. FIR: windowing, frequency sampling, least squares.
   b. IIR: impulse invariance, the derivative-based method, the bilinear transformation, Padé's method, least squares.
   c. Relative advantages and disadvantages of FIR and IIR filters and their design methods.

16. Fixed-point arithmetic:
   a. Can you write the two's complement fixed-point representation of a number?
   b. Can you work out the scaling factor of a fixed-point representation?
   c. Can you work out the BIBO gain?
   d. Can you carry out the dynamic range optimization?
   e. Remember that filters are made up of accumulators, multipliers and delay elements (shift registers or memory elements).
   f. Do you understand the behaviour of quantization and round-off errors?

17. The DFT, its relationship to the DTFT, and its properties:
   a. Basis, orthogonality of the Fourier vectors.
   b. The DFT as an inner product.
   c. Can you derive the properties?
   d. Can you calculate the DFT?
   e. Do you know how to obtain the DFT from the DTFT?
   f. Do you know the relationships between the two (especially the effects of aliasing in the frequency and time domains)?

18. Filter implementation using the DFT:
   a. Linear vs circular convolution.
   b. Overlap-add.
   c. Overlap-save.

19. Fundamental statistical concepts:
   a. The mean, variance, correlation, covariance.
   b. Statistical independence. Uncorrelated variables and the relationship to statistical independence.
   c. The Gaussian distribution and its properties; the multidimensional and one-dimensional cases.
   d. Understanding the statistics from the perspective of information.

20. Random processes:
   a. Mean and covariance.
   b. Stationarity: strict and wide-sense stationarity.
   c. Auto- and cross-correlation.
   d. Ensemble versus time averages (first- and second-order statistics). (A short estimation sketch follows this list.)
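Finally, a sketch of items 19a, 20c and 20d (the process is an example of my own: a first-order autoregressive process driven by white Gaussian noise, which is approximately wide-sense stationary once the transient has died out): it compares time averages from one long realisation with ensemble averages over many realisations, for the mean and the autocorrelation at lag 3.

    import numpy as np

    rng = np.random.default_rng(2)

    def realisation(N, a=0.8):
        # x[n] = a x[n-1] + w[n], with w ~ N(0, 1): a wide-sense stationary AR(1)
        # process once the initial transient has died out.
        w = rng.standard_normal(N + 200)
        x = np.zeros(N + 200)
        for n in range(1, N + 200):
            x[n] = a * x[n - 1] + w[n]
        return x[200:]                              # discard the transient

    N, lag = 20000, 3

    # Time averages from a single long realisation.
    x = realisation(N)
    time_mean = np.mean(x)
    time_acorr = np.mean(x[:-lag] * x[lag:])        # estimate of R(3)

    # Ensemble averages across many independent realisations, at a fixed time index.
    M = 2000
    samples0 = np.empty(M)
    samples_lag = np.empty(M)
    for m in range(M):
        xm = realisation(500)
        samples0[m] = xm[100]
        samples_lag[m] = xm[100 + lag]
    ens_mean = np.mean(samples0)
    ens_acorr = np.mean(samples0 * samples_lag)

    # For this (ergodic) process the two kinds of average agree; the theoretical
    # values are mean = 0 and R(lag) = a^lag / (1 - a^2).
    print(time_mean, ens_mean)
    print(time_acorr, ens_acorr, 0.8 ** lag / (1 - 0.8 ** 2))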