Book Introduction
Adaptive Filter Theory, 3rd Edition (English edition) — PDF|Epub|txt|Kindle ebook download
- Author: Simon Haykin (USA)
- Publisher: Beijing: Publishing House of Electronics Industry
- ISBN: 7505348841
- Publication year: 1998
- Listed page count: 989
- File size: 26MB
- File page count: 1006
- Subject terms:
PDF Download
Download Instructions
Adaptive Filter Theory, 3rd Edition (English edition) — PDF ebook download
The downloaded file is a RAR archive; use decompression software to extract the PDF. All resources on this site are packaged as BitTorrent seeds, so a dedicated BT client is required, such as BitComet, qBittorrent, or uTorrent. We recommend Free Download Manager (FDM), which is free, ad-free, and cross-platform. Xunlei (Thunder) is currently not recommended, since this site's resources are not popular there; once a resource becomes popular, Xunlei can also be used.
(The file page count should be greater than the listed page count, except for multi-volume ebooks.)
Note: all archives on this site require an extraction password. Click to download the archive extraction tool.
Table of Contents
Preface13
Contents13
Acknowledgments16
Introduction1
1.The Filtering Problem1
2.Adaptive Filters2
3.Linear Filter Structures4
4.Approaches to the Development of Linear Adaptive Filtering Algorithms9
5.Real and Complex Forms of Adaptive Filters14
6.Nonlinear Adaptive Filters15
7.Applications18
8.Some Historical Notes67
8.Some Historical Notes67
PART 1 BACKGROUND MATERIAL78
Chapter 1 Discrete-Time Signal Processing79
1.1 z-Transform79
1.2 Linear Time-Invariant Filters81
1.3 Minimum-Phase Filters86
1.4 Discrete Fourier Transform87
1.5 Implementing Convolutions Using the DFT87
1.6 Discrete Cosine Transform93
1.7 Summary and Discussion94
Problems95
Chapter 2 Stationary Processes and Models96
2.1 Partial Characterization of a Discrete-Time Stochastic Process97
2.2 Mean Ergodic Theorem98
2.3 Correlation Matrix100
2.4 Correlation Matrix of Sine Wave Plus Noise106
2.5 Stochastic Models108
2.6 Wold Decomposition115
2.7 Asymptotic Stationarity of an Autoregressive Process116
2.8 Yule-Walker Equations118
2.9 Computer Experiment:Autoregressive Process of Order 2120
2.10 Selecting the Model Order128
2.11 Complex Gaussian Processes130
2.12 Summary and Discussion132
Problems133
Chapter 3 Spectrum Analysis136
3.1 Power Spectral Density136
3.2 Properties of Power Spectral Density138
3.3 Transmission of a Stationary Process Through a Linear Filter140
3.4 Cramér Spectral Representation for a Stationary Process144
3.5 Power Spectrum Estimation146
3.6 Other Statistical Characteristics of a Stochastic Process149
3.7 Polyspectra150
3.8 Spectral-Correlation Density154
3.9 Summary and Discussion157
Problems158
Chapter 4 Eigenanalysis160
4.1 The Eigenvalue Problem160
4.2 Properties of Eigenvalues and Eigenvectors162
4.3 Low-Rank Modeling176
4.4 Eigenfilters181
4.5 Eigenvalue Computations184
4.6 Summary and Discussion187
Problems188
PART 2 LINEAR OPTIMUM FILTERING193
Chapter 5 Wiener Filters194
5.1 Linear Optimum Filtering:Problem Statement194
5.2 Principle of Orthogonality197
5.3 Minimum Mean-Squared Error201
5.4 Wiener-Hopf Equations203
5.5 Error-Performance Surface206
5.6 Numerical Example210
5.7 Channel Equalization217
5.8 Linearly Constrained Minimum Variance Filter220
5.9 Generalized Sidelobe Cancelers227
5.10 Summary and Discussion235
Problems236
Chapter 6 Linear Prediction241
6.1 Forward Linear Prediction242
6.2 Backward Linear Prediction248
6.3 Levinson-Durbin Algorithm254
6.4 Properties of Prediction-Error Filters262
6.5 Schur-Cohn Test271
6.6 Autoregressive Modeling of a Stationary Stochastic Process273
6.7 Cholesky Factorization276
6.8 Lattice Predictors280
6.9 Joint-Process Estimation286
6.10 Block Estimation290
6.11 Summary and Discussion293
Problems295
Chapter 7 Kalman Filters302
7.1 Recursive Minimum Mean-Square Estimation for Scalar Random Variables303
7.2 Statement of the Kalman Filtering Problem306
7.3 The Innovations Process307
7.4 Estimation of the State Using the Innovations Process310
7.5 Filtering317
7.6 Initial Conditions320
7.7 Summary of the Kalman Filter320
7.8 Variants of the Kalman Filter322
7.9 The Extended Kalman Filter328
7.10 Summary and Discussion333
Problems334
PART 3 LINEAR ADAPTIVE FILTERING338
Chapter 8 Method of Steepest Descent339
8.1 Some Preliminaries339
8.2 Steepest-Descent Algorithm341
8.3 Stability of the Steepest-Descent Algorithm343
8.4 Example350
8.5 Summary and Discussion362
Problems362
Chapter 9 Least-Mean-Square Algorithm365
9.1 Overview of the Structure and Operation of the Least-Mean-Square Algorithm365
9.2 Least-Mean-Square Adaptation Algorithm367
9.3 Examples372
9.4 Stability and Performance Analysis of the LMS Algorithm390
9.5 Summary of the LMS Algorithm405
9.6 Computer Experiment on Adaptive Prediction406
9.7 Computer Experiment on Adaptive Equalization412
9.8 Computer Experiment on Minimum-Variance Distortionless Response Beamformer421
9.9 Directionality of Convergence of the LMS Algorithm for Non-White Inputs425
9.10 Robustness of the LMS Algorithm427
9.11 Normalized LMS Algorithm432
9.12 Summary and Discussion438
Problems439
Chapter 10 Frequency-Domain Adaptive Filters445
10.1 Block Adaptive Filters446
10.2 Fast LMS Algorithm451
10.3 Unconstrained Frequency-Domain Adaptive Filtering457
10.4 Self-Orthogonalizing Adaptive Filters458
10.5 Computer Experiment on Adaptive Equalization469
10.6 Classification of Adaptive Filtering Algorithms477
10.7 Summary and Discussion478
Problems479
Chapter 11 Method of Least Squares483
11.1 Statement of the Linear Least-Squares Estimation Problem483
11.2 Data Windowing486
11.3 Principle of Orthogonality(Revisited)487
11.4 Minimum Sum of Error Squares491
11.5 Normal Equations and Linear Least-Squares Filters492
11.6 Time-Averaged Correlation Matrix495
11.7 Reformulation of the Normal Equations in Terms of Data Matrices497
11.8 Properties of Least-Squares Estimates502
11.9 Parametric Spectrum Estimation506
11.10 Singular Value Decomposition516
11.11 Pseudoinverse524
11.12 Interpretation of Singular Values and Singular Vectors525
11.13 Minimum Norm Solution to the Linear Least-Squares Problem526
11.14 Normalized LMS Algorithm Viewed as the Minimum-Norm Solution to an Underdetermined Least-Squares Estimation Problem530
11.15 Summary and Discussion532
Problems533
Chapter 12 Rotations and Reflections536
12.1 Plane Rotations537
12.2 Two-Sided Jacobi Algorithm538
12.3 Cyclic Jacobi Algorithm544
12.4 Householder Transformation548
12.5 The QR Algorithm551
12.6 Summary and Discussion558
Problems560
Chapter 13 Recursive Least-Squares Algorithm562
13.1 Some Preliminaries563
13.2 The Matrix Inversion Lemma565
13.3 The Exponentially Weighted Recursive Least-Squares Algorithm566
13.4 Update Recursion for the Sum of Weighted Error Squares571
13.5 Example:Single-Weight Adaptive Noise Canceler572
13.6 Convergence Analysis of the RLS Algorithm573
13.7 Computer Experiment on Adaptive Equalization580
13.8 State-Space Formulation of the RLS Problem583
13.9 Summary and Discussion587
Problems587
Chapter 14 Square-Root Adaptive Filters589
14.1 Square-Root Kalman Filters589
14.2 Building Square-Root Adaptive Filtering Algorithms on their Kalman Filter Counterparts597
14.3 QR-RLS Algorithm598
14.4 Extended QR-RLS Algorithm614
14.5 Adaptive Beamforming617
14.6 Inverse QR-RLS Algorithm624
14.7 Summary and Discussion627
Problems628
Chapter 15 Order-Recursive Adaptive Filters630
15.1 Adaptive Forward Linear Prediction631
15.2 Adaptive Backward Linear Prediction634
15.3 Conversion Factor636
15.4 Least-Squares Lattice Predictor640
15.5 Angle-Normalized Estimation Errors653
15.6 First-Order State-Space Models for Lattice Filtering655
15.7 QR-Decomposition-Based Least-Squares Lattice Filters660
15.8 Fundamental Properties of the QRD-LSL Filter667
15.9 Computer Experiment on Adaptive Equalization672
15.10 Extended QRD-LSL Algorithm677
15.11 Recursive Least-Squares Lattice Filters Using A Posteriori Estimation Errors679
15.12 Recursive LSL Filters Using A Priori Estimation Errors with Error Feedback683
15.13 Computation of the Least-Squares Weight Vector686
15.14 Computer Experiment on Adaptive Prediction691
15.15 Other Variants of Least-Squares Lattice Filters693
15.16 Summary and Discussion694
Problems696
Chapter 16 Tracking of Time-Varying Systems701
16.1 Markov Model for System Identification702
16.2 Degree of Nonstationarity705
16.3 Criteria for Tracking Assessment706
16.4 Tracking Performance of the LMS Algorithm708
16.5 Tracking Performance of the RLS Algorithm711
16.6 Comparison of the Tracking Performance of LMS and RLS Algorithms716
16.7 Adaptive Recovery of a Chirped Sinusoid in Noise719
16.8 How to Improve the Tracking Behavior of the RLS Algorithm726
16.9 Computer Experiment on System Identification729
16.10 Automatic Tuning of Adaptation Constants731
16.11 Summary and Discussion736
Problems737
Chapter 17 Finite-Precision Effects738
17.1 Quantization Errors739
17.2 Least-Mean-Square Algorithm741
17.3 Recursive Least-Squares Algorithm751
17.4 Square-Root Adaptive Filters757
17.5 Order-Recursive Adaptive Filters760
17.6 Fast Transversal Filters763
17.7 Summary and Discussion767
Problems769
PART 4 NONLINEAR ADAPTIVE FILTERING771
Chapter 18 Blind Deconvolution772
18.1 Theoretical and Practical Considerations773
18.2 Bussgang Algorithm for Blind Equalization of Real Baseband Channels776
18.3 Extension of Bussgang Algorithms to Complex Baseband Channels791
18.4 Special Cases of the Bussgang Algorithm792
18.5 Blind Channel Identification and Equalization Using Polyspectra796
18.6 Advantages and Disadvantages of HOS-Based Deconvolution Algorithms802
18.7 Channel Identifiability Using Cyclostationary Statistics803
18.8 Subspace Decomposition for Fractionally-Spaced Blind Identification804
18.9 Summary and Discussion813
Problems814
Chapter 19 Back-Propagation Learning817
19.1 Models of a Neuron818
19.2 Multilayer Perceptron822
19.3 Complex Back-Propagation Algorithm824
19.4 Back-Propagation Algorithm for Real Parameters837
19.5 Universal Approximation Theorem838
19.6 Network Complexity840
19.7 Filtering Applications842
19.8 Summary and Discussion852
Problems854
Chapter 20 Radial Basis Function Networks855
20.1 Structure of RBF Networks856
20.2 Radial-Basis Functions858
20.3 Fixed Centers Selected at Random859
20.4 Recursive Hybrid Learning Procedure862
20.5 Stochastic Gradient Approach863
20.6 Universal Approximation Theorem(Revisited)865
20.7 Filtering Applications866
20.8 Summary and Discussion871
Problems873
Appendix A Complex Variables875
Appendix B Differentiation with Respect to a Vector890
Appendix C Method of Lagrange Multipliers895
Appendix D Estimation Theory899
Appendix E Maximum-Entropy Method905
Appendix F Minimum-Variance Distortionless Response Spectrum912
Appendix G Gradient Adaptive Lattice Algorithm915
Appendix H Solution of the Difference Equation(9.75)919
Appendix I Steady-State Analysis of the LMS Algorithm without Invoking the Independence Assumption921
Appendix J The Complex Wishart Distribution924
Glossary928
Abbreviations932
Principal Symbols933
Bibliography941
Index978