[Audio Analysis] A C++ Implementation of the LMS Adaptive Filtering Algorithm

Posted on 2015-8-20 14:49:36

Header file:
/*
 * Copyright (c) 2008-2011 Zhang Ming (M. Zhang), zmjerry@163.com
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License as published by the
 * Free Software Foundation, either version 2 or any later version.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *
 * 1. Redistributions of source code must retain the above copyright notice,
 *    this list of conditions and the following disclaimer.
 *
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * This program is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
 * more details. A copy of the GNU General Public License is available at:
 * http://www.fsf.org/licensing/licenses
 */


/*****************************************************************************
 *                                   lms.h
 *
 * Least Mean Square Adaptive Filter.
 *
 * Least mean squares (LMS) algorithms are a class of adaptive filter used
 * to mimic a desired filter by finding the filter coefficients that produce
 * the least mean square of the error signal (the difference between the
 * desired and the actual signal). It is a stochastic gradient descent
 * method in that the filter is adapted based only on the error at the
 * current time.
 *
 * This file declares three variants of the LMS algorithm: the conventional
 * LMS algorithm, the LMS-Newton algorithm, and the normalized LMS algorithm.
 *
 * Zhang Ming, 2010-10, Xi'an Jiaotong University.
 *****************************************************************************/


#ifndef LMS_H
#define LMS_H


#include <vector.h>
#include <matrix.h>


namespace splab
{

    template<typename Type>
    Type lms( const Type&, const Type&, Vector<Type>&, const Type& );

    template<typename Type>
    Type lmsNewton( const Type&, const Type&, Vector<Type>&,
                    const Type&, const Type&, const Type& );

    template<typename Type>
    Type lmsNormalize( const Type&, const Type&, Vector<Type>&,
                       const Type&, const Type& );


    #include <lms-impl.h>

}
// namespace splab


#endif
// LMS_H


Implementation file:
/*
 * Copyright (c) 2008-2011 Zhang Ming (M. Zhang), zmjerry@163.com
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License as published by the
 * Free Software Foundation, either version 2 or any later version.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *
 * 1. Redistributions of source code must retain the above copyright notice,
 *    this list of conditions and the following disclaimer.
 *
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * This program is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
 * more details. A copy of the GNU General Public License is available at:
 * http://www.fsf.org/licensing/licenses
 */


/*****************************************************************************
 *                                lms-impl.h
 *
 * Implementation for LMS Adaptive Filter.
 *
 * Zhang Ming, 2010-10, Xi'an Jiaotong University.
 *****************************************************************************/


/**
 * The conventional LMS algorithm, which is sensitive to the scaling of its
 * input "xn". The filter order p = wn.size(), where "wn" is the weight
 * vector and "mu" is the iteration step size; for stability "mu" should
 * lie in (0, 1/tr[Rxx]), where Rxx is the autocorrelation matrix of the
 * input.
 */
template <typename Type>
Type lms( const Type &xk, const Type &dk, Vector<Type> &wn, const Type &mu )
{
    int filterLen = wn.size();
    static Vector<Type> xn(filterLen);

    // update input signal
    for( int i=filterLen; i>1; --i )
        xn(i) = xn(i-1);
    xn(1) = xk;

    // get the output
    Type yk = dotProd( wn, xn );

    // update the weight vector
    wn += 2*mu*(dk-yk) * xn;

    return yk;
}


/**
 * The LMS-Newton algorithm is a variant of LMS that incorporates estimates
 * of the second-order statistics of the environment signals. Its objective
 * is to avoid the slow convergence of the LMS algorithm when the input
 * signal is highly correlated. The improvement is achieved at the expense
 * of increased computational complexity.
 */
template <typename Type>
Type lmsNewton( const Type &xk, const Type &dk, Vector<Type> &wn,
                const Type &mu, const Type &alpha, const Type &delta )
{
    assert( 0 < alpha );
    assert( alpha <= Type(0.1) );

    int filterLen = wn.size();
    Type beta = 1-alpha;
    Vector<Type> vP(filterLen);
    Vector<Type> vQ(filterLen);

    static Vector<Type> xn(filterLen);

    // initialize the inverse of the correlation matrix
    static Matrix<Type> invR = eye( filterLen, Type(1.0/delta) );

    // update input signal
    for( int i=filterLen; i>1; --i )
        xn(i) = xn(i-1);
    xn(1) = xk;

    Type yk = dotProd(wn,xn);

    // update the inverse of the correlation matrix
    vQ = invR * xn;
    vP = vQ / (beta/alpha+dotProd(vQ,xn));
    invR = (invR - multTr(vQ,vP)) / beta;

    // update the weight vector
    wn += 2*mu * (dk-yk) * (invR*xn);

    return yk;
}


/**
 * With the conventional LMS it is hard to choose a step size "mu" that
 * guarantees stability of the algorithm. The normalized LMS is a variant
 * of LMS that solves this problem by normalizing with the power of the
 * input. For stability, the parameter "rho" should belong to (0,2), and
 * "gamma" is a small number that prevents division by zero when
 * <Xn,Xn> == 0.
 */
template <typename Type>
Type lmsNormalize( const Type &xk, const Type &dk, Vector<Type> &wn,
                   const Type &rho, const Type &gamma )
{
    assert( 0 < rho );
    assert( rho < 2 );

    int filterLen = wn.size();
    static Vector<Type> sn(filterLen);

    // update input signal
    for( int i=filterLen; i>1; --i )
        sn(i) = sn(i-1);
    sn(1) = xk;

    // get the output
    Type yk = dotProd( wn, sn );

    // update the weight vector
    wn += rho*(dk-yk)/(gamma+dotProd(sn,sn)) * sn;

    return yk;
}



Test code:
/*****************************************************************************
 *                                  lms_test.cpp
 *
 * LMS adaptive filter testing.
 *
 * Zhang Ming, 2010-10, Xi'an Jiaotong University.
 *****************************************************************************/


#define BOUNDS_CHECK

#include <iostream>
#include <iomanip>
#include <lms.h>


using namespace std;
using namespace splab;


typedef double  Type;
const   int     N = 50;
const   int     order = 1;
const   int     dispNumber = 10;


int main()
{
    Vector<Type> dn(N), xn(N), yn(N), wn(order+1);
    for( int k=0; k<N; ++k )
    {
        xn[k] = Type(cos(TWOPI*k/7));
        dn[k] = Type(sin(TWOPI*k/7));
    }
    int start = max(0,N-dispNumber);
    Type xnPow = dotProd(xn,xn)/N;
    Type mu = Type( 0.1 / ((order+1)*xnPow) );
    Type rho = Type(1.0), gamma = Type(1.0e-9), alpha = Type(0.08);

    cout << "The last " << dispNumber << " iterations of Conventional-LMS:" << endl;
    cout << "observed" << "\t" << "desired" << "\t\t" << "output" << "\t\t"
         << "adaptive filter" << endl << endl;
    wn = 0;
    for( int k=0; k<start; ++k )
        yn[k] = lms( xn[k], dn[k], wn, mu );
    for( int k=start; k<N; ++k )
    {
        yn[k] = lms( xn[k], dn[k], wn, mu );
        cout << setiosflags(ios::fixed) << setprecision(4)
             << xn[k] << "\t\t" << dn[k] << "\t\t" << yn[k] << "\t\t";
        for( int i=0; i<=order; ++i )
            cout << wn[i] << "\t";
        cout << endl;
    }
    cout << endl << endl;

    cout << "The last " << dispNumber << " iterations of LMS-Newton:" << endl;
    cout << "observed" << "\t" << "desired" << "\t\t" << "output" << "\t\t"
         << "adaptive filter" << endl << endl;
    wn = 0;
    for( int k=0; k<start; ++k )
        yn[k] = lmsNewton( xn[k], dn[k], wn, mu, alpha, xnPow );
    for( int k=start; k<N; ++k )
    {
        yn[k] = lmsNewton( xn[k], dn[k], wn, mu, alpha, xnPow );
        cout << setiosflags(ios::fixed) << setprecision(4)
             << xn[k] << "\t\t" << dn[k] << "\t\t" << yn[k] << "\t\t";
        for( int i=0; i<=order; ++i )
            cout << wn[i] << "\t";
        cout << endl;
    }
    cout << endl << endl;

    cout << "The last " << dispNumber << " iterations of Normalized-LMS:" << endl;
    cout << "observed" << "\t" << "desired" << "\t\t" << "output" << "\t\t"
         << "adaptive filter" << endl << endl;
    wn = 0;
    for( int k=0; k<start; ++k )
        yn[k] = lmsNormalize( xn[k], dn[k], wn, rho, gamma );
    for( int k=start; k<N; ++k )
    {
        yn[k] = lmsNormalize( xn[k], dn[k], wn, rho, gamma );
        cout << setiosflags(ios::fixed) << setprecision(4)
             << xn[k] << "\t\t" << dn[k] << "\t\t" << yn[k] << "\t\t";
        for( int i=0; i<=order; ++i )
            cout << wn[i] << "\t";
        cout << endl;
    }
    cout << endl << endl;

    cout << "The theoretical optimal filter is:\t\t" << "-0.7972\t1.2788"
         << endl << endl;

    return 0;
}
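Note how the test chooses its step size: `mu = 0.1/((order+1)*xnPow)`, i.e. one tenth of `1/tr(R)` with the trace approximated by (order+1) times the average input power. A standalone sketch of the same computation (the helper names are mine, for illustration):

```cpp
#include <cmath>

// Average power of the test input x[k] = cos(2*pi*k/7) over N samples.
double averagePower(int N)
{
    const double TWOPI = 6.283185307179586;
    double px = 0.0;
    for (int k = 0; k < N; ++k) {
        double x = std::cos(TWOPI * k / 7);
        px += x * x;
    }
    return px / N;
}

// The step-size rule of thumb from lms_test.cpp: tr(R) is approximated by
// (order+1) * Px, and mu is set to 10% of 1/tr(R).
double stepSizeRuleOfThumb(int order, double inputPower)
{
    return 0.1 / ((order + 1) * inputPower);
}
```

For N = 50 and order = 1 this reproduces the program's values: Px = 0.51 (50 samples span seven full periods of the cosine plus one extra sample, hence slightly above the ideal 0.5) and mu ≈ 0.098, comfortably below the stability bound of roughly 1/tr(R) ≈ 0.98.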


Output:
The last 10 iterations of Conventional-LMS:
observed        desired         output          adaptive filter

-0.2225         -0.9749         -0.8216         -0.5683 1.0810
0.6235          -0.7818         -0.5949         -0.5912 1.0892
1.0000          -0.0000         0.0879          -0.6084 1.0784
0.6235          0.7818          0.6991          -0.5983 1.0947
-0.2225         0.9749          0.8156          -0.6053 1.1141
-0.9010         0.4339          0.2974          -0.6294 1.1082
-0.9010         -0.4339         -0.4314         -0.6289 1.1086
-0.2225         -0.9749         -0.8589         -0.6239 1.1291
0.6235          -0.7818         -0.6402         -0.6412 1.1353
1.0000          -0.0000         0.0667          -0.6543 1.1271


The last 10 iterations of LMS-Newton:
observed        desired         output          adaptive filter

-0.2225         -0.9749         -0.9739         -0.7958 1.2779
0.6235          -0.7818         -0.7805         -0.7964 1.2783
1.0000          -0.0000         0.0006          -0.7966 1.2783
0.6235          0.7818          0.7816          -0.7966 1.2784
-0.2225         0.9749          0.9743          -0.7968 1.2787
-0.9010         0.4339          0.4334          -0.7971 1.2787
-0.9010         -0.4339         -0.4340         -0.7971 1.2787
-0.2225         -0.9749         -0.9747         -0.7971 1.2788
0.6235          -0.7818         -0.7816         -0.7972 1.2789
1.0000          -0.0000         0.0001          -0.7973 1.2789


The last 10 iterations of Normalized-LMS:
observed        desired         output          adaptive filter

-0.2225         -0.9749         -0.9749         -0.7975 1.2790
0.6235          -0.7818         -0.7818         -0.7975 1.2790
1.0000          -0.0000         -0.0000         -0.7975 1.2790
0.6235          0.7818          0.7818          -0.7975 1.2790
-0.2225         0.9749          0.9749          -0.7975 1.2790
-0.9010         0.4339          0.4339          -0.7975 1.2790
-0.9010         -0.4339         -0.4339         -0.7975 1.2790
-0.2225         -0.9749         -0.9749         -0.7975 1.2790
0.6235          -0.7818         -0.7818         -0.7975 1.2790
1.0000          -0.0000         -0.0000         -0.7975 1.2790


The theoretical optimal filter is:              -0.7972 1.2788


Process returned 0 (0x0)   execution time : 0.109 s
Press any key to continue.
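The "theoretical optimal filter" quoted at the end can be cross-checked in closed form (my derivation, not from the original post): with x[k] = cos(wk) and d[k] = sin(wk), w = 2*pi/7, the ensemble correlations for a two-tap filter are R = 0.5*[[1, cos w], [cos w, 1]] and p = 0.5*[0, sin w]^T, and solving the Wiener equation R*w = p gives w0 = -cot(w), w1 = 1/sin(w):

```cpp
#include <cmath>

// Closed-form Wiener solution for the two-tap cos/sin experiment above
// (the function name is mine, for illustration).
void wienerSolution(double w, double &w0, double &w1)
{
    w0 = -std::cos(w) / std::sin(w);  // -cot(w)
    w1 = 1.0 / std::sin(w);           //  1/sin(w)
}
```

For w = 2*pi/7 this gives w0 ≈ -0.7975 and w1 ≈ 1.2790, matching the Normalized-LMS steady state above to four decimals; the slightly different figure printed by the program (-0.7972, 1.2788) presumably reflects a finite-sample estimate of the correlations.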



Source: http://my.oschina.net/zmjerry/blog/8525 (reposted on 守望者AIR技术交流社区, www.airmyth.com)