From 5099e1394cd62b5424ea530037e7bd223e25d4cf Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E8=B5=B5=E6=B1=9F=E6=B1=9F?=
diff --git a/ACL_PyTorch/built-in/audio/whisper/README.md b/ACL_PyTorch/built-in/audio/whisper/README.md
new file mode 100644
index 0000000000..e61463fd24
--- /dev/null
+++ b/ACL_PyTorch/built-in/audio/whisper/README.md
@@ -0,0 +1,118 @@
+# Whisper模型推理指导
+
+- [概述](#概述)
+- [插件与驱动准备](#插件与驱动准备)
+- [获取本仓源码](#获取本仓源码)
+- [环境准备](#环境准备)
+- [数据集准备](#数据集准备)
+- [文件目录结构](#文件目录结构)
+- [开始推理](#开始推理)
+- [性能数据](#性能数据)
+
+## 概述
+Whisper 是 OpenAI 开源的通用语音识别模型,支持多语言转录和翻译,基于 Transformer 架构,适用于会议记录、字幕生成等场景。其特点是开箱即用、鲁棒性强,并提供多种模型尺寸以平衡速度与精度。
+
+## 插件与驱动准备
+
+- 该模型需要以下插件与驱动
+
+  | 配套 | 版本 | 环境准备指导 |
+  | ------------------------------------------------------------ | ------ | ------------------------------------------------------------ |
+  | 固件与驱动 | 24.0.RC3 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
+  | CANN | 8.0.RC3 | 包含kernels包和toolkit包 |
+  | Python | 3.8 | - |
+  | PyTorch | 2.4.0 | - |
+  | Ascend Extension PyTorch | 2.4.0.post2 | - |
+  | 说明:Atlas 800I A2 推理卡和Atlas 300I DUO 推理卡请以CANN版本选择实际固件与驱动版本。 | \ | \ |
+
+
+## 获取本仓源码
+```
+git clone https://gitee.com/ascend/ModelZoo-PyTorch.git
+cd ModelZoo-PyTorch/ACL_PyTorch/built-in/audio/whisper/
+```
+
+## 环境准备
+
+* 通过以下命令下载并安装(或升级至)Whisper 的最新版本:
+
+  `pip3 install -U openai-whisper`
+
+* 下载模型权重:
+  * `base.pt`:[下载链接](https://openaipublic.azureedge.net/main/whisper/models/ed3a0b6b1c0edf879ad9b11b1af5a0e6ab5db9205f891f668f8b0e6c6326e34e/base.pt)
+
+* 安装命令行工具**ffmpeg**:
+  * 在 Ubuntu 或 Debian 上:
+    `sudo apt update && sudo apt install ffmpeg`
+  * 在 Arch Linux 上:
+    `sudo pacman -S ffmpeg`
+
+* 安装requirements:
+  `pip3 install -r requirements.txt`
+
+## 数据集准备
+* librispeech_asr_dummy数据集[下载地址](https://huggingface.co/datasets/hf-internal-testing/librispeech_asr_dummy/tree/main),该数据集是 Hugging Face Datasets 库中提供的一个小型测试数据集,用于快速验证语音识别。下载下来后,把它放入当前文件夹内。
+* 文件列表`audio.mp3`是普通的语音文件,在warm up阶段使用,并可以直观测试。
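+
+下载后可以用下面的小脚本快速检查数据集能否被正常读取(仅为示意,假设已按requirements安装`datasets`库,且数据集放在上述默认路径):
+
+```python
+from datasets import load_dataset
+
+# 与infer.py中LibriSpeechDataset的加载方式一致
+dataset = load_dataset("./librispeech_asr_dummy/clean/", split="validation")
+print(len(dataset))        # 样本数量
+print(dataset[0]["text"])  # 第一条样本的参考文本
+```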
+
+## 文件目录结构
+文件目录结构大致如下:
+
+```text
+📁 whisper/
+├── audio.mp3
+├── infer.py
+├── rewrited_models.py
+├── whisper_decoding.patch
+├── base.pt
+├── README.md
+├── requirements.txt
+└── 📁 librispeech_asr_dummy/
+    └── 📁 clean/
+        └── 📄 validation-00000-of-00001.parquet
+```
+
+## 开始推理
+```SHELL
+# 1. 激活环境变量
+source /usr/local/Ascend/ascend-toolkit/set_env.sh # 具体路径根据你自己的情况修改
+# 2. 指定使用NPU ID,默认为0
+export ASCEND_RT_VISIBLE_DEVICES=0
+# 3. 开始推理
+python3 infer.py
+```
+infer.py推理参数:
+* --model_path:模型权重路径,默认为"base.pt"
+* --audio_path:音频文件的路径,默认为"audio.mp3"
+* --speech_path:librispeech_asr_dummy数据集文件的路径,默认为"./librispeech_asr_dummy/clean/"
+* --device:npu设备编号,默认为0
+* --batch_size:batch size大小,默认为1
+* --warmup:warm up次数,默认为4
+
+在推理开始后,首先会默认执行warm up,目的是完成首次编译;首次编译时间较长,warm up结束后才会执行推理操作,输出audio.mp3音频推理得到的文本。
+
+warm up结束之后,开始推理librispeech_asr_dummy数据集,推理过程中会打屏输出E2E性能,推理结束后会输出WER精度得分。整体流程可参考下面的示意代码。
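+
+infer.py的核心流程与原生whisper推理一致,可以用下面的骨架来理解(仅为示意,省略了本仓对NPU的torch.compile图编译与attention改造,所用函数均为openai-whisper的公开接口):
+
+```python
+import whisper
+
+model = whisper.load_model("base.pt")                         # 加载权重
+audio = whisper.pad_or_trim(whisper.load_audio("audio.mp3"))  # 读取音频并统一长度
+mel = whisper.log_mel_spectrogram(audio).to(model.device)     # 生成梅尔频谱
+# CPU/GPU上调试时建议fp16=False;本仓infer.py在NPU上使用fp16=True
+options = whisper.DecodingOptions(language='zh', without_timestamps=True, fp16=False)
+result = whisper.decode(model, mel, options)
+print(result.text)
+```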
+
+**如果你想在推理过程中打印encode和decode的耗时,可以执行以下命令:**
+```SHELL
+# 1. 找到当前的环境路径(简称${location}),Location后面的那一串就是当前环境路径
+pip show openai-whisper | grep Location
+# 2. 记录当前whisper库decoding.py的文件路径
+decoding_path=${location}/whisper/decoding.py
+# 3. 执行patch文件
+patch -p1 < whisper_decoding.patch
+# 可能会提示你
+# can't find file to patch at input line 3
+# ...
+# File to patch:
+# 这时候需要你手动指定文件路径,输入之前得到的${decoding_path}
+# 按回车,提示 patching file ${decoding_path} 即成功
+```
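+
+如果不方便修改whisper库文件,也可以在自己的脚本中临时给encoder包一层计时函数,效果与上述patch的打印类似(仅为示意,假设模型已按infer.py的方式加载为`wsp_model`):
+
+```python
+import time
+
+def with_timer(fn, name):
+    # 对任意forward包一层计时打印,思路与whisper_decoding.patch中的做法相同
+    def timed(*args, **kwargs):
+        start = time.time()
+        out = fn(*args, **kwargs)
+        print(f"{name} time = {(time.time() - start) * 1000:.2f} ms")
+        return out
+    return timed
+
+wsp_model.encoder.forward = with_timer(wsp_model.encoder.forward, "encode")
+```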
+
+## 性能数据
+在librispeech_asr_dummy/clean数据集上的性能如下:
+
+| 模型 | 芯片 | 平均encode | 平均decode | 平均E2E |
+|---------|------------|----------|-----------------|---------|
+| whisper | 800I A2 | 0.90ms | 3.25ms | 67.32ms |
+
+注:平均decode 指在decode阶段,生成单个token的平均耗时。
\ No newline at end of file
diff --git a/ACL_PyTorch/built-in/audio/whisper/audio.mp3 b/ACL_PyTorch/built-in/audio/whisper/audio.mp3
new file mode 100644
index 0000000000000000000000000000000000000000..aaa6dcbcd16ededa135fe797dc0dfde41550bdf2
GIT binary patch
literal 30291
(二进制数据省略)

literal 0
HcmV?d00001
diff --git a/ACL_PyTorch/built-in/audio/whisper/infer.py b/ACL_PyTorch/built-in/audio/whisper/infer.py
new file mode 100644
index 0000000000..81c3087fb4
--- /dev/null
+++ b/ACL_PyTorch/built-in/audio/whisper/infer.py
@@ -0,0 +1,307 @@
+import copy
+import os
+import time
+import math
+import jiwer
+import argparse
+import numpy as np
+import pandas as pd
+from tqdm import tqdm
+from datasets import load_dataset
+from typing import Optional
+
+import torch
+from torch import nn, Tensor
+import torch_npu
+import torchair as tng
+from torchair.configs.compiler_config import CompilerConfig
+
+import whisper
+from whisper.model import Linear
+from whisper.decoding import PyTorchInference, DecodingResult, DecodingTask
+from whisper.normalizers import EnglishTextNormalizer
+
+from rewrited_models import PrefillTextDecoder, DecodeTextDecoder
+
+
+class LibriSpeechDataset(torch.utils.data.Dataset):
+    def __init__(self, speech_path, device, audio_column="audio", text_column='text'):
+        self.dataset = load_dataset(speech_path, split="validation")
+        self.audio_column = audio_column
+        self.text_column = text_column
+        self.device = device
+
+    def __len__(self):
+        return len(self.dataset)
+
+    def __getitem__(self, idx):
+        # 自动解码音频 + 重采样到 16kHz
+        audio = self.dataset[idx]["audio"]["array"]  # 直接获取 NumPy 数组
+        audio = torch.from_numpy(audio).float()
+
+        # 统一长度 + 生成梅尔频谱
+        audio = whisper.pad_or_trim(audio)
+        mel = whisper.log_mel_spectrogram(audio)
+
+        return mel.contiguous().to(self.device), self.dataset[idx][self.text_column]
+
+
+def parse_args():
+    parser = argparse.ArgumentParser("Whisper infer")
+    parser.add_argument("--model_path", type=str, default="./base.pt", help="model checkpoint file path")
+    parser.add_argument("--audio_path", type=str, default="./audio.mp3",
+                        help="warmup audio file path")
+    parser.add_argument("--speech_path", type=str, default="./librispeech_asr_dummy/clean/",
+                        help="librispeech_asr_dummy english transcription speech data path")
+    parser.add_argument('--device', type=int, default=0, help="npu device id")
+    parser.add_argument('--batch_size', type=int, default=1, help="batch size")
+    parser.add_argument('--warmup', type=int, default=4, help="Warm up times")
+    args = parser.parse_args()
+    return args
+
+
+def create_model(args):
+    model = whisper.load_model(args.model_path)
+    print(
+        f"Model is {'multilingual' if
model.is_multilingual else 'English-only'} " + f"and has {sum(np.prod(p.shape) for p in model.parameters()):,} parameters." + ) + return model + + +def rewrite_encoder_conv(model): + conv1 = model.encoder.conv1 + conv2 = model.encoder.conv2 + model.encoder.conv1 = torch.nn.Conv1d(model.dims.n_mels, model.dims.n_audio_state, kernel_size=3, padding=1) + model.encoder.conv2 = torch.nn.Conv1d(model.dims.n_mels, model.dims.n_audio_state, kernel_size=3, stride=2, padding=1) + model.encoder.conv1.weight.data = conv1.weight.data.clone() + model.encoder.conv1.bias.data = conv1.bias.data.clone() + model.encoder.conv2.weight.data = conv2.weight.data.clone() + model.encoder.conv2.bias.data = conv2.bias.data.clone() + + +def rewrite_multi_head_attention_forward(model): + wk = model.key.weight + wv = model.value.weight + model.kv = Linear(in_features=wk.shape[0], out_features=wk.shape[1] + wv.shape[1]) + model.kv.weight = nn.Parameter(torch.concat([wk, wv], dim=0), requires_grad=False) + wk_bias = model.key.bias if model.key.bias is not None else torch.zeros(wk.shape[0]) + wv_bias = model.value.bias if model.value.bias is not None else torch.zeros(wv.shape[0]) + model.kv.bias = nn.Parameter(torch.concat([wk_bias, wv_bias], dim=0), requires_grad=False) + + def forward( + x: Tensor, + xa: Optional[Tensor] = None, + mask: Optional[Tensor] = None, + kv_cache: Optional[dict] = None, + actual_seq_len: Optional[list] = None, + ): + q = model.query(x) + + # encoder + if kv_cache is None: + kv = model.kv(x) + k, v = kv.chunk(2, dim=-1) + + # decoder - cross_attention + if kv_cache is not None and xa is not None: + k_key = "key" + v_key = "value" + if k_key in kv_cache: + k = kv_cache[k_key] + v = kv_cache[v_key] + else: + kv = model.kv(xa) + k, v = kv.chunk(2, dim=-1) + kv_cache[k_key] = k.contiguous() + kv_cache[v_key] = v.contiguous() + + # decoder - self_attention + if kv_cache is not None and xa is None: + k_key = "key" + v_key = "value" + if k_key in kv_cache: + k = kv_cache[k_key] + v = kv_cache[v_key] + new_kv = model.kv(x[:, -1:]) + new_k = new_kv[..., :wk.shape[0]] + new_v = new_kv[..., wk.shape[0]:] + kv_cache[k_key] = torch.cat([k.contiguous(), new_k.contiguous()], dim=1).detach() + kv_cache[v_key] = torch.cat([v.contiguous(), new_v.contiguous()], dim=1).detach() + k, v = kv_cache[k_key], kv_cache[v_key] + else: + kv = model.kv(x) + k, v = kv.chunk(2, dim=-1) + kv_cache[k_key] = k.contiguous() + kv_cache[v_key] = v.contiguous() + + n_batch, n_ctx, n_state = q.shape + q = q.view(*q.shape[:2], model.n_head, -1).permute(0, 2, 1, 3) + k = k.view(*k.shape[:2], model.n_head, -1).permute(0, 2, 1, 3) + v = v.view(*v.shape[:2], model.n_head, -1).permute(0, 2, 1, 3) + + mask = mask.to(torch.bool) if mask is not None and n_ctx > 1 else None + sparse_mode = 1 if mask is not None and n_ctx > 1 else 0 + D = n_state // model.n_head + + at = torch_npu.npu_prompt_flash_attention( + q.contiguous(), + k.contiguous(), + v.contiguous(), + num_heads=model.n_head, + input_layout="BNSD", + scale_value=1 / math.sqrt(D), + atten_mask=mask[:n_ctx, :n_ctx] if mask is not None else None, + sparse_mode=sparse_mode + ) + + qk = None + w_v = at.permute(0, 2, 1, 3).flatten(start_dim=2) + return model.out(w_v), qk + + model.forward = forward + + +def modify_model(model, options, args, device): + print("modify model...") + + rewrite_encoder_conv(model) + + # 修改encoder的attention forward + for block1, block2 in zip(model.encoder.blocks, model.decoder.blocks): + rewrite_multi_head_attention_forward(block1.attn) + 
rewrite_multi_head_attention_forward(block2.attn) + rewrite_multi_head_attention_forward(block2.cross_attn) + + origin_decoder = model.decoder + + prefill_decoder = PrefillTextDecoder( + model.dims.n_vocab, + model.dims.n_text_ctx, + model.dims.n_text_state, + model.dims.n_text_head, + model.dims.n_text_layer + ) + prefill_decoder.load_state_dict(origin_decoder.state_dict()) + + decode_decoder = DecodeTextDecoder( + model.dims.n_vocab, + model.dims.n_text_ctx, + model.dims.n_text_state, + model.dims.n_text_head, + model.dims.n_text_layer + ) + decode_decoder.load_state_dict(origin_decoder.state_dict()) + + model.prefill_decoder = prefill_decoder + model.decode_decoder = decode_decoder + + if options.fp16: + model = model.half() + for module in model.modules(): + # 在Whisper源码中,LayerNorm层需要接收fp32数据,因此需要特殊处理 + if isinstance(module, nn.LayerNorm): + module = module.float() + + return model.eval().to(device) + + +def rewrite_inference_logits(): + # _origin_logits = PyTorchInference.logits + + def _patched_logits(self, tokens, audio_features) -> Tensor: + if not self.kv_cache: + self.kv_cache, self.hooks = self.model.install_kv_cache_hooks() + self.kv_cache = [ + {'attn': {}, 'cross_attn': {}} for _ in range(6) + ] + return self.model.prefill_decoder(tokens, audio_features, kv_cache=self.kv_cache) + + actual_seq_len = tokens.shape[-1] + updated_kv_positions = torch.tensor([actual_seq_len-1], dtype=torch.long, device=tokens.device) + kv_padding_size = torch.tensor([448 - actual_seq_len], dtype=torch.long, device=tokens.device) + + offset = actual_seq_len - 1 + positional_embedding = self.model.decode_decoder.positional_embedding[offset: offset + 1] + tokens = tokens[:, -1:].contiguous().clone() + + torch._dynamo.mark_static(tokens) + torch._dynamo.mark_static(audio_features) + torch._dynamo.mark_static(positional_embedding) + for i in range(6): + torch._dynamo.mark_static(self.kv_cache[i]['attn']["key"]) + torch._dynamo.mark_static(self.kv_cache[i]['attn']["value"]) + torch._dynamo.mark_static(self.kv_cache[i]['cross_attn']["key"]) + torch._dynamo.mark_static(self.kv_cache[i]['cross_attn']["value"]) + torch._dynamo.mark_static(kv_padding_size) + + return self.model.decode_decoder(tokens, audio_features, positional_embedding, self.kv_cache, + actual_seq_len=[actual_seq_len], kv_padding_size=kv_padding_size, + updated_kv_positions=updated_kv_positions) + + PyTorchInference.logits = _patched_logits + + +def model_compile(): + print("torch.compile...") + wsp_model.encoder.forward = torch.compile(wsp_model.encoder.forward, dynamic=False, fullgraph=True, backend=npu_backend) + wsp_model.prefill_decoder.forward = torch.compile(wsp_model.prefill_decoder.forward, dynamic=False, fullgraph=True, backend=npu_backend) + wsp_model.decode_decoder.forward = torch.compile(wsp_model.decode_decoder.forward, dynamic=True, fullgraph=True, backend=npu_backend) + + +def libri_speech_infer(model, options, loader): + hypotheses = [] + references = [] + + for mels, texts in loader: + start_time = time.time() + results = model.decode(mels, options) + e2e_time = time.time() - start_time + print(f'Parquet infer E2E time = {e2e_time * 1000:.2f} ms') + hypotheses.extend([res.text for res in results]) + references.extend(texts) + + data = pd.DataFrame(dict(hypothesis=hypotheses, reference=references)) + print(data) + normalizer = EnglishTextNormalizer() + data["hypothesis_clean"] = [normalizer(text) for text in data["hypothesis"]] + data["reference_clean"] = [normalizer(text) for text in data["reference"]] + 
print(data[["hypothesis_clean", "reference_clean"]]) + wer = jiwer.wer(list(data["reference_clean"]), list(data["hypothesis_clean"])) + return wer + + +if __name__ == '__main__': + wsp_args = parse_args() + device = torch.device('npu:{}'.format(wsp_args.device)) + + torch_npu.npu.set_compile_mode(jit_compile=False) + config = CompilerConfig() + config.experimental_config.frozen_parameter = True + config.experimental_config.tiling_schedule_optimize = True # 使能tiling全下沉配置 + npu_backend = tng.get_npu_backend(compiler_config=config) + + dataset = LibriSpeechDataset(wsp_args.speech_path, device=device) + loader = torch.utils.data.DataLoader(dataset, batch_size=wsp_args.batch_size) + options = whisper.DecodingOptions(language='en', without_timestamps=True, fp16=True) + + wsp_model = create_model(wsp_args) + wsp_model = modify_model(wsp_model, options, wsp_args, device) + + rewrite_inference_logits() + model_compile() + + with torch.inference_mode(): + audio = whisper.load_audio(wsp_args.audio_path) + audio = whisper.pad_or_trim(audio) + audio_mel = whisper.log_mel_spectrogram(audio, n_mels=wsp_model.dims.n_mels).to(wsp_model.device) + audio_mel = audio_mel.unsqueeze(0).repeat(wsp_args.batch_size, 1, 1) + w_options = whisper.DecodingOptions(language='zh', without_timestamps=True, fp16=True) + for _step in range(wsp_args.warmup): + result = whisper.decode(wsp_model, audio_mel, w_options) + for bs in range(wsp_args.batch_size): + print("{}/{} - {}".format(_step, wsp_args.warmup, result[bs].text)) + + print("LibriSpeech infer, English to English TRANSCRIBE ...") + p_wer = libri_speech_infer(wsp_model, options, loader) + print(f"LibriSpeech infer WER score = {p_wer * 100:.2f} %") diff --git a/ACL_PyTorch/built-in/audio/whisper/requirements.txt b/ACL_PyTorch/built-in/audio/whisper/requirements.txt new file mode 100644 index 0000000000..606e975265 --- /dev/null +++ b/ACL_PyTorch/built-in/audio/whisper/requirements.txt @@ -0,0 +1,77 @@ +aiohappyeyeballs==2.6.1 +aiohttp==3.11.18 +aiosignal==1.3.2 +async-timeout==5.0.1 +attrs==25.3.0 +audioread==3.0.1 +certifi==2025.1.31 +cffi==1.17.1 +charset-normalizer==3.4.1 +click==8.1.8 +datasets==3.5.0 +decorator==5.2.1 +dill==0.3.8 +einops==0.8.1 +filelock==3.18.0 +frozenlist==1.6.0 +fsspec==2024.12.0 +greenlet==3.2.1 +huggingface-hub==0.30.2 +idna==3.10 +ijson==3.3.0 +Jinja2==3.1.6 +jiwer==3.1.0 +joblib==1.4.2 +lazy_loader==0.4 +librosa==0.11.0 +llvmlite==0.44.0 +MarkupSafe==3.0.2 +more-itertools==10.6.0 +mpmath==1.3.0 +msgpack==1.1.0 +msprof-analyze==2.0.2 +multidict==6.4.3 +multiprocess==0.70.16 +networkx==3.4.2 +numba==0.61.2 +numpy==1.24.0 +openai-whisper==20240930 +packaging==25.0 +pandas==2.2.3 +platformdirs==4.3.7 +pooch==1.8.2 +prettytable==3.16.0 +propcache==0.3.1 +protobuf==6.30.2 +psutil==7.0.0 +pyarrow==19.0.1 +pycparser==2.22 +python-dateutil==2.9.0.post0 +pytz==2025.2 +PyYAML==6.0.2 +RapidFuzz==3.13.0 +regex==2024.11.6 +requests==2.32.3 +safetensors==0.5.3 +scikit-learn==1.6.1 +scipy==1.15.2 +six==1.17.0 +soundfile==0.13.1 +soxr==0.5.0.post1 +SQLAlchemy==2.0.40 +sympy==1.13.1 +tabulate==0.9.0 +threadpoolctl==3.6.0 +tiktoken==0.9.0 +tokenizers==0.21.1 +torch==2.5.1 +torch-npu==2.5.1 +tqdm==4.67.1 +transformers==4.51.3 +typing_extensions==4.13.2 +tzdata==2025.2 +urllib3==1.26.20 +wcwidth==0.2.13 +XlsxWriter==3.2.3 +xxhash==3.5.0 +yarl==1.20.0 \ No newline at end of file diff --git a/ACL_PyTorch/built-in/audio/whisper/rewrited_models.py b/ACL_PyTorch/built-in/audio/whisper/rewrited_models.py new file mode 100644 index 0000000000..ddcf368ad7 --- 
/dev/null +++ b/ACL_PyTorch/built-in/audio/whisper/rewrited_models.py @@ -0,0 +1,270 @@ +import math +import numpy as np +import torch +import torch.nn as nn +from torch import Tensor +import torch_npu + +from whisper.model import Linear, LayerNorm, MultiHeadAttention, ResidualAttentionBlock + +from typing import Optional + + +class MyMultiHeadSelfAttention(nn.Module): + + def __init__(self, n_state: int, n_head: int): + super().__init__() + self.n_head = n_head + self.query = Linear(n_state, n_state) + self.key = Linear(n_state, n_state, bias=False) + self.value = Linear(n_state, n_state) + self.out = Linear(n_state, n_state) + + self.kv = Linear(in_features=self.key.weight.shape[0], out_features=self.key.weight.shape[1] + self.value.weight.shape[1]) + + def forward( + self, + x: Tensor, + mask: Optional[Tensor] = None, + kv_cache: Optional[dict] = None, + updated_kv_positions: Optional[torch.LongTensor] = None, + actual_seq_len: Optional[list] = None, + kv_padding_size: Optional[torch.LongTensor] = None + ): + q = self.query(x) + + n_batch, n_ctx, n_state = q.shape + max_sample_len = 448 + # decoder - self_attention + k_key = "key" + v_key = "value" + # Prefill + if k_key not in kv_cache: + kv_cache[k_key] = torch.zeros(n_batch, max_sample_len, n_state, dtype=x.dtype, device=x.device) + kv_cache[v_key] = torch.zeros(n_batch, max_sample_len, n_state, dtype=x.dtype, device=x.device) + kv = self.kv(x) + k, v = kv.chunk(2, dim=-1) + # tmp_ids = updated_kv_positions.reshape(-1) + # torch_npu.scatter_update_(kv_cache[k_key], tmp_ids, k, 1) + # torch_npu.scatter_update_(kv_cache[v_key], tmp_ids, v, 1) + kv_cache[k_key][:, :n_ctx, :] = k.detach().contiguous() + kv_cache[v_key][:, :n_ctx, :] = v.detach().contiguous() + # Decode + else: + new_kv = self.kv(x[:, -1:]) + new_k, new_v = new_kv.chunk(2, dim=-1) + # tmp_ids = updated_kv_positions.reshape(-1) + tmp_ids = updated_kv_positions.expand(n_batch) + torch_npu.scatter_update_(kv_cache[k_key], tmp_ids, new_k, 1) + torch_npu.scatter_update_(kv_cache[v_key], tmp_ids, new_v, 1) + + k = kv_cache[k_key] + v = kv_cache[v_key] + + q = q.view(*q.shape[:2], self.n_head, -1).permute(0, 2, 1, 3) + k = k.view(*k.shape[:2], self.n_head, -1).permute(0, 2, 1, 3) + v = v.view(*v.shape[:2], self.n_head, -1).permute(0, 2, 1, 3) + + D = n_state // self.n_head + # Prefill用FPA + if n_ctx > 1: + mask = mask.to(torch.bool) if mask is not None and n_ctx > 1 else None + sparse_mode = 1 if mask is not None and n_ctx > 1 else 0 + at = torch_npu.npu_prompt_flash_attention( + q.contiguous(), + k.contiguous(), + v.contiguous(), + num_heads=self.n_head, + input_layout="BNSD", + scale_value=1 / math.sqrt(D), + atten_mask=mask[:n_ctx, :n_ctx] if mask is not None else None, + sparse_mode=sparse_mode + ) + # Decode用IFA + else: + at = torch_npu.npu_incre_flash_attention( + q.contiguous(), + k.contiguous(), + v.contiguous(), + num_heads=self.n_head, + input_layout="BNSD", + scale_value=1 / math.sqrt(D), + atten_mask=None, + actual_seq_lengths=actual_seq_len, + kv_padding_size=kv_padding_size + ) + + qk = None + w_v = at.permute(0, 2, 1, 3).flatten(start_dim=2) + return self.out(w_v), qk + + +class MyMultiHeadCrossAttention(nn.Module): + def __init__(self, n_state: int, n_head: int): + super().__init__() + self.n_head = n_head + self.query = Linear(n_state, n_state) + self.key = Linear(n_state, n_state, bias=False) + self.value = Linear(n_state, n_state) + self.out = Linear(n_state, n_state) + + self.kv = Linear(in_features=self.key.weight.shape[0], + 
out_features=self.key.weight.shape[1] + self.value.weight.shape[1]) + + def forward( + self, + x: Tensor, + xa: Optional[Tensor] = None, + mask: Optional[Tensor] = None, + kv_cache: Optional[dict] = None, + ): + q = self.query(x) + + # decoder - cross_attention + k_key = "key" + v_key = "value" + if k_key in kv_cache: + k = kv_cache[k_key] + v = kv_cache[v_key] + else: + kv = self.kv(xa) + k, v = kv.chunk(2, dim=-1) + kv_cache[k_key] = k.contiguous() + kv_cache[v_key] = v.contiguous() + + n_batch, n_ctx, n_state = q.shape + q = q.view(*q.shape[:2], self.n_head, -1).permute(0, 2, 1, 3) + k = k.view(*k.shape[:2], self.n_head, -1).permute(0, 2, 1, 3) + v = v.view(*v.shape[:2], self.n_head, -1).permute(0, 2, 1, 3) + + mask = mask.to(torch.bool) if mask is not None and n_ctx > 1 else None + sparse_mode = 1 if mask is not None and n_ctx > 1 else 0 + D = n_state // self.n_head + at = torch_npu.npu_prompt_flash_attention( + q.contiguous(), + k.contiguous(), + v.contiguous(), + num_heads=self.n_head, + input_layout="BNSD", + scale_value=1 / math.sqrt(D), + atten_mask=mask[:n_ctx, :n_ctx] if mask is not None else None, + sparse_mode=sparse_mode + ) + + qk = None + w_v = at.permute(0, 2, 1, 3).flatten(start_dim=2) + return self.out(w_v), qk + + +class MyResidualAttentionBlock(nn.Module): + def __init__(self, n_state: int, n_head: int, cross_attention: bool = False): + super().__init__() + + self.attn = MyMultiHeadSelfAttention(n_state, n_head) + self.attn_ln = LayerNorm(n_state) + + self.cross_attn = ( + MyMultiHeadCrossAttention(n_state, n_head) if cross_attention else None + ) + self.cross_attn_ln = LayerNorm(n_state) if cross_attention else None + + n_mlp = n_state * 4 + self.mlp = nn.Sequential( + Linear(n_state, n_mlp), nn.GELU(), Linear(n_mlp, n_state) + ) + self.mlp_ln = LayerNorm(n_state) + + def forward( + self, + x: Tensor, + xa: Optional[Tensor] = None, + mask: Optional[Tensor] = None, + kv_cache: Optional[dict] = None, + actual_seq_len: Optional[list] = None, + kv_padding_size: Optional[torch.LongTensor] = None, + updated_kv_positions: Optional[torch.LongTensor] = None + ): + x = x + self.attn(self.attn_ln(x), mask=mask, kv_cache=kv_cache['attn'], + actual_seq_len=actual_seq_len, kv_padding_size=kv_padding_size, + updated_kv_positions=updated_kv_positions)[0] + # if self.cross_attn: + x = x + self.cross_attn(self.cross_attn_ln(x), xa, kv_cache=kv_cache['cross_attn'])[0] + x = x + self.mlp(self.mlp_ln(x)) + return x + + +class PrefillTextDecoder(nn.Module): + def __init__(self, n_vocab: int, n_ctx: int, n_state: int, n_head: int, n_layer: int): + super().__init__() + + self.token_embedding = nn.Embedding(n_vocab, n_state) + self.positional_embedding = nn.Parameter(torch.empty(n_ctx, n_state)) + + self.blocks = nn.ModuleList( + [ + MyResidualAttentionBlock(n_state, n_head, cross_attention=True) + for _ in range(n_layer) + ] + ) + self.ln = LayerNorm(n_state) + + mask = torch.empty(n_ctx, n_ctx).fill_(-np.inf).triu_(1) + self.register_buffer("mask", mask, persistent=False) + + def forward(self, x: Tensor, xa: Tensor, kv_cache: Optional[dict] = None, + updated_kv_positions: Optional[torch.LongTensor] = None): + offset = 0 + x = ( + self.token_embedding(x) + + self.positional_embedding[offset: offset + x.shape[-1]] + ) + x = x.to(xa.dtype) + + for layer_index, block in enumerate(self.blocks): + x = block(x, xa, mask=self.mask, kv_cache=kv_cache[layer_index], + updated_kv_positions=updated_kv_positions) + + x = self.ln(x) + logits = ( + x @ 
torch.transpose(self.token_embedding.weight.to(x.dtype), 0, 1) + ).float() + + return logits + + +class DecodeTextDecoder(nn.Module): + def __init__(self, n_vocab: int, n_ctx: int, n_state: int, n_head: int, n_layer: int): + super().__init__() + + self.token_embedding = nn.Embedding(n_vocab, n_state) + self.positional_embedding = nn.Parameter(torch.empty(n_ctx, n_state)) + + self.blocks = nn.ModuleList( + [ + MyResidualAttentionBlock(n_state, n_head, cross_attention=True) + for _ in range(n_layer) + ] + ) + self.ln = LayerNorm(n_state) + + mask = torch.empty(n_ctx, n_ctx).fill_(-np.inf).triu_(1) + self.register_buffer("mask", mask, persistent=False) + + def forward(self, x: Tensor, xa: Tensor, positional_embedding, kv_cache: Optional[dict] = None, + actual_seq_len: Optional[list] = None, + kv_padding_size: Optional[torch.LongTensor] = None, + updated_kv_positions: Optional[torch.LongTensor] = None): + x = (self.token_embedding(x) + positional_embedding) + x = x.to(xa.dtype) + + for layer_index, block in enumerate(self.blocks): + x = block(x, xa, mask=self.mask, kv_cache=kv_cache[layer_index], actual_seq_len=actual_seq_len, + kv_padding_size=kv_padding_size, + updated_kv_positions=updated_kv_positions) + + x = self.ln(x) + logits = ( + x @ torch.transpose(self.token_embedding.weight.to(x.dtype), 0, 1) + ).float() + + return logits diff --git a/ACL_PyTorch/built-in/audio/whisper/whisper_decoding.patch b/ACL_PyTorch/built-in/audio/whisper/whisper_decoding.patch new file mode 100644 index 0000000000..871e972c2f --- /dev/null +++ b/ACL_PyTorch/built-in/audio/whisper/whisper_decoding.patch @@ -0,0 +1,34 @@ ++++ decoding.py +@@ -652,7 +652,10 @@ + # encoded audio features are given; skip audio encoding + audio_features = mel + else: ++ import time ++ time1 = time.time() + audio_features = self.model.encoder(mel) ++ print(f"encode time = {(time.time() - time1) * 1000:.2f} ms") + + if audio_features.dtype != ( + torch.float16 if self.options.fp16 else torch.float32 +@@ -683,6 +686,8 @@ + no_speech_probs = [np.nan] * n_batch + + try: ++ import time ++ time1 = time.time() + for i in range(self.sample_len): + logits = self.inference.logits(tokens, audio_features) + +@@ -703,6 +708,8 @@ + tokens, completed = self.decoder.update(tokens, logits, sum_logprobs) + + if completed or tokens.shape[-1] > self.n_ctx: ++ avg_time = (time.time() - time1) / i * 1000 ++ print(f"avg decode time = {avg_time:.2f} ms") + break + finally: + self.inference.cleanup_caching() +@@ -824,3 +831,4 @@ + result = DecodingTask(model, options).run(mel) + + return result[0] if single else result -- Gitee From c0862a6ac3603fbcedf516933c095de0ac03bf9b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E8=B5=B5=E6=B1=9F=E6=B1=9F?= Date: Mon, 9 Jun 2025 17:37:44 +0800 Subject: [PATCH 2/7] =?UTF-8?q?=E5=88=A0=E9=99=A4=E6=B3=A8=E9=87=8A?= =?UTF-8?q?=E4=BB=A3=E7=A0=81?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ACL_PyTorch/built-in/audio/whisper/rewrited_models.py | 3 --- 1 file changed, 3 deletions(-) diff --git a/ACL_PyTorch/built-in/audio/whisper/rewrited_models.py b/ACL_PyTorch/built-in/audio/whisper/rewrited_models.py index ddcf368ad7..092cc85a60 100644 --- a/ACL_PyTorch/built-in/audio/whisper/rewrited_models.py +++ b/ACL_PyTorch/built-in/audio/whisper/rewrited_models.py @@ -44,9 +44,6 @@ class MyMultiHeadSelfAttention(nn.Module): kv_cache[v_key] = torch.zeros(n_batch, max_sample_len, n_state, dtype=x.dtype, device=x.device) kv = self.kv(x) k, v = kv.chunk(2, dim=-1) - # tmp_ids = 
updated_kv_positions.reshape(-1) - # torch_npu.scatter_update_(kv_cache[k_key], tmp_ids, k, 1) - # torch_npu.scatter_update_(kv_cache[v_key], tmp_ids, v, 1) kv_cache[k_key][:, :n_ctx, :] = k.detach().contiguous() kv_cache[v_key][:, :n_ctx, :] = v.detach().contiguous() # Decode -- Gitee From e59ffc5b6d1dde43577abf171f73e28a2297919b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E8=B5=B5=E6=B1=9F=E6=B1=9F?= Date: Mon, 9 Jun 2025 18:43:52 +0800 Subject: [PATCH 3/7] fix: clean code --- ACL_PyTorch/built-in/audio/whisper/infer.py | 18 +++++++-------- .../built-in/audio/whisper/rewrited_models.py | 23 +++++++++++-------- 2 files changed, 23 insertions(+), 18 deletions(-) diff --git a/ACL_PyTorch/built-in/audio/whisper/infer.py b/ACL_PyTorch/built-in/audio/whisper/infer.py index 81c3087fb4..db812354af 100644 --- a/ACL_PyTorch/built-in/audio/whisper/infer.py +++ b/ACL_PyTorch/built-in/audio/whisper/infer.py @@ -1,14 +1,12 @@ -import copy -import os import time import math -import jiwer import argparse +from typing import Optional + +import jiwer import numpy as np import pandas as pd -from tqdm import tqdm from datasets import load_dataset -from typing import Optional import torch from torch import nn, Tensor @@ -207,18 +205,20 @@ def modify_model(model, options, args, device): def rewrite_inference_logits(): - # _origin_logits = PyTorchInference.logits - def _patched_logits(self, tokens, audio_features) -> Tensor: if not self.kv_cache: self.kv_cache, self.hooks = self.model.install_kv_cache_hooks() self.kv_cache = [ - {'attn': {}, 'cross_attn': {}} for _ in range(6) + { + 'attn': {}, + 'cross_attn': {} + } + for _ in range(6) ] return self.model.prefill_decoder(tokens, audio_features, kv_cache=self.kv_cache) actual_seq_len = tokens.shape[-1] - updated_kv_positions = torch.tensor([actual_seq_len-1], dtype=torch.long, device=tokens.device) + updated_kv_positions = torch.tensor([actual_seq_len - 1], dtype=torch.long, device=tokens.device) kv_padding_size = torch.tensor([448 - actual_seq_len], dtype=torch.long, device=tokens.device) offset = actual_seq_len - 1 diff --git a/ACL_PyTorch/built-in/audio/whisper/rewrited_models.py b/ACL_PyTorch/built-in/audio/whisper/rewrited_models.py index 092cc85a60..4633acde17 100644 --- a/ACL_PyTorch/built-in/audio/whisper/rewrited_models.py +++ b/ACL_PyTorch/built-in/audio/whisper/rewrited_models.py @@ -1,4 +1,6 @@ import math +from typing import Optional + import numpy as np import torch import torch.nn as nn @@ -7,8 +9,6 @@ import torch_npu from whisper.model import Linear, LayerNorm, MultiHeadAttention, ResidualAttentionBlock -from typing import Optional - class MyMultiHeadSelfAttention(nn.Module): @@ -50,7 +50,6 @@ class MyMultiHeadSelfAttention(nn.Module): else: new_kv = self.kv(x[:, -1:]) new_k, new_v = new_kv.chunk(2, dim=-1) - # tmp_ids = updated_kv_positions.reshape(-1) tmp_ids = updated_kv_positions.expand(n_batch) torch_npu.scatter_update_(kv_cache[k_key], tmp_ids, new_k, 1) torch_npu.scatter_update_(kv_cache[v_key], tmp_ids, new_v, 1) @@ -177,9 +176,9 @@ class MyResidualAttentionBlock(nn.Module): xa: Optional[Tensor] = None, mask: Optional[Tensor] = None, kv_cache: Optional[dict] = None, + updated_kv_positions: Optional[torch.LongTensor] = None, actual_seq_len: Optional[list] = None, - kv_padding_size: Optional[torch.LongTensor] = None, - updated_kv_positions: Optional[torch.LongTensor] = None + kv_padding_size: Optional[torch.LongTensor] = None ): x = x + self.attn(self.attn_ln(x), mask=mask, kv_cache=kv_cache['attn'], actual_seq_len=actual_seq_len, 
kv_padding_size=kv_padding_size, @@ -247,10 +246,16 @@ class DecodeTextDecoder(nn.Module): mask = torch.empty(n_ctx, n_ctx).fill_(-np.inf).triu_(1) self.register_buffer("mask", mask, persistent=False) - def forward(self, x: Tensor, xa: Tensor, positional_embedding, kv_cache: Optional[dict] = None, - actual_seq_len: Optional[list] = None, - kv_padding_size: Optional[torch.LongTensor] = None, - updated_kv_positions: Optional[torch.LongTensor] = None): + def forward( + self, + x: Tensor, + xa: Tensor, + positional_embedding, + kv_cache: Optional[dict] = None, + updated_kv_positions: Optional[torch.LongTensor] = None, + actual_seq_len: Optional[list] = None, + kv_padding_size: Optional[torch.LongTensor] = None + ): x = (self.token_embedding(x) + positional_embedding) x = x.to(xa.dtype) -- Gitee From 41e77deeb3f152e2406e8cd87e3f4f2be29c9de2 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E8=B5=B5=E6=B1=9F=E6=B1=9F?= Date: Tue, 10 Jun 2025 10:08:34 +0800 Subject: [PATCH 4/7] =?UTF-8?q?fix:=20=E6=A0=B9=E6=8D=AE=E6=A3=80=E8=A7=86?= =?UTF-8?q?=E6=84=8F=E8=A7=81=E4=BF=AE=E6=94=B9?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ACL_PyTorch/built-in/audio/whisper/README.md | 16 ++++++------ ACL_PyTorch/built-in/audio/whisper/infer.py | 25 ++++++++----------- .../built-in/audio/whisper/rewrited_models.py | 10 ++++++++ 3 files changed, 29 insertions(+), 22 deletions(-) diff --git a/ACL_PyTorch/built-in/audio/whisper/README.md b/ACL_PyTorch/built-in/audio/whisper/README.md index e61463fd24..af364169c2 100644 --- a/ACL_PyTorch/built-in/audio/whisper/README.md +++ b/ACL_PyTorch/built-in/audio/whisper/README.md @@ -16,14 +16,14 @@ Whisper 是 OpenAI 开源的通用语音识别模型,支持多语言转录和 - 该模型需要以下插件与驱动 - | 配套 | 版本 | 环境准备指导 | - | ------------------------------------------------------------ | ------ | ------------------------------------------------------------ | - | 固件与驱动 | 24.0.RC3 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) | - | CANN | 8.0.RC3 | 包含kernels包和toolkit包 | - | Python | 3.8 | - | - | PyTorch | 2.4.0 | - | - | Ascend Extension PyTorch | 2.4.0.post2 | - | - | 说明:Atlas 800I A2 推理卡和Atlas 300I DUO 推理卡请以CANN版本选择实际固件与驱动版本。 | \ | \ | + | 配套 | 版本 | 环境准备指导 | + | ------------------------------------------------------------ |-------------| ------------------------------------------------------------ | + | 固件与驱动 | 25.0.RC1 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) | + | CANN | 8.1.RC1 | 包含kernels包和toolkit包 | + | Python | 3.10 | - | + | PyTorch | 2.5.1 | - | + | Ascend Extension PyTorch | 2.5.1 | - | + | 说明:Atlas 800I A2 推理卡和Atlas 300I DUO 推理卡请以CANN版本选择实际固件与驱动版本。 | \ | \ | ## 获取本仓源码 diff --git a/ACL_PyTorch/built-in/audio/whisper/infer.py b/ACL_PyTorch/built-in/audio/whisper/infer.py index db812354af..ba5da6fa13 100644 --- a/ACL_PyTorch/built-in/audio/whisper/infer.py +++ b/ACL_PyTorch/built-in/audio/whisper/infer.py @@ -1,3 +1,13 @@ +# Copyright (c) 2025 Huawei Technologies Co., Ltd +# [Software Name] is licensed under Mulan PSL v2. +# You can use this software according to the terms and conditions of the Mulan PSL v2. +# You may obtain a copy of Mulan PSL v2 at: +# http://license.coscl.org.cn/MulanPSL2 +# THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, +# EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, +# MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. +# See the Mulan PSL v2 for more details. 
+ import time import math import argparse @@ -67,17 +77,6 @@ def create_model(args): return model -def rewrite_encoder_conv(model): - conv1 = model.encoder.conv1 - conv2 = model.encoder.conv2 - model.encoder.conv1 = torch.nn.Conv1d(model.dims.n_mels, model.dims.n_audio_state, kernel_size=3, padding=1) - model.encoder.conv2 = torch.nn.Conv1d(model.dims.n_mels, model.dims.n_audio_state, kernel_size=3, stride=2, padding=1) - model.encoder.conv1.weight.data = conv1.weight.data.clone() - model.encoder.conv1.bias.data = conv1.bias.data.clone() - model.encoder.conv2.weight.data = conv2.weight.data.clone() - model.encoder.conv2.bias.data = conv2.bias.data.clone() - - def rewrite_multi_head_attention_forward(model): wk = model.key.weight wv = model.value.weight @@ -163,16 +162,14 @@ def rewrite_multi_head_attention_forward(model): def modify_model(model, options, args, device): print("modify model...") - rewrite_encoder_conv(model) - # 修改encoder的attention forward for block1, block2 in zip(model.encoder.blocks, model.decoder.blocks): rewrite_multi_head_attention_forward(block1.attn) rewrite_multi_head_attention_forward(block2.attn) rewrite_multi_head_attention_forward(block2.cross_attn) - origin_decoder = model.decoder + # 将原本的decoder拆分成prefill和decode2个阶段 prefill_decoder = PrefillTextDecoder( model.dims.n_vocab, model.dims.n_text_ctx, diff --git a/ACL_PyTorch/built-in/audio/whisper/rewrited_models.py b/ACL_PyTorch/built-in/audio/whisper/rewrited_models.py index 4633acde17..f542dc7b12 100644 --- a/ACL_PyTorch/built-in/audio/whisper/rewrited_models.py +++ b/ACL_PyTorch/built-in/audio/whisper/rewrited_models.py @@ -1,3 +1,13 @@ +# Copyright (c) 2025 Huawei Technologies Co., Ltd +# [Software Name] is licensed under Mulan PSL v2. +# You can use this software according to the terms and conditions of the Mulan PSL v2. +# You may obtain a copy of Mulan PSL v2 at: +# http://license.coscl.org.cn/MulanPSL2 +# THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, +# EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, +# MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. +# See the Mulan PSL v2 for more details. 
+ import math from typing import Optional -- Gitee From df784d29a40d3b6b2e864200e461b80455007f67 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E8=B5=B5=E6=B1=9F=E6=B1=9F?= Date: Tue, 10 Jun 2025 11:51:54 +0800 Subject: [PATCH 5/7] =?UTF-8?q?fix:=20=E6=A0=B9=E6=8D=AE=E6=A3=80=E8=A7=86?= =?UTF-8?q?=E6=84=8F=E8=A7=81=E4=BF=AE=E6=94=B9?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../built-in/audio/whisper/rewrited_models.py | 86 +++++++++++-------- 1 file changed, 49 insertions(+), 37 deletions(-) diff --git a/ACL_PyTorch/built-in/audio/whisper/rewrited_models.py b/ACL_PyTorch/built-in/audio/whisper/rewrited_models.py index f542dc7b12..4d4b485a85 100644 --- a/ACL_PyTorch/built-in/audio/whisper/rewrited_models.py +++ b/ACL_PyTorch/built-in/audio/whisper/rewrited_models.py @@ -22,7 +22,7 @@ from whisper.model import Linear, LayerNorm, MultiHeadAttention, ResidualAttenti class MyMultiHeadSelfAttention(nn.Module): - def __init__(self, n_state: int, n_head: int): + def __init__(self, n_state: int, n_head: int, n_ctx: int): super().__init__() self.n_head = n_head self.query = Linear(n_state, n_state) @@ -31,6 +31,7 @@ class MyMultiHeadSelfAttention(nn.Module): self.out = Linear(n_state, n_state) self.kv = Linear(in_features=self.key.weight.shape[0], out_features=self.key.weight.shape[1] + self.value.weight.shape[1]) + self.n_ctx = n_ctx def forward( self, @@ -44,7 +45,7 @@ class MyMultiHeadSelfAttention(nn.Module): q = self.query(x) n_batch, n_ctx, n_state = q.shape - max_sample_len = 448 + max_sample_len = self.n_ctx # decoder - self_attention k_key = "key" v_key = "value" @@ -76,7 +77,7 @@ class MyMultiHeadSelfAttention(nn.Module): if n_ctx > 1: mask = mask.to(torch.bool) if mask is not None and n_ctx > 1 else None sparse_mode = 1 if mask is not None and n_ctx > 1 else 0 - at = torch_npu.npu_prompt_flash_attention( + attn = torch_npu.npu_prompt_flash_attention( q.contiguous(), k.contiguous(), v.contiguous(), @@ -88,7 +89,7 @@ class MyMultiHeadSelfAttention(nn.Module): ) # Decode用IFA else: - at = torch_npu.npu_incre_flash_attention( + attn = torch_npu.npu_incre_flash_attention( q.contiguous(), k.contiguous(), v.contiguous(), @@ -100,9 +101,8 @@ class MyMultiHeadSelfAttention(nn.Module): kv_padding_size=kv_padding_size ) - qk = None - w_v = at.permute(0, 2, 1, 3).flatten(start_dim=2) - return self.out(w_v), qk + w_v = attn.permute(0, 2, 1, 3).flatten(start_dim=2) + return self.out(w_v) class MyMultiHeadCrossAttention(nn.Module): @@ -146,7 +146,7 @@ class MyMultiHeadCrossAttention(nn.Module): mask = mask.to(torch.bool) if mask is not None and n_ctx > 1 else None sparse_mode = 1 if mask is not None and n_ctx > 1 else 0 D = n_state // self.n_head - at = torch_npu.npu_prompt_flash_attention( + attn = torch_npu.npu_prompt_flash_attention( q.contiguous(), k.contiguous(), v.contiguous(), @@ -157,16 +157,15 @@ class MyMultiHeadCrossAttention(nn.Module): sparse_mode=sparse_mode ) - qk = None - w_v = at.permute(0, 2, 1, 3).flatten(start_dim=2) - return self.out(w_v), qk + w_v = attn.permute(0, 2, 1, 3).flatten(start_dim=2) + return self.out(w_v) class MyResidualAttentionBlock(nn.Module): - def __init__(self, n_state: int, n_head: int, cross_attention: bool = False): + def __init__(self, n_state: int, n_head: int, n_ctx: int, cross_attention: bool = False): super().__init__() - self.attn = MyMultiHeadSelfAttention(n_state, n_head) + self.attn = MyMultiHeadSelfAttention(n_state, n_head, n_ctx) self.attn_ln = LayerNorm(n_state) self.cross_attn = ( @@ -190,16 
@@ -190,16 +189,18 @@ class MyResidualAttentionBlock(nn.Module):
         actual_seq_len: Optional[list] = None,
         kv_padding_size: Optional[torch.LongTensor] = None
     ):
-        x = x + self.attn(self.attn_ln(x), mask=mask, kv_cache=kv_cache['attn'],
-                          actual_seq_len=actual_seq_len, kv_padding_size=kv_padding_size,
+        x = x + self.attn(self.attn_ln(x),
+                          mask=mask,
+                          kv_cache=kv_cache['attn'],
+                          actual_seq_len=actual_seq_len,
+                          kv_padding_size=kv_padding_size,
                           updated_kv_positions=updated_kv_positions)[0]
-        # if self.cross_attn:
         x = x + self.cross_attn(self.cross_attn_ln(x), xa, kv_cache=kv_cache['cross_attn'])[0]
         x = x + self.mlp(self.mlp_ln(x))
         return x
 
 
-class PrefillTextDecoder(nn.Module):
+class MyTextDecoder(nn.Module):
     def __init__(self, n_vocab: int, n_ctx: int, n_state: int, n_head: int, n_layer: int):
         super().__init__()
 
@@ -208,7 +209,7 @@ class PrefillTextDecoder(nn.Module):
 
         self.blocks = nn.ModuleList(
             [
-                MyResidualAttentionBlock(n_state, n_head, cross_attention=True)
+                MyResidualAttentionBlock(n_state, n_head, n_ctx, cross_attention=True)
                 for _ in range(n_layer)
             ]
         )
@@ -217,8 +218,33 @@ class PrefillTextDecoder(nn.Module):
         mask = torch.empty(n_ctx, n_ctx).fill_(-np.inf).triu_(1)
         self.register_buffer("mask", mask, persistent=False)
 
-    def forward(self, x: Tensor, xa: Tensor, kv_cache: Optional[dict] = None,
-                updated_kv_positions: Optional[torch.LongTensor] = None):
+    def forward(
+        self,
+        x: Tensor,
+        xa: Tensor,
+        positional_embedding: Tensor = None,
+        kv_cache: Optional[dict] = None,
+        updated_kv_positions: Optional[torch.LongTensor] = None,
+        actual_seq_len: Optional[list] = None,
+        kv_padding_size: Optional[torch.LongTensor] = None
+    ):
+        pass
+
+
+class PrefillTextDecoder(MyTextDecoder):
+    def __init__(self, n_vocab: int, n_ctx: int, n_state: int, n_head: int, n_layer: int):
+        super().__init__(n_vocab, n_ctx, n_state, n_head, n_layer)
+
+    def forward(
+        self,
+        x: Tensor,
+        xa: Tensor,
+        positional_embedding: Tensor = None,
+        kv_cache: Optional[dict] = None,
+        updated_kv_positions: Optional[torch.LongTensor] = None,
+        actual_seq_len: Optional[list] = None,
+        kv_padding_size: Optional[torch.LongTensor] = None
+    ):
         offset = 0
         x = (
             self.token_embedding(x)
@@ -238,29 +264,15 @@ class PrefillTextDecoder(nn.Module):
         return logits
 
 
-class DecodeTextDecoder(nn.Module):
+class DecodeTextDecoder(MyTextDecoder):
     def __init__(self, n_vocab: int, n_ctx: int, n_state: int, n_head: int, n_layer: int):
-        super().__init__()
-
-        self.token_embedding = nn.Embedding(n_vocab, n_state)
-        self.positional_embedding = nn.Parameter(torch.empty(n_ctx, n_state))
-
-        self.blocks = nn.ModuleList(
-            [
-                MyResidualAttentionBlock(n_state, n_head, cross_attention=True)
-                for _ in range(n_layer)
-            ]
-        )
-        self.ln = LayerNorm(n_state)
-
-        mask = torch.empty(n_ctx, n_ctx).fill_(-np.inf).triu_(1)
-        self.register_buffer("mask", mask, persistent=False)
+        super().__init__(n_vocab, n_ctx, n_state, n_head, n_layer)
 
     def forward(
         self,
         x: Tensor,
         xa: Tensor,
-        positional_embedding,
+        positional_embedding: Tensor,
         kv_cache: Optional[dict] = None,
        updated_kv_positions: Optional[torch.LongTensor] = None,
         actual_seq_len: Optional[list] = None,
-- 
Gitee
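The heart of this patch is the pair of fused attention paths it keeps from the previous revision: `torch_npu.npu_prompt_flash_attention` (PFA) covers the multi-token prefill step, while `torch_npu.npu_incre_flash_attention` (IFA) covers each single-token decode step against a fixed-size KV cache, now sized by `n_ctx` rather than the hard-coded 448. As a rough plain-PyTorch reference for what the two paths compute (an illustrative stand-in for the fused NPU ops, assuming a `(batch, heads, seq, head_dim)` layout):

```python
import math

import torch


def prefill_attention(q, k, v):
    # q, k, v: (batch, heads, n_ctx, head_dim); causal mask over the prompt.
    n_ctx = q.shape[-2]
    mask = torch.full((n_ctx, n_ctx), float("-inf")).triu_(1)  # 0 on/below diag
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.shape[-1]) + mask
    return scores.softmax(dim=-1) @ v


def decode_attention(q, k_cache, v_cache, actual_seq_len):
    # q: (batch, heads, 1, head_dim); the cache holds max_sample_len slots,
    # of which only the first actual_seq_len are valid at this step.
    k = k_cache[..., :actual_seq_len, :]
    v = v_cache[..., :actual_seq_len, :]
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.shape[-1])
    return scores.softmax(dim=-1) @ v
```

Padding the cache to a fixed `max_sample_len` is what lets IFA run with static shapes; the `actual_seq_len` and `kv_padding_size` arguments in the hunks above tell the fused op how much of the cache holds real data.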
From af405f43b342a68249f3101c8933bd5b9fef96b1 Mon Sep 17 00:00:00 2001
From: 赵江江
Date: Tue, 10 Jun 2025 12:07:34 +0800
Subject: [PATCH 6/7] fix: remove the audio.mp3 file
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ACL_PyTorch/built-in/audio/whisper/README.md |   5 ++++-
 ACL_PyTorch/built-in/audio/whisper/audio.mp3 | Bin 30291 -> 0 bytes
 2 files changed, 4 insertions(+), 1 deletion(-)
 delete mode 100644 ACL_PyTorch/built-in/audio/whisper/audio.mp3

diff --git a/ACL_PyTorch/built-in/audio/whisper/README.md b/ACL_PyTorch/built-in/audio/whisper/README.md
index af364169c2..9edf41cb0e 100644
--- a/ACL_PyTorch/built-in/audio/whisper/README.md
+++ b/ACL_PyTorch/built-in/audio/whisper/README.md
@@ -52,7 +52,10 @@ cd ModelZoo-PyTorch/ACL_PyTorch/built-in/audio/whisper/
 
 ## 数据集准备
 * librispeech_asr_dummy数据集[下载地址](https://huggingface.co/datasets/hf-internal-testing/librispeech_asr_dummy/tree/main),该数据集是 Hugging Face Datasets 库中提供的一个小型测试数据集,用于快速验证语音识别。下载下来后,把它放入当前文件夹内。
-* 文件列表`audio.mp3`是普通的语音文件,在warm up阶段使用,并可以直观测试。
+* `audio.mp3`是普通的语音文件,在warm up阶段使用,并可以直观测试,可以通过以下链接获取。(你也可以自己找一个中文语音.mp3/wav文件,放入目录中)
+  ```TEXT
+  https://pan.baidu.com/s/1fHL0fWbGgKXQ9W1GXA2RBQ?pwd=xe2x 提取码: xe2x
+  ```
 
 ## 文件目录结构
 文件目录结构大致如下:
diff --git a/ACL_PyTorch/built-in/audio/whisper/audio.mp3 b/ACL_PyTorch/built-in/audio/whisper/audio.mp3
deleted file mode 100644
index aaa6dcbcd16ededa135fe797dc0dfde41550bdf2..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001

literal 30291
[30291 bytes of base85-encoded MP3 data omitted]