Preface
作為當(dāng)前先進(jìn)的深度學(xué)習(xí)目標(biāo)檢測(cè)算法YOLOv8,已經(jīng)集合了大量的trick,但是還是有提高和改進(jìn)的空間,針對(duì)具體應(yīng)用場(chǎng)景下的檢測(cè)難點(diǎn),可以不同的改進(jìn)方法。此后的系列文章,將重點(diǎn)對(duì)YOLOv8的如何改進(jìn)進(jìn)行詳細(xì)的介紹,目的是為了給那些搞科研的同學(xué)需要?jiǎng)?chuàng)新點(diǎn)或者搞工程項(xiàng)目的朋友需要達(dá)到更好的效果提供自己的微薄幫助和參考。由于出到Y(jié)OLOv8,YOLOv7、YOLOv5算法2020年至今已經(jīng)涌現(xiàn)出大量改進(jìn)論文,這個(gè)不論對(duì)于搞科研的同學(xué)或者已經(jīng)工作的朋友來(lái)說(shuō),研究的價(jià)值和新穎度都不太夠了,為與時(shí)俱進(jìn),以后改進(jìn)算法以YOLOv7為基礎(chǔ),此前YOLOv5改進(jìn)方法在YOLOv7同樣適用,所以繼續(xù)YOLOv5系列改進(jìn)的序號(hào)。另外改進(jìn)方法在YOLOv5等其他算法同樣可以適用進(jìn)行改進(jìn)。希望能夠?qū)Υ蠹矣袔椭?/blockquote>一、解決問(wèn)題
This post tries replacing the original bounding-box regression loss in YOLOv7/v5 with WIoU to improve accuracy. Earlier swaps to the more advanced EIoU, SIoU, and Alpha-IoU box-regression losses each brought accuracy gains, and the newly published WIoU is worth trying as well.
II. Basic Principle
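This section is left pending in the original post. As a rough reference, WIoU v1 (from the Wise-IoU paper) multiplies the plain IoU loss, L_IoU = 1 - IoU, by a distance-based focusing factor R_WIoU = exp(((x - x_gt)^2 + (y - y_gt)^2) / (W_g^2 + H_g^2)), where (x, y) and (x_gt, y_gt) are the predicted and ground-truth box centres and W_g, H_g are the width and height of the smallest enclosing box (treated as a constant when differentiating). Below is a minimal, framework-free sketch assuming (x1, y1, x2, y2) box coordinates; the function name `wiou_v1` and the pure-Python form are illustrative choices of mine, not the author's pending implementation:

```python
import math

def wiou_v1(pred, target):
    """Wise-IoU v1 loss for a pair of axis-aligned boxes in (x1, y1, x2, y2) form.

    L_WIoUv1 = R_WIoU * (1 - IoU), where
      R_WIoU = exp(((cx - cx_gt)^2 + (cy - cy_gt)^2) / (Wg^2 + Hg^2))
    and Wg, Hg are the width/height of the smallest enclosing box.
    """
    px1, py1, px2, py2 = pred
    tx1, ty1, tx2, ty2 = target

    # Intersection area of the two boxes
    iw = max(0.0, min(px2, tx2) - max(px1, tx1))
    ih = max(0.0, min(py2, ty2) - max(py1, ty1))
    inter = iw * ih

    # Union area and IoU
    area_p = (px2 - px1) * (py2 - py1)
    area_t = (tx2 - tx1) * (ty2 - ty1)
    union = area_p + area_t - inter
    iou = inter / union if union > 0 else 0.0

    # Squared distance between box centres
    dist2 = ((px1 + px2) / 2 - (tx1 + tx2) / 2) ** 2 + \
            ((py1 + py2) / 2 - (ty1 + ty2) / 2) ** 2

    # Squared diagonal of the smallest enclosing box
    wg = max(px2, tx2) - min(px1, tx1)
    hg = max(py2, ty2) - min(py1, ty1)
    diag2 = wg ** 2 + hg ** 2

    # Distance-based focusing factor, >= 1, largest for far-apart boxes
    r_wiou = math.exp(dist2 / diag2) if diag2 > 0 else 1.0
    return r_wiou * (1.0 - iou)
```

In an actual YOLOv5/v7 training loop this computation runs on tensors, and in the original paper the enclosing-box term (W_g^2 + H_g^2) is detached from the gradient graph so that the focusing factor itself does not produce gradients.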
III. How to Add It
To be updated; follow me and send a private message to get it.
IV. Summary
預(yù)告一下:下一篇內(nèi)容將繼續(xù)分享深度學(xué)習(xí)算法相關(guān)改進(jìn)方法。有興趣的朋友可以關(guān)注一下我,有問(wèn)題可以留言或者私聊我哦
PS: This method is not limited to improving YOLOv5; it can also be used to improve other YOLO networks and object detectors, such as YOLOv7, v6, v4, v3, Faster R-CNN, SSD, and so on.
Finally, if you need anything, please follow me and send a private message. Followers can receive free deep learning study materials!

YOLO Series Improvement Methods | Table of Contents
[💡🎈 1. Add the SE attention mechanism](https://blog.csdn.net/m0_70388905/article/details/125379649)
[💡🎈 2. Add the CBAM attention mechanism](https://blog.csdn.net/m0_70388905/article/details/125892144)
[💡🎈 3. Add the CoordAtt attention mechanism](https://blog.csdn.net/m0_70388905/article/details/125379685)
[💡🎈 4. Add the ECA channel attention mechanism](https://blog.csdn.net/m0_70388905/article/details/125390766)
[💡🎈 5. Improve the feature-fusion network from PANet to BiFPN](https://blog.csdn.net/m0_70388905/article/details/125391096)
[💡🎈 6. Add a small-object detection layer](https://blog.csdn.net/m0_70388905/article/details/125392908)
[💡🎈 7. Loss function improvements](https://blog.csdn.net/m0_70388905/article/details/125419887)
[💡🎈 8. Improve non-maximum suppression (NMS) to Soft-NMS](https://blog.csdn.net/m0_70388905/article/details/125448230)
[💡🎈 9. Improve anchor clustering from K-Means to K-Means++](https://blog.csdn.net/m0_70388905/article/details/125530323)
[💡🎈 10. Change the loss function to SIoU](https://blog.csdn.net/m0_70388905/article/details/125569509)
[💡🎈 11. Replace the C3 backbone with the lightweight MobileNetV3](https://blog.csdn.net/m0_70388905/article/details/125593267)
[💡🎈 12. Replace the C3 backbone with the lightweight ShuffleNetV2](https://blog.csdn.net/m0_70388905/article/details/125612052)
[💡🎈 13. Replace the C3 backbone with the lightweight EfficientNetV2](https://blog.csdn.net/m0_70388905/article/details/125612096)
[💡🎈 14. Replace the C3 backbone with the lightweight GhostNet](https://blog.csdn.net/m0_70388905/article/details/125612392)
[💡🎈 15. Lightweight the network with depthwise separable convolutions](https://blog.csdn.net/m0_70388905/article/details/125612300)
[💡🎈 16. Replace the C3 backbone with the lightweight PP-LCNet](https://blog.csdn.net/m0_70388905/article/details/125651427)
[💡🎈 17. CNN+Transformer: integrate Bottleneck Transformers](https://blog.csdn.net/m0_70388905/article/details/125691455)
[💡🎈 18. Change the loss function to Alpha-IoU](https://blog.csdn.net/m0_70388905/article/details/125704413)
[💡🎈 19. Improve NMS to DIoU-NMS](https://blog.csdn.net/m0_70388905/article/details/125754133)
[💡🎈 20. Introduce the new Involution neural-network operator](https://blog.csdn.net/m0_70388905/article/details/125816412)
[💡🎈 21. CNN+Transformer: replace the backbone with the fast, strong, lightweight EfficientFormer](https://blog.csdn.net/m0_70388905/article/details/125840816)
[💡🎈 22. A point-boosting trick: introduce recursive gated convolution (gnConv)](https://blog.csdn.net/m0_70388905/article/details/126142505)
[💡🎈 23. Introduce the parameter-free SimAM attention](https://blog.csdn.net/m0_70388905/article/details/126456722)
[💡🎈 24. Introduce the quantum-inspired vision backbone WaveMLP (worth trying for an SCI paper)](https://blog.csdn.net/m0_70388905/article/details/126550613)
[💡🎈 25. Introduce the Swin Transformer](https://blog.csdn.net/m0_70388905/article/details/126674046)
[💡🎈 26. Improve the feature-fusion network from PANet to ASFF adaptive feature fusion](https://blog.csdn.net/m0_70388905/article/details/126926244)
[💡🎈 27. Solve small-object problems: replace regular convolutions in the feature extractor with self-calibrated convolutions](https://blog.csdn.net/m0_70388905/article/details/126979207)
[💡🎈 28. ICLR 2022 point-booster: the plug-and-play dynamic convolution ODConv](https://blog.csdn.net/m0_70388905/article/details/127031843)
[💡🎈 29. Introduce Swin Transformer v2.0](https://blog.csdn.net/m0_70388905/article/details/127214397)
[💡🎈 30. Introduce MOAT, the latest Transformer vision model published on October 4](https://blog.csdn.net/m0_70388905/article/details/127273808)
[💡🎈 31. The CrissCrossAttention attention mechanism](https://blog.csdn.net/m0_70388905/article/details/127312771)
[💡🎈 32. Introduce the SKAttention attention mechanism](https://blog.csdn.net/m0_70388905/article/details/127330663)
[💡🎈 33. Introduce the GAMAttention attention mechanism](https://blog.csdn.net/m0_70388905/article/details/127330819)
[💡🎈 34. Change the activation function to FReLU](https://blog.csdn.net/m0_70388905/article/details/127381053)
[💡🎈 35. Introduce the S2-MLPv2 attention mechanism](https://blog.csdn.net/m0_70388905/article/details/127434190)
[💡🎈 36. Incorporate the NAM attention mechanism](https://blog.csdn.net/m0_70388905/article/details/127398898)
[💡🎈 37. Combine with the CVPR 2022 ConvNeXt network](https://blog.csdn.net/m0_70388905/article/details/127533379)
[💡🎈 38. Introduce the RepVGG model structure](https://blog.csdn.net/m0_70388905/article/details/127532645)
[💡🎈 39. Introduce the Tri-Layer plugin for improved occlusion detection | BMVC 2022](https://blog.csdn.net/m0_70388905/article/details/127471913)
[💡🎈 40. Introduce the lightweight MobileOne backbone](https://blog.csdn.net/m0_70388905/article/details/127558329)
[💡🎈 41. Introduce SPD-Conv for low-resolution images and small objects](https://zhuanlan.zhihu.com/p/579212232)
[💡🎈 42. Introduce the ELAN module from YOLOv7](https://zhuanlan.zhihu.com/p/579533276)
[💡🎈 43. Combine with the latest Non-local Networks and Attention structures](https://zhuanlan.zhihu.com/p/579903718)
[💡🎈 44. Incorporate the GPU-friendly lightweight G-GhostNet](https://blog.csdn.net/m0_70388905/article/details/127932181)
[💡🎈 45. First release: the latest feature-fusion technique RepGFPN (DAMO-YOLO)](https://blog.csdn.net/m0_70388905/article/details/128157269)
[💡🎈 46. Change the activation function to ACON](https://blog.csdn.net/m0_70388905/article/details/128159516)
[💡🎈 47. Change the activation function to GELU](https://blog.csdn.net/m0_70388905/article/details/128170907)
[💡🎈 48. Build a new lightweight network: Slim-neck by GSConv (CVPR 2022)](https://blog.csdn.net/m0_70388905/article/details/128198484)
[💡🎈 49. Model pruning, distillation, and compression](https://blog.csdn.net/m0_70388905/article/details/128222629)
[💡🎈 50. Beyond ConvNeXt! Conv2Former: a Transformer-style ConvNet for visual recognition](https://blog.csdn.net/m0_70388905/article/details/128266070)
[💡🎈 51. Build a new feature-fusion network by improving PANet with the multi-branch dilated-convolution RFB-Bottleneck](https://blog.csdn.net/m0_70388905/article/details/128553832)
[💡🎈 52. Integrate the C2f module from YOLOv8 into YOLOv5](https://blog.csdn.net/m0_70388905/article/details/128661165)
[💡🎈 53. Integrate the ECVBlock module from CFPNet to improve small-object detection](https://blog.csdn.net/m0_70388905/article/details/128720459)