A Novel Automatic Image Annotation Method Based on Multi-instance Learning (Foreign Literature Translation)

Abstract

Automatic image annotation (AIA) is the bridge between high-level semantic information and low-level features, and it is an effective method to resolve the problem of the "semantic gap". According to the intrinsic character of AIA, namely that an annotated image contains many regions, this paper proposes AIA based on the framework of multi-instance learning (MIL). Each keyword is analyzed hierarchically at a low granularity level under the MIL framework. By mining representative instances, the semantic similarity of images can be effectively expressed and better annotation results can be acquired, which testifies to the effectiveness of the proposed annotation method.

1. Introduction

With the development of multimedia and network technology, image data has been growing rapidly. Facing such a mass of image resources, content-based image retrieval (CBIR), a technology to organize, manage, and analyze these resources efficiently, has become a hot topic. However, limited by the "semantic gap", that is, the fact that underlying vision features such as color, texture, and shape cannot completely reflect and match the query intention, CBIR confronts an unprecedented challenge. In recent years, the newly proposed automatic image annotation (AIA) focuses on erecting a bridge between high-level semantics and low-level features, which is an effective approach to solving the above-mentioned semantic gap.

Research on automatic image annotation was initiated with the co-occurrence model proposed by Morris et al. in 1999. In [2], a translation model was developed to annotate images automatically, based on the assumption that keywords and vision features are different languages describing the same image. Similar to [2], literature [3] proposed the Cross Media Relevance Model (CMRM), in which the vision information of each image is denoted as a blob set that manifests the semantic information of the image. However, the blob set in CMRM was built on discrete region clustering, which caused a loss of vision features, so the annotation results were not perfect.

To compensate for this problem, a Continuous-space Relevance Model (CRM) was proposed in [4]. Furthermore, in [5] a Multiple-Bernoulli Relevance Model was proposed to improve CMRM and CRM.

Despite their differences, the core idea of the above-mentioned automatic image annotation methods is identical: annotated images are used to build a model that describes the potential relationship, or mapping, between keywords and image features, and this model is then used to predict the annotations of unknown images.
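
As a minimal illustration of this generic pipeline, and not of any of the models cited above, the following sketch transfers keywords from the visually nearest annotated training images to an unknown image. The function annotate, the feature dimensions, and the data are all hypothetical.

import numpy as np

def annotate(query_feat, train_feats, train_keywords, k=3, n_out=5):
    """Toy nearest-neighbour annotation: transfer keywords from the k
    visually closest annotated training images to the query image."""
    # Euclidean distance between the query image and every training image.
    dists = np.linalg.norm(train_feats - query_feat, axis=1)
    votes = {}
    for idx in np.argsort(dists)[:k]:
        for kw in train_keywords[idx]:
            # Closer neighbours contribute larger weights to a keyword.
            votes[kw] = votes.get(kw, 0.0) + 1.0 / (1.0 + dists[idx])
    return sorted(votes, key=votes.get, reverse=True)[:n_out]

# Hypothetical data: four annotated images with 3-dimensional global features.
train_feats = np.array([[0.9, 0.1, 0.2], [0.8, 0.2, 0.1],
                        [0.1, 0.9, 0.7], [0.2, 0.8, 0.9]])
train_keywords = [["sky", "sea"], ["sky", "beach"],
                  ["grass", "tiger"], ["grass", "forest"]]
print(annotate(np.array([0.85, 0.15, 0.15]), train_feats, train_keywords))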

Even though previous literature achieved some results from various perspectives, the semantic description of each keyword has not been defined explicitly. To this end, on the basis of investigating the characteristics of automatic image annotation, i.e. that images annotated by keywords comprise multiple regions, automatic image annotation is regarded as a multi-instance learning problem in this paper. The proposed method analyzes each keyword in a multi-granularity hierarchy to reflect semantic similarity, so that it not only characterizes semantic implications accurately but also improves the performance of image annotation, which verifies the effectiveness of our proposed method.

This article is organized as follows: Section 1 introduces automatic image annotation briefly; automatic image annotation based on the multi-instance learning framework is discussed in detail in Section 2; the experimental process and results are described in Section 3; Section 4 summarizes the work and briefly discusses future research.

2. Automatic Image Annotation in the Framework of Multi-instance Learning

In previous learning frameworks, a sample is viewed as an instance, i.e. the relationship between samples and instances is one-to-one. In multi-instance learning, however, a sample may contain multiple instances; that is to say, the relationship between samples and instances is one-to-many. The ambiguity among training samples in multi-instance learning differs completely from that in supervised learning, unsupervised learning, and reinforcement learning, so that previous methods can hardly solve the problems it poses.

Owing to its characteristic features and wide prospects, multi-instance learning is attracting more and more attention in the machine learning domain and is referred to as a new learning framework. The core idea of multi-instance learning is that the training sample set consists of concept-annotated bags, each of which contains unannotated instances. The purpose of multi-instance learning is to assign conceptual annotations to bags beyond the training set by learning from the training bags. In general, a bag is annotated as Positive if and only if at least one of its instances is labeled positive; otherwise the bag is annotated as Negative.
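
A minimal sketch of this bag/instance view follows. Since, as stated in the introduction, an annotated image comprises multiple regions, an image is naturally a bag whose instances are the feature vectors of its regions; the region features and the keyword below are invented for illustration.

import numpy as np

# In MIL each training sample is a bag. For image annotation an image is a
# bag whose instances are the feature vectors of its regions; only the bag
# (the whole image) carries a keyword annotation, the regions carry none.
training_bags = [
    # (bag of region feature vectors, does the image carry the keyword "tiger"?)
    (np.array([[0.9, 0.1], [0.4, 0.5]]), True),
    (np.array([[0.2, 0.8], [0.3, 0.7], [0.1, 0.9]]), False),
]
for bag, has_tiger in training_bags:
    print(len(bag), "regions, image-level label for 'tiger':", has_tiger)

def bag_label(instance_is_positive):
    """MIL labeling rule: a bag is Positive iff at least one of its
    instances is positive; otherwise the bag is Negative."""
    return "Positive" if any(instance_is_positive) else "Negative"

# If some (normally hidden) per-region judgement were available:
print(bag_label([True, False]))          # -> Positive
print(bag_label([False, False, False]))  # -> Negative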

2.1 Framework of Image Annotation Based on Multi-instance Learning

According to the above definition of multi-instance learning, namely that a Positive bag contains at least one positive instance, we can conclude that positive instances should be distributed much more densely in Positive bags than negative instances are. This conclusion shares common properties with the DD algorithm in the multi-instance learning domain: if some point in the feature space can represent the semantics of a specified keyword better than any other point, then at least one instance in each positive bag should be close to this point, while all instances in negative bags will be far away from it. In the proposed method, we consider each semantic keyword independently. Even though part of the useful information is lost by neglecting the relationships between keywords, the various keywords of each image are use
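
The DD idea referenced here can be written down compactly: a candidate point in feature space scores high when every positive bag has at least one instance near it and every instance of every negative bag is far away. The sketch below uses the standard noisy-or formulation of the Diverse Density measure as an illustration; it is not the exact procedure of this paper, the function names are my own, and the bag contents are hypothetical.

import numpy as np

def instance_prob(point, instance, scale=1.0):
    """Probability that an instance 'hits' the candidate concept point,
    decaying with squared Euclidean distance."""
    return np.exp(-scale * np.sum((instance - point) ** 2))

def diverse_density(point, positive_bags, negative_bags):
    """Diverse density of a candidate point: each positive bag contributes
    a noisy-or over its instances (at least one instance should be close),
    each negative bag contributes the probability that all of its
    instances miss the point (all should be far away)."""
    dd = 1.0
    for bag in positive_bags:
        dd *= 1.0 - np.prod([1.0 - instance_prob(point, x) for x in bag])
    for bag in negative_bags:
        dd *= np.prod([1.0 - instance_prob(point, x) for x in bag])
    return dd

# Hypothetical bags of 2-D region features for one keyword.
pos_bags = [np.array([[0.9, 0.9], [0.1, 0.2]]), np.array([[0.85, 0.95]])]
neg_bags = [np.array([[0.1, 0.1], [0.2, 0.3]])]

# A point near (0.9, 0.9) scores higher than one near the negative regions.
print(diverse_density(np.array([0.9, 0.9]), pos_bags, neg_bags))
print(diverse_density(np.array([0.1, 0.2]), pos_bags, neg_bags))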
