Defending against adversarial attacks by randomized diversification

Olga Taran, Shideh Rezaeifar, Taras Holotyak, Slava Voloshynovskiy
Department of Computer Science, University of Geneva
7, route de Drize, 1227 Carouge, Switzerland
{olga.taran, shideh.rezaeifar, taras.holotyak, svolos}@unige.ch

Abstract

The vulnerability of machine learning systems to adversarial attacks questions their usage in many applications. In this paper, we propose a randomized diversification as a defense strategy. We introduce a multi-channel architecture in a gray-box scenario, which assumes that the architecture of the classifier and the training data set are known to the attacker. The attacker, however, does not have access to a secret key or to the internal states of the system at test time. The defender processes an input in multiple channels. Each channel introduces its own randomization in a special transform domain based on a secret key shared between the training and testing stages. Such a transform-based randomization with a shared key preserves the gradients in key-defined sub-spaces for the defender but prevents gradient backpropagation and the creation of various bypass systems for the attacker. An additional benefit of multi-channel randomization is the aggregation that fuses the soft outputs from all channels, thus increasing the reliability of the final score. The sharing of a secret key creates an information advantage for the defender. Experimental evaluation demonstrates an increased robustness of the proposed method to a number of known state-of-the-art attacks.
1. Introduction

Besides remarkable and impressive achievements, many machine learning systems are vulnerable to adversarial attacks [6]. The adversarial attacks attempt to trick the decision of a classifier by introducing bounded and invisible perturbations to a chosen target image. This weakness seriously questions the usage of machine learning in many security- and trust-sensitive domains.

Many researchers have proposed various defense strategies and countermeasures to defeat adversarial attacks. However, the growing number of defenses naturally stimulates the invention of new and even more universal attacks.

S. Voloshynovskiy is a corresponding author. The research was supported by the SNF project No. 200021182063.
[Figure 1: Setup under investigation: the attacker knows the labeled training dataset X and the system architecture but does not have access to the secret key k of the defender shared between the training and testing.]
An overview and classification of the most efficient attacks and defenses are given in [15, 10].

In this paper, we consider a "game" between the defender and the attacker according to the diagram presented in Figure 1. The defender has access to the classifier φ_θ and the training data set X. The defender shares a secret key k between training and testing. The classifier outputs a soft-max vector y of length M, where M corresponds to the total number of classes, and each y_c, 1 ≤ c ≤ M, is treated as a probability that a given input x belongs to a class c. The trained classifier φ_θ is used during testing.

The attacker in the white-box scenario has full knowledge about the classifier architecture, defense mechanisms, training data and, quite often, can access the trained parameters of the classifier. In the gray-box scenario, considered in this paper, the attacker knows the architecture of the classifier, the general defense mechanism and has access to the same training data X [3, 15]. Using the above available knowledge, the attacker can generate a non-targeted or targeted, with respect to a specified class c′, adversarial perturbation w_{c′}.
The attacker produces an adversarial example by adding this perturbation to the target host sample x as x′ = x + w_{c′}. The adversarial example is presented to the classifier at test time in an attempt to trick the classifier's φ_θ decision.

[Figure 2: Generalized diagram of the proposed multi-channel classifier.]
Without pretending to be exhaustive in our overview, we group existing defense strategies into three major groups:

1. Non key-based defenses: This group includes the majority of state-of-the-art defense mechanisms based on detection and rejection, adversarial retraining, filtering and regeneration, etc. [10]. Besides the broad diversity of these methods, a common feature and the main disadvantage of these approaches is the absence of "cryptographic" elements, like for example a secret key, that would create an information advantage of the defender over the attacker.
2. Defense via randomization and obfuscation: The defense mechanisms of this group are mainly based on the idea of randomization, avoiding the reproducible and repeatable use of the parameters of the trained system. This includes gradient masking [1] and introducing an ambiguity via different types of key-free randomization. Examples of such randomization are noise addition at different levels of the system [14], injection of different types of randomization such as random image resizing or padding [13], randomized lossy compression [5], etc.

The main disadvantage of this group of defense strategies consists in the fact that the attacker can bypass the defense blocks or take this ambiguity into account during the generation of the adversarial perturbations [1]. Additionally, the classification accuracy is degraded, since the classifier is only trained on average over different sets of randomization parameters, unless special ensembling or aggregation is properly applied to compensate this loss. However, even in this case the mismatch between the training and testing stages can only ensure the performance on average, whereas one is interested in a guaranteed performance for each realization of the randomized parameters. Unfortunately, this is not achievable without a common secret shared between the training and testing.
3. Key-based defenses: The third group generalizes the defense mechanisms that include a randomization explicitly based on a secret key shared between the training and testing stages. For example, one can mention the use of random projections [11], random feature sampling [4] and the key-based transformation [10], etc. Nevertheless, the main disadvantage of the known methods in this group consists of the loss of performance due to the reduction of useful data, which should be compensated by a proper diversification and corresponding aggregation.

In this paper, we target a further extension of the key-based defense strategies based on cryptographic principles to create an information advantage of the defender over the attacker, yet maximally preserving the information in the classification system. The generalized diagram of the proposed system is shown in Figure 2. It has two levels of randomization, each of which can be based on unique secret keys. An additional robustification is achieved via the aggregation of the soft outputs of the multi-channel classifiers trained for their own randomizations. As will be shown throughout the paper, the usage of the multi-channel architecture diminishes the efficiency of attacks.
The main contribution of this paper is twofold:
- A new multi-channel classification architecture with a defense strategy against gray-box attacks based on the cryptographic principle.
- An investigation of the efficiency of the proposed approach on three standard datasets for several classes of well-known adversarial attacks.

The remainder of this paper is organized as follows: Section 2 introduces the new multi-channel classification architecture. Section 3 provides an extension of the defense strategy based on the data independent permutation proposed in [10] to the multi-channel architecture. The efficient key-based data independent transformation is investigated in Section 4. The filtering by a hard-thresholding in the secret domain is analyzed in Section 5. Section 6 concludes the paper.
2. Multi-channel classification algorithm

A multi-channel classifier, which forms the core of the proposed architecture, is shown in Figure 2. It consists of four main building blocks:

1. Pre-processing of the input data in a transform domain via a mapping W_j, 1 ≤ j ≤ J. In general, the transform W_j can be any linear mapper. For example, it can be a random projection or belong to the family of orthonormal transformations (W_j W_j^T = I) like the DFT (discrete Fourier transform), DCT (discrete cosine transform), DWT (discrete wavelet transform), etc. Moreover, W_j can also be a learnable transform. However, it should be pointed out that from the point of view of robustness to adversarial attacks, a data independent transform W_j is of interest to avoid key-leakage from the training data. Furthermore, W_j can be based on a secret key k_j.
2. Data independent processing P_ji, 1 ≤ i ≤ I, presents the second level of randomization and serves as a defense against gradient backpropagation to the direct domain. One can envision several cases. As shown in Figure 3a, P_ji ∈ {0, 1}^{l×n} with l < n corresponds to a random sampling of the input. In Figure 3b, P_ji ∈ {0, 1}^{n×n} is a lossless permutation, similar to [10]. Finally, in Figure 3c, P_ji ∈ {−1, 0, +1}^{n×n} corresponds to sub-block sign flipping. The yellow color highlights the key-defined region of key-based sign flipping. This operation is reversible and thus lossless for an authorized party. Moreover, to make the data independent processing irreversible for the attacker, it is preferable to use a P_ji based on a secret key k_ji.
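As a rough illustration of these three options, the operators can be built as matrices acting on a vectorized input of length n. This is a sketch under our own notational assumptions (sizes, seeds-as-keys and the sub-block position are illustrative, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(seed=1)  # the seed plays the role of a secret key
n, l = 8, 5                          # illustrative sizes (assumptions)

# (a) random sampling: l rows of the n x n identity, lossy since l < n
rows = rng.choice(n, size=l, replace=False)
P_sample = np.eye(n)[rows]

# (b) lossless permutation: one 1 per row, all 1s in distinct columns
P_perm = np.eye(n)[rng.permutation(n)]

# (c) sub-block sign flipping: identity with random -1s in a key-defined region
P_sign = np.eye(n)
block = slice(2, 6)                          # hypothetical key-defined sub-block
P_sign[block, block] = np.diag(rng.choice([-1.0, 1.0], size=4))

x = np.arange(n, dtype=float)
assert P_sample.shape == (l, n)                  # (a) reduces the dimension
assert np.allclose(P_perm.T @ (P_perm @ x), x)   # (b) invertible: P^T P = I
assert np.allclose(P_sign @ (P_sign @ x), x)     # (c) an involution: P P = I
```

The final assertions make the lossless/lossy distinction explicit: (b) and (c) are exactly invertible for a party holding the key, while (a) discards n − l coordinates.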
[Figure 3: Examples of the randomized transformation P_ji, 1 ≤ j ≤ J, 1 ≤ i ≤ I: (a) random sampling, (b) lossless permutation, (c) randomized sign flipping in the sub-block defined in orange. All transforms are key-based.]
3. Classification block, which can be represented by any family of classifiers. However, if the classifier is designed for classification of data in the direct domain, then it is preferable that it is preceded by W_j^{-1}.

4. Aggregation block, which can be represented by any operation ranging from a simple summation to learnable operators adapted to the data or to a particular adversarial attack.

As can be seen from Figure 2, the chain of the first 3 blocks can be organized in a parallel multi-channel structure that is followed by one or several aggregation blocks. The final decision about the class is made based on the aggregated result. A rejection option can also be naturally envisioned.

The training of the described algorithm can be represented as:

(ψ̂, {θ̂_{ji}}) = argmin_{ψ, {θ_{ji}}} Σ_{t=1}^{T} Σ_{j=1}^{J} Σ_{i=1}^{I_j} L(y_t, A_ψ(φ_{θ_{ji}}(f(x_t)))),   (1)

with: f(x_t) = W_j^{-1} P_{ji} W_j x_t,

where L is a classification loss, y_t is a vectorized class label of the sample x_t, A_ψ corresponds to the aggregation operator with parameters ψ, φ_{θ_{ji}} is the i-th classifier of the j-th channel, θ denotes the parameters of the classifier, T equals the number of training samples, J is the total number of channels, and I_j equals the number of classifiers per channel, which we will keep fixed and equal to I.
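To make the data flow of the preprocessing chain f(x) = W_j^{-1} P_ji W_j x and the aggregation concrete, the following is a minimal NumPy sketch of the inference path. It is a toy stand-in under our own assumptions: the sizes, the seeds playing the role of the keys, and the fixed random linear maps standing in for the trained classifiers φ_θji are all illustrative, not the authors' implementation:

```python
import numpy as np

n, M, J, I = 16, 10, 2, 3  # toy sizes: input length, classes, channels, sub-channels

def make_transform(key):
    # Key-based orthonormal transform W_j (a random rotation here; the paper
    # also allows DFT/DCT/DWT or a learnable transform).
    q, _ = np.linalg.qr(np.random.default_rng(key).normal(size=(n, n)))
    return q

def make_signs(key):
    # Key-based sign flipping P_ji in {-1, +1}^n (diagonal, hence lossless).
    return np.random.default_rng(key).choice([-1.0, 1.0], size=n)

def make_classifier(key):
    # Placeholder for a trained classifier: softmax of a fixed random linear map.
    A = np.random.default_rng(key).normal(size=(M, n))
    def clf(z):
        s = A @ z
        e = np.exp(s - s.max())
        return e / e.sum()
    return clf

x = np.random.default_rng(0).normal(size=n)
scores = []
for j in range(J):
    W = make_transform(100 + j)
    for i in range(I):
        p = make_signs(10 * j + i)
        z = W.T @ (p * (W @ x))  # f(x) = W_j^{-1} P_ji W_j x  (W.T inverts W)
        scores.append(make_classifier(1000 + 10 * j + i)(z))

y = np.sum(scores, axis=0)       # aggregation A: a simple summation
prediction = int(np.argmax(y))
```

Because each channel sees only its own key-defined randomization, the defender can train the J·I classifiers independently, while an attacker without the keys cannot reproduce f(x) to backpropagate gradients through it.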
The attacker might discover the secret keys k_j and/or k_ji, or make the full system end-to-end differentiable using the Backward Pass Differentiable Approximation technique proposed in [1], or replace the key-based blocks by bypass mappers. To avoid such a possibility, we restrict the access of the attacker to the internal results within the block B. This assumption corresponds to our definition of the gray-box setup.
In the proposed system, we will consider several practical simplifications leading to information and complexity advantages for the defender over the attacker:
- The defender's training can be performed per channel independently until the aggregation block. At the same time, the attacker should train and backpropagate the gradients in all channels simultaneously, or at least guarantee the majority of wrong scores after aggregation.
- The blocks of data independent processing P_ji aim at preventing gradient backpropagation into the direct domain, but the classifier training is adapted to a particular P_ji in each channel. It will be shown further by the numerical results that the usage of the multi-channel architecture with the following aggregation stabilizes the results' deviation due to the use of randomizing or lossy transformations P_ji, if such are used.
- The right choice of the aggregation operator A provides an additional degree of freedom and increases the security of the system through the possibility to adapt to specific types of attacks.
- Moreover, the overall security level considerably increases due to the independent randomization in each channel. The main advantage of the multi-channel system consists in the fact that each channel can have an adjustable amount of randomness, which allows obtaining the required level of defense against the attacks. In a one-channel system the amount of randomness can be either insufficient to prevent the attacks or too high, which leads to classification accuracy loss. Therefore, having a channel-wise distributed randomness is more flexible and efficient for the above trade-off.

The described generalized multi-channel architecture provides a variety of choices for the transform operators W_j and the data independent processing P_ji. In Section 3, we will consider a variant with multiple P_ji in the form of the considered permutation in the direct domain with W_j = I. In Section 4, we will investigate a sign flipping operator P_ji for the common DCT operator W. Section 5 will be dedicated to the investigation of a denoising version of P_ji based on hard-thresholding in a secret sub-space of the DCT domain W_j.
3. Classification with multi-channel permutations in the direct domain

The simplest case of randomized diversification can be constructed for the direct domain with the permutation of input pixels. In fact, the algorithm proposed in [10] reflects this idea for a single channel.

[Figure 4: Classification via multi-channel permutations in the direct domain.]
However, despite the reported efficiency of the proposed defense strategy, a single channel architecture is subject to a drop in classification accuracy, even for the original, i.e., non-adversarial, data. Therefore, this paper investigates the performance of a permutation-based defense in a multi-channel setting.
3.1. Problem formulation

The generalized diagram of the corresponding extended multi-channel approach is illustrated in Figure 4. The permutation in the direct domain implies that W_j = I with J = 1 and I permutation channels. Therefore, each channel 1 ≤ i ≤ I has only one data independent permutation block P_i, represented by a lossless permutation of the input signal x ∈ R^{n×n×m} in the direct domain, where n corresponds to the size of the input image and m is the number of channels (colors) in this image. Thus, the permutation matrix P_i is a matrix of size n×n, generated from a secret key k_i, whose entries are all zeros except for a single element of each row, which is equal to one. In addition, as illustrated in Figure 3b, all non-zero entries are located in different columns. For our experiments, we assume that P_i is the same for each input image color channel, but it can be a different one in the general case to increase the security of the system.
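A key-based P_i with exactly these properties can be sketched as follows (the seed standing in for the secret key k_i, and the 1-D toy signal, are our own illustrative assumptions):

```python
import numpy as np

def permutation_matrix(key, n):
    # n x n matrix with a single 1 per row, all 1s in distinct columns,
    # generated from the secret key (here: an RNG seed).
    perm = np.random.default_rng(key).permutation(n)
    P = np.zeros((n, n))
    P[np.arange(n), perm] = 1.0
    return P

n = 8
P = permutation_matrix(key=42, n=n)
x = np.arange(n, dtype=float)   # toy 1-D "image"
x_perm = P @ x                  # randomized input fed to the classifier
x_back = P.T @ x_perm           # P is orthogonal, so P^T undoes it: lossless
assert np.allclose(x_back, x)
```

In the paper the same permutation acts on each color channel of an n×n image; the 1-D vector here only keeps the sketch short.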
As an aggregation operator A, we use a summation for the sake of simplicity and interpretability. The aggregated result represents an M-dimensional vector y of real non-negative values, where M equals the number of classes and each entry y_c is treated as a probability that a given input x belongs to the class c.
Under the above assumptions, the optimization problem (1) reduces to:

{θ̂_i} = argmin_{θ_i} Σ_{t=1}^{T} Σ_{i=1}^{I} L(y_t, A(φ_{θ_i}(P_i x_t))).   (2)

3.2. Numerical results

To reveal the impact of multi-channel processing, we compare our results with an identical single channel system reported in [10]. (Our code is available at https://github.com/taranO/defending-adversarial-attacks-by-RD.)

Table 1: Classification error (%) on the first 1000 test samples for the I-channel system with the direct domain permutation.

  Data type     I = 1      5     10     15     20     25
  MNIST (original classifier error is 1%)
    Original     2.83   1.73   1.37   1.43   1.57   1.4
    CW2          8.85   4.56   3.82   3.53   3.55   3.51
    CW0         13.87   5.98   4.98   4.69   4.47   4.4
    CW∞         11.67   4.72   4.03   3.87   3.59   3.69
  Fashion-MNIST (original classifier error is 7.5%)
    Original    11.40   9.4    9.27   9.2    9.23   9.2
    CW2         12.16  10.15   9.78   9.41   9.49   9.4
    CW0         13.45  10.15   9.62   9.56   9.82   9.63
    CW∞         11.99   9.72   9.69   9.24   9.26   9.32
  CIFAR (original classifier error is 21%)
    Original    47.03  41.47  40.2   39.8   39.2   39
    CW2         47.76  41.82  39.83  39.59  39.4   39.04
    CW0         48.39  42.27  40.87  39.73  39.85  39.76
    CW∞         47.41  42.12  40.53  39.58  39.62  39.21
For each classifier φ_{θ_i} we use exactly the same architecture as mentioned in Table 2 in [10]. (The Python code for generating adversarial examples is available at https://github.com/carlini/nn_robust_attacks.) Moreover, taking into account that the generation of adversarial examples is quite a slow process, as in [10], we verify our approach on the first 1000 test samples of the MNIST [8] and Fashion-MNIST [12] datasets. Additionally, we investigate the CIFAR-10 dataset [7]. The obtained results are given in Table 1. For all datasets, a single channel setup with I = 1 corresponds to the results of the approach proposed in [10], and CW denotes the attacks proposed by Carlini and Wagner in [2].

As one can note from Table 1, increasing the number of channels leads to a decrease of the classification error. In the case of the MNIST dataset, our multi-channel algorithm allows reducing the error on the original non-attacked data by a factor of 2, from 2.8% to 1.4%. For the attacked data, the classification error decreases about 2.5 times, from almost 9-14% to 3.5-4.5%. In the case of the Fashion-MNIST dataset, one can observe a similar dynamic, namely, the classification error decreases from 11.5-13.5% to 9-9.5%. For the CIFAR-10 dataset, using the multi-channel architecture allows reducing the error from 47-48% to only about 39.5%. The CIFAR-10 natural images are more complex in comparison to MNIST and Fashion-MNIST, and the introduced permutation destroys local correlations. This has a direct impact on classifier performance.

[Figure 5: Global permutation in the coordinate domain (a) and DCT based encoding using key-based sign flipping (b).]
4. Classification with multi-channel sign permutation in the DCT domain

The results obtained in Section 3 for the CIFAR-10 dataset show a high sensitivity to the gradient perturbations that degrade the performance of the classifier. In this Section we investigate other data independent processing functions P_ji based on a secret key k_ji, preserving the gradient in a special way that is more suitable for the classification of complex natural images. We will consider a general scheme for demonstrative purposes to justify that the permutations should be localized rather than global.
4.1. Global permutation

We will consider sign flipping in the DCT domain as a basis for the multi-channel randomization. For a visual comparison of the effect of the global permutation in the coordinate domain versus the global sign flipping in the DCT domain, we show an example in Figure 5. From this Figure one can note that the permutation in the coordinate domain disturbs the local correlation in the image, which will impact the local gradients. In turn, this might impact the training of modern classifiers that are mostly based on gradient techniques. At the same time, preservation of the gradients makes the data more vulnerable to adversarial attacks. Keeping this in mind, we can conclude that the global DCT sign permutation also "randomizes" the images but, in contrast to the permutation in the direct domain, it keeps the local correlation.

To answer the question whether the preservation of local correlation under the randomization can help avoid the loss of the gradients, we investigate the global DCT sign permutation for the classification architecture shown in Figure 4, where W_j is the DCT and P_i ∈ {−1, +1}^{n×n}. It should be noted that the transform W_j is fixed for all channels. Therefore, the secrecy part consists in the key-based flipping of the DCT coefficients' signs.
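The global variant can be sketched in a few lines of NumPy. The orthonormal DCT-II matrix is standard; the 8×8 size and the seed acting as the key are our own illustrative assumptions, not the paper's experimental settings:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix: C @ C.T == I, so C.T is the inverse DCT.
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C

n = 8
C = dct_matrix(n)
rng = np.random.default_rng(3)                # seed = secret key (assumption)
X = rng.normal(size=(n, n))                   # toy "image"
S = rng.choice([-1.0, 1.0], size=(n, n))      # key-based global sign pattern

Y = C @ X @ C.T                               # global 2-D DCT
X_rand = C.T @ (S * Y) @ C                    # signs flipped, back to pixels

# The defender, who knows S, inverts the randomization exactly:
X_rec = C.T @ (S * (C @ X_rand @ C.T)) @ C
assert np.allclose(X_rec, X)
```

The final check illustrates the key asymmetry: the operation is an involution for the key holder, while for an attacker without S the pixel-domain image is scrambled.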
In the experimental results we obtained a classification accuracy very close to the results presented in Table 1. For the sake of space, we do not present this table in the paper. Nevertheless, we can conclude that the global sign permutation in the DCT domain does not improve the previous situation with the global permutation in the direct domain.
[Figure 6: Local randomization in the DCT sub-bands by key-based sign flipping: (a) sub-bands, (b) original, (c) V, (d) H, (e) D, (f) V+H+D.]
4.2. Local permutation

Taking into account the above observation, we investigate the behaviour of local DCT sign permutations, i.e., we will use a global DCT transform but will flip the signs only for a selected number of coefficients, as shown in Figure 6c. The general idea, illustrated in Figure 6, consists in the fact that the DCT domain can be split into overlapping or non-overlapping sub-bands of different size. In our case, for simplicity and interpretability, we split the DCT domain into 4 sub-bands, namely, (1) the top left that represents the low frequencies of the image, (2) vertical, (3) horizontal and (4) diagonal sub-bands. After that we apply the DCT sign flipping as randomization in each sub-band, keeping all other sub-bands unchanged, and apply the inverse DCT transform. The corresponding illustrative examples are shown in Figures 6c-6e. Finally, we apply the DCT sign permutation in 3 sub-bands. The corresponding result is shown in Figure 6f. It is easy to see that the local DCT sign flipping applied in one individual sub-band creates a specific oriented distortion due to the specificity of the chosen sub-bands, but preserves the local image content quite well. The simultaneous permutation of 3 sub-bands creates more degradation, which might be undesirable and can have a negative influence on the classification accuracy.
To investigate the behaviour of the local DCT sign permutations we use the multi-channel architecture shown in Figure 7. It is a three-channel model with I sub-channels. As W_j we use a standard DCT transform. The sub-channels' data independent processing blocks P_ji ∈ {−1, 0, +1}^{n×n} are based on the individual secret keys k_ji and are represented by matrices that change the elements' signs only in the sub-band of interest, as illustrated in Figure 3c. In the general case, the sub-bands can be overlapping or non-overlapping and have different positions and sizes. As discussed, we use only 3 non-overlapping sub-bands of equal size, as illustrated in Figure 6a. The architecture of the classifiers φ_{θ_ji} is identical to the ones used in Section 2.
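Restricting the flips to one sub-band only changes the sign pattern: keep +1 everywhere except in the key-defined region. In the sketch below (our own assumptions: an 8×8 toy image, a quadrant split of the DCT plane standing in for the sub-band layout of Figure 6a, and a seed as the key k_ji) only one quadrant is randomized and all other coefficients pass through untouched:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix (C.T is the inverse transform).
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C

n = 8
h = n // 2
C = dct_matrix(n)

# Sign pattern: +1 outside the sub-band, random +-1 inside one quadrant
# (here the top-right quadrant is a hypothetical stand-in for one sub-band).
S = np.ones((n, n))
S[:h, h:] = np.random.default_rng(7).choice([-1.0, 1.0], size=(h, h))

X = np.random.default_rng(1).normal(size=(n, n))
Y = C @ X @ C.T                     # global DCT
X_rand = C.T @ (S * Y) @ C          # flips applied only inside the sub-band

# Coefficients outside the randomized quadrant are untouched:
Y_rand = C @ X_rand @ C.T
assert np.allclose(Y_rand[:, :h], Y[:, :h])
assert np.allclose(Y_rand[h:, :], Y[h:, :])
```

The two assertions verify the "local" property the text describes: the distortion is confined to the chosen sub-band, which is why the oriented artifacts of Figures 6c-6e leave the rest of the image content intact.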
As an aggregation operator we use a simple summation.

[Figure 7: Classification with local DCT sign permutations.]

The corresponding optimization problem becomes:

{θ̂_{ji}} = argmin_{θ_{ji}} Σ_{t=1}^{T} Σ_{j=1}^{3} Σ_{i=1}^{I} L(y_t, A(φ_{θ_{ji}}(P_{ji} x_t))).   (3)

4.3. Numerical results

The results obtained for the architecture proposed in Figure 7 are shown in Table 2. The column "Classical" corresponds to the results of the one-channel classical classifier for the original non-permuted data, which is referred to as the classical scenario.
It should be pointed out that in the previous experiments we observed a drop in the classification accuracy even for the original non-attacked data. In the proposed scheme with the 12 and 15 sub-channels, the obtained classification error on the adversarial examples corresponds to those of the original data and, in some cases, is even lower. For example, we obtained a 2 times decrease of the classification error on MNIST for the original data in comparison to the classical architecture.

The CIFAR-10 dataset presents a particular interest for us as a dataset with natural images. For the CW2 and CW∞ attacks the classification error is the same as in the case of the classical scenario on the original data. This demonstrates that the proposed method does not cause a degradation in performance due to the introduced defense mechanism. In the case of the CW0 attack there are only about 2% of successful attacks. In the case of the Fashion-MNIST dataset, the obtained results are better than the results for the permutation in the direct domain given in Table 1. For the original non-attacked data the classical scenario accuracy is achieved.

Table 2: Classification error (%) on the first 1000 test samples for the DCT domain with the local sign flipping in 3 sub-bands (J = 3).

  Data type   Classical   J·I = 3      6      9     12     15
  MNIST
    Original       1         0.5    0.5    0.5    0.5    0.5
    CW2          100         6.28   5.34   4.66   4.44   4.73
    CW0          100        19.3   18.48  17.6   16.7   17.42
    CW∞           99.9       2.81   2.37   2.22   2.12   2.06
  Fashion-MNIST
    Original       7.5       8.1    7.4    7.6    7.2    7.4
    CW2          100         9.27   8.67   8.87   8.62   8.62
    CW0          100        10.62   9.99  10.13   9.87   9.86
    CW∞           99.9       9.2    8.41   8.66   8.47   8.49
  CIFAR-10
    Original      21        21.2   19.6   19.5   18.6   19.2
    CW2          100        22.42  21.3   21.04  20.79  20.92
    CW0          100        25.72  24.52  23.84  23.43  23.28
    CW∞          100        22.8   21.39  21.21  20.81  20.92
For the attacked data the classification error exceeds the level of that on the original data only by 1-2%.

The situation with the MNIST dataset is even more interesting. First of all, we would like to point out that we decrease the classification error by a factor of 2 in comparison to the classical scenario. However, for CW0 the results are surprisingly worse. To investigate the reasons of the observed degradation, we visualize the adversarial examples. The results are shown in Table 3. It is easy to see that, in general, the CW∞ noise manifests itself as a background distortion and doesn't affect considerably the regions of useful information. The CW2 noise affects the regions of interest, but the intensity of the noise is much lower than the intensity of the meaningful information. As in CW2, the CW0 noise is concentrated in the region near the edges, but its intensity is as strong as that of the informative image parts. Thus, it becomes evident why the local DCT sign permutation is not capable of withstanding such a kind of noise. In general, such a strong noise is easily detectable and the corresponding adversarial examples can be rejected by many detection mechanisms, like for example an auxiliary "detector" sub-network [9]. Moreover, as can be seen from the Fashion-MNIST examples, the influence of such noise and of successful attacks drastically decreases with increasing image complexity. As has been shown by the CIFAR-10 results, the local DCT sign permutation produces a high level of defense against such an attack for natural images.
[Table 3: Adversarial examples (images) for MNIST and Fashion-MNIST under the CW2, CW0 and CW∞ attacks.]
5. Classification with multi-channel hard thresholding in the sub-bands of the DCT domain

As can be seen from Figure 6, the local DCT sign permutation creates sufficiently high image distortions. As a simple strategy to avoid this effect, we investigate hard thresholding of the DCT coefficients in the defined sub-bands. In this case the matrix P_ji contains zeros for the coefficients of the key-defined sub-bands. Alternatively, one can consider this strategy as a random sampling, as illustrated in Figure 3a, where one retains only the coefficients used by the classifier. In this sense, the considered strategy is close to the randomization in a single channel without the aggregation considered in [4].

Note that the considered processing is a data independent transform. The secret keys can be used for choosing the sub-bands' positions. Thus, the attacker cannot predict in advance which DCT coefficients will be used or suppressed.
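Under the same toy assumptions as before (an 8×8 image, an orthonormal DCT-II matrix, and a quadrant standing in for a key-defined sub-band), the hard-thresholding variant simply zeroes the selected coefficients instead of flipping their signs:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix (C.T is the inverse transform).
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C

n = 8
h = n // 2
C = dct_matrix(n)

mask = np.ones((n, n))
mask[:h, h:] = 0.0                   # key-defined sub-band is suppressed

X = np.random.default_rng(2).normal(size=(n, n))
Y = C @ X @ C.T
X_filtered = C.T @ (mask * Y) @ C    # zeroed sub-band, back to pixels

# Unlike the sign flipping, this step is lossy: the suppressed coefficients
# cannot be recovered, which matches the mild blurring visible in Figure 8.
Y_filt = C @ X_filtered @ C.T
assert np.allclose(Y_filt[:h, h:], 0.0)
assert np.allclose(Y_filt[h:, :], Y[h:, :])
```

The secret key would determine which sub-band positions end up in the zeroed region of the mask, so the attacker cannot know in advance which coefficients survive.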
For simplicity and to be comparable with the previously obtained results, we use the multi-channel architecture shown in Figure 7 and the DCT sub-band division illustrated in Figure 6a with 3 fixed sub-band sizes and positions. Instead of applying the sign permutation, the corresponding DCT frequencies are set to zero and the result is transformed back to the direct domain. The visualization of the results of such a transformation is shown in Figure 8. The resulting images are slightly blurry but less noisy than in the case of the DCT sign permutation.

[Figure 8: Local zero filling in the DCT domain: (a) original, (b) sub-band V, (c) sub-band H, (d) sub-band D.]

The obtained numerical results for the MNIST, Fashion-MNIST and CIFAR-10 datasets are given in Table 4.

Table 4: Classification error (%) for the DCT based hard thresholding over the first 1000 test samples (J = 3, I = 1).

                   Original    CW2    CW0    CW∞
  MNIST              0.6       7.59  21.3    3.03
  Fashion-MNIST      8.8       9.6   11.23   9.58
  CIFAR             21.1      23.28  27.08  23.27

In general, the results are very close to the results of using the DCT sign permutation presented in Table 2 with the number of classifiers equal to 3.
For the original non-attacked data the classification error is almost the same. In the case of the attacked data, the classification error is about 0.5-1% higher. This is related to the fact that the zero replacement of the DCT coefficients leads to a loss of information and, consequently, to a decrease in classification accuracy. Hence, replacing the DCT coefficients by zeros might also serve as a defense strategy.
6. Conclusions

In this paper, we address the problem of protection against adversarial attacks in classification systems. We propose the randomized diversification mechanism as a defense strategy in a multi-channel architecture with the aggregation of the classifiers' scores. The randomized diversification is a secret key-based randomization in a defined domain. The goal of this randomization is to prevent the gradient backpropagation or the use of bypass systems by the attacker. We evaluate the efficiency of the proposed defense and the performance of several variations of the new architecture on three standard datasets against a number of known state-of-the-art attacks. The numerical results demonstrate the robustness of the proposed defense mechanism against adversarial attacks and show that using the multi-channel architecture with the following aggregation stabilizes the results and increases the classification accuracy. For future work we aim at investigating the proposed defense strategy against gradient based sparse attacks and non-gradient based attacks.
References

[1] A. Athalye, N. Carlini, and D. Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In J. Dy and A. Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 274-283, Stockholmsmässan, Stockholm, Sweden, 10-15 Jul 2018. PMLR.
[2] N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39-57. IEEE, 2017.
[3] P.-Y. Chen, H. Zhang, Y. Sharma, J. Yi, and C.-J. Hsieh. Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pages 15-26. ACM, 2017.
[4] Z. Chen, B. Tondi, X. Li, R. Ni, Y. Zhao, and M. Barni. Secure detection of image manipulation by means of random feature selection. CoRR, abs/1802.00573, 2018.
[5] N. Das, M. Shanbhogue, S.-T. Chen, F. Hohman, S. Li, L. Chen, M. E. Kounavis, and D. H. Chau. Shield: Fast, practical defense and vaccination for deep learning using JPEG compression. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2018.
[6] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations (ICLR), 2015.
[7] A. Krizhevsky, V. Nair, and G. Hinton. The CIFAR-10 dataset. Online: http://www.cs.toronto.edu/~kriz/cifar.html, 2014.
[8] Y. LeCun, C. Cortes, and C. Burges. MNIST handwritten digit database. AT&T Labs [Online]. Available: http://yann.lecun.com/exdb/mnist, 2010.
[9] J. H. Metzen, T. Genewein, V. Fischer, and B. Bischoff. On detecting adversarial perturbations. In International Conference on Learning Representations (ICLR), 2017.
[10] O. Taran, S. Rezaeifar, and S. Voloshynovskiy. Bridging machine learning and cryptography in defence against adversarial attacks. In Workshop on Objectionable Content and Misinformation (WOCM), ECCV 2018, Munich, Germany, September 2018.
[11] N. X. Vinh, S. Erfani, S. Paisitkriangkrai, J. Bailey, C. Leckie, and K. Ramamohanarao. Training robust models using random projection. In Pattern Recognition (ICPR), 2016 23rd International Conference on, pages 531-536. IEEE, 2016.
[12] H. Xiao, K. Rasul, and R. Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
[13] C. Xie, J. Wang, Z. Zhang, Z. Ren, and A. Yuille. Mitigating adversarial effects through randomization. In International Conference on Learning Representations (ICLR), 2018.
[14] Z. You, J. Ye, K. Li, and P. Wang. Adversarial noise layer: Regularize neural network by adding noise. arXiv preprint arXiv:1805.08000, 2018.
[15] X. Yuan, P. He, Q. Zhu, R. R. Bhat, and X. Li. Adversarial examples: Attacks and defenses for deep learning. arXiv preprint arXiv:1712.07107, 2017.
