HAL Id: hal-01395715
https://hal.inria.fr/hal-01395715
Submitted on 11 Nov 2016

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
To cite this version: Luis Pineda-Morales, Ji Liu, Alexandru Costan, Esther Pacitti, Gabriel Antoniu, Patrick Valduriez, Marta Mattoso. Managing Hot Metadata for Scientific Workflows on Multisite Clouds. Big Data, Dec 2016, Washington, DC, United States. pp. 390-397, 10.1109/BigData.2016.7840628.
Managing Hot Metadata for Scientific Workflows on Multisite Clouds

Luis Pineda-Morales, Ji Liu, Alexandru Costan, Esther Pacitti, Gabriel Antoniu, Patrick Valduriez, and Marta Mattoso
Microsoft Research-Inria Joint Centre, France; Inria, France; IRISA/INSA Rennes, France; LIRMM, France; COPPE/UFRJ, Brazil
{luis.pineda-morales, ji.liu}@inria.fr, {gabriel.antoniu, patrick.valduriez}@inria.fr, alexandru.costan@irisa.fr, esther.pacitti@lirmm.fr, marta@cos.ufrj.br
brAbstract—Large-scalescienticapplicationsareoftenex-pressedasworkowsthathelpdeningdatadependenciesbetweentheirdifferentcomponents.
Severalsuchworkowshavehugestorageandcomputationrequirements,andsotheyneedtobeprocessedinmultiple(cloud-federated)datacenters.
Ithasbeenshownthatefcientmetadatahandlingplaysakeyroleintheperformanceofcomputingsystems.
However,mostofthisevidenceconcernonlysingle-site,HPCsystemstodate.
Inthispaper,wepresentahybriddecentralized/distributedmodelforhandlinghotmetadata(frequentlyaccessedmetadata)inmultisitearchitectures.
Wecoupleourmodelwithascienticworkowmanagementsystem(SWfMS)tovalidateandtuneitsapplicabilitytodifferentreal-lifescienticscenarios.
WeshowthatefcientmanagementofhotmetadataimprovestheperformanceofSWfMS,reducingtheworkowexecutiontimeupto50%forhighlyparalleljobsandavoidingunnecessarycoldmetadataoperations.
IndexTerms—hotmetadata,metadatamanagement,multisiteclouds,scienticworkows,geo-distributedapplications.
I. INTRODUCTION

Many large-scale scientific applications now process amounts of data reaching the order of Petabytes; as the size of the data increases, so do the requirements for computing resources. Clouds stand out as convenient infrastructures for handling such applications, for they offer the possibility to lease resources at a large scale and relatively low cost. Very often, the requirements of data-intensive scientific applications exceed the capabilities of a single cloud datacenter (site), either because the site imposes usage limits for fairness and security [1], or simply because the dataset is too large. Also, the application data are often physically stored in different geographic locations, because they are sourced from different experiments, sensing devices or laboratories (e.g., the well-known ALICE LHC Collaboration spans over 37 countries [2]). Hence, multiple datacenters are needed in order to guarantee both that enough resources are available and that data are processed as close to their source as possible. All popular public clouds today account for a range of geo-distributed datacenters, e.g., Microsoft Azure [3], Amazon EC2 [4], and Google Cloud [5].
A large number of data-intensive distributed applications are expressed as Scientific Workflows (SWf). A SWf is an assembly of scientific data processing activities with data dependencies between them [6]. The application is modeled as a graph, in which vertices represent processing jobs, and edges their dependencies. Such a structure provides a clear view of the application flow and facilitates the execution of the application in a geo-distributed environment. Currently, many Scientific Workflow Management Systems (SWfMS) are publicly available, e.g., Pegasus [7] and Taverna [8]; some of them already support multisite execution [9].
Metadata have a critical impact on the efficiency of SWfMS; they provide a global view of data location and enable task tracking during the execution. Some SWf metadata even need to be persisted to allow traceability and reproducibility of the workflow's jobs; these are part of the so-called provenance data. Most notably, we assert that some metadata are more frequently accessed than others (e.g., the status of tasks in execution in a multisite SWf is queried more often than a job's creation date). We denote such metadata by hot metadata and argue that it should be handled in a specific, more quickly accessible way than the rest of the metadata.
While it has been proven that efficient metadata handling plays a key role in performance [10], [11], little research has targeted this issue in multisite clouds. On multisite infrastructures, inter-site network latency is much higher than intra-site latency. This aspect must stay at the core of the design of a multisite metadata management system. As we explain in Section III, several design principles have to be taken into account. Moreover, in most data processing systems (even distributed ones), metadata are typically stored, managed and queried at some centralized server (or set of servers) located at a specific site [7], [12], [13]. However, in a multisite setting, with high-latency inter-site networks and large amounts of concurrent metadata operations, centralized approaches are not an optimal solution.
This paper presents the following contributions:
- Based on the notion of hot metadata, we introduce an architecture for optimizing the access to and ensuring the availability of hot metadata in a multisite cloud environment (Section IV).
- We develop a prototype by coupling our proposed scheme with a state-of-the-art multisite workflow execution engine, namely Chiron [14] (Section V).
- We demonstrate that efficient management of hot metadata improves the performance of SWfMS, reducing the execution time of a workflow by 1) enabling timely data provisioning and 2) avoiding unnecessary cold metadata handling (Section VI).
II. THE CORE OF OUR APPROACH: HOT METADATA

Metadata management significantly impacts the performance of computing systems dealing with thousands or millions of individual files. This is recurrently the case of SWfs.
A. Why Centralized Metadata Management is an Issue

Workflow management systems handle more than file-specific metadata; running the workflow itself generates a significant amount of execution-specific metadata, e.g., scheduling metadata (i.e., which task is executed where) and data-to-task mappings. Most of today's SWfMS handle metadata in a centralized way. File-specific metadata is stored in a centralized server, either own-managed or through an underlying file system, while execution-specific metadata is normally kept in the execution's master entity. Controlling and combining all these sorts of metadata translates into a critical workload as scientific datasets get larger. The CyberShake workflow, for instance, runs more than 800,000 tasks, handling an equal number of individual data pieces, processing and aggregating over 80,000 input files (which translates into 200 TB of data read), and requiring all of these files to be tracked and annotated with metadata [15], [16]. With many tasks' runtime in the order of milliseconds, the load of parallel metadata operations becomes very heavy, and handling it in a centralized fashion represents a serious performance bottleneck.
B. Multisite Clouds: How to Scale

Often enough, scientific data are so huge and widespread that they cannot be processed/stored in a single cloud datacenter. On the one hand, the data size or the computing requirements might exceed the capacity of the site or the limits imposed by a cloud provider. On the other hand, data might be widely distributed, and due to their size it is more efficient to process them closer to where they reside than to bring them together; for instance, the US Earthquake Hazards Program monitors more than 7,000 sensor systems across the country reporting to the minute [17]. In either case, multisite clouds are progressively being used for executing large-scale scientific workflows.

Managing metadata in a centralized way for such scenarios is not appropriate. On top of the congestion generated by concurrent metadata operations, remote inter-site operations cause severe delays in the execution. To address this issue, some approaches propose the use of decentralized metadata servers [10]. In our previous work [18], we also implemented a decentralized management architecture that proved to handle metadata up to twice as fast as a centralized solution. In this paper we go one step further. Our focus is on the metadata access frequency, particularly on identifying fractions of metadata that do not require multiple updates. The goal is to enable a more efficient decentralized metadata management, reducing the number of inter-site metadata operations by favoring the operations on frequently accessed metadata, which we call Hot Metadata.
C. What is "Hot" Metadata

The term hot data refers to data that need to be frequently accessed [19]. Hot data are usually critical for the application and must be placed in a fast and easy-to-query storage [20]. We apply this concept to the context of SWf management and we define hot metadata as the metadata that is frequently accessed during the execution of a workflow. Conversely, less frequently accessed metadata will be denoted cold metadata.

We distinguish two types of hot metadata: task metadata and file metadata. Task metadata is the metadata for the execution of tasks, which is composed of the command, parameters, start time, end time, status and execution site. Hot task metadata enables the SWfMS to search and generate executable tasks. During the execution, the status and the execution site of the tasks are queried many times by each site to search for new tasks to execute and to determine if a job is finished. In addition, the status of a task may be updated several times. As a result, it is important to get this metadata quickly at each site.

The file metadata that we consider as "hot" for a workflow execution are those relative to the size, location and possible replicas of a given piece of data. Knowledge of file hot metadata allows the SWfMS to place the data close to the corresponding task, or vice versa. This is especially relevant in multisite settings: timely availability of the file metadata would permit moving data before they are needed, hence reducing the impact of low-speed inter-site networks. In general, other metadata such as file ownership or permissions are not critical for the execution and are thus regarded as cold metadata.
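The hot/cold distinction above can be sketched as a simple classifier. This is a minimal illustration, not DMM-Chiron's actual schema: the attribute names are assumptions drawn from the lists in this section.

```python
# Minimal sketch of the hot/cold metadata distinction described above.
# Attribute names are illustrative, taken from the lists in this section;
# they are not the actual DMM-Chiron schema.

HOT_TASK_ATTRS = {"command", "parameters", "start_time", "end_time",
                  "status", "execution_site"}
HOT_FILE_ATTRS = {"size", "location", "replicas"}

def is_hot(entry_type: str, attribute: str) -> bool:
    """Classify a metadata attribute as hot (frequently accessed) or cold."""
    if entry_type == "task":
        return attribute in HOT_TASK_ATTRS
    if entry_type == "file":
        return attribute in HOT_FILE_ATTRS
    return False  # anything else (e.g., ownership, permissions) is cold
```

A filter like this sits naturally on the write path: each new metadata record is routed to fast, prioritized storage when hot, and to local, lazily synchronized storage when cold.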
D. What are the Challenges for Hot Metadata Management

There are a number of implications in order to effectively apply the concept of hot metadata to real systems. At this stage of our research, we apply simple yet efficient solutions to these challenges.

How to decide which metadata are hot. We have empirically chosen the aforementioned task and file metadata as hot, since they have statistically proven to be more frequently accessed by the SWfMS we use: a sample execution of the 1-degree Montage workflow (Fig. 1), as described in Section VI-B, running 820 jobs and 57K metadata operations, reveals that in a centralized execution, 32.6% of them are file metadata operations (storeFile, getFile) and 32.4% are task metadata operations (loadTask, storeTask); whereas in a distributed run, up to 67% are file operations, and task operations represent 11%. The rest correspond mostly to monitoring and node/site related operations.

Fig. 1: Relative frequency of metadata operations in Montage.

However, a particular SWf might actually use other metadata more often. Since workflows are typically defined in structured formats (e.g., XML files), another way to account for user-defined hot metadata would be to add a property to each job definition where the user could specify which metadata they consider as hot. The next item in our research agenda is to implement an environment that will allow for both user-defined and dynamically-identified hot metadata (by running training executions).

How to assess that such choice of hot metadata is right. Evaluating the efficacy of choosing hot metadata is not trivial. Metadata is much smaller than the application's data, and handling it over networks with fluctuating throughput may produce inconsistent results in terms of execution time. Nevertheless, an indicator of the improvement brought by an adequate choice of hot metadata, and which is not time-bounded, is the number of metadata operations performed. In our experimental evaluation (Section VI) we present results in terms of both execution time and number of tasks performing such operations.

The next section describes how the concept of hot metadata translates into architectural design choices for efficient multisite workflow processing.
III. DESIGN PRINCIPLES

Three key choices set up the foundation of our architecture:

Two-Layer Multisite Workflow Management. We propose to use a two-layer multisite system: (1) The lower intra-site layer operates as current single-site SWfMS: a site is composed of several computing nodes and a common file system, where one of such nodes acts as master and coordinates communication and task execution. (2) An additional higher inter-site layer coordinates the interactions at site level through a master/slave architecture (one site being the master site). The master node in each site is in charge of synchronization and data transfers. In Section IV we provide a detailed description of such a system architecture.
Adaptive Placement for Hot Metadata. Job dependencies in a workflow form common structures (e.g., pipeline, data distribution and data aggregation) [21]. SWfMS usually take these dependencies into account to schedule the job execution in a convenient way to minimize data movements (e.g., job co-location). Accordingly, different workflows will yield different scheduling patterns. In order to take advantage of these scheduling optimizations, we must also adapt the workflow's metadata storage scheme. However, maintaining an updated version of all metadata across a multisite environment consumes a significant amount of communication time, also incurring monetary costs. To reduce this impact, we will evaluate different storage strategies for hot metadata during the workflow's execution, while keeping cold metadata stored locally and synchronizing such cold metadata only during the execution of the job. In the next section we recall our decentralized adaptive strategies.
Eventual Consistency for High-latency Communication. While cloud datacenters are normally interconnected by high-speed infrastructure, the latency is ultimately bounded by the physical distance between sites, and communication time might reach the order of seconds [22]. Under these circumstances it is unreasonable to aim for a system with a fully consistent state in all of its components at a given moment without strongly compromising the performance of the application. Workflow semantics allow us the flexibility to opt for an eventually consistent system: a workflow execution unit (task) processes one or several specific pieces of data; such a unit will begin its execution only when all the pieces it needs are available in the metadata storage; however, the rest of the units continue executing independently. Thus, with a reasonable delay due to the higher latency propagation, the system is guaranteed to be eventually consistent.
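The launch rule above can be sketched as a polling gate: a task starts only once the metadata entries for all of its inputs are visible in the (possibly lagging) local view of the metadata store. This is a simplified sketch under assumed representations, not the paper's implementation; the store is modeled as a plain dict that another component fills asynchronously.

```python
import time

def wait_for_inputs(metadata_store: dict, needed: set,
                    poll_interval: float = 0.01,
                    timeout: float = 5.0) -> bool:
    """Block until every needed metadata entry is visible, or give up.

    Other tasks keep running independently; only this task waits for
    its own inputs to propagate, which is what makes the overall
    system eventually (not strongly) consistent.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if needed.issubset(metadata_store.keys()):
            return True            # all inputs visible: the task may start
        time.sleep(poll_interval)  # updates arrive with propagation delay
    return False                   # inputs never became visible in time
```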
IV. ARCHITECTURE

In previous work we explored different strategies for workflow-driven multisite metadata management, with a focus on file metadata [18]. Our study indicated that a hybrid approach combining decentralized metadata and replication better suits the needs of large-scale multisite workflow execution. It also showed that the right strategy to apply depends on the workflow structure. In this section, we elaborate on top of such observations along two fundamental lines. (1) We present an architecture for multisite cloud workflow processing which features decentralized metadata management. (2) We enrich this architecture with a component specifically dedicated to the management of hot metadata across multiple sites.
Two-level Multisite Architecture. In accordance with our design principles, the basis for our workflow engine is a 2-level multisite architecture, as shown in Figure 2.

1) At the inter-site level, all communication and synchronization is handled through a set of master nodes (M), one per site. One site acts as a global coordinator (master site) and is in charge of scheduling jobs/tasks to each site. Every master node holds a metadata store which is part of the global metadata storage and is directly accessible to all other master nodes.

Fig. 2: Multisite SWf execution architecture w/ decentralized metadata. Dotted lines represent inter-site interactions.

2) At the intra-site level, our system preserves the typical master/slave scheme widely used today on single-site SWfMS: the master node schedules and coordinates a group of slave nodes which execute the workflow tasks. All nodes within a site are connected to a shared file system to access data resources. Metadata updates are propagated to other sites through the master node, which classifies hot and cold metadata as explained below.
Separate Management of Hot and Cold Metadata. Following our characterization of hot metadata from Section II-C, we incorporate an intermediate component which filters out cold metadata operations. This model ensures that: a) hot metadata operations are managed with high priority over the network, and b) cold metadata updates are propagated only during periods of low network congestion. The filter is located in the master node of each site (Figure 3). It separates hot and cold metadata, favoring the propagation of hot metadata, and thus alleviates congestion during metadata-intensive periods. The storage location of the hot metadata is then selected based on some metadata management strategy, as developed below.
Decentralized Hot Metadata Management Strategies. We consider three different alternatives for decentralized metadata management (explored in previous work [18]). Here, we study their application to hot metadata. They all include a metadata server in each of the datacenters where execution nodes are deployed. They differ in the way hot metadata is stored and replicated. We briefly recall their specificities below.

Local without replication (LOC). Every new hot metadata entry is stored at the site where it has been created. For read operations, metadata is queried at each site and the site that stores the data will give the response. If no reply is received within a time threshold, the request is resent. This strategy will typically benefit pipeline-like workflow structures, where consecutive tasks are usually co-located at the same site.

Hashed without replication (DHT). Hot metadata is queried and updated following the principle of a distributed hash table (DHT). The site location of a metadata entry will be determined by a simple hash function applied to its key attribute: the file name in the case of file metadata, and the task id for task metadata. We assume that the impact of inter-site updates will be compensated by the linear complexity of read operations.

Fig. 3: The hot metadata filtering component.

Hashed with local replication (REP). We combine the two previous strategies by keeping both a local record of the hot metadata and a hashed copy. Intuitively, this would reduce the number of inter-site reading requests. We expect this hybrid approach to highlight the trade-offs between metadata locality and DHT linear operations.
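The three placement rules can be sketched as follows. The hash function here (SHA-1 modulo the number of sites) is an illustrative assumption, not necessarily the one used by the system; sites are identified by name.

```python
import hashlib

def dht_site(key: str, sites: list) -> str:
    """Map a metadata key (file name or task id) to one site, DHT-style.

    SHA-1 is an illustrative choice of hash function.
    """
    digest = hashlib.sha1(key.encode()).hexdigest()
    return sites[int(digest, 16) % len(sites)]

def placement(strategy: str, key: str, local_site: str, sites: list) -> set:
    """Return the set of sites where a new hot metadata entry is stored."""
    if strategy == "LOC":   # store only at the site where it was created
        return {local_site}
    if strategy == "DHT":   # store at the hashed location
        return {dht_site(key, sites)}
    if strategy == "REP":   # local copy plus hashed copy
        return {local_site, dht_site(key, sites)}
    raise ValueError(f"unknown strategy: {strategy}")
```

With this formulation the trade-off is visible directly: LOC writes never leave the site, DHT writes cross sites roughly (n-1)/n of the time for n sites, and REP pays at most one extra remote write to gain local reads.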
V. IMPLEMENTATION: DMM-CHIRON

In order to validate our architecture, we have developed a prototype multisite SWfMS that implements hot metadata handling. It provides support for decentralized metadata management, with a distinction between hot and cold metadata. We denote our prototype by Decentralized-Metadata Multisite Chiron (DMM-Chiron).
A. Baseline: Multisite Chiron

This work builds on Multisite Chiron [9], a SWfMS specifically designed for multisite clouds. Its layered architecture is presented in Figure 4; it is composed of nine modules. Multisite Chiron exploits a textual UI to interact with users. The SWf is analyzed by the Job Manager to identify executable activities, i.e., unexecuted jobs for which the input data is ready. The same module generates the executable tasks. Scheduling is done in two phases: the Multisite Task Scheduler at the coordinator site schedules each task to a site, following the random OLB (Opportunistic Load Balancing) algorithm used in [9], while the Single Site Task Scheduler applies the default dynamic FAF (First Activity First) approach used by Chiron [14] to schedule tasks to computing nodes. It is worth clarifying that optimizations to the scheduling algorithms are out of the scope of this paper. Afterwards, it is the Task Executor at each computing node which runs the tasks. Along the execution, metadata is handled by the Metadata Manager at the master site. Since the metadata structure is well defined, we use a relational database, namely PostgreSQL, to store it. All data (input, intermediate and output) are stored in a Shared File System at each site. The file transfer between two different sites is performed by the Multisite File Transfer module.

Fig. 4: Layered architecture of Multisite Chiron [9].

The Multisite Message Communication module of the master node at each site is in charge of synchronization through a master/slave architecture, while the Multisite File Transfer module exploits a peer-to-peer model for data transfers.
B. Combining Multisite and Hot Metadata Management

To implement and evaluate our approach to decentralized metadata management, we further extended Multisite Chiron by adding multisite metadata protocols. We mainly modified two modules, as described in the next sections: the Job Manager and the Metadata Manager.

From Single- to Multisite Job Manager. The Job Manager is the process that verifies if the execution of a job is finished, in order to launch the next jobs. Originally, this verification was done on the metadata stored at the coordinator site. In DMM-Chiron we implement an optimization for each of the hot metadata management strategies (Section IV): for LOC, the local DMM-Chiron instance verifies only the tasks scheduled at that site, and the coordinator site confirms that the execution of a job is finished when all the sites finish their corresponding tasks. For DHT and REP, the master DMM-Chiron instance of the coordinator site checks each task of the job.
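The two verification paths can be sketched as below. The task representation (a dict with a site and a status) is a hypothetical simplification for illustration, not DMM-Chiron's data model.

```python
# Sketch of the job-completion check described above, under an assumed
# task representation: {"site": ..., "status": ...}.

def site_done(tasks: list, site: str) -> bool:
    """LOC: each site verifies only the tasks scheduled on it."""
    return all(t["status"] == "done"
               for t in tasks if t["site"] == site)

def job_done_loc(tasks: list, sites: list) -> bool:
    """LOC: the coordinator confirms completion once every site reports done."""
    return all(site_done(tasks, s) for s in sites)

def job_done_central(tasks: list) -> bool:
    """DHT/REP: the coordinator itself checks each task of the job."""
    return all(t["status"] == "done" for t in tasks)
```

Under LOC the per-task status queries stay intra-site and only the per-site summaries cross the network, which is where the savings in inter-site hot metadata operations come from.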
Introducing Protocols for Multisite Hot Metadata. The following protocols illustrate our system's metadata operations. We recall that metadata operations are triggered by the slave nodes at each site, which are the actual executors of the workflow tasks.

Metadata Write. As shown in Figure 5a, a new metadata record is passed on from the slave to the master node at each site (1). Upon reception, the master filters the record as either hot or cold (2). The hot metadata is assigned by the master node to the metadata storage pool at the corresponding site(s) according to one metadata strategy, cf. Section IV (3a). Created cold metadata is kept locally and propagated asynchronously to the coordinator site during the execution of the job (3b).

Metadata Read. Each master node has access to the entire pool of metadata stores so that it can get hot metadata from any site. Figure 5b shows the process. When a read operation is requested by a slave (1), a master node sends a request to each metadata store (2) and it processes the response that comes first (3), provided such response is not an empty set. This mechanism ensures that the master node gets the required metadata in the shortest time. During the execution, DMM-Chiron gathers all the task metadata stored at each site to verify if the execution of a job is finished.

Fig. 5: Metadata Protocols: (a) Write, (b) Read.
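The read protocol above (query every store in parallel, keep the first non-empty answer) can be sketched as follows. Per-site stores are modeled as plain dicts for illustration; in the real system they are remote metadata servers.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def read_metadata(key: str, stores: dict):
    """Query all per-site stores in parallel; return the first
    non-empty response, as in the Metadata Read protocol sketch.

    `stores` maps site name -> dict-like metadata store (illustrative).
    """
    with ThreadPoolExecutor(max_workers=len(stores)) as pool:
        futures = [pool.submit(lambda s=s: s.get(key))
                   for s in stores.values()]
        for fut in as_completed(futures):
            result = fut.result()
            if result is not None:  # ignore empty (miss) responses
                return result
    return None                      # no site holds this entry
```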
VI. EXPERIMENTAL EVALUATION

Throughout the following experiments we compare our results to a multisite SWfMS with centralized metadata management, which we recall being the state-of-the-art configuration. We use Multisite Chiron as an example of such an architecture.

A. Experimental Setup

DMM-Chiron was deployed on the Microsoft Azure cloud [3] using a total of 27 nodes of A4 standard virtual machines (8 cores, 14 GB memory). The VMs were evenly distributed among three datacenters: West Europe (WEU, Netherlands), North Europe (NEU, Ireland) and Central US (CUS, Iowa). Control messages between master nodes are delivered through the Azure Bus [23].
B. Use Cases

Montage is a toolkit created by the NASA/IPAC Infrared Science Archive and used to generate custom mosaics of the sky from a set of images [24]. Additional input for the workflow includes the desired region of the sky, as well as the size of the mosaic in terms of square degrees. We model the Montage SWf using the proposal of Juve et al. [15].

BuzzFlow is a data-intensive SWf that searches for trends and measures correlations in scientific publications [25]. It analyses data collected from bibliography databases such as DBLP or PubMed. Buzz is composed of thirteen jobs.
C. Different Strategies for Different Workflow Structures

Our hypothesis is that no single decentralized strategy fits all workflow structures: a highly parallel task would exhibit different metadata access patterns than a concurrent data gathering task. Thus, the improvements brought to one type of workflow by either of the strategies might turn out to be detrimental for another. To evaluate this hypothesis, we ran several combinations of our strategies with the featured workflows.

Figure 6 shows the average execution time for the Montage workflow generating 0.5-, 1-, and 2-degree mosaics of the sky, using in all the cases a 5.5 GB image database distributed across the three datacenters. With a larger degree, a larger volume of intermediate data is handled and a mosaic of higher resolution is produced.

Fig. 6: Montage execution time for different strategies and degrees. Avg. intermediate data shown in parenthesis.

In the chart we note in the first place a clear time gain of up to 28% by using a local distribution strategy instead of a centralized one, for all the degrees. This result was expected since the hot metadata is now managed in parallel by three instances instead of one, and it is only the cold metadata that is forwarded to the coordinator site for scheduling purposes (and used at most one time). We observe that for mosaics of degree 1 and under, the use of distributed hashed storage also outperforms the centralized version. However, we note a performance degradation in the hashed strategies, starting at 1-degree and getting more evident at 2-degree. We attribute this to the fact that there is a larger number of long-distance hot metadata operations compared to the centralized approach: with hashed strategies, 1 out of 3 operations is carried out on average between CUS and NEU. In the centralized approach, NEU only performs operations in the WEU site, thus such long-latency operations are reduced. We also associate this performance drop with the size of intermediate data being handled by the system: while we try to minimize inter-site data transfers, with larger volumes of data such transfers affect the execution time up to a certain degree and independently of the metadata management scheme. We conclude that while the DHT method might seem efficient due to linear read and write operations, it is not well suited for geo-distributed executions, which favor locality and penalize remote operations.
In a similar experiment, we validated DMM-Chiron using the Buzz workflow, which is rather data-intensive, with two DBLP database dumps of 60 MB and 1.2 GB. The results are shown in Figure 7; note that the left and right Y-axes differ by one order of magnitude. We observe again that DMM-Chiron brings a general improvement in the completion time with respect to the centralized implementation: 10% for LOC in the 60 MB dataset and 6% for 1.2 GB, while for DHT and REP the time improvement was of less than 5%.

In order to better understand the performance improvements brought by DMM-Chiron, and also to identify the reason for the low runtime gain on the Buzz workflow, we evaluated Montage and Buzz at a per-job granularity. The results are presented in the next section.

Fig. 7: Buzz workflow execution time. Left Y-axis scale corresponds to the 60 MB execution, right Y-axis to 1.2 GB.

Albeit the time gains perceived in the experiments might not seem significant at first glance, two important aspects must be taken into consideration:

Optimization at no cost. Our proposed solutions are implemented using exactly the same number of resources as their counterpart centralized approaches: the decentralized metadata stores are deployed within the master nodes of each site and the control messages are sent through the same existing channels. This means that such gains (if small) come at no additional cost for the user.

Actual monetary savings. Our longest experiment (Buzz 1.2 GB) runs in the order of hundreds of minutes. With today's scientific experiments running at this scale and beyond, a gain of 10% actually implies savings of hours of cloud computing resources.
D. Zoom on Multi-task Jobs

We call a job multi-task when its execution consists of more than a single task. In DMM-Chiron, the various tasks of such jobs are evenly distributed to the available sites and thus can be executed in parallel. We argue that it is precisely in these kinds of jobs that DMM-Chiron yields its best performance.

Figure 8 shows a breakdown of the Buzz and Montage workflows with the proportional size of each of their jobs from two different perspectives: task count and average execution time. Our goal is to characterize the most relevant jobs in each workflow by number of tasks and confirm their relevance by looking at their relative execution time. In Buzz, we notice that both metrics are highly dominated by three jobs: Buzz (676 tasks), BuzzHistory (2134) and HistogramCreator (2134), while the rest are so small that they are barely noticeable. FileSplit comes fourth in terms of execution time and it is indeed the only remaining multi-task job (3 tasks). Likewise, we identify for Montage the only four multi-task jobs: mProject (45 tasks), prepare (45), mDiff (107) and mBackground (45).

Fig. 8: Workflow per-job breakdown: (a) Buzz, (b) Montage. Very small jobs are enhanced for visibility.

In Figures 9 and 10 we look into the execution time of the multi-task jobs of Buzz and Montage, respectively. Figure 9 corresponds to the Buzz SWf with 60 MB input data. We observe that except for one case, namely the Buzz job with REP, the decentralized strategies considerably outperform the baseline (up to 20.3% for LOC, 16.2% for DHT and 14.4% for REP).

Fig. 9: Execution time of multi-task jobs on the Buzz workflow with 60 MB input data.

In the case of FileSplit, we argue that the execution time is too short and the number of tasks too small to reveal a clear improvement. However, the other three jobs confirm that DMM-Chiron performs better for highly parallel jobs. It is important to note that these gains are much larger than those of the overall completion time (Figure 7) since there are still a number of workloads executed sequentially, which have not been optimized by the current release of DMM-Chiron.

Correspondingly, Figure 10 shows the execution of each multi-task job for the Montage SWf of 0.5 degree. The figure reveals that, on average, hot metadata distribution substantially improves on centralized management in most cases (up to 39.5% for LOC, 52.8% for DHT and 64.1% for REP). However, we notice some unexpected peaks and drops, specifically in the hashed approaches. After a number of executions, we believe that such cases are due to common network latency variations of the cloud environment, added to the fact that the execution time for the jobs is rather short (in the order of seconds).
VII. RELATED WORK

Centralized approaches. Metadata is usually handled by means of centralized registries implemented on top of relational databases, which only hold static information about data locations. Systems like Taverna [8], Pegasus [7] or Chiron [14] leverage such schemes, typically involving a single server that processes all the requests. In case of increased client concurrency or high I/O pressure, however, the single metadata server can quickly become a performance bottleneck. Also, workloads involving many small files, which translate into heavy metadata accesses, are penalized by the overheads from transactions and locking [26], [27]. A lightweight alternative to databases is indexing the metadata, although most indexing techniques [28], [29] are designed for data rather than metadata. Even the dedicated index-based metadata schemes [30] use a centralized index and are not adequate for large-scale workflows, nor can they scale to multisite deployments.

Fig. 10: Execution time of multi-task jobs on the Montage workflow of 0.5 degree.

Distributed approaches. Some workflow systems opt to rely on distributed file systems that partition the metadata and store it at each node (e.g., [31], [32]), in a shared-nothing architecture, as a first step towards complete geographical distribution. Hashing is the most common technique for uniform partitioning: it consists of assigning metadata to nodes based on a hash of a file identifier. Giraffa [33] uses full path names as key in the underlying HBase [34] store. Lustre [35] hashes the tail of the file name and the ID of the parent directory. Similar hashing schemes are used by [36], [37], [38] with a low memory footprint, granting access to data in almost constant time. FusionFS [39] implements a distributed metadata management based on DHTs as well. Chiron itself has a version with distributed control using an in-memory distributed DBMS [40]. All these systems are well suited for single-cluster deployments or workflows that run on supercomputers. However, they are unable to meet the practical requirements of workflows executed on clouds. Similarly to us, CalvinFS [10] uses hash-partitioned key-value metadata across geo-distributed datacenters to handle small files, yet it does not account for workflow semantics.

Hybrid approaches. More recently, Zhao et al. [41] proposed using both a distributed hash table (FusionFS [39]) and a centralized database (SPADE [42]) to manage the metadata. Similarly to us, their metadata model includes both file operations and provenance information. However, they do not make the distinction between hot and cold metadata, and they mainly target single-site clusters.
VIII. CONCLUSION

In this paper we introduced the concept of hot metadata for scientific workflows running in large, geographically distributed and highly dynamic environments. Based on it, we designed a hybrid decentralized and distributed model for handling metadata in multisite clouds. Our proposal is able to optimize the access to and ensure the availability of hot metadata, while effectively hiding the inter-site network latencies and remaining non-intrusive and easy to deploy. Coupled with a scientific workflow engine, our strategies showed an improvement of up to 28% for the whole workflow's completion time and 50% for specific highly parallel jobs, compared to state-of-the-art centralized solutions, at no additional cost. Encouraged by these results, we plan to broaden the scope of our work and consider the impact of heterogeneous multisite environments on the hot metadata strategies. We are also looking at the possibility of adding data location awareness in order to minimize the impact of large intermediate data transfers. Another interesting direction to explore is integrating real-time monitoring information about the executed jobs in order to dynamically balance the hot metadata load according to each site's live capacity and performance.
ACKNOWLEDGMENT

This work is supported by the MSR-Inria Joint Centre, the ANR OverFlow project, and partially performed in the context of the Computational Biology Institute. The experiments were carried out using the Azure infrastructure provided by Microsoft in the framework of the Z-CloudFlow project. Luis is partially funded by CONACyT, Mexico. Ji is partially funded by the EU H2020 Programme, MCTI/RNP-Brazil, CNPq, FAPERJ, and Inria (MUSIC project).