Understanding Memory Resource Management in VMware ESX Server
VMware White Paper

Table of Contents
1. Introduction
2. ESX Memory Management Overview
   2.1 Terminology
   2.2 Memory Virtualization Basics
   2.3 Memory Management Basics in ESX
3. Memory Reclamation in ESX
   3.1 Motivation
   3.2 Transparent Page Sharing (TPS)
   3.3 Ballooning
   3.4 Hypervisor Swapping
   3.5 When to Reclaim Host Memory
4. ESX Memory Allocation Management for Multiple Virtual Machines
5. Performance Evaluation
   5.1 Experimental Environment
   5.2 Transparent Page Sharing Performance
   5.3 Ballooning vs. Swapping
      5.3.1 Linux Kernel Compile
      5.3.2 Oracle/Swingbench
      5.3.3 SPECjbb
      5.3.4 Microsoft Exchange Server 2007
6. Best Practices
7. References

1. Introduction
VMware ESX is a hypervisor designed to efficiently manage hardware resources, including CPU, memory, storage, and network, among multiple concurrent virtual machines. This paper describes the basic memory management concepts in ESX and the configuration options available, and provides results to show the performance impact of these options. The focus of this paper is on presenting the fundamental concepts behind these options. More details can be found in "Memory Resource Management in VMware ESX Server" [1].
ESX uses high-level resource management policies to compute a target memory allocation for each virtual machine (VM) based on the current system load and the parameter settings for the virtual machine (shares, reservation, and limit [2]). The computed target allocation is used to guide the dynamic adjustment of each virtual machine's memory allocation. In cases where host memory is overcommitted, the target allocations are still achieved by invoking several lower-level mechanisms to reclaim memory from virtual machines.
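To make the relationship among shares, reservation, and limit concrete, the sketch below shows one simplified way such a target could be computed: a share-based entitlement clamped by the reservation (a floor) and the limit (a ceiling), never exceeding the configured size. This is an illustration only, not ESX's actual allocation algorithm; the function name, parameters, and the precomputed "entitlement" value are hypothetical.

```python
# Simplified illustration (not ESX's allocation algorithm): a VM's target
# allocation is derived from its share-based entitlement and then clamped
# by its reservation and limit, never exceeding its configured memory size.
def target_allocation(entitlement_mb, reservation_mb, limit_mb, configured_mb):
    target = min(entitlement_mb, configured_mb)
    target = max(target, reservation_mb)   # reservation acts as a guaranteed floor
    target = min(target, limit_mb)         # limit acts as a hard ceiling
    return target

# A 4096MB VM entitled to 3000MB by its shares, with a 1024MB reservation
# and a 2048MB limit, gets a 2048MB target even if host memory is free.
print(target_allocation(3000, 1024, 2048, 4096))   # -> 2048
```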
This paper assumes a pure virtualization environment in which the guest operating system running inside the virtual machine is not modified to facilitate virtualization (an approach often referred to as paravirtualization). Knowledge of ESX architecture will help you understand the concepts presented in this paper.
In order to quickly monitor virtual machine memory usage, the VMware vSphere Client exposes two memory statistics in the resource summary: Consumed Host Memory and Active Guest Memory.

Figure 1: Host and Guest Memory usage in the vSphere Client

Consumed Host Memory is defined as the amount of host memory that is allocated to the virtual machine. Active Guest Memory is defined as the amount of guest memory that is currently being used by the guest operating system and its applications. These two statistics are quite useful for analyzing the memory status of the virtual machine and providing hints to address potential performance issues. This paper helps answer these questions: Why is the Consumed Host Memory so high? Why is the Consumed Host Memory sometimes much larger than the Active Guest Memory? Why is the Active Guest Memory different from what is seen inside the guest operating system? These questions cannot be easily answered without understanding the basic memory management concepts in ESX. Understanding how ESX manages memory will also make the performance implications of changing ESX memory management parameters clearer.

The vSphere Client can also display performance charts for the following memory statistics: active, shared, consumed, granted, overhead, balloon, swapped, swapped-in rate, and swapped-out rate. A complete discussion of these metrics can be found in "Memory Performance Chart Metrics in the vSphere Client" [3] and "VirtualCenter Memory Statistics Definitions" [4].
The rest of the paper is organized as follows. Section 2 presents an overview of ESX memory management concepts. Section 3 discusses the memory reclamation techniques used in ESX. Section 4 describes how ESX allocates host memory to virtual machines when the host is under memory pressure. Section 5 presents and discusses the performance results for different memory reclamation techniques. Finally, Section 6 discusses best practices with respect to host and guest memory usage.
2. ESX Memory Management Overview

2.1 Terminology

The following terminology is used throughout this paper. Host physical memory refers to the memory that is visible to the hypervisor as available on the system. (The terms host physical memory and host memory are used interchangeably in this paper; they are also equivalent to the term machine memory used in [1].) Guest physical memory refers to the memory that is visible to the guest operating system running in the virtual machine. Guest virtual memory refers to a continuous virtual address space presented by the guest operating system to applications. It is the memory that is visible to the applications running inside the virtual machine.

Guest physical memory is backed by host physical memory, which means the hypervisor provides a mapping from the guest to the host memory. The memory transfer between the guest physical memory and the guest swap device is referred to as guest-level paging and is driven by the guest operating system. The memory transfer between guest physical memory and the host swap device is referred to as hypervisor swapping, which is driven by the hypervisor.
2.2 Memory Virtualization Basics

Virtual memory is a well-known technique used in most general-purpose operating systems, and almost all modern processors have hardware to support it. Virtual memory creates a uniform virtual address space for applications and allows the operating system and hardware to handle the address translation between the virtual address space and the physical address space. This technique not only simplifies the programmer's work, but also adapts the execution environment to support large address spaces, process protection, file mapping, and swapping in modern computer systems.

When running a virtual machine, the hypervisor creates a contiguous addressable memory space for the virtual machine. This memory space has the same properties as the virtual address space presented to the applications by the guest operating system. This allows the hypervisor to run multiple virtual machines simultaneously while protecting the memory of each virtual machine from being accessed by others. Therefore, from the view of the application running inside the virtual machine, the hypervisor adds an extra level of address translation that maps the guest physical address to the host physical address. As a result, there are three virtual memory layers in ESX: guest virtual memory, guest physical memory, and host physical memory. Their relationships are illustrated in Figure 2(a).
Figure 2: Virtual memory levels (a) and memory address translation (b) in ESX

As shown in Figure 2(b), in ESX, the address translation between guest physical memory and host physical memory is maintained by the hypervisor using a physical memory mapping data structure, or pmap, for each virtual machine. The hypervisor intercepts all virtual machine instructions that manipulate the hardware translation lookaside buffer (TLB) contents or guest operating system page tables, which contain the virtual to physical address mapping. The actual hardware TLB state is updated based on separate shadow page tables, which contain the guest virtual to host physical address mapping. The shadow page tables maintain consistency with the guest virtual to guest physical address mapping in the guest page tables and the guest physical to host physical address mapping in the pmap data structure. This approach removes the virtualization overhead for the virtual machine's normal memory accesses because the hardware TLB will cache the direct guest virtual to host physical memory address translations read from the shadow page tables.
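As a rough illustration of the mapping layers just described, the following Python sketch composes a guest page table (guest virtual to guest physical) with the hypervisor's pmap (guest physical to host physical) to produce a shadow mapping from guest virtual pages directly to host physical pages. It is a toy model under those assumptions, not ESX code; the data structure and function names are hypothetical.

```python
# Illustrative sketch (not ESX source): a software MMU can keep a shadow page
# table mapping guest virtual pages directly to host physical pages by
# composing the guest page table with the hypervisor's pmap. The hardware TLB
# is then filled from the shadow table, so ordinary accesses need no extra hop.

GuestPageTable = dict   # guest virtual page number  -> guest physical page number
Pmap = dict             # guest physical page number -> host physical page number

def rebuild_shadow(guest_pt: GuestPageTable, pmap: Pmap) -> dict:
    """Recompute the shadow table after the guest edits its page tables."""
    return {gvpn: pmap[gppn] for gvpn, gppn in guest_pt.items() if gppn in pmap}

# Example: the guest maps virtual page 7 to its "physical" page 3; the
# hypervisor backs guest physical page 3 with host physical page 42.
guest_pt = {7: 3}
pmap = {3: 42}
shadow = rebuild_shadow(guest_pt, pmap)
assert shadow[7] == 42   # TLB entries are loaded from this direct mapping
```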
Note that the extra level of guest physical to host physical memory indirection is extremely powerful in the virtualization environment. For example, ESX can easily remap a virtual machine's host physical memory to files or other devices in a manner that is completely transparent to the virtual machine.

Recently, some new-generation CPUs, such as third-generation AMD Opteron and Intel Xeon 5500 series processors, have provided hardware support for memory virtualization by using two layers of page tables in hardware. One layer stores the guest virtual to guest physical memory address translation, and the other layer stores the guest physical to host physical memory address translation. These two page tables are synchronized using processor hardware. Hardware-assisted memory virtualization eliminates the overhead required to keep shadow page tables in synchronization with guest page tables in software memory virtualization. For more information about hardware-assisted memory virtualization, see "Performance Evaluation of Intel EPT Hardware Assist" [5] and "Performance Evaluation of AMD RVI Hardware Assist" [6].
2.3 Memory Management Basics in ESX

Before discussing how ESX manages memory for virtual machines, it is useful to first understand how the application, guest operating system, hypervisor, and virtual machine manage memory at their respective layers.

An application starts and uses the interfaces provided by the operating system to explicitly allocate or deallocate virtual memory during execution. In a non-virtual environment, the operating system assumes it owns all physical memory in the system. The hardware does not provide interfaces for the operating system to explicitly "allocate" or "free" physical memory. The operating system establishes the definitions of "allocated" and "free" physical memory. Different operating systems have different implementations to realize this abstraction. One example is that the operating system maintains an "allocated" list and a "free" list, so whether or not a physical page is free depends on which list the page currently resides in.
Because a virtual machine runs an operating system and several applications, the virtual machine's memory management properties combine both application and operating system memory management properties. Like an application, when a virtual machine first starts, it has no pre-allocated physical memory. Like an operating system, the virtual machine cannot explicitly allocate host physical memory through any standard interfaces. The hypervisor also creates the definitions of "allocated" and "free" host memory in its own data structures.

The hypervisor intercepts the virtual machine's memory accesses and allocates host physical memory for the virtual machine on its first access to the memory. In order to avoid information leaking among virtual machines, the hypervisor always writes zeroes to the host physical memory before assigning it to a virtual machine.

Virtual machine memory deallocation acts just like that of an operating system: the guest operating system frees a piece of guest physical memory by adding the corresponding memory page numbers to the guest free list, but the data of the "freed" memory may not be modified at all. As a result, when a particular piece of guest physical memory is freed, the mapped host physical memory will usually not change its state; only the guest free list will be changed.

The hypervisor knows when to allocate host physical memory for a virtual machine because the first memory access from the virtual machine to host physical memory causes a page fault that can be easily captured by the hypervisor. However, it is difficult for the hypervisor to know when to free host physical memory upon virtual machine memory deallocation because the guest operating system free list is generally not publicly accessible. Hence, the hypervisor cannot easily find out the location of the free list and monitor its changes.
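The following toy sketch illustrates the behavior described above: host memory is allocated lazily on the guest's first touch of a guest physical page, the page is zero-filled before being handed to the virtual machine, and later touches (including guest free/re-allocate cycles of the same guest physical page) reuse the same backing. This is a simplified model, not ESX code; the class and method names are hypothetical.

```python
# Illustrative sketch (not ESX code): the hypervisor backs a guest physical
# page with host memory only on the first access (a page fault it intercepts)
# and zero-fills the host page so no data leaks between virtual machines.
PAGE_SIZE = 4096

class ToyHypervisor:
    def __init__(self):
        self.pmap = {}        # (vm_id, guest physical page) -> host page number
        self.host_pages = {}  # host page number -> bytearray
        self.next_hpn = 0

    def on_first_touch(self, vm_id, gppn):
        """Page-fault handler: allocate and zero a host page on first access."""
        key = (vm_id, gppn)
        if key not in self.pmap:                         # only the first touch allocates
            hpn = self.next_hpn
            self.next_hpn += 1
            self.host_pages[hpn] = bytearray(PAGE_SIZE)  # zeroed before handing to the VM
            self.pmap[key] = hpn
        return self.pmap[key]

hv = ToyHypervisor()
hv.on_first_touch(vm_id=1, gppn=10)   # allocates host memory
hv.on_first_touch(vm_id=1, gppn=10)   # reuses the same host page; no new allocation
```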
Although the hypervisor cannot reclaim host memory when the guest operating system frees guest physical memory, this does not mean that the host memory, no matter how large it is, will be used up by a virtual machine that repeatedly allocates and frees memory. This is because the hypervisor does not allocate host physical memory on every virtual machine memory allocation. It only allocates host physical memory when the virtual machine touches physical memory that it has never touched before. If a virtual machine frequently allocates and frees memory, presumably the same guest physical memory is being allocated and freed again and again. Therefore, the hypervisor allocates host physical memory for the first memory allocation only, and the guest then reuses the same host physical memory for the rest of the allocations. That is, if a virtual machine's entire guest physical memory (configured memory) has been backed by host physical memory, the hypervisor does not need to allocate any host physical memory for this virtual machine anymore.
This means that the following always holds true: VM's host memory usage <= VM's guest memory size + VM's overhead memory.

5. Performance Evaluation

5.1 Experimental Environment

The workloads and virtual machine configurations used in the experiments were as follows:

Kernel Compile
- Benchmark command: … /dev/null
- Number of runs: 4
- VM configuration: 1 vCPU, 512MB memory

Swingbench
- Database: Oracle 11g
- Functional benchmark: Order Entry
- Number of users: 30
- Runtime: 20 minutes
- Number of runs: 3
- VM configuration: 4 vCPUs, 4GB memory

Exchange 2007
- Server: Microsoft Exchange 2007
- Loadgen client: 2000 heavy Exchange users
- VM configuration: 4 vCPUs, 12GB memory

The guest operating system running inside the SPECjbb, kernel compile, and Swingbench virtual machines was 64-bit Red Hat Enterprise Linux 5.2 Server. The guest operating system running inside the Exchange virtual machine was Windows Server 2008. For SPECjbb2005 and Swingbench, throughput was measured by calculating the number of transactions per second. For kernel compile, performance was measured by calculating the inverse of the compilation time. For Exchange, performance was measured using the average Remote Procedure Call (RPC) latency. In addition, for Swingbench and Exchange, the client applications were installed on a separate native machine.
5.2 Transparent Page Sharing Performance

In this experiment, two instances of the workloads were run, and the overall performance with page sharing enabled was compared to that with page sharing disabled. The focus was on evaluating the overhead of page scanning. Since the page scan rate (the number of scanned pages per second) is largely determined by Mem.ShareScanTime, in addition to the default of 60 minutes, the minimal Mem.ShareScanTime of 10 minutes was also tested, which potentially introduces the highest page scanning overhead.
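As a back-of-the-envelope illustration, assuming Mem.ShareScanTime bounds the time in which a virtual machine's entire memory is scanned once (an assumption about the setting's semantics, not a statement of ESX internals), the scan rate for the 512MB kernel-compile virtual machine used here would be roughly:

```python
# Rough estimate under the stated assumption about Mem.ShareScanTime.
# A smaller ShareScanTime therefore means a higher page scan rate.
def scan_rate_pages_per_sec(vm_memory_mb, share_scan_time_min, page_kb=4):
    pages = vm_memory_mb * 1024 // page_kb
    return pages / (share_scan_time_min * 60)

print(scan_rate_pages_per_sec(512, 60))   # ~36 pages/s with the default setting
print(scan_rate_pages_per_sec(512, 10))   # ~218 pages/s with the minimal setting
```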
Figure 9: Performance impact of transparent page sharing (normalized throughput of SPECjbb, kernel compile, and Swingbench with page sharing off, the default page sharing setting, and Mem.ShareScanTime set to 10 minutes)

Figure 9 confirms that enabling page sharing introduces a negligible performance overhead in the default setting, and only <1 percent overhead when Mem.ShareScanTime is 10 minutes, for all workloads.
Page sharing sometimes improves performance because the virtual machine's host memory footprint is reduced so that it fits the processor cache better. Besides page scanning, breaking CoW pages is another source of page sharing overhead. Unfortunately, such overhead is highly application dependent, and it is difficult to evaluate it through configurable options. Therefore, the overhead of breaking CoW pages was omitted in this experiment.
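Although the detailed description of transparent page sharing lies in an earlier section, the following toy model sketches the general idea behind the page scanning and CoW-breaking costs discussed here: pages with identical content are detected via a content hash (verified byte-for-byte) and backed by a single host page, and a later write to a shared page first copies it to a private page. This is an illustrative sketch under those assumptions, not VMware's implementation; all names are hypothetical.

```python
import hashlib

class SharedMemory:
    """Toy model of content-based page sharing with copy-on-write (CoW)."""

    def __init__(self):
        self.host_pages = {}   # host page number -> bytes
        self.hash_index = {}   # content hash -> host page number
        self.refcount = {}     # host page number -> number of guest pages sharing it
        self.next_hpn = 0

    def _new_page(self, data):
        hpn = self.next_hpn
        self.next_hpn += 1
        self.host_pages[hpn] = data
        self.refcount[hpn] = 1
        return hpn

    def scan_page(self, data):
        """Return a host page for this content, sharing it if an identical page exists."""
        key = hashlib.sha1(data).digest()
        hpn = self.hash_index.get(key)
        if hpn is not None and self.host_pages[hpn] == data:   # verify, don't trust the hash alone
            self.refcount[hpn] += 1                            # share: one host page, many guest pages
            return hpn
        hpn = self._new_page(data)
        self.hash_index.setdefault(key, hpn)
        return hpn

    def write(self, hpn, data):
        """A write to a shared page breaks CoW: copy to a private host page first."""
        if self.refcount[hpn] > 1:
            self.refcount[hpn] -= 1
            hpn = self._new_page(self.host_pages[hpn])
        self.host_pages[hpn] = data
        return hpn

mem = SharedMemory()
a = mem.scan_page(b"\x00" * 4096)     # first zero page
b = mem.scan_page(b"\x00" * 4096)     # identical content -> shared, refcount grows
assert a == b and mem.refcount[a] == 2
c = mem.write(b, b"\x01" * 4096)      # writing breaks CoW; c is a new private page
assert c != a
```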
5.3 Ballooning vs. Swapping

In the following experiments, virtual machine memory reclamation was enforced by changing each virtual machine's memory limit from the default (unlimited) to values smaller than the configured virtual machine memory size. Page sharing was turned off to isolate the performance impact of ballooning or swapping. Since the host memory is much larger than the virtual machine memory size, the host free memory is always in the high state; hence, by default, ESX only uses ballooning to reclaim memory. The balloon driver was turned off to obtain the performance of using swapping only. The ballooned and swapped memory sizes were also collected when the virtual machine ran steadily.
5.3.1 Linux Kernel Compile

Figure 10 presents the throughput of the kernel compile workload with different memory limits when using ballooning or swapping. This experiment was contrived to use only ballooning or only swapping, not both. While this case will not often occur in production environments, it shows the performance penalty due to either technology on its own. The throughput is normalized to the case where virtual machine memory is not reclaimed.
Figure 10: Performance of kernel compile when using ballooning vs. swapping (normalized throughput and ballooned/swapped memory size, in MB, for memory limits of 512, 448, 384, 320, 256, 192, and 128MB)

By using ballooning, the kernel compile virtual machine only suffers a 3 percent throughput loss even when the memory limit is as low as 128MB (1/4 of the configured virtual machine memory size).
This is because the kernel compile workload has very little memory reuse, and most of the guest physical memory is used as buffer cache for the kernel source files. With ballooning, the guest operating system reclaims guest physical memory upon the balloon driver's allocation request by dropping the buffer pages instead of paging them out to the guest virtual swap device. Because dropped buffer pages are not reused frequently, the performance impact of using ballooning is trivial. However, with hypervisor swapping, the selected guest buffer pages are unnecessarily swapped out to the host swap device, and some guest kernel pages are occasionally swapped out as well, making the performance of the virtual machine degrade as the memory limit decreases. When the memory limit is 128MB, the throughput loss is about 34 percent in the swapping case. Balloon inflation is therefore a better approach to memory reclamation from a performance perspective.
Figure 10 also illustrates that as the memory limit decreases, the ballooned and swapped memory sizes increase almost linearly. There is, however, a difference between the ballooned memory size and the swapped memory size. In the ballooning cases, when virtual machine memory usage exceeds the specified limit, the balloon driver cannot force the guest operating system to page out guest physical pages immediately unless the balloon driver has used up most of the free guest physical memory, which puts the guest operating system under memory pressure. In the swapping cases, however, as long as the virtual machine memory usage exceeds the specified limit, the extra pages will be swapped out immediately. Therefore, the ballooned memory size is roughly equal to the virtual machine memory size minus the specified limit, which means that the free guest physical memory is included. The swapped memory size is roughly equal to the virtual machine's host memory usage minus the specified limit. In the kernel compile virtual machine, since most of the guest physical pages are used to buffer the workload files, the virtual machine's effective host memory usage is close to the virtual machine memory size. Hence, the swapped memory size is similar to the ballooned memory size.
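A quick worked example of the relations just described (an approximation implied by the text, not a formula ESX reports):

```python
# Rough estimate: ballooned size tracks configured size minus the limit,
# while swapped size tracks the VM's host memory usage minus the limit.
def estimated_reclaimed_mb(configured_mb, host_usage_mb, limit_mb):
    ballooned = max(configured_mb - limit_mb, 0)
    swapped = max(host_usage_mb - limit_mb, 0)
    return ballooned, swapped

# Kernel-compile VM: 512MB configured, host usage close to 512MB, 128MB limit.
print(estimated_reclaimed_mb(512, 512, 128))   # -> (384, 384), matching Figure 10's trend
```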
5.3.2 Oracle/Swingbench

Figure 11 presents the throughput of an Oracle database tested by the Swingbench workload with different memory limits when using ballooning vs. swapping. The throughput is normalized to the case where virtual machine memory is not reclaimed.
Figure 11: Performance of Swingbench when using ballooning vs. swapping (normalized throughput and ballooned/swapped memory size, in MB, for memory limits from 3840MB down to 1536MB)

As in the kernel compile test, using ballooning barely impacts the throughput of the Swingbench virtual machine until the memory limit decreases below 2048MB.
This occurs when the guest operating system starts to page out physical pages that are heavily reused by the Oracle database. In contrast to ballooning, using swapping causes a significant throughput penalty. The throughput loss is already 17 percent when the memory limit is 3584MB. In hypervisor swapping, some guest buffer pages are unnecessarily swapped out, and some guest kernel or performance-critical database pages are also unintentionally swapped out because of the random page selection policy. For the Swingbench virtual machine, the virtual machine's host memory usage is very close to the virtual machine memory size, so the swapped memory size is very close to the ballooned memory size.
5.3.3 SPECjbb

Figure 12 presents the throughput of the SPECjbb workload with different memory limits when using ballooning vs. swapping. The throughput is normalized to the case where virtual machine memory is not reclaimed.
Figure 12: Performance of SPECjbb when using ballooning vs. swapping (normalized throughput and ballooned/swapped memory size, in MB, for memory limits of 3072, 2816, 2560, 2304, 2048, 1792, and 1536MB)

From this figure, it can be seen that when the virtual machine memory limit decreases below 2816MB, the throughput of SPECjbb degrades significantly in both the ballooning and swapping cases.
When the memory limit is reduced to 2048MB, the throughput losses are 89 percent and 96 percent for ballooning and swapping, respectively. Since the configured JVM heap size is 2.5GB, the actual virtual machine working set size is around 2.5GB plus the guest operating system memory usage (about 300MB). When the virtual machine memory limit falls below 2816MB, the host memory cannot back the entire virtual machine working set, so the virtual machine starts to suffer from guest-level paging in the ballooning cases or hypervisor swapping in the swapping cases. Since SPECjbb is an extremely memory-intensive workload, its throughput is largely determined by the virtual machine memory hit rate. In this instance, the virtual machine memory hit rate is defined as the percentage of guest memory accesses that result in host physical memory hits. A higher memory hit rate means higher throughput for the SPECjbb workload. Since ballooning and host swapping similarly decrease the memory hit rate, both guest-level paging and hypervisor swapping largely hurt SPECjbb performance.
Surprisingly, when the memory limit is 2560MB or 2304MB, using swapping obtains higher throughput than using ballooning. This seems counterintuitive because hypervisor swapping typically causes a higher performance penalty. One reasonable explanation is that the random page selection policy used in hypervisor swapping happens to favor the access patterns of the SPECjbb virtual machine. More specifically, with ballooning, when the guest operating system (Linux in this case) pages out guest physical pages to satisfy the balloon driver's allocation request, it chooses the targets using an LRU-approximating policy. However, the SPECjbb workload often traverses all the allocated guest physical memory iteratively. For example, the JVM garbage collector periodically scans the entire heap to free memory. This behavior falls into a well-known LRU pathological case in which the memory hit rate drops dramatically even when the memory size is only slightly smaller than the working set size. In contrast, when using hypervisor swapping, the swapped physical pages are randomly selected by the hypervisor, which makes the memory hit rate decrease more gradually as the memory limit decreases. That is why using swapping achieves higher throughput when the memory limit is smaller than the virtual machine's working set size. However, when the memory limit drops below 2304MB, the virtual machine memory hit rate is equivalently low in both the swapping and ballooning cases, and using swapping starts to cause worse performance compared to using ballooning. Note that the above two configurations where swapping outperforms ballooning are rare pathological cases for ballooning performance. In most cases, using ballooning achieves much better performance than using swapping.
Since the SPECjbb virtual machine's working set size (~2.8GB) is much smaller than the configured virtual machine memory size (4GB), the ballooned memory size is much higher than the swapped memory size.
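The LRU pathology described above can be reproduced with a small simulation: a workload that cyclically scans slightly more pages than fit in memory gets almost no hits under LRU-style eviction, while random eviction degrades more gradually. This is a toy cache model to illustrate the reasoning, not a model of ESX or of the guest kernel; all names are hypothetical.

```python
import random
from collections import OrderedDict

def cyclic_hit_rate(num_pages, cache_size, policy, passes=5, seed=0):
    """Hit rate of a small cache under a strictly cyclic scan of num_pages pages."""
    rng = random.Random(seed)
    cache = OrderedDict()              # page -> None, ordered by recency for LRU
    hits = accesses = 0
    for _ in range(passes):
        for page in range(num_pages):
            accesses += 1
            if page in cache:
                hits += 1
                cache.move_to_end(page)                 # refresh recency on a hit
            else:
                if len(cache) >= cache_size:
                    if policy == "lru":
                        cache.popitem(last=False)       # evict the least recently used page
                    else:                               # random eviction
                        victim = rng.choice(list(cache))
                        del cache[victim]
                cache[page] = None
    return hits / accesses

# Working set slightly larger than available memory, as in the SPECjbb case:
# LRU collapses to near-zero hits, random eviction keeps a substantial hit rate.
for policy in ("lru", "random"):
    print(policy, round(cyclic_hit_rate(num_pages=1000, cache_size=900, policy=policy), 2))
```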
5.3.4 Microsoft Exchange Server 2007

This section presents how ballooning and swapping impact the performance of an Exchange Server virtual machine. Exchange Server is a memory-intensive workload that is optimized to use all the available physical memory to cache transactions, reducing disk I/O. Exchange Server performance was measured using the average Remote Procedure Call (RPC) latency during a two-hour stable run. The RPC latency gauges the server processing time for an RPC from LoadGen (the client application that drives the Exchange server); therefore, lower RPC latency means better performance. The results are presented in Figure 13.
Figure 13: Average RPC latency of Exchange when using ballooning vs. using swapping ((a) average RPC latency in ms vs. memory limit in GB with ballooning; (b) the same with hypervisor swapping)

Figure 13(a) illustrates that when the memory limit decreases from 12GB to 3GB, the average RPC latency gradually increases from 4.6ms to 7.3ms with ballooning.
However, as shown in Figure 13(b), the RPC latency increases dramatically from 4.6ms to 143ms when only 2GB of host memory is swapped out. When the memory limit is reduced to 9GB, hypervisor swapping drives the RPC latency so high that the LoadGen application fails due to a timeout. Overall, this figure confirms that using ballooning to reclaim memory is much more efficient than using hypervisor swapping for the Exchange Server virtual machine.
6. Best Practices

Based on the memory management concepts and performance evaluation results presented in the previous sections, the following are some best practices for host and guest memory usage.

Do not disable page sharing or the balloon driver. As described, page sharing is a lightweight technique that opportunistically reclaims redundant host memory with trivial performance impact. In cases where hosts are heavily overcommitted, using ballooning is generally more efficient and safer than using hypervisor swapping, based on the results presented in Section 5.3. These two techniques are enabled by default in ESX 4 and should not be disabled unless the benefits of doing so clearly outweigh the costs.
Carefully specify the memory limit and memory reservation. The virtual machine memory allocation target is subject to the limit and the reservation. If these two parameters are misconfigured, users may observe ballooning or swapping even when the host has plenty of free memory. For example, a virtual machine's memory may be reclaimed when the specified limit is too small or when other virtual machines reserve too much host memory, even though they may only use a small portion of the reserved memory. If a performance-critical virtual machine needs a guaranteed memory allocation, the reservation needs to be specified carefully because it may impact other virtual machines.
Host memory size should be larger than guest memory usage. For example, it is unwise to run a virtual machine with a 2GB working set size on a host with only 1GB of host memory. If this is the case, the hypervisor has to reclaim the virtual machine's active memory through ballooning or hypervisor swapping, which will lead to potentially serious virtual machine performance degradation. Although it is difficult to tell whether the host memory is large enough to hold all of the virtual machines' working sets, the bottom line is that the host memory should not be excessively overcommitted, forcing the guests to continuously page out guest physical memory.
Use shares to adjust relative priorities when memory is overcommitted. If the host's memory is overcommitted and a virtual machine's allocated host memory is too small to achieve reasonable performance, the user can adjust the virtual machine's shares to escalate its relative priority so that the hypervisor will allocate more host memory for that virtual machine (see the sketch at the end of this section).
Set an appropriate virtual machine memory size. The virtual machine memory size should be slightly larger than the average guest memory usage. The extra memory will accommodate workload spikes in the virtual machine. Note that the guest operating system only recognizes the specified virtual machine memory size. If the virtual machine memory size is too small, guest-level paging is inevitable, even though the host may have plenty of free memory. Alternatively, the user may conservatively set a very large virtual machine memory size, which is fine in terms of virtual machine performance, but more virtual machine memory means that more overhead memory needs to be reserved for the virtual machine.
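As a closing illustration of the shares recommendation above, the sketch below divides an overcommitted host's memory among virtual machines in proportion to their shares, ignoring reservations and limits for simplicity. It is a simplified model of proportional-share allocation, not ESX's scheduler; the function name and the share values are hypothetical.

```python
# Hedged sketch: when host memory is overcommitted, memory is divided among
# VMs roughly in proportion to their shares. Raising one VM's shares raises
# its slice at the expense of the others.
def divide_by_shares(host_mb, shares_by_vm):
    total = sum(shares_by_vm.values())
    return {vm: host_mb * s / total for vm, s in shares_by_vm.items()}

print(divide_by_shares(8192, {"vm1": 1000, "vm2": 1000}))   # even split
print(divide_by_shares(8192, {"vm1": 2000, "vm2": 1000}))   # vm1 now gets 2/3 of host memory
```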
7. References

[1] Carl A. Waldspurger. "Memory Resource Management in VMware ESX Server." Proceedings of the Fifth Symposium on Operating System Design and Implementation, Boston, Dec 2002.
[2] vSphere Resource Management Guide. VMware. http://www.vmware.com/pdf/vsphere4/r40/vsp_40_upgrade_guide.pdf
[3] Memory Performance Chart Statistics in the vSphere Client. http://communities.vmware.com/docs/DOC-10398
[4] VirtualCenter Memory Statistics Definitions. http://communities.vmware.com/docs/DOC-5230
[5] Performance Evaluation of Intel EPT Hardware Assist. VMware. http://www.vmware.com/resources/techresources/10006
[6] Performance Evaluation of AMD RVI Hardware Assist. VMware. http://www.vmware.com/resources/techresources/1079
[7] The buffer cache. http://tldp.org/LDP/sag/html/buffer-cache.html

Note: VMware Tools must be installed in order to enable ballooning. This is recommended for all workloads.
Understanding Memory Resource Management in VMware ESX Server. Source: Technical Marketing, SD. Revision: 20090820.

VMware, Inc. 3401 Hillview Ave, Palo Alto, CA 94304 USA. Tel 877-486-9273, Fax 650-427-5001, www.vmware.com

Copyright 2009 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies. VMW_ESX_Memory_09Q3_WP_P20_R3
