Understanding Memory Resource Management in VMware ESX Server
WHITE PAPER

Table of Contents
1. Introduction
2. ESX Memory Management Overview
2.1 Terminology
2.2 Memory Virtualization Basics
2.3 Memory Management Basics in ESX
3. Memory Reclamation in ESX
3.1 Motivation
3.2 Transparent Page Sharing (TPS)
3.3 Ballooning
3.4 Hypervisor Swapping
3.5 When to Reclaim Host Memory
4. ESX Memory Allocation Management for Multiple Virtual Machines
5. Performance Evaluation
5.1 Experimental Environment
5.2 Transparent Page Sharing Performance
5.3 Ballooning vs. Swapping
5.3.1 Linux Kernel Compile
5.3.2 Oracle/Swingbench
5.3.3 SPECjbb
5.3.4 Microsoft Exchange Server 2007
6. Best Practices
7. References

1. Introduction
VMware ESX is a hypervisor designed to efficiently manage hardware resources, including CPU, memory, storage, and network, among multiple concurrent virtual machines. This paper describes the basic memory management concepts in ESX and the configuration options available, and provides results that show the performance impact of these options. The focus of this paper is on presenting the fundamental concepts behind these options. More details can be found in "Memory Resource Management in VMware ESX Server" [1].
ESX uses high-level resource management policies to compute a target memory allocation for each virtual machine (VM) based on the current system load and the parameter settings for the virtual machine (shares, reservation, and limit [2]). The computed target allocation is used to guide the dynamic adjustment of the memory allocation for each virtual machine. In the cases where host memory is overcommitted, the target allocations are still achieved by invoking several lower-level mechanisms to reclaim memory from virtual machines.
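A simplified sketch follows, assuming a hypothetical proportional-share policy clamped by the per-VM reservation and limit; it is illustrative Python only, not ESX's actual allocation algorithm, and all names and values in it are made up.

# Hypothetical sketch: a share-proportional memory target clamped by reservation and limit.
# This is not ESX's algorithm; it only illustrates how the three parameters interact.
def allocation_targets(host_memory_mb, vms):
    total_shares = sum(vm["shares"] for vm in vms)
    targets = []
    for vm in vms:
        proportional = host_memory_mb * vm["shares"] / total_shares      # share-based slice
        target = max(vm["reservation"], min(proportional, vm["limit"]))  # clamp to [reservation, limit]
        targets.append(target)
    return targets

# Example: two VMs competing for 8192MB of host memory.
print(allocation_targets(8192, [
    {"shares": 2000, "reservation": 1024, "limit": 8192},
    {"shares": 1000, "reservation": 512, "limit": 4096},
]))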
This paper assumes a pure virtualization environment in which the guest operating system running inside the virtual machine is not modified to facilitate virtualization (often referred to as paravirtualization). Knowledge of ESX architecture will help you understand the concepts presented in this paper.
In order to quickly monitor virtual machine memory usage, the VMware vSphere Client exposes two memory statistics in the resource summary: Consumed Host Memory and Active Guest Memory.

Figure 1: Host and Guest Memory usage in the vSphere Client

Consumed Host Memory is defined as the amount of host memory that is allocated to the virtual machine. Active Guest Memory is defined as the amount of guest memory that is currently being used by the guest operating system and its applications. These two statistics are quite useful for analyzing the memory status of the virtual machine and providing hints to address potential performance issues. This paper helps answer these questions:
• Why is the Consumed Host Memory so high?
• Why is the Consumed Host Memory sometimes much larger than the Active Guest Memory?
• Why is the Active Guest Memory different from what is seen inside the guest operating system?
These questions cannot be easily answered without understanding the basic memory management concepts in ESX. Understanding how ESX manages memory will also make the performance implications of changing ESX memory management parameters clearer.
The vSphere Client can also display performance charts for the following memory statistics: active, shared, consumed, granted, overhead, balloon, swapped, swapped-in rate, and swapped-out rate. A complete discussion about these metrics can be found in "Memory Performance Chart Metrics in the vSphere Client" [3] and "VirtualCenter Memory Statistics Definitions" [4].
The rest of the paper is organized as follows. Section 2 presents an overview of ESX memory management concepts. Section 3 discusses the memory reclamation techniques used in ESX. Section 4 describes how ESX allocates host memory to virtual machines when the host is under memory pressure. Section 5 presents and discusses the performance results for different memory reclamation techniques. Finally, Section 6 discusses best practices with respect to host and guest memory usage.
2. ESX Memory Management Overview

2.1 Terminology

The following terminology is used throughout this paper.
Host physical memory refers to the memory that is visible to the hypervisor as available on the system. (The terms host physical memory and host memory are used interchangeably in this paper; they are also equivalent to the term machine memory used in [1].)
Guest physical memory refers to the memory that is visible to the guest operating system running in the virtual machine.
Guest virtual memory refers to a continuous virtual address space presented by the guest operating system to applications. It is the memory that is visible to the applications running inside the virtual machine.
Guest physical memory is backed by host physical memory, which means the hypervisor provides a mapping from the guest to the host memory. The memory transfer between the guest physical memory and the guest swap device is referred to as guest-level paging and is driven by the guest operating system. The memory transfer between guest physical memory and the host swap device is referred to as hypervisor swapping, which is driven by the hypervisor.
2.2 Memory Virtualization Basics

Virtual memory is a well-known technique used in most general-purpose operating systems, and almost all modern processors have hardware to support it. Virtual memory creates a uniform virtual address space for applications and allows the operating system and hardware to handle the address translation between the virtual address space and the physical address space. This technique not only simplifies the programmer's work, but also adapts the execution environment to support large address spaces, process protection, file mapping, and swapping in modern computer systems.
When running a virtual machine, the hypervisor creates a contiguous addressable memory space for the virtual machine. This memory space has the same properties as the virtual address space presented to the applications by the guest operating system. This allows the hypervisor to run multiple virtual machines simultaneously while protecting the memory of each virtual machine from being accessed by others. Therefore, from the view of the application running inside the virtual machine, the hypervisor adds an extra level of address translation that maps the guest physical address to the host physical address. As a result, there are three virtual memory layers in ESX: guest virtual memory, guest physical memory, and host physical memory. Their relationships are illustrated in Figure 2(a).
Figure 2: Virtual memory levels (a) and memory address translation (b) in ESX. (The figure shows the guest OS page tables mapping guest virtual to guest physical memory, the shadow page tables mapping guest virtual to host physical memory, and the pmap mapping guest physical to host physical memory.)

As shown in Figure 2(b), in ESX, the address translation between guest physical memory and host physical memory is maintained by the hypervisor using a physical memory mapping data structure, or pmap, for each virtual machine.
The hypervisor intercepts all virtual machine instructions that manipulate the hardware translation lookaside buffer (TLB) contents or guest operating system page tables, which contain the virtual-to-physical address mapping. The actual hardware TLB state is updated based on the separate shadow page tables, which contain the guest virtual-to-host physical address mapping. The shadow page tables maintain consistency with the guest virtual-to-guest physical address mapping in the guest page tables and the guest physical-to-host physical address mapping in the pmap data structure. This approach removes the virtualization overhead for the virtual machine's normal memory accesses because the hardware TLB will cache the direct guest virtual-to-host physical memory address translations read from the shadow page tables. Note that the extra level of guest physical-to-host physical memory indirection is extremely powerful in the virtualization environment. For example, ESX can easily remap a virtual machine's host physical memory to files or other devices in a manner that is completely transparent to the virtual machine.
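The two mapping layers can be pictured with a minimal sketch; the following is illustrative Python, not ESX code, and the addresses are invented. It shows how a shadow (guest virtual-to-host physical) mapping can be derived by composing the guest page table with the pmap.

# Illustrative only: compose the guest page table (guest virtual -> guest physical)
# with the pmap (guest physical -> host physical) to get the shadow mapping
# (guest virtual -> host physical) that the hardware TLB can cache directly.
guest_page_table = {0x1000: 0x5000, 0x2000: 0x6000}   # maintained by the guest OS
pmap = {0x5000: 0x9000, 0x6000: 0xA000}               # maintained by the hypervisor

def build_shadow(guest_pt, pmap_table):
    # Entries whose guest physical page is not yet backed by host memory are left out;
    # the hypervisor fills them in lazily when the virtual machine first touches them.
    return {gva: pmap_table[gpa] for gva, gpa in guest_pt.items() if gpa in pmap_table}

shadow_page_table = build_shadow(guest_page_table, pmap)
print(hex(shadow_page_table[0x1000]))   # 0x9000: direct guest virtual -> host physical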
Recently, some new generation CPUs, such as third generation AMD Opteron and Intel Xeon 5500 series processors, have provided hardware support for memory virtualization by using two layers of page tables in hardware. One layer stores the guest virtual-to-guest physical memory address translation, and the other layer stores the guest physical-to-host physical memory address translation. These two page tables are synchronized using processor hardware. Hardware-assisted memory virtualization eliminates the overhead required to keep shadow page tables synchronized with guest page tables in software memory virtualization. For more information about hardware-assisted memory virtualization, see "Performance Evaluation of Intel EPT Hardware Assist" [5] and "Performance Evaluation of AMD RVI Hardware Assist" [6].

2.3 Memory Management Basics in ESX

Before discussing how ESX manages memory for virtual machines, it is useful to first understand how the application, guest operating system, hypervisor, and virtual machine manage memory at their respective layers.
An application starts and uses the interfaces provided by the operating system to explicitly allocate or deallocate virtual memory during execution. In a non-virtual environment, the operating system assumes it owns all physical memory in the system. The hardware does not provide interfaces for the operating system to explicitly "allocate" or "free" physical memory. The operating system establishes the definitions of "allocated" and "free" physical memory, and different operating systems have different implementations to realize this abstraction. One example is that the operating system maintains an "allocated" list and a "free" list, so whether or not a physical page is free depends on which list the page currently resides in.
Because a virtual machine runs an operating system and several applications, the virtual machine memory management properties combine both application and operating system memory management properties. Like an application, when a virtual machine first starts, it has no pre-allocated physical memory. Like an operating system, the virtual machine cannot explicitly allocate host physical memory through any standard interfaces. The hypervisor also creates the definitions of "allocated" and "free" host memory in its own data structures. The hypervisor intercepts the virtual machine's memory accesses and allocates host physical memory for the virtual machine on its first access to the memory. In order to avoid information leaking among virtual machines, the hypervisor always writes zeroes to the host physical memory before assigning it to a virtual machine.
Virtual machine memory deallocation acts just like that in an operating system: the guest operating system frees a piece of guest physical memory by adding the corresponding page numbers to the guest free list, but the data of the "freed" memory may not be modified at all. As a result, when a particular piece of guest physical memory is freed, the mapped host physical memory usually does not change its state; only the guest free list changes.
The hypervisor knows when to allocate host physical memory for a virtual machine because the first memory access from the virtual machine to that host physical memory will cause a page fault that can be easily captured by the hypervisor. However, it is difficult for the hypervisor to know when to free host physical memory upon virtual machine memory deallocation because the guest operating system free list is generally not publicly accessible. Hence, the hypervisor cannot easily find out the location of the free list and monitor its changes.
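The first-touch behavior can be modeled with a short sketch; the code below is hypothetical Python, not ESX code. Host pages are assigned only on a simulated page fault and are zero-filled before being mapped, and there is deliberately no guest-driven free path, since the hypervisor cannot see the guest free list.

# Hypothetical model of first-touch allocation driven by page faults.
class HostMemory:
    def __init__(self, num_pages, page_size=4096):
        self.free_pages = list(range(num_pages))
        self.page_size = page_size
        self.contents = {}

    def allocate_zeroed(self):
        hpn = self.free_pages.pop()
        self.contents[hpn] = bytearray(self.page_size)  # zero-fill before assigning to a VM
        return hpn

class VM:
    def __init__(self, host):
        self.host = host
        self.pmap = {}  # guest physical page number -> host physical page number

    def touch(self, gpn):
        if gpn not in self.pmap:  # first access: "page fault", back the page with host memory
            self.pmap[gpn] = self.host.allocate_zeroed()
        return self.pmap[gpn]

host = HostMemory(num_pages=1024)
vm = VM(host)
vm.touch(7)
vm.touch(7)   # the second access reuses the host page that already backs guest page 7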
Although the hypervisor cannot reclaim host memory when the operating system frees guest physical memory, this does not mean that the host memory, no matter how large it is, will be used up by a virtual machine that repeatedly allocates and frees memory. This is because the hypervisor does not allocate host physical memory on every virtual machine memory allocation. It only allocates host physical memory when the virtual machine touches physical memory that it has never touched before. If a virtual machine frequently allocates and frees memory, presumably the same guest physical memory is being allocated and freed again and again. Therefore, the hypervisor allocates host physical memory only for the first memory allocation, and the guest then reuses the same host physical memory for the rest of the allocations. That is, once a virtual machine's entire guest physical memory (configured memory) has been backed by host physical memory, the hypervisor does not need to allocate any more host physical memory for this virtual machine.
This means that the following always holds true: VM's host memory usage ≤ VM's guest memory size + VM's overhead memory.

[…]

5. Performance Evaluation

5.1 Experimental Environment

Kernel compile: compile output redirected to /dev/null; number of runs: 4; VM configuration: 1 vCPU, 512MB memory.
Swingbench: database: Oracle 11g; functional benchmark: Order Entry; number of users: 30; runtime: 20 minutes; number of runs: 3; VM configuration: 4 vCPUs, 4GB memory.
Exchange 2007: server: Microsoft Exchange 2007; LoadGen client: 2,000 heavy Exchange users; VM configuration: 4 vCPUs, 12GB memory.

The guest operating system running inside the SPECjbb, kernel compile, and Swingbench virtual machines was 64-bit Red Hat Enterprise Linux 5.2 Server. The guest operating system running inside the Exchange virtual machine was Windows Server 2008. For SPECjbb2005 and Swingbench, the throughput was measured by calculating the number of transactions per second. For kernel compile, the performance was measured by calculating the inverse of the compilation time. For Exchange, the performance was measured using the average Remote Procedure Call (RPC) latency. In addition, for Swingbench and Exchange, the client applications were installed on a separate native machine.
5.2 Transparent Page Sharing Performance

In this experiment, two instances of each workload were run, and the overall performance with page sharing enabled was compared to that with page sharing disabled. The focus was on evaluating the overhead of page scanning. Since the page scan rate (the number of scanned pages per second) is largely determined by Mem.ShareScanTime, in addition to the default of 60 minutes, the minimum Mem.ShareScanTime of 10 minutes was tested, which potentially introduces the highest page scanning overhead.
Figure 9: Performance impact of transparent page sharing (normalized throughput of SPECjbb, kernel compile, and Swingbench with page sharing disabled, page sharing at the default setting, and Mem.ShareScanTime set to 10 minutes).

Figure 9 confirms that enabling page sharing introduces a negligible performance overhead in the default setting and only <1 percent overhead when Mem.ShareScanTime is 10 minutes, for all workloads.
Page sharing sometimes improves performance because the virtual machine's host memory footprint is reduced so that it fits the processor cache better. Besides page scanning, breaking CoW pages is another source of page sharing overhead. Unfortunately, such overhead is highly application dependent, and it is difficult to evaluate it through configurable options. Therefore, the overhead of breaking CoW pages was omitted from this experiment.
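For illustration, the sketch below models the general idea of content-based page sharing with copy-on-write, assuming hash-based matching; it is hypothetical Python and not the ESX implementation (a real implementation would also verify full page contents to rule out hash collisions).

# Illustrative content-based page sharing with copy-on-write (CoW).
import hashlib

class PageSharer:
    def __init__(self):
        self.by_hash = {}    # content hash -> host page id
        self.refcount = {}   # host page id -> number of guest pages mapped to it
        self.next_id = 0

    def _new_page(self):
        self.next_id += 1
        self.refcount[self.next_id] = 1
        return self.next_id

    def map_page(self, content: bytes):
        digest = hashlib.sha1(content).hexdigest()
        if digest in self.by_hash:            # identical content already present: share it
            page = self.by_hash[digest]
            self.refcount[page] += 1
            return page
        page = self._new_page()
        self.by_hash[digest] = page
        return page

    def write(self, page):
        # Breaking a CoW page: if other guest pages still share it, the writer gets a private copy.
        if self.refcount.get(page, 0) > 1:
            self.refcount[page] -= 1
            return self._new_page()
        return page

sharer = PageSharer()
a = sharer.map_page(b"\x00" * 4096)
b = sharer.map_page(b"\x00" * 4096)   # identical (zero) pages collapse to one host page
assert a == b
c = sharer.write(b)                   # a write breaks the sharing
assert c != a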
5.3 Ballooning vs. Swapping

In the following experiments, VM memory reclamation was enforced by changing each virtual machine's memory limit from the default (unlimited) to values smaller than the configured virtual machine memory size. Page sharing was turned off to isolate the performance impact of ballooning or swapping. Since the host memory is much larger than the virtual machine memory size, the host free memory is always in the high state. Hence, by default, ESX only uses ballooning to reclaim memory. The balloon driver was turned off to obtain the performance of using swapping only. The ballooned and swapped memory sizes were also collected when the virtual machine ran steadily.
5.3.1 Linux Kernel Compile

Figure 10 presents the throughput of the kernel compile workload with different memory limits when using ballooning or swapping. This experiment was contrived to use only ballooning or swapping, not both. While this case will not often occur in production environments, it shows the performance penalty due to either technology on its own. The throughput is normalized to the case where virtual machine memory is not reclaimed.
Figure 10: Performance of kernel compile when using ballooning vs. swapping (normalized throughput and ballooned/swapped memory size in MB, for memory limits of 512, 448, 384, 320, 256, 192, and 128MB).

By using ballooning, the kernel compile virtual machine only suffers from a 3 percent throughput loss even when the memory limit is as low as 128MB (1/4 of the configured virtual machine memory size).
This is because the kernel compile workload has very little memory reuse and most of the guest physical memory is used as buffer cache for the kernel source files. With ballooning, the guest operating system reclaims guest physical memory in response to the balloon driver's allocation requests by dropping the buffer pages instead of paging them out to the guest virtual swap device. Because dropped buffer pages are not reused frequently, the performance impact of using ballooning is trivial. However, with hypervisor swapping, the selected guest buffer pages are unnecessarily swapped out to the host swap device and some guest kernel pages are occasionally swapped out as well, making the performance of the virtual machine degrade as the memory limit decreases. When the memory limit is 128MB, the throughput loss is about 34 percent in the swapping case. Balloon inflation is therefore the better approach to memory reclamation from a performance perspective.
Figure 10 also illustrates that as the memory limit decreases, the ballooned and swapped memory sizes increase almost linearly, but there is a difference between the ballooned memory size and the swapped memory size. In the ballooning cases, when virtual machine memory usage exceeds the specified limit, the balloon driver cannot force the guest operating system to page out guest physical pages immediately unless the balloon driver has used up most of the free guest physical memory, which puts the guest operating system under memory pressure. In the swapping cases, however, as long as the virtual machine memory usage exceeds the specified limit, the extra pages will be swapped out immediately. Therefore, the ballooned memory size is roughly equal to the virtual machine memory size minus the specified limit, which means that the free guest physical memory is included. The swapped memory size is roughly equal to the virtual machine host memory usage minus the specified limit. In the kernel compile virtual machine, since most of the guest physical pages are used to buffer the workload files, the virtual machine's effective host memory usage is close to the virtual machine memory size. Hence, the swapped memory size is similar to the ballooned memory size.
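As a purely hypothetical numerical example of the two relations above (the values are assumed, not taken from Figure 10):

# Assumed values for illustration only.
vm_memory_size = 512      # configured guest memory size (MB)
memory_limit = 384        # specified memory limit (MB)
host_memory_usage = 500   # effective host memory usage (MB), assumed close to the VM size

ballooned_size = vm_memory_size - memory_limit    # ~128MB; free guest physical memory is included
swapped_size = host_memory_usage - memory_limit   # ~116MB; only memory actually backed by the host
print(ballooned_size, swapped_size)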
5.3.2 Oracle/Swingbench

Figure 11 presents the throughput of an Oracle database tested by the Swingbench workload with different memory limits when using ballooning vs. swapping. The throughput is normalized to the case where virtual machine memory is not reclaimed.
Figure 11: Performance of Swingbench when using ballooning vs. swapping (normalized throughput and ballooned/swapped memory size in MB, for memory limits from 3840MB down to 1536MB).

As in the kernel compile test, using ballooning barely impacts the throughput of the Swingbench virtual machine until the memory limit decreases below 2048MB.
This occurs when the guest operating system starts to page out the physical pages that are heavily reused by the Oracle database. In contrast to ballooning, using swapping causes a significant throughput penalty. The throughput loss is already 17 percent when the memory limit is 3584MB. With hypervisor swapping, some guest buffer pages are unnecessarily swapped out, and some guest kernel or performance-critical database pages are also unintentionally swapped out because of the random page selection policy. For the Swingbench virtual machine, the virtual machine host memory usage is very close to the virtual machine memory size, so the swapped memory size is very close to the ballooned memory size.
5.3.3 SPECjbb

Figure 12 presents the throughput of the SPECjbb workload with different memory limits when using ballooning vs. swapping. The throughput is normalized to the case where virtual machine memory is not reclaimed.
Figure 12: Performance of SPECjbb when using ballooning vs. swapping (normalized throughput and ballooned/swapped memory size in MB, for memory limits of 3072, 2816, 2560, 2304, 2048, 1792, and 1536MB).

This figure shows that when the virtual machine memory limit decreases below 2816MB, the throughput of SPECjbb degrades significantly in both the ballooning and swapping cases.
When the memory limit is reduced to 2048MB, the throughput losses are 89 percent and 96 percent for ballooning and swapping, respectively. Since the configured JVM heap size is 2.5GB, the actual virtual machine working set size is around 2.5GB plus the guest operating system memory usage (about 300MB). When the virtual machine memory limit falls below 2816MB, the host memory cannot back the entire virtual machine's working set, so the virtual machine starts to suffer from guest-level paging in the ballooning cases or hypervisor swapping in the swapping cases. Since SPECjbb is an extremely memory-intensive workload, its throughput is largely determined by the virtual machine memory hit rate. In this instance, the virtual machine memory hit rate is defined as the percentage of guest memory accesses that result in host physical memory hits. A higher memory hit rate means higher throughput for the SPECjbb workload. Since ballooning and hypervisor swapping both decrease the memory hit rate, guest-level paging and hypervisor swapping both hurt SPECjbb performance significantly.
Surprisingly, when the memory limit is 2560MB or 2304MB, using swapping obtains higher throughput than using ballooning. This seems counterintuitive because hypervisor swapping typically causes a higher performance penalty. One reasonable explanation is that the random page selection policy used in hypervisor swapping happens to favor the access patterns of the SPECjbb virtual machine. More specifically, with ballooning, when the guest operating system (Linux in this case) pages out guest physical pages to satisfy the balloon driver's allocation request, it chooses the targets using an LRU-approximating policy. However, the SPECjbb workload often traverses all the allocated guest physical memory iteratively; for example, the JVM garbage collector periodically scans the entire heap to free memory. This behavior is a well-known LRU pathological case in which the memory hit rate drops dramatically even when the memory size is only slightly smaller than the working set size. In contrast, when using hypervisor swapping, the swapped physical pages are randomly selected by the hypervisor, which makes the memory hit rate decrease more gradually as the memory limit decreases. That is why using swapping achieves higher throughput when the memory limit is smaller than the virtual machine's working set size.
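This effect can be reproduced with a small simulation; the following is illustrative Python, not from the paper. A cyclic scan over slightly more pages than fit in the cache yields a near-zero hit rate under LRU eviction but degrades much more gracefully under random eviction.

# Hit rates for repeated sequential scans over num_pages pages with cache_size frames,
# comparing LRU eviction (approximating guest-level paging under ballooning) with
# random eviction (approximating the hypervisor's random page selection).
import random
from collections import OrderedDict

def lru_hit_rate(num_pages, cache_size, passes=5):
    cache = OrderedDict()
    hits = accesses = 0
    for _ in range(passes):
        for page in range(num_pages):          # cyclic scan, like a full heap walk
            accesses += 1
            if page in cache:
                hits += 1
                cache.move_to_end(page)        # mark as most recently used
            else:
                if len(cache) >= cache_size:
                    cache.popitem(last=False)  # evict the least recently used page
                cache[page] = True
    return hits / accesses

def random_hit_rate(num_pages, cache_size, passes=5):
    cache = set()
    hits = accesses = 0
    for _ in range(passes):
        for page in range(num_pages):
            accesses += 1
            if page in cache:
                hits += 1
            else:
                if len(cache) >= cache_size:
                    cache.remove(random.choice(tuple(cache)))  # evict a random page
                cache.add(page)
    return hits / accesses

# With the cache only slightly smaller than the scanned set, LRU keeps missing
# (sequential flooding) while random eviction still retains most pages.
print(lru_hit_rate(1000, 950), random_hit_rate(1000, 950))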
However, when the memory limit drops to 2048MB, the virtual machine memory hit rate is equivalently low in both the swapping and ballooning cases, and using swapping starts to cause worse performance than using ballooning. Note that the above two configurations where swapping outperforms ballooning are rare pathological cases for ballooning performance. In most cases, using ballooning achieves much better performance than using swapping. Since the SPECjbb virtual machine's working set size (~2.8GB) is much smaller than the configured virtual machine memory size (4GB), the ballooned memory size is much higher than the swapped memory size.
5.3.4 Microsoft Exchange Server 2007

This section presents how ballooning and swapping impact the performance of an Exchange Server virtual machine. Exchange Server is a memory-intensive workload that is optimized to use all available physical memory to cache transactions and reduce disk I/O. The Exchange Server performance was measured using the average Remote Procedure Call (RPC) latency during a two-hour stable run. The RPC latency gauges the server processing time for an RPC from LoadGen (the client application that drives the Exchange server). Therefore, lower RPC latency means better performance. The results are presented in Figure 13.
Figure 13: Average RPC latency of Exchange when using ballooning vs. using swapping ((a) ballooning and (b) hypervisor swapping; average RPC latency in ms vs. memory limit in GB).

Figure 13(a) illustrates that when the memory limit decreases from 12GB to 3GB, the average RPC latency gradually increases from 4.6ms to 7.3ms with ballooning. However, as shown in Figure 13(b), the RPC latency dramatically increases from 4.6ms to 143ms when solely swapping out 2GB of host memory.
When the memory limit is reduced to 9GB, hypervisor swapping drives the RPC latency so high that the LoadGen application fails (due to timeout). Overall, this figure confirms that using ballooning to reclaim memory is much more efficient than using hypervisor swapping for the Exchange Server virtual machine.
6. Best Practices

Based on the memory management concepts and performance evaluation results presented in the previous sections, the following are some best practices for host and guest memory usage.

Do not disable page sharing or the balloon driver. As described, page sharing is a lightweight technique that opportunistically reclaims redundant host memory with trivial performance impact. In cases where hosts are heavily overcommitted, using ballooning is generally more efficient and safer than using hypervisor swapping, based on the results presented in Section 5.3. These two techniques are enabled by default in ESX 4 and should not be disabled unless the benefits of doing so clearly outweigh the costs.

Carefully specify the memory limit and memory reservation. The virtual machine memory allocation target is subject to the limit and reservation. If these two parameters are misconfigured, users may observe ballooning or swapping even when the host has plenty of free memory. For example, a virtual machine's memory may be reclaimed when the specified limit is too small, or when other virtual machines reserve too much host memory even though they may only use a small portion of the reserved memory. If a performance-critical virtual machine needs a guaranteed memory allocation, the reservation needs to be specified carefully because it may impact other virtual machines.

Host memory size should be larger than guest memory usage. For example, it is unwise to run a virtual machine with a 2GB working set size on a host with only 1GB of host memory. If this is the case, the hypervisor has to reclaim the virtual machine's active memory through ballooning or hypervisor swapping, which will lead to potentially serious virtual machine performance degradation. Although it is difficult to tell whether the host memory is large enough to hold all of the virtual machines' working sets, the bottom line is that the host memory should not be excessively overcommitted, forcing the guests to continuously page out guest physical memory.

Use shares to adjust relative priorities when memory is overcommitted. If the host's memory is overcommitted and a virtual machine's allocated host memory is too small to achieve reasonable performance, the user can adjust the virtual machine's shares to escalate its relative priority so that the hypervisor will allocate more host memory for that virtual machine.

Set an appropriate virtual machine memory size. The virtual machine memory size should be slightly larger than the average guest memory usage; the extra memory will accommodate workload spikes in the virtual machine. Note that the guest operating system only recognizes the specified virtual machine memory size. If the virtual machine memory size is too small, guest-level paging is inevitable, even though the host may have plenty of free memory. Instead, the user may conservatively set a very large virtual machine memory size, which is fine in terms of virtual machine performance, but more virtual machine memory means that more overhead memory needs to be reserved for the virtual machine.
7. References

[1] Carl A. Waldspurger. "Memory Resource Management in VMware ESX Server". Proceedings of the Fifth Symposium on Operating Systems Design and Implementation, Boston, Dec 2002.
[2] vSphere Resource Management Guide. VMware. http://www.vmware.com/pdf/vsphere4/r40/vsp_40_upgrade_guide.pdf
[3] Memory Performance Chart Statistics in the vSphere Client. http://communities.vmware.com/docs/DOC-10398
[4] VirtualCenter Memory Statistics Definitions. http://communities.vmware.com/docs/DOC-5230
[5] Performance Evaluation of Intel EPT Hardware Assist. VMware. http://www.vmware.com/resources/techresources/10006
[6] Performance Evaluation of AMD RVI Hardware Assist. VMware. http://www.vmware.com/resources/techresources/1079
[7] The buffer cache. http://tldp.org/LDP/sag/html/buffer-cache.html

Footnote 4: VMware Tools must be installed in order to enable ballooning. This is recommended for all workloads.
Understanding Memory Resource Management in VMware ESX Server
Source: Technical Marketing, SD
Revision: 20090820
VMware, Inc. 3401 Hillview Ave, Palo Alto, CA 94304, USA. Tel 877-486-9273, Fax 650-427-5001, www.vmware.com
Copyright 2009 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies. VMW_ESX_Memory_09Q3_WP_P20_R3