Peeking At the Future with Giant Monster Virtual Machines

Project Capstone: A Performance Study of Running Many Large Virtual Machines in Parallel

TECHNICAL WHITE PAPER

Table of Contents

Executive Summary
Introduction
    Project Capstone
    VMware vSphere 6.0
    HPE Superdome X
    IBM FlashSystem
Test Environment
    Test Configuration Details
    Virtual Machine Configuration
    Test Workload
Monster Virtual Machine Tests
    Storage Performance
    Four 120-vCPU VMs
    Eight 60-vCPU VMs
    Sixteen 30-vCPU VMs
    Under-Provisioning with Four 112-vCPU VMs
    CPU Affinity vs. PreferHT
Best Practices
Conclusion
Appendix
References
Executive Summary

This technical white paper examines the extraordinary possibilities available when leading-edge servers and storage push the boundaries of current technology in terms of capacity and performance. Tests were run with many different configurations of extremely large virtual machines (known as monster virtual machines), and the results show that VMware vSphere 6 successfully ran all of the virtual machines in a high-performing and efficient manner. vSphere 6 is ready to run the largest systems and workloads of today with great performance. vSphere 6 is also ready for the future, to take on the high-performance systems and workloads that will become more common in datacenters.
Introduction

The rate of increase in the capacity and performance of computing is dramatic. Starting in 1975, Moore's law observed that the number of transistors in an integrated circuit doubles every two years. This doubling of transistors has translated into chip performance also doubling every two years. VMware vSphere has likewise rapidly increased its capacity to support larger virtual machines and hosts to keep up with this compute capacity that increases over time.

Two- and four-socket x86-based servers are commonly used today. While the number of cores per socket in these servers does not exactly follow Moore's law (because each core itself is more powerful with each generation of processors), it can be used as a rough proxy. The current generation of Intel Xeon chips has a maximum of 18 cores per socket and 36 logical processors with hyper-threading enabled. This is almost a doubling of the 10 cores per socket in Xeon chips from two generations and about four years before. Many four-socket servers that use the current generation of Intel Xeon processors have 72 cores, but the HPE Superdome X has 16 sockets with 240 cores. By using this cutting-edge server, it is possible to have the type of compute capacity in a single server that, following Moore's law, won't be available in a four-socket server for many years. It is a peek into the future.
Project Capstone

Project Capstone brings together VMware, HPE, and IBM in a unique opportunity to combine these industry-leading companies and their respective leading-edge technologies to build a test environment that shows the upper bounds of what is currently possible with such giant compute power. Running numerous heavy workloads in parallel on monster virtual machines on a vSphere 6, HPE Superdome X, and IBM FlashSystem configuration exemplifies the present capabilities of these combined technologies.

Project Capstone became a centerpiece of the 2015 VMware conference season: it occupied center stage at VMworld US in San Francisco as the subject of a highly anticipated Spotlight Session that included individual presentations from senior management of VMware, HP, and IBM. VMworld 2015 Europe in Barcelona included a Capstone-themed breakout session as well. But perhaps most significantly, the VMware floor presence at Oracle OpenWorld in San Francisco in October featured a complete demo version of the Capstone stack, including the Superdome X as well as the IBM FlashSystem.
VMware vSphere 6.0

VMware vSphere 6.0 includes new scalability features that enable it to host extremely large and performance-intensive applications. The capabilities of individual virtual machines have increased significantly from previous versions of vSphere. A single virtual machine can now have up to 128 vCPUs and 4 TB of memory. While these levels of resources are not commonly required, there are some large applications that do require and make use of resources at this scale. These are usually the last applications to be considered for virtualization due to their size, but it is now possible to move this last tier of applications into virtual machines.
HPE Superdome X

HPE Integrity Superdome X sets new, high standards for x86 availability, scalability, and performance; it is an ideal platform for critical business processing and decision-support workloads. Superdome X blends x86 efficiencies with proven HPE mission-critical innovations for a superior uptime experience, with RAS (reliability, availability, and serviceability) features not found in other x86 platforms, allowing this machine to achieve five nines (99.999%) of availability. Breakthrough scalability of up to 16 sockets can handle the largest scaled-up x86 workloads. Through the unique nPars technology, HPE Superdome X increases reliability and flexibility by allowing electrically isolated environments to be built within a single enclosure [1].

It is a well-balanced architecture, with powerful Xeon processors working in concert with high I/O and a large memory footprint, that enables the virtualization of large and critical applications at an unprecedented scale. Whether you want to maximize application uptime, standardize, or consolidate, HPE Superdome X helps virtualize mission-critical environments in ways never before imagined. The HPE Superdome X is the ideal system for Project Capstone because it is uniquely suited to act as the physical platform for such a massive virtualization effort. The ability of vSphere 6 to scale up to 128 virtual CPUs can be easily realized on the HPE Superdome X because it allows massive individual virtual machines to be encapsulated on a single system while huge aggregate processing is parallelized.
IBM FlashSystem

The IBM FlashSystem family of all-flash storage platforms includes the IBM FlashSystem 900 and IBM FlashSystem V9000 arrays. Powered by IBM FlashCore technology, the FlashSystem 900 delivers the extreme performance, enterprise reliability, and operational efficiencies required to gain competitive advantage in today's dynamic marketplace. Adding to these capabilities, FlashSystem V9000 offers the advantages of software-defined storage at the speed of flash. These all-flash storage systems deliver the full capabilities of FlashCore technology's hardware-accelerated architecture, MicroLatency modules, and advanced flash management, coupled with a rich set of features found in only the most advanced enterprise storage solutions, including IBM Real-time Compression, virtualization, dynamic tiering, thin provisioning, snapshots, cloning, replication, data copy services, and high-availability configurations.

While virtualization lifts the physical restraints on the server room, the overall performance of the multi-workload server environments enabled by virtualization is held back by traditional storage, because disk-based systems struggle with the challenges posed by the resulting I/O. As virtualization has enabled the consolidation of multiple workloads onto fewer physical servers, disks simply can't keep up, and this limits the value enterprises gain from virtualization. IBM FlashSystem V9000 solves the storage challenges left unanswered by traditional storage solutions. It handles random I/O patterns with ease, and it offers the capability to virtualize all existing data storage resources and bring them together under one point of control. FlashSystem V9000 provides a comprehensive storage solution that seamlessly and automatically allocates storage resources to address every application demand. It moves data to the most efficient, cost-effective storage medium (from flash, to disk, and even to tape) without disrupting application performance or data availability, and more capacity can be added without application downtime or a lengthy update process. IBM FlashSystem V9000 helps enterprises realize the full value of VMware vSphere 6.
Test Environment

The test environment was designed to allow testing of extremely large monster virtual machines. vSphere 6 provides the capability to host virtual machines of up to 128 vCPUs. This is the foundation for running larger monster virtual machines than in the past. The HPE Superdome X and IBM FlashSystem storage array provided the hardware server and storage platforms, respectively. The Superdome X used in this project had 240 cores and 480 logical threads with hyper-threading enabled. This was coupled with 20 TB of extremely low latency, all-flash storage within the IBM FlashSystem array. A four-socket server was used as a client load driver system for the test bed. The diagram below shows the test bed setup.

Figure 1. Testbed hardware

Test Configuration Details

HPE Superdome X server:
• vSphere 6.0
• 16 Intel Xeon E7-2890 v2 2.8 GHz CPUs (15 cores per CPU)
• 240 cores / 480 threads (hyper-threading enabled)
• 12 TB of RAM
• 16 Gb Fibre Channel
• 10 Gb Ethernet

IBM FlashSystem 900:
• 20 TB capacity
• All-flash memory
• 16 Gb Fibre Channel

Client load driver server:
• 4 x Intel Xeon E7-4870 2.4 GHz
• 512 GB of RAM
• 10 Gb Ethernet

Virtual Machine Configuration

The configuration of the virtual machines was kept constant in all tests except for the number of virtual CPUs and the related virtual NUMA.
In all tests, the total number of vCPUs across all virtual machines under test was equal to the number of cores or hyper-threads on the server. In the maximum-size virtual machine test case, there were four virtual machines, each with 120 vCPUs, for a total of 480 vCPUs assigned on the server. This matches the 480 hyper-threads available on the server. Table 1 shows the number of virtual machines with their vCPU configurations that were tested.

Number of VMs | vCPUs per VM | Virtual Sockets per VM | Total vCPUs Assigned on Server | Total Physical Threads on Server with HT Enabled
4             | 120          | 4                      | 480                            | 480
8             | 60           | 2                      | 480                            | 480
16            | 30           | 1                      | 480                            | 480

Table 1. Virtual machine configuration
VirtualmachineconfigurationTheconfigurationparameterPreferHTwasusedfortheseteststooptimizetheuseofthesystem'shyper-threadsinthishighCPUutilizationbenchmark.
Bydefault,vSphere6scheduleseachvCPUonacorewhereanothervCPUisnotscheduled.
Inotherwords,vSpherewillnotusethesecondthreadthatiscreatedoneachcorewithhyper-threadingenableduntilthereisavCPUalreadyscheduledonallofthephysicalcoresonthesystem.
UsingthePreferHTparameterchangesthisandinstructstheschedulerforavirtualmachinetoprefertousehyper-threadsinsteadofphysicalcores.
ThebestperformancefortwovCPUswouldbetouseathreadfromtwophysicalcores,andthisisthedefaultschedulingbehavior.
Usingtwothreadsofthesamecoreresultsinlowerperformancebecausehyper-threadssharemostoftheresourcesofthephysicalcore.
However,inthecaseofhighoverallsystemutilization,allthreadsonallcoresareinuseatthesametime.
PreferHTprovidesaperformanceadvantagebecauseeachvirtualmachineisspreadacrossfewerNUMAnodesandthisresultsinincreasedNUMAmemorylocality.
ByusingPreferHT,ahighlyutilizedsystembecomesmoreefficientbecausethevirtualmachinesallhavemoreNUMAlocalitywhilestillusingallthelogicalthreadsontheserver.
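The paper does not show the exact setting used, but PreferHT is normally enabled per virtual machine through an advanced option in the VM's .vmx configuration file (a host-wide variant also exists as the Numa.PreferHT advanced system setting). A minimal sketch of the per-VM form, applied while the VM is powered off:

```
numa.vcpu.preferHT = "TRUE"
```

With this option set, the NUMA scheduler counts logical processors rather than physical cores when placing the VM's NUMA clients, which is what lets a 30-vCPU VM fit within a single 15-core, 30-thread socket as described in this paper.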
Standard best practices for database virtual machines were used for the configuration. Each virtual machine was configured with 256 GB of RAM, two pvSCSI controllers, and a single vmxnet3 virtual network adapter. The virtual machines were installed with Red Hat Enterprise Linux 6.5 as the guest operating system. Oracle 12c was installed following the installation guide from Oracle.
Test Workload

The open source database workload DVD Store 3 was used for these tests [2]. DVD Store simulates an online store that allows customers to log in, browse products, read and submit product reviews, and purchase products. It exercises many database features, including primary keys, foreign keys, full-text indexing and searching, transactions, rollbacks, stored procedures, triggers, and simple and complex multi-join queries. It is designed to be CPU intensive, but it also requires low-latency storage in order to achieve good throughput.

DVD Store includes a driver program that simulates user activity on the database. Each simulated user steps through the full process of an order: log in, browse the DVD catalog, browse product reviews, and purchase DVDs. Performance is measured in orders per minute (OPM). DVD Store 3, which was recently updated from version 2, adds product reviews and a few other features designed to make the workload include the kind of product reviews commonly found today on many websites, and version 3 is also more CPU intensive. The increased CPU usage makes it possible for a DVD Store 3 instance to fully saturate larger systems more easily than was possible with the previous version of DVD Store.

For these tests, a 40 GB DVD Store 3 database instance was created on each virtual machine. The direct database driver was used on the client load system to stress the database without running a middle tier, because the focus of these tests was on the large database virtual machines. The database buffer cache was set to the same size as the database to optimize performance. The number of driver threads running against each monster virtual machine was increased until the maximum OPM began to decrease. At the point of maximum OPM, the CPU usage and other performance metrics were checked to verify that the system had reached saturation.
Monster Virtual Machine Tests

A series of tests were run with different sizes of virtual machines. Each test is briefly described with the results and analysis. The first tests discussed are all similar in that each configuration is a set of virtual machines that fully consumes all the CPU threads on the host. The configurations are four 120-vCPU VMs, eight 60-vCPU VMs, and sixteen 30-vCPU VMs. In each case, the total number of vCPUs running across all the virtual machines is 480, which equals the number of CPU threads on the host. In addition to these tests with maximum configurations, some tests were run with a virtual machine configuration that under-provisions the server, and a test comparing CPU affinity (pinning) vs. PreferHT configurations.
Storage Performance

For these tests, the goal was to use all the CPUs on the server. In order to accomplish this, the amount of disk I/O was minimized by specifying a database buffer cache that was approximately the same size as the database on disk. This meant that after the initial warmup phase of running the test, most database queries could be satisfied without a disk I/O operation because most of the database was cached in memory. In order for all CPUs to be kept busy, the disk I/O operations that do occur must be as low latency as possible. The IBM FlashSystem array was able to keep average disk latency below 0.3 milliseconds in all tests and was a highlight of system performance. IOPS peaked at approximately 50,000 during some of the test runs, which was well within the capabilities of the storage array. The array provided extremely low latency storage in all test scenarios. The capabilities of the IBM FlashSystem array in terms of IOPS were never pushed, but the tests did benefit greatly from the consistently low response times.
Four 120-vCPU VMs

The maximum-size virtual machine in vSphere 6 is 128 vCPUs. So, with the limit of 480 total CPU threads on the host, running four 120-vCPU VMs is the maximum size possible while keeping all virtual machines the same size and staying under the vSphere maximum. While not many environments today have a single virtual machine running at this size, this test ran four of them on a single host under high load.

To measure the scalability of the solution at full capacity, tests were first run with just a single monster virtual machine. In additional tests, all four virtual machines were run at the same time. Maximum performance was found for each test case by increasing the number of threads in the client drivers to find the point at which the most orders per minute (OPM) were achieved. This point of maximum throughput was also found to be near CPU saturation, indicating that performance had peaked.

Figure 2. Almost linear scalability of 4 x 120-vCPU VMs on a single server

In this type of test, the ideal is linear scalability: a 4-times performance increase going from a single virtual machine to four virtual machines. As Figure 2 shows, the four 120-vCPU virtual machines achieved 3.7 times the throughput of the single 120-vCPU virtual machine, which is 92% of perfect linear scalability. Storage performed at a very high level, maintaining 0.3 milliseconds latency and 20,000 IOPS during the test.
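The scalability percentages quoted in this paper are simply the measured speedup divided by the ideal (linear) speedup. A small sketch of the arithmetic; the function name is ours, not from the paper:

```python
def scaling_efficiency(speedup: float, vm_count: int) -> float:
    """Percent of perfect linear scalability achieved when running vm_count VMs."""
    return 100.0 * speedup / vm_count

# Four 120-vCPU VMs reached 3.7x a single VM's throughput: 92.5%,
# reported in the paper as 92%.
four_vm = scaling_efficiency(3.7, 4)

# Sixteen 30-vCPU VMs reached 14.3x a single VM (reported later): about 89.4%,
# reported in the paper as 89%.
sixteen_vm = scaling_efficiency(14.3, 16)
```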
Eight 60-vCPU VMs

For the next set of tests, the virtual machines were adjusted down to 60 vCPUs and cloned so that there was a total of eight 60-vCPU VMs. In this test, each virtual machine had the same number of vCPUs as each Superdome X compute blade. It is by no means a requirement to match virtual machine size to the underlying hardware so specifically, but this can allow for optimized results in some environments.

Figure 3. Scalability of 60-vCPU VMs

The total throughput achieved with eight 60-vCPU VMs was the highest of any of the tests conducted. The IBM FlashSystem array also continued to achieve impressive performance, with latency under 0.3 milliseconds and an average of 16,000 IOPS.
Sixteen 30-vCPU VMs

This test consisted of running sixteen 30-vCPU VMs. Each of these 30-vCPU VMs was essentially using all the threads on a server socket because of the use of the PreferHT parameter. This large number of monster VMs running at the same time still resulted in very good total throughput and excellent scalability moving from a single virtual machine to all sixteen. The throughput of the sixteen virtual machines was 14.3 times that of a single VM, or 89% of perfect linear scalability. Once again, disk latency remained below 0.3 milliseconds, with average IOPS of 13,000.
Under-Provisioning with Four 112-vCPU VMs

In the other tests covered in this paper, the server was fully committed, with a vCPU allocated for every thread on the host. This means that all threads will be used for virtual machine vCPUs. In most environments, this still leaves lots of CPU available because not all virtual machines are running at full CPU utilization. In an environment where all assigned vCPUs are at 100% usage, however, there isn't anything left over for the ESXi hypervisor to use for its functions, which include virtual networking and disk I/O handling. The hypervisor then competes directly with the virtual machines for CPU. In this case, the performance of the virtual machines can actually be improved by reducing the number of vCPUs to leave some CPU threads available on the host for the use of ESXi.

In this specific configuration, while running the 4 x 120-vCPU VMs with the DVD Store 3 workload, the network traffic is about 900 megabits per second (Mb/s) transmitted and 200 Mb/s received, and an average of 30,000 disk IOPS is also being processed. In order to allow the host to have some CPU resources available to handle this work, the number of vCPUs for each of the four virtual machines was reduced from 120 to 112. This leaves one core (two hyper-threads) per socket unassigned to a virtual machine.

Figure 4. 4 x 120 vCPU vs. 4 x 112 vCPU

The results show that overall throughput increased significantly, from 448 thousand to 524 thousand OPM. The gain in performance with smaller virtual machines is due to the reduction in contention for resources between the ESXi hypervisor and the virtual machines that is found in this extreme testing scenario when all CPU resources were allocated and fully utilized.
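The headroom left by the 112-vCPU sizing follows directly from the server details given earlier. A quick sketch of the arithmetic (variable names are ours):

```python
# Hardware facts from the Test Configuration Details section.
sockets = 16
cores_per_socket = 15
threads_per_core = 2  # hyper-threading enabled

total_threads = sockets * cores_per_socket * threads_per_core  # 480 threads
assigned_vcpus = 4 * 112                                       # 448 vCPUs

spare_threads = total_threads - assigned_vcpus                 # 32 threads
spare_cores = spare_threads // threads_per_core                # 16 cores
spare_cores_per_socket = spare_cores // sockets                # 1 core per socket
```

That one free core (two hyper-threads) per socket is what ESXi used to service the roughly 900 Mb/s transmit, 200 Mb/s receive, and 30,000 IOPS of hypervisor-side work described above.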
CPU Affinity vs. PreferHT

It is possible to control the CPUs that are used for a virtual machine by using the CPU affinity setting. This allows an administrator to override the ESXi scheduler and only allow a virtual machine to use specific physical cores; the vCPUs used by the virtual machine are pinned to those cores. In certain benchmarking scenarios, the use of CPU affinity has shown small increases in performance. Even in these relatively uncommon cases, its use is not recommended because of the high administrative effort and the potential for poor performance if the setting is not updated as changes in the environment occur, or if the CPU affinity setting is done incorrectly.

Using the Capstone testing environment, a test was conducted with CPU affinity and with PreferHT to measure which configuration performed better. It was found that PreferHT, which allows the ESXi hypervisor to make all vCPU scheduling decisions, outperformed the CPU affinity configuration by 4%.
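For reference, pinning is expressed with the per-VM scheduling affinity setting. A hypothetical .vmx fragment restricting a VM to the first 120 logical processors might look like the following; the processor range shown is illustrative, not the actual Capstone layout:

```
sched.cpu.affinity = "0-119"
```

Setting the value back to "all" (the default) removes the pinning and returns all scheduling decisions to ESXi, which is the configuration that performed better in this comparison.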
Figure 5. Using PreferHT performed slightly better than setting CPU affinity

Best Practices

Running many very large virtual machines on an even larger server makes it more important to follow monster virtual machine best practices.
• Consider the server's NUMA architecture when deciding what size to make the virtual machines. When creating virtual machines, make sure the virtual NUMA sockets match the physical NUMA architecture of the host as closely as possible. For more information, see "Using NUMA Systems with ESXi" [3].
• Size and configure storage with enough performance to match the large performance capability of the monster virtual machines. A large server with underpowered storage will be limited by the storage.
• Network performance can quickly become an issue if the traffic for the large virtual machines is not correctly spread across multiple NICs. Combining a number of high-performance workloads on a single host will also result in high network traffic that will most likely need to use multiple network connections to avoid a bottleneck.
• In extremely high CPU utilization scenarios, including benchmark tests, it can be better to leave a few CPU cores unassigned to virtual machines to give the ESXi hypervisor needed resources for its functions.
• Do not use CPU affinity, sometimes referred to as CPU pinning, because it usually does not result in a big increase in performance.
• In some extreme high-utilization scenarios, use the PreferHT setting to get more total performance from a system, but note that using this setting could reduce individual virtual machine performance.
Conclusion

Project Capstone has shown that vSphere 6 is capable of running multiple giant monster virtual machines today on some of the world's most capable servers and storage. The HPE Superdome X and super low latency IBM FlashSystem storage were chosen because of their tremendous performance capabilities, their ease of configuration and use, and their overall complementary stature to vSphere 6. The unique properties of this stack allowed the testing team to push the limits of virtualized infrastructure to never-before-seen levels. As stated in the media collateral "Project Capstone, Driving Oracle to Soar Beyond the Clouds" (see Appendix), this example infrastructure stack is possible today and shows that, as higher core counts and all-flash storage arrays become more common in the future, a VMware vSphere-based approach will provide the needed scalability and capacity.

This collaboration of VMware, HPE, and IBM shows that applications of the largest sizes can run on a vSphere virtual infrastructure. The limiting factor in most datacenters today is the hardware, but when using the latest technology available, it is possible to lift these limits and bring the flexibility and capabilities of virtualized infrastructure to all corners of the datacenter. This collaborative achievement between three of the world's most recognized computing companies has solidified the proposition of comprehensive virtualization that VMware has held for a number of years. Very simply put, all applications and databases, regardless of their processing, memory, networking, or throughput demands, are candidates for a virtualized infrastructure. VMware, HPE, and IBM built Project Capstone with leading-edge components used as a foundation to prove that 100% virtualization is a reality in even the largest compute environments.
Appendix

An initial blog for Project Capstone was previously published [4]:
http://blogs.vmware.com/vsphere/2015/08/vmworld-us-2015-spotlight-session-project-capstone-a-collaboration-between-vmw-hp-ibm-no-application-left-behind.html

A short video on Project Capstone that gives some highlights from the project is available online [5]:
https://www.youtube.com/watch?v=X4SRxl04uQ0

Project Capstone was presented at VMworld 2015 in San Francisco with executives from all three companies participating. A video of this presentation is available online [6]:
https://www.youtube.com/watch?v=O3BTvP46i4c

References

[1] Hewlett-Packard Development Company, L.P. (2010) HP nPartitions (nPars), for Integrity and HP 9000 midrange. http://www8.hp.com/h20195/v2/GetPDF.aspx/c04123352.pdf
[2] Todd Muirhead and Dave Jaffe. (2015, July) DVD Store 3. http://www.github.com/dvdstore/ds3
[3] VMware, Inc. (2015) Using NUMA Systems with ESXi. http://pubs.vmware.com/vsphere-60/index.jsp#com.vmware.vsphere.resmgmt.doc/GUID-7E0C6311-5B27-408E-8F51-E4F1FC997283.html
[4] Don Sullivan. (2015, August) VMworld US 2015 Spotlight Session: Project Capstone, a Collaboration between VMW, HP & IBM. http://blogs.vmware.com/vsphere/2015/08/vmworld-us-2015-spotlight-session-project-capstone-a-collaboration-between-vmw-hp-ibm-no-application-left-behind.html
[5] IBM Systems ISVs. (2015, November) Project Capstone - Pushing the performance limits of virtualization. https://www.youtube.com/watch?v=X4SRxl04uQ0
[6] VMworld. (2015, November) VMworld 2015: VAPP6952-S - VMware Project Capstone, a Collaboration of VMware, HP, and IBM. https://www.youtube.com/watch?v=O3BTvP46i4c
[7] VMware, Inc. (2015) Configuration Maximums vSphere 6.0. https://www.vmware.com/pdf/vsphere6/r60/vsphere-60-configuration-maximums.pdf

VMware, Inc. 3401 Hillview Avenue, Palo Alto, CA 94304 USA. Tel 877-486-9273, Fax 650-427-5001, www.vmware.com

Copyright 2016 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.

Date: 27 January 2016
Comments on this document: https://communities.vmware.com/docs/DOC-30846

About the Authors

Leo Demers, Mission Critical Product Manager, HPE
Kristy Ortega, EcoSystem Offering Manager, IBM
Rawley Burbridge, FlashSystem Corporate Solution Architect, IBM
Todd Muirhead, Staff Performance Engineer, VMware
Don Sullivan, Product Line Marketing Manager for Business Critical Applications, VMware

Acknowledgements

The authors thank Mark Lohmeyer, Michael Kuhn, Randy Meyer, Drew Sher, Rawley Burbridge, Bruce Herndon, Jim Britton, Reza Taheri, Juan Garcia-Rovetta, Michelle Tidwell, and Joseph Dieckhans.
