Peeking At the Future with Giant Monster Virtual Machines

Project Capstone: A Performance Study of Running Many Large Virtual Machines in Parallel

TECHNICAL WHITE PAPER

Table of Contents

Executive Summary
Introduction
Project Capstone
VMware vSphere 6.0
HPE Superdome X
IBM FlashSystem
Test Environment
Test Configuration Details
Virtual Machine Configuration
Test Workload
Monster Virtual Machine Tests
Storage Performance
Four 120-vCPU VMs
Eight 60-vCPU VMs
Sixteen 30-vCPU VMs
Under-Provisioning with Four 112-vCPU VMs
CPU Affinity vs. PreferHT
Best Practices
Conclusion
Appendix
References
Executive Summary

This technical white paper examines the extraordinary possibilities available when leading-edge servers and storage push the boundaries of current technology in terms of capacity and performance. Tests were run with many different configurations of extremely large virtual machines (known as monster virtual machines), and the results show that VMware vSphere 6 successfully ran all of the virtual machines in a high-performing and efficient manner. vSphere 6 is ready to run the largest systems and workloads of today with great performance, and it is also ready to take on the highly performant systems and workloads that will become more common in data centers in the future.
Introduction

The rate of increase in the capacity and performance of computing is dramatic. Starting in 1975, Moore's law observed that the number of transistors in an integrated circuit doubles every two years. This doubling of transistors has translated into chip performance also doubling every two years. VMware vSphere has also rapidly increased its capacity to support larger virtual machines and hosts to keep up with this compute capacity that increases over time.

Two- and four-socket x86-based servers are commonly used today. While the number of cores per socket in these servers does not exactly follow Moore's law (because each core itself is more powerful with each generation of processors), it can be used as a rough proxy. The current generation of Intel Xeon chips has a maximum of 18 cores per socket and 36 logical processors with hyper-threading enabled. This is almost a doubling of the 10 cores per socket in Xeon chips from two generations and about four years before. Many four-socket servers that use the current generation of Intel Xeon processors have 72 cores, but the HPE Superdome X has 16 sockets with 240 cores. By using this cutting-edge server, it is possible to have the type of compute capacity in a single server that, by following Moore's law, won't be available in a four-socket server for many years. It is a peek into the future.
Project Capstone

Project Capstone brings together VMware, HPE, and IBM in a unique opportunity to combine these industry-leading companies and their respective leading-edge technologies to build a test environment that shows the upper bounds of what is currently possible with such giant compute power. Running numerous heavy workloads on monster virtual machines in parallel on a vSphere 6, HPE Superdome X, and IBM FlashSystem configuration exemplifies the present capabilities of these combined technologies.

Project Capstone became a centerpiece of the 2015 VMware conference season: it occupied center stage at VMworld US in San Francisco as the subject of a highly anticipated Spotlight Session that included individual presentations from senior management of VMware, HP, and IBM. VMworld 2015 Europe in Barcelona included a Capstone-themed breakout session as well. Perhaps most significantly, the VMware floor presence at Oracle OpenWorld in San Francisco in October featured a complete demo version of the Capstone stack, including the Superdome X as well as the IBM FlashSystem.
VMware vSphere 6.0

VMware vSphere 6.0 includes new scalability features that enable it to host extremely large and performance-intensive applications. The capabilities of individual virtual machines have increased significantly from previous versions of vSphere: a single virtual machine can now have up to 128 vCPUs and 4TB of memory. While these levels of resources are not commonly required, some large applications do require and make use of resources at this scale. These are usually the last applications to be considered for virtualization due to their size, but it is now possible to move this last tier of applications into virtual machines.
HPE Superdome X

HPE Integrity Superdome X sets new, high standards for x86 availability, scalability, and performance; it is an ideal platform for critical business processing and decision-support workloads. Superdome X blends x86 efficiencies with proven HPE mission-critical innovations for a superior uptime experience, with RAS (reliability, availability, and serviceability) features not found in other x86 platforms that allow this machine to achieve five nines (99.999%) of availability. Breakthrough scalability of up to 16 sockets can handle the largest scaled-up x86 workloads. Through its unique nPars technology, HPE Superdome X increases reliability and flexibility by allowing electrically isolated environments to be built within a single enclosure [1]. It is a well-balanced architecture with powerful Xeon processors working in concert with high I/O and a large memory footprint, enabling the virtualization of large and critical applications at an unprecedented scale. Whether you want to maximize application uptime, standardize, or consolidate, HPE Superdome X helps virtualize mission-critical environments in ways never before imagined.

The HPE Superdome X is the ideal system for Project Capstone because it is uniquely suited to act as the physical platform for such a massive virtualization effort. The ability of vSphere 6 to scale up to 128 virtual CPUs can be easily realized on the HPE Superdome X because it allows massive individual virtual machines to be encapsulated on a single system while huge aggregate processing is parallelized.
IBM FlashSystem

The IBM FlashSystem family of all-flash storage platforms includes the IBM FlashSystem 900 and IBM FlashSystem V9000 arrays. Powered by IBM FlashCore technology, the FlashSystem 900 delivers the extreme performance, enterprise reliability, and operational efficiencies required to gain competitive advantage in today's dynamic marketplace. Adding to these capabilities, FlashSystem V9000 offers the advantages of software-defined storage at the speed of flash. These all-flash storage systems deliver the full capabilities of FlashCore technology's hardware-accelerated architecture, MicroLatency modules, and advanced flash management, coupled with a rich set of features found in only the most advanced enterprise storage solutions, including IBM Real-time Compression, virtualization, dynamic tiering, thin provisioning, snapshots, cloning, replication, data copy services, and high-availability configurations.

While virtualization lifts the physical restraints on the server room, the overall performance of the multi-workload server environments enabled by virtualization is held back by traditional storage, because disk-based systems struggle with the challenges posed by the resulting I/O. As virtualization has enabled the consolidation of multiple workloads onto fewer physical servers, disks simply can't keep up, and this limits the value enterprises gain from virtualization. IBM FlashSystem V9000 solves the storage challenges left unanswered by traditional storage solutions. It handles random I/O patterns with ease, and it offers the capability to virtualize all existing data storage resources and bring them together under one point of control. FlashSystem V9000 provides a comprehensive storage solution that seamlessly and automatically allocates storage resources to address every application demand. It moves data to the most efficient, cost-effective storage medium (from flash, to disk, and even to tape) without disrupting application performance or data availability, and more capacity can be added without application downtime or a lengthy update process. IBM FlashSystem V9000 helps enterprises realize the full value of VMware vSphere 6.
Test Environment

The test environment was designed to allow testing of extremely large monster virtual machines. vSphere 6 provides the capability to host virtual machines of up to 128 vCPUs; this is the foundation for running larger monster virtual machines than in the past. The HPE Superdome X and IBM FlashSystem storage array provided the server and storage platforms, respectively. The Superdome X used in this project had 240 cores and 480 logical threads with hyper-threading enabled. This was coupled with 20TB of extremely low latency, all-flash storage within the IBM FlashSystem array. A four-socket server was used as a client load driver system for the test bed. The diagram below shows the test bed setup.

Figure 1. Test bed hardware

Test Configuration Details

HPE Superdome X server:
- vSphere 6.0
- 16 Intel Xeon E7-2890 v2 2.8GHz CPUs (15 cores per CPU)
- 240 cores / 480 threads (hyper-threading enabled)
- 12TB of RAM
- 16Gb Fibre Channel
- 10Gb Ethernet

IBM FlashSystem 900:
- 20TB capacity
- All-flash memory
- 16Gb Fibre Channel

Client load driver server:
- 4 x Intel Xeon E7-4870 2.4GHz
- 512GB of RAM
- 10Gb Ethernet

Virtual Machine Configuration

The configuration of the virtual machines was kept constant in all tests except for the number of virtual CPUs and the related virtual NUMA.
In all tests, the total number of vCPUs across all virtual machines under test was equal to the number of cores or hyper-threads on the server. In the maximum-size virtual machine test case, there were four virtual machines, each with 120 vCPUs, for a total of 480 vCPUs assigned on the server. This matches the 480 hyper-threads available on the server. Table 1 shows the number of virtual machines with their vCPU configurations that were tested.

Number of VMs   vCPUs per VM   Virtual Sockets per VM   Total vCPUs Assigned on Server   Total Physical Threads on Server with HT Enabled
4               120            4                        480                              480
8               60             2                        480                              480
16              30             1                        480                              480

Table 1.
Virtual machine configuration

The configuration parameter PreferHT was used for these tests to optimize the use of the system's hyper-threads in this high-CPU-utilization benchmark. By default, vSphere 6 schedules each vCPU on a core where another vCPU is not scheduled. In other words, vSphere will not use the second thread that hyper-threading creates on each core until a vCPU is already scheduled on every physical core on the system. The PreferHT parameter changes this and instructs the scheduler for a virtual machine to prefer hyper-threads over physical cores.

The best performance for two vCPUs would come from using one thread on each of two physical cores, and this is the default scheduling behavior. Using two threads of the same core results in lower performance because hyper-threads share most of the resources of the physical core. However, in the case of high overall system utilization, all threads on all cores are in use at the same time. PreferHT then provides a performance advantage because each virtual machine is spread across fewer NUMA nodes, which results in increased NUMA memory locality. By using PreferHT, a highly utilized system becomes more efficient because the virtual machines all have more NUMA locality while still using all the logical threads on the server.
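As a concrete illustration, PreferHT is enabled for an individual virtual machine through an advanced configuration option in its .vmx file. The option name below is VMware's documented per-VM setting; treat this fragment as a sketch rather than the exact configuration used in these tests:

```
numa.vcpu.preferHT = "TRUE"
```

There is also a host-wide equivalent (the Numa.PreferHT advanced host attribute), but setting the option per VM, as above, limits the effect to the monster virtual machines under test.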
Standard best practices for database virtual machines were used for the configuration. Each virtual machine was configured with 256GB of RAM, two pvSCSI controllers, and a single vmxnet3 virtual network adapter. The virtual machines were installed with Red Hat Enterprise Linux 6.5 as the guest operating system. Oracle 12c was installed following the installation guide from Oracle.
Test Workload

The open source database workload DVD Store 3 was used for these tests [2]. DVD Store simulates an online store that allows customers to log in, browse products, read and submit product reviews, and purchase products. It exercises many database features, including primary keys, foreign keys, full-text indexing and searching, transactions, rollbacks, stored procedures, triggers, and both simple and complex multi-join queries. It is designed to be CPU intensive, but it also requires low latency storage in order to achieve good throughput.

DVD Store includes a driver program that simulates user activity on the database. Each simulated user steps through the full process of an order: log in, browse the DVD catalog, browse product reviews, and purchase DVDs. Performance is measured in orders per minute (OPM). DVD Store 3, which was recently updated from version 2, adds product reviews and a few other features designed to make the workload include the typical product reviews commonly found on many Web sites today, and version 3 is also more CPU intensive. The increased CPU usage makes it possible for a DVD Store 3 instance to fully saturate large systems more easily than was possible with the previous version of DVD Store.

For these tests, a 40GB DVD Store 3 database instance was created on each virtual machine. The direct database driver was used on the client load system to stress the database without running a middle tier, because the focus of these tests was on the large database virtual machines. The database buffer cache was set to the same size as the database to optimize performance. The number of driver threads running against each monster virtual machine was increased until the maximum OPM began to decrease. At the point of maximum OPM, the CPU usage and other performance metrics were checked to verify that the system had reached saturation.
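The ramp-up procedure described above can be sketched as a simple search loop. This is not part of the DVD Store tooling; the `opm_at` callback is a hypothetical stand-in for running the driver at a given thread count and reading back the measured OPM, and the sample numbers are made up for illustration:

```python
def find_saturation(opm_at, start=8, step=8, max_threads=512):
    """Increase driver thread count until measured OPM begins to decrease,
    then report the thread count and throughput at the peak."""
    best_threads, best_opm = start, opm_at(start)
    t = start + step
    while t <= max_threads:
        opm = opm_at(t)
        if opm < best_opm:          # throughput fell off: past saturation
            break
        best_threads, best_opm = t, opm
        t += step
    return best_threads, best_opm

# Hypothetical throughput curve (OPM in thousands) that peaks at 64 threads.
sample = {8: 120, 16: 230, 24: 330, 32: 410, 40: 460, 48: 490,
          56: 505, 64: 510, 72: 495, 80: 470}
print(find_saturation(lambda t: sample.get(t, 0)))  # (64, 510)
```

In the real tests, the equivalent of `opm_at` was a full benchmark run, and CPU utilization was checked at the peak to confirm the system was actually saturated rather than bottlenecked elsewhere.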
Monster Virtual Machine Tests

A series of tests were run with different sizes of virtual machines. Each test is briefly described along with its results and analysis. The first tests discussed are all similar in that each configuration is a set of virtual machines that fully consumes all the CPU threads on the host. The configurations are four 120-vCPU VMs, eight 60-vCPU VMs, and sixteen 30-vCPU VMs. In each case, the total number of vCPUs running across all the virtual machines is 480, which equals the number of CPU threads on the host. In addition to these tests with maximum configurations, some tests were run with a virtual machine configuration that under-provisions the server, along with a test comparing CPU affinity (pinning) vs. PreferHT configurations.
Storage Performance

For these tests, the goal was to use all the CPUs on the server. To accomplish this, the amount of disk I/O was minimized by specifying a database buffer cache that was approximately the same size as the database on disk. This meant that after the initial warm-up phase of a test run, most database queries could be satisfied without a disk I/O operation because most of the database was cached in memory. For all CPUs to be kept busy, the disk I/O operations that do occur must have as low a latency as possible. The IBM FlashSystem array was able to keep average disk latency below 0.3 milliseconds in all tests and was a highlight of system performance. IOPS peaked at approximately 50,000 during some of the test runs, which was well within the capabilities of the storage array. The array provided extremely low latency storage in all test scenarios. The capabilities of the IBM FlashSystem array in terms of IOPS were never pushed, but the tests did benefit greatly from the consistently low response times.
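As a back-of-the-envelope check of how lightly loaded the array was (this is my own arithmetic via Little's law, not a figure reported by the testing), the average number of I/Os in flight is simply IOPS multiplied by average latency:

```python
def outstanding_ios(iops, latency_ms):
    """Little's law: mean concurrency = arrival rate x mean service time."""
    return iops * latency_ms / 1000.0

# At the observed peak of ~50,000 IOPS with 0.3 ms average latency,
# the array held only about 15 I/Os in flight on average.
print(outstanding_ios(50_000, 0.3))
```

A queue depth of roughly 15 is trivial for an all-flash array, which is consistent with the observation that the array's IOPS capabilities were never pushed.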
Four 120-vCPU VMs

The maximum size of a virtual machine in vSphere 6 is 128 vCPUs. With the limit of 480 total CPU threads on the host, running four 120-vCPU VMs is the largest configuration possible while keeping all virtual machines the same size and staying under the vSphere maximum. While not many environments today have even a single virtual machine running at this size, this test ran four of them on a single host under high load.

To measure the scalability of the solution at full capacity, tests were run first with just a single monster virtual machine. In additional tests, all four virtual machines were run at the same time. Maximum performance was found for each test case by increasing the number of threads in the client drivers to find the point at which the most orders per minute (OPM) were achieved. This point of maximum throughput was also found to be at near CPU saturation, indicating that performance had peaked.
Figure 2. Almost linear scalability of 4 x 120-vCPU VMs on a single server

In this type of test, the ideal is linear scalability, which would be a 4 times performance gain going from a single virtual machine to four virtual machines. As Figure 2 shows, the four 120-vCPU virtual machines achieved 3.7 times the throughput of the single 120-vCPU virtual machine, which is 92% of perfect linear scalability. Storage performed at a very high level, maintaining 0.3 milliseconds latency and 20,000 IOPS during the test.
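The scalability percentages quoted in this paper are straightforward ratios of measured speedup to ideal speedup. A quick check of the reported figures (my arithmetic, using only the speedups given in the paper):

```python
def scaling_efficiency(speedup, n_vms):
    """Percent of perfect linear scaling achieved when running n_vms VMs."""
    return 100.0 * speedup / n_vms

print(scaling_efficiency(3.7, 4))    # four 120-vCPU VMs: 92.5%, reported as 92%
print(scaling_efficiency(14.3, 16))  # sixteen 30-vCPU VMs: ~89.4%, reported as 89%
```

Both results match the rounded percentages given for the four-VM and sixteen-VM configurations.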
Eight 60-vCPU VMs

For the next set of tests, the virtual machines were adjusted down to 60 vCPUs and cloned so that there was a total of eight 60-vCPU VMs. In this test, each virtual machine had the same number of vCPUs as each Superdome X compute blade. It is by no means a requirement to match virtual machine size to the underlying hardware so specifically, but this can allow for optimized results in some environments.

Figure 3. Scalability of 60-vCPU VMs (with PreferHT)

The total throughput achieved with eight 60-vCPU VMs was the highest of any of the tests conducted. The IBM FlashSystem array also continued to achieve impressive performance, with latency under 0.3 milliseconds and an average of 16,000 IOPS.
Sixteen 30-vCPU VMs

This test consisted of running sixteen 30-vCPU VMs. Each of these 30-vCPU VMs was essentially using all the threads of one server socket because of the use of the PreferHT parameter. This large number of monster VMs running at the same time still resulted in very good total throughput and excellent scalability moving from a single virtual machine to all sixteen. The throughput of the sixteen virtual machines was 14.3 times that of a single VM, or 89% of perfect linear scalability. Once again, disk latency remained below 0.3 milliseconds, with average IOPS of 13,000.
Under-Provisioning with Four 112-vCPU VMs

In the other tests covered in this paper, the server was fully committed, with a vCPU allocated for every thread on the host. This means that all threads will be used for virtual machine vCPUs. In most environments, this still leaves plenty of CPU available because not all virtual machines run at full CPU utilization. In an environment where all assigned vCPUs are at 100% usage, however, there isn't anything left over for the ESXi hypervisor to use for its own functions, which include virtual networking and disk I/O handling. The hypervisor then competes directly with the virtual machines for CPU. In this case, the performance of the virtual machines can actually be improved by reducing the number of vCPUs to leave some CPU threads available on the host for the use of ESXi.

In this specific configuration, while running the 4 x 120-vCPU VMs with the DVD Store 3 workload, the network traffic is about 900 megabits per second (Mb/s) transmitted and 200 Mb/s received, and an average of 30,000 disk IOPS is also being processed. In order to allow the host to have some CPU resources available to handle this work, the number of vCPUs for each of the four virtual machines was reduced from 120 to 112. This leaves one core (two hyper-threads) per socket unassigned to a virtual machine.
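The 112-vCPU figure follows directly from the host topology described earlier. The arithmetic (mine, but using only numbers given in the paper) is:

```python
# Host topology from the Test Configuration Details section.
sockets, cores_per_socket, threads_per_core = 16, 15, 2

total_threads = sockets * cores_per_socket * threads_per_core  # 480 logical threads
reserved_for_esxi = sockets * threads_per_core                 # one core (2 threads) per socket
vcpus_per_vm = (total_threads - reserved_for_esxi) // 4        # remainder split across four VMs

print(total_threads, reserved_for_esxi, vcpus_per_vm)  # 480 32 112
```

Reserving one core per socket keeps the freed capacity spread evenly across the NUMA nodes rather than concentrated on a single socket.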
Figure 4. 4 x 120 vCPU vs. 4 x 112 vCPU

The results show that overall throughput increased significantly, from 448 thousand to 524 thousand OPM. The gain in performance with the smaller virtual machines is due to the reduced contention for resources between the ESXi hypervisor and the virtual machines that occurs in this extreme testing scenario, where all CPU resources were allocated and fully utilized.
CPU Affinity vs. PreferHT

It is possible to control which CPUs are used by a virtual machine with the CPU affinity setting. This allows an administrator to override the ESXi scheduler and only allow a virtual machine to use specific physical cores; the vCPUs used by the virtual machine are pinned to those cores. In certain benchmarking scenarios, the use of CPU affinity has shown small increases in performance. Even in these relatively uncommon cases, its use is not recommended because of the high administrative effort and the potential for poor performance if the setting is not updated as the environment changes or if the CPU affinity setting is done incorrectly.

Using the Capstone testing environment, a test comparing CPU affinity and PreferHT was conducted to measure which configuration performed better. It was found that PreferHT, which allows the ESXi hypervisor to make all vCPU scheduling decisions, outperformed the CPU affinity configuration by 4%.
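For comparison with the PreferHT fragment shown earlier, CPU affinity can also be expressed as a per-VM .vmx entry. The option name sched.cpu.affinity is documented by VMware, but the CPU range below is only illustrative (a 120-vCPU VM pinned to the first 120 logical CPUs), and the exact list syntax accepted may vary by vSphere version:

```
sched.cpu.affinity = "0-119"
```

In these tests, the scheduler-driven PreferHT approach both performed better and avoids the maintenance burden of keeping pinned CPU lists correct as virtual machines move or are resized.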
Figure 5. Using PreferHT performed slightly better than setting CPU affinity

Best Practices

Running many very large virtual machines on an even larger server makes it more important to follow monster virtual machine best practices:

- Consider the server's NUMA architecture when deciding what size to make the virtual machines. When creating virtual machines, make sure the virtual NUMA sockets match the physical NUMA architecture of the host as closely as possible. For more information, see "Using NUMA Systems with ESXi" [3].
- Size and configure storage with enough performance to match the large performance capability of the monster virtual machines. A large server with underpowered storage will be limited by the storage.
- Network performance can quickly become an issue if the traffic for the large virtual machines is not correctly spread across multiple NICs. Combining a number of high performance workloads on a single host will result in high network traffic that will most likely need multiple network connections to avoid a bottleneck.
- In extremely high CPU utilization scenarios, including benchmark tests, it can be better to leave a few CPU cores unassigned to virtual machines to give the ESXi hypervisor the resources it needs for its own functions.
- Do not use CPU affinity, sometimes referred to as CPU pinning; it usually does not result in a significant increase in performance.
- In some extremely high utilization scenarios, use the PreferHT setting to get more total performance from a system, but note that this setting could reduce individual virtual machine performance.
Conclusion

Project Capstone has shown that vSphere 6 is capable of running multiple giant monster virtual machines today on some of the world's most capable servers and storage. The HPE Superdome X and super low latency IBM FlashSystem storage were chosen because of their tremendous performance capabilities, their ease of configuration and use, and their overall complementary stature to vSphere 6. The unique properties of this stack allowed the testing team to push the limits of virtualized infrastructure to never-before-seen levels. As stated in the media collateral "Project Capstone, Driving Oracle to Soar Beyond the Clouds" (see Appendix), this example infrastructure stack is possible today and shows that, as higher core counts and all-flash storage arrays become more common in the future, a VMware vSphere-based approach will provide the needed scalability and capacity.

This collaboration of VMware, HPE, and IBM shows that applications of the largest sizes can run on a vSphere virtual infrastructure. The limiting factor in most datacenters today is the hardware, but when using the latest technology available, it is possible to lift these limits and bring the flexibility and capabilities of virtualized infrastructure to all corners of the datacenter. This collaborative achievement between three of the world's most recognized computing companies has solidified the proposition of comprehensive virtualization that VMware has held for a number of years. Very simply put, all applications and databases, regardless of their processing, memory, networking, or throughput demands, are candidates for a virtualized infrastructure. VMware, HPE, and IBM built Project Capstone with leading-edge components used as a foundation to prove that 100% virtualization is a reality in even the largest compute environments.
Appendix

An initial blog for Project Capstone was previously published [4]:
http://blogs.vmware.com/vsphere/2015/08/vmworld-us-2015-spotlight-session-project-capstone-a-collaboration-between-vmw-hp-ibm-no-application-left-behind.html

A short video on Project Capstone that gives some highlights from the project is available online [5]:
https://www.youtube.com/watch?v=X4SRxl04uQ0

Project Capstone was presented at VMworld 2015 in San Francisco with executives from all three companies participating. A video of this presentation is available online [6]:
https://www.youtube.com/watch?v=O3BTvP46i4c

References

[1] Hewlett-Packard Development Company, L.P. (2010) HP nPartitions (nPars), for Integrity and HP 9000 midrange. http://www8.hp.com/h20195/v2/GetPDF.aspx/c04123352.pdf
[2] Todd Muirhead and Dave Jaffe. (2015, July) DVD Store 3. http://www.github.com/dvdstore/ds3
[3] VMware, Inc. (2015) Using NUMA Systems with ESXi. http://pubs.vmware.com/vsphere-60/index.jsp#com.vmware.vsphere.resmgmt.doc/GUID-7E0C6311-5B27-408E-8F51-E4F1FC997283.html
[4] Don Sullivan. (2015, August) VMworld US 2015 Spotlight Session: Project Capstone, a Collaboration between VMW, HP & IBM. http://blogs.vmware.com/vsphere/2015/08/vmworld-us-2015-spotlight-session-project-capstone-a-collaboration-between-vmw-hp-ibm-no-application-left-behind.html
[5] IBM Systems ISVs. (2015, November) Project Capstone - Pushing the performance limits of virtualization. https://www.youtube.com/watch?v=X4SRxl04uQ0
[6] VMworld. (2015, November) VMworld 2015: VAPP6952-S - VMware Project Capstone, a Collaboration of VMware, HP, and IBM. https://www.youtube.com/watch?v=O3BTvP46i4c
[7] VMware, Inc. (2015) Configuration Maximums vSphere 6.0. https://www.vmware.com/pdf/vsphere6/r60/vsphere-60-configuration-maximums.pdf

VMware, Inc. 3401 Hillview Avenue, Palo Alto, CA 94304 USA. Tel 877-486-9273, Fax 650-427-5001, www.vmware.com

Copyright 2016 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.

Date: 27 January 2016
Comments on this document: https://communities.vmware.com/docs/DOC-30846

About the Authors

Leo Demers, Mission Critical Product Manager, HPE
Kristy Ortega, EcoSystem Offering Manager, IBM
Rawley Burbridge, FlashSystem Corporate Solution Architect, IBM
Todd Muirhead, Staff Performance Engineer, VMware
Don Sullivan, Product Line Marketing Manager for Business Critical Applications, VMware

Acknowledgements

The authors thank Mark Lohmeyer, Michael Kuhn, Randy Meyer, Drew Sher, Rawley Burbridge, Bruce Herndon, Jim Britton, Reza Taheri, Juan Garcia-Rovetta, Michelle Tidwell, and Joseph Dieckhans.
