Hybrid Storage Management for Database Systems

Xin Liu
University of Waterloo, Canada
x39liu@uwaterloo.ca

Kenneth Salem
University of Waterloo, Canada
ksalem@uwaterloo.ca

ABSTRACT
The use of flash-based solid state drives (SSDs) in storage systems is growing. Adding SSDs to a storage system not only raises the question of how to manage the SSDs, but also raises the question of whether current buffer pool algorithms will still work effectively. We are interested in the use of hybrid storage systems, consisting of SSDs and hard disk drives (HDDs), for database management. We present cost-aware replacement algorithms, which are aware of the difference in performance between SSDs and HDDs, for both the DBMS buffer pool and the SSDs. In hybrid storage systems, the physical access pattern to the SSDs depends on the management of the DBMS buffer pool. We studied the impact of buffer pool caching policies on SSD access patterns. Based on these studies, we designed a cost-adjusted caching policy to effectively manage the SSD. We implemented these algorithms in MySQL's InnoDB storage engine and used the TPC-C workload to demonstrate that these cost-aware algorithms outperform previous algorithms.
1. INTRODUCTION
Flash memory has been used for many years in portable consumer devices (e.g., cameras, phones) where low power consumption and lack of moving parts are particularly desirable features. Flash-based solid state storage devices (SSDs) are now also becoming commonplace in server environments. SSDs are more expensive per bit than traditional hard disks (HDDs), but they are much cheaper in terms of cost per I/O operation. Thus, servers in data centers may be configured with both types of persistent storage. HDDs are cost effective for bulky, infrequently accessed data, while SSDs are well-suited to data that are relatively hot [8].
In this paper, we are concerned with the use of such hybrid (SSD and HDD) storage systems for database management. We consider hybrid storage systems in which the two types of devices are visible to the database management system (DBMS), so that it can use the information at its disposal to decide how to make use of the two types of devices.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Articles from this volume were invited to present their results at The 39th International Conference on Very Large Data Bases, August 26th-30th 2013, Riva del Garda, Trento, Italy.
Proceedings of the VLDB Endowment, Vol. 6, No. 8
Copyright 2013 VLDB Endowment 2150-8097/13/06... $10.00.

This is illustrated in Figure 1. When writing data to storage, the DBMS chooses which type of device to write it to.
Figure 1: System Architecture

Previous work has considered how a DBMS should place data in a hybrid storage system [11, 1, 2, 16, 5]. We provide a summary of such work in Section 7. In this paper, we take a broader view of the problem than is used by most of this work. Our view includes the DBMS buffer pool as well as the two types of storage devices.
We consider two related problems. The first is determining which data should be retained in the DBMS buffer pool. The answer to this question is affected by the presence of hybrid storage because blocks evicted from the buffer cache to an SSD are much faster to retrieve later than blocks evicted to the HDD. Thus, we consider cost-aware buffer management, which can take this distinction into account. Second, assuming that the SSD is not large enough to hold the entire database, we have the problem of deciding which data should be placed on the SSD. This should depend on the physical access pattern for the data, which depends, in turn, on both the DBMS workload and the management of the DBMS buffer pool.
Because we consider both buffer pool management and management of the hybrid storage system, we have more scope for optimization than previous work in this area, at the expense of additional invasiveness in the design and implementation of the DBMS. In addition, we must account for the fact that the two problems we consider are mutually dependent. Replacement decisions in the buffer pool depend on the locations (SSD or HDD) of the pages being replaced, since location affects both eviction cost and reloading cost.
Conversely, SSD page placement decisions depend on how the page is used, e.g., how frequently it is read or written, which depends in turn on the buffer manager. For example, under the GD2L replacement policy we propose here, moving a page from the HDD to the SSD may result in a significant increase in the physical read and write rates for that page, since GD2L tends to evict SSD pages quickly from the buffer pool.
Our work addresses these dependencies using an anticipatory approach to SSD management. When deciding whether to move a page into the SSD, our proposed admission and replacement policy (called CAC) predicts how such a move will affect the physical I/O load experienced by that page. The page is moved into the SSD only if it is determined to be a good candidate under this predicted workload. The DBMS buffer manager then makes cost-aware replacement decisions based on the current placements of buffered pages.
In this paper we present the following contributions:
- We present GD2L, a cost-aware algorithm for buffer pool management in database systems with hybrid storage systems. GD2L takes the usual concerns of DBMS buffer management (exploiting locality, scan resistance) into account, but also considers the fact that different devices in a hybrid storage system perform differently. GD2L is based on the GreedyDual algorithm [19], but we have restricted GreedyDual to hybrid systems that include only two types of devices. In addition, we have refined GreedyDual for operation in a DBMS environment.
- We present CAC, an anticipatory cost-based technique for managing the SSD. Unlike previous techniques, CAC is intended to work together with a cost-aware buffer manager like GD2L. It expects that moving a page into or out of the SSD will change the access pattern for that page, and it anticipates these changes when making SSD placement decisions.
- We present an empirical evaluation of GD2L and CAC. We have implemented both techniques in MySQL's InnoDB storage manager. We compare the performance of GD2L with that of InnoDB's native buffer manager, which is oblivious to the location of pages in a hybrid storage system. We compare CAC to several alternatives, including a non-anticipatory cost-based technique, LRU-2, and MV-FIFO. Our evaluation uses transactional workloads (TPC-C).
The remainder of this paper is organized as follows. Section 2 gives an overview of the system architecture that we assume. Section 3 presents the GD2L technique for database buffer pool management, and Section 4 shows empirical results that illustrate the effect of GD2L on the physical access patterns of database pages. Section 5 presents the CAC algorithm for managing the contents of the SSD device(s). The results of our evaluation of GD2L and CAC are presented in Section 6, and Section 7 summarizes related work.
2. SYSTEM OVERVIEW
Figure 1 illustrates the system architecture we have assumed for this work. The DBMS sees two types of storage devices, SSDs and HDDs. All database pages are stored on the HDD, where they are laid out according to the DBMS's secondary storage layout policies. In addition, copies of some pages are located in the SSD and copies of some pages are located in the DBMS buffer pool. Any given page may have copies in the SSD, in the buffer pool, or both.
When the DBMS needs to read a page, the buffer pool is consulted first. If the page is cached in the buffer pool, the DBMS reads the cached copy. If the page is not in the buffer pool but it is in the SSD, it is read into the buffer pool from the SSD. The SSD manager is responsible for tracking which pages are currently located in the SSD. If the page is in neither the buffer pool nor the SSD, it is read from the HDD.
If the buffer pool is full when a new page is read in, the buffer manager must evict a page according to its page replacement policy, which we present in Section 3. When the buffer manager evicts a page, the evicted page is considered for admission to the SSD if it is not already located there. SSD admission decisions are made by the SSD manager according to its SSD admission policy. If admitted, the evicted page is written to the SSD. If the SSD is full, the SSD manager must also choose a page to be evicted from the SSD to make room for the newly admitted page. SSD eviction decisions are made according to an SSD replacement policy. (The SSD admission and replacement policies are presented in Section 5.) If a page evicted from the SSD is more recent than the version of that page on the HDD, then the SSD manager must copy the page from the SSD to the HDD before evicting it, otherwise the most recent persistent version of the page will be lost. The SSD manager does this by reading the evicted page from the SSD into a staging buffer in memory, and then writing it to the HDD.
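The read path just described can be sketched in a few lines. This is a minimal illustration, not the paper's InnoDB code: all names (`SsdManager`, `BufferPool`, `read_page`) are hypothetical, in-memory dicts stand in for the devices, and the eviction policy is a placeholder for GD2L (Section 3).

```python
class SsdManager:
    """Tracks which pages are currently on the SSD (stand-in for the hash map)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = {}                  # page_id -> bytes

    def contains(self, page_id):
        return page_id in self.pages

    def read(self, page_id):
        return self.pages[page_id]

class BufferPool:
    def __init__(self, capacity, ssd, hdd):
        self.capacity = capacity
        self.cache = {}                  # page_id -> bytes
        self.ssd = ssd
        self.hdd = hdd                   # page_id -> bytes; HDD holds every page

    def read_page(self, page_id):
        # 1. The buffer pool is consulted first.
        if page_id in self.cache:
            return self.cache[page_id]
        # 2. Then the SSD, via the SSD manager's map.
        if self.ssd.contains(page_id):
            data = self.ssd.read(page_id)
        # 3. Finally the HDD, which stores all database pages.
        else:
            data = self.hdd[page_id]
        if len(self.cache) >= self.capacity:
            self.evict_one()             # replacement policy, e.g. GD2L
        self.cache[page_id] = data
        return data

    def evict_one(self):
        victim = next(iter(self.cache))  # placeholder, not a real policy
        del self.cache[victim]
```

On a miss the page is always brought into the buffer pool; SSD admission happens later, on cleaning or eviction, as the two key properties below explain.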
We assume that the DBMS buffer manager implements asynchronous page cleaning, which is widely used to hide write latencies from DBMS applications. When the buffer manager elects to clean a dirty page, that page is written to the SSD if the page is already located there. If the dirty page is not already located on the SSD, it is considered for admission to the SSD according to the SSD admission policy, in exactly the same way that a buffer pool eviction is considered. The dirty page will be flushed to the SSD if it is admitted there, otherwise it will be flushed to the HDD.
The buffer and SSD management techniques that we have described have two key properties. First, admission of pages into the SSD occurs only when pages are evicted or cleaned from the DBMS buffer pool. Pages are not admitted into the SSD when they are loaded into the buffer pool from the HDD, as might be expected in a multi-tier cache. The reason for this is to minimize cache inclusion [18], i.e., the duplication of pages in the buffer cache and the SSD. Second, each flush of a dirty page from the DBMS buffer pool goes either to the SSD or to the HDD, but not to both (at least not immediately). One advantage of this approach, compared to a write-through design, is that the SSD can potentially improve DBMS write performance, to the extent that writes are directed to the SSD. A disadvantage of this approach is that the latest version of an unbuffered page might, in general, be found on either device. However, because the DBMS always writes a dirty buffer pool page to the SSD if that page is already on the SSD, it can be sure that the SSD version (if any) of a page is always at least as recent as the HDD version. Thus, to ensure that it can obtain the most recently written version of any page, it is sufficient for the DBMS to know which pages have copies on the SSD, and to read a page from the SSD if there is a copy of the page there.

Table 1: Storage Device Parameters
Symbol  Description
RD      The read service time of the HDD
WD      The write service time of the HDD
RS      The read service time of the SSD
WS      The write service time of the SSD

To support this, the SSD manager maintains an in-memory hash map that records which pages are on the SSD. To ensure that it can determine the contents of the SSD even after a failure, the SSD manager uses a checkpointing technique (described in Section 5.4) to efficiently persist its map so that it can be recovered quickly.
3. BUFFER POOL MANAGEMENT
In this section, we describe our replacement algorithm for the buffer pool: a two-level version of the GreedyDual algorithm [19] which we have adapted for use in database systems. We refer to it as GD2L.
Most existing cost-aware algorithms, e.g., the balance algorithm [15] and GreedyDual [19], were proposed for file caching. They take into account the size of cached objects and their access costs when making replacement decisions, and target different situations: uniform object size with arbitrary retrieval costs, arbitrary object size with uniform retrieval costs, or arbitrary object size with arbitrary retrieval costs. The GreedyDual algorithm addresses the case of objects with uniform size but different retrieval costs. Young [19] shows that GreedyDual has the same (optimal) competitive ratio as LRU and FIFO [15].
GreedyDual is actually a range of algorithms which generalize well-known caching algorithms, such as LRU and FIFO. Initially, we present the GreedyDual generalization of LRU and our restriction to two retrieval costs. In Section 3.1, we describe how a similar approach can be applied to the LRU variant used by InnoDB, and we also discuss how it can be extended to handle writes.
GreedyDual associates a non-negative cost H with each cached page p. When a page is brought into the cache or referenced in the cache, H is set to the cost of retrieving the page into the cache. To make room for a new page, the page with the lowest H in the cache, Hmin, is evicted and the H values of all remaining pages are reduced by Hmin. By reducing the H values and resetting them upon access, GreedyDual ages pages that have not been accessed for a long time. The algorithm thus integrates locality and cost concerns in a seamless fashion.
GreedyDual is usually implemented using a priority queue of cached pages, prioritized based on their H value. With a priority queue, handling a hit and an eviction each require O(log k) time. Another computational cost of GreedyDual is the cost of reducing the H values of the remaining pages when evicting a page. To reduce the value H for all pages in the cache, GreedyDual requires k subtractions. Cao et al. [3] have proposed a technique to avoid the subtraction cost. Their idea is to keep an "inflation" value L and to offset all future settings of H by L.
Parameters representing the read and write costs to the SSD and the HDD are summarized in Table 1. In our case, there are only two possible initial values for H: one corresponding to the cost of retrieving an SSD page (RS) and the other to the cost of retrieving an HDD page (RD). The GD2L algorithm is designed for this special case. GD2L uses two queues to maintain pages in the buffer pool: one queue (QS) is for pages placed on the SSD, the other (QD) is for pages not on the SSD. Both queues are managed using LRU. With the technique proposed by Cao et al. [3], GD2L achieves O(1) time for handling both hits and evictions.
Figure 2 describes the GD2L algorithm. When GD2L evicts the page with the smallest H from the buffer pool, L (the inflation value) is set to that H value. If the newly requested page is on the SSD, it is inserted at the MRU end of QS and its H value is set to L + RS; otherwise it is inserted at the MRU end of QD and its H value is set to L + RD. Because the L value increases gradually as pages with higher H are evicted, pages in QD and QS are sorted by H value. The one having the smallest H value is at the LRU end. By comparing the H values of the LRU page of QD and the LRU page of QS, GD2L easily identifies the victim page that has the smallest H value in the buffer pool. The algorithm evicts the page with the lowest H value if the newly requested page is not in the buffer pool. In Figure 2, page q represents the page with the lowest H value.

if p is not cached:
    compare the LRU page of QS with the LRU page of QD
    evict the page q that has the smaller H
    set L = H(q)
    bring p into the cache
if p is on the SSD:
    H(p) = L + RS
    put p at the MRU end of QS
else (p is on the HDD only):
    H(p) = L + RD
    put p at the MRU end of QD

Figure 2: GD2L Algorithm For Reading Page p.
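The algorithm in Figure 2 can be sketched compactly in Python. This is a toy model under stated assumptions: `OrderedDict`s stand in for the two LRU lists, the service times are illustrative, and the names (`Gd2lPool`, `on_ssd`) are ours, not from the InnoDB implementation.

```python
from collections import OrderedDict

R_S, R_D = 1.0, 10.0   # illustrative read service times (SSD, HDD)

class Gd2lPool:
    def __init__(self, capacity, on_ssd):
        self.capacity = capacity
        self.on_ssd = on_ssd        # predicate: is this page on the SSD?
        self.L = 0.0                # inflation value (Cao et al. [3])
        # OrderedDicts keep LRU order (oldest first); values are H priorities.
        self.qs = OrderedDict()     # buffered pages that are on the SSD
        self.qd = OrderedDict()     # buffered pages that are on the HDD only

    def _evict(self):
        # Compare the LRU ends of the two queues; evict the smaller H.
        cand = [(q[next(iter(q))], next(iter(q)), q)
                for q in (self.qs, self.qd) if q]
        h, victim, q = min(cand, key=lambda c: c[0])
        del q[victim]
        self.L = h                  # inflation replaces k subtractions

    def access(self, page):
        for q in (self.qs, self.qd):
            if page in q:           # hit: reset H, move to MRU end
                q.move_to_end(page)
                q[page] = self.L + (R_S if q is self.qs else R_D)
                return "hit"
        if len(self.qs) + len(self.qd) >= self.capacity:
            self._evict()
        if self.on_ssd(page):       # miss: insert with inflated priority
            self.qs[page] = self.L + R_S
        else:
            self.qd[page] = self.L + R_D
        return "miss"
```

Because L only grows, each queue stays sorted by H, so the victim search touches just the two LRU ends; this is the O(1) behavior claimed above.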
3.1 Implementation of GD2L in MySQL
We implemented GD2L for buffer pool management in InnoDB, the default storage engine of the MySQL database system. InnoDB uses a variant of the least recently used (LRU) algorithm. When room is needed to add a new page to the pool, InnoDB evicts the LRU page from the buffer pool. Pages that are fetched on demand are placed at the MRU end of InnoDB's list of buffered pages. Prefetched pages are placed near the midpoint of the LRU list (3/8 of the way from the LRU end), moving to the MRU position only if they are subsequently read. Since prefetching is used during table scans, this provides a means of scan resistance.
To implement GD2L, we split InnoDB's LRU list into two LRU lists: QD and QS. As shown in Figure 3, the cached HDD pages (represented by H) are stored in QD and the cached SSD pages (represented by S) in QS. Newly loaded pages are placed either at the MRU end or the midpoint of the appropriate list, depending on whether they were prefetched or loaded on demand. When a new prefetched page is inserted at the midpoint of QD or QS, its H value is set to the H value of the current midpoint page.
When pages are modified in the buffer pool, they need to be copied back to the underlying storage device. In InnoDB, dirty pages are generally not written to the underlying storage device immediately after they are modified in the buffer pool. Instead, page cleaner threads are responsible for asynchronously writing back dirty pages.
Figure 3: Buffer pool managed by GD2L

The page cleaners can issue two types of writes: replacement writes and recoverability writes. Replacement writes are issued when dirty pages are identified as eviction candidates. To remove the latency associated with synchronous writes, the page cleaners try to ensure that pages that are likely to be replaced are clean at the time of the replacement. In contrast, recoverability writes are those that are used to limit failure recovery time. InnoDB uses write-ahead logging to ensure that committed database updates survive failures. The failure recovery time depends on the age of the oldest changes in the buffer pool. The page cleaners issue recoverability writes of the least recently modified pages to ensure that a configurable recovery time threshold will not be exceeded.
In InnoDB, when the free space of the buffer pool is below a threshold, page cleaners start to check a range of pages from the tail of the LRU list. If there are dirty pages in the range, the page cleaners flush them to the storage devices. These are replacement writes. We changed the page cleaners to reflect the new cost-aware replacement policy. Since pages with lower H values are likely to be replaced sooner, the page cleaners consider H values when choosing which pages to flush. As GD2L maintains two LRU lists in the buffer pool (QD and QS), the page cleaners check pages from the tails of both lists. If there are dirty pages in both lists, the page cleaners compare their H values and choose dirty pages with lower H values to write back to the storage devices. We did not change the way the page cleaners issue recoverability writes, since those writes depend on page update time and not on page access cost.
The original GreedyDual algorithm assumed that a page's retrieval cost does not change. However, in our system a page's retrieval cost changes when it is moved into or out of the SSD. If a buffered page is moved into the SSD, then GD2L must take that page out of QD and place it into QS. This situation can occur when a dirty, buffered page that is not on the SSD is flushed, and the SSD manager elects to place the page into the SSD. If the page flush is a replacement write, it means that the page being flushed is a likely eviction candidate. In that case, GD2L removes the page from QD and inserts it at the LRU end of QS. If the page flush is a recoverability write, then the flushed page should not be inserted at the LRU end of QS because it is not an eviction candidate. As QS is sorted by page H value, we could find the page's position in QS by looking through pages in QS and comparing the H values. Since recoverability writes are much less common than replacement writes, pages are rarely moved into the SSD by recoverability writes. Hence, we chose a simple approach for GD2L. Instead of finding the accurate position for the page, GD2L simply inserts the page at the midpoint of QS and assigns it the same H value as the previous QS midpoint page.

Figure 4: Miss rate/write rate while on the HDD (only) vs. miss rate/write rate while on the SSD. Each point represents one page.

It is also possible that a buffered page that is in the SSD will be evicted from the SSD (while remaining in the buffer pool). This may occur to make room in the SSD for some other page. In this case, GD2L removes the page from QS and inserts it into QD. Since this situation is also uncommon, GD2L simply inserts the page at the midpoint of QD, as it does for recoverability writes.
4. THE IMPACT OF COST-AWARE CACHING
Cost-aware caching algorithms like GD2L take into account page location when making replacement decisions. As a result, a page's physical read rate and write rate might be different after its location changes. In this section, we address the following questions: if the buffer pool uses the GD2L caching algorithm, how do pages' physical read rates change when they are placed in the SSD? GD2L also changes the mechanism for asynchronous cleaning of dirty pages. How does this impact the pages' write rates?
To study the impact of the GD2L algorithm on the page access pattern, we drove the modified InnoDB with a TPC-C workload, using a scale factor of 10. The initial size of the database was approximately 1GB. For managing the SSD, we used the policy that will be described in Section 5. In our experiments, we set the buffer pool size to 200MB, the SSD size to 400MB, and the running duration to sixty minutes. During the run, we monitored the amount of time each page spent on the SSD, and its read and write rates while on the SSD and while not on the SSD. We identified pages that spent at least twenty minutes on the SSD and also spent at least twenty minutes not on the SSD (about 2500 pages). We observed the buffer pool miss rates and physical write rates for these pages. A logical read request on a page is realized as a physical read when the page is missed in the buffer pool. The page miss rate in the buffer pool is defined as the percentage of logical reads realized as physical reads.
Figure 4 shows the pages' buffer pool miss rates while the pages are on the HDD (only) vs. their miss rates while the pages are on the SSD, and the pages' write rates while the pages are on the HDD (only) vs. their write rates while they are on the SSD. From the two graphs we see that most page miss rates and write rates are larger while the page is cached on the SSD. This is as expected. Once pages are placed on the SSD, they are more likely to be evicted from the buffer pool because they have lower retrieval costs. As SSD pages in the buffer pool are better eviction candidates, the page cleaner needs to flush dirty ones to storage before they are evicted. As a result, page read and write rates go up while they are cached in the SSD.
5. SSD MANAGEMENT
Section 2 provided a high-level overview of the management of the SSD device. In this section, we present more details about SSD management, including the page admission and replacement policies used for the SSD and the checkpoint-based mechanism for recovering SSD meta-data after a failure.
Pages are considered for SSD admission when they are cleaned or evicted from the DBMS buffer pool. Pages are always admitted to the SSD if there is free space available on the device. New free space is created on the SSD device as a result of invalidations of SSD pages. Consider a clean page p in the DBMS buffer pool. If p is updated and hence made dirty in the buffer pool, the SSD manager invalidates the copy of p on the SSD if such a copy exists and it is identical to the copy of p on the HDD. Invalidation frees the space that was occupied by p on the SSD. If the SSD version of p is newer than the HDD version, it cannot be invalidated without first copying the SSD version back to the HDD. Rather than pay this price, the SSD manager simply avoids invalidating p in this case.
If there is no free space on the SSD when a page is cleaned or evicted from the DBMS buffer pool, the SSD manager must decide whether to place the page on the SSD and which SSD page to evict to make room for the newcomer. The SSD manager makes these decisions by estimating the benefit, in terms of reduction in overall read and write cost, of placing a page on the SSD. It attempts to keep the SSD filled with the pages that it estimates will provide the highest benefit. Our specific approach is called Cost-Adjusted Caching (CAC). CAC is specifically designed to work together with a cost-aware DBMS buffer pool manager, like the GD2L algorithm presented in Section 3.
5.1 CAC: Cost-Adjusted Caching
To decide whether to admit a page p to the SSD, CAC estimates the benefit B, in terms of reduced access cost, that will be obtained if p is placed on the SSD. The essential idea is that CAC admits p to the SSD if there is some page p' already on the SSD for which B(p') < B(p). To make room for p, it evicts the SSD page with the lowest estimated benefit.
Suppose that a page p has experienced r(p) physical read requests and w(p) physical write requests over some measurement interval prior to the admission decision. If the physical I/O load on p in the past were a good predictor of the I/O load p would experience in the future, a reasonable way to estimate the benefit of admitting p to the SSD would be

B(p) = r(p)(RD - RS) + w(p)(WD - WS)    (1)

where RD, RS, WD, and WS represent the costs of read and write operations on the HDD and the SSD (Table 1).
Unfortunately, when the DBMS buffer manager is cost-aware, like GD2L, the read and write counts experienced by p in the past may be particularly poor predictors of its future physical I/O workload. This is because admitting p to the SSD, or evicting it from the SSD if it is already there, will change p's physical I/O workload. In particular, if p is admitted to the SSD then we expect that its post-admission physical read and write rates will be much higher than its pre-admission rates, as was illustrated by the experiments in Section 4. Conversely, if p is evicted from the SSD, we expect its physical I/O rates to drop. Thus, we do not expect Equation 1 to provide a good benefit estimate when the DBMS uses cost-aware buffer management.
To estimate the benefit of placing page p on the SSD, we would like to know what its physical read and write workload would be if it were on the SSD. Suppose that r̂S(p) and ŵS(p) are the physical read and write counts that p would experience if it were placed on the SSD, and r̂D(p) and ŵD(p) are the physical read and write counts p would experience if it were not. Using these hypothetical physical read and write counts, we can write our desired estimate of the benefit of placing p on the SSD as follows

B(p) = (r̂D(p)RD - r̂S(p)RS) + (ŵD(p)WD - ŵS(p)WS)    (2)

Thus, the problem of estimating benefit reduces to the problem of estimating values for r̂D(p), r̂S(p), ŵD(p), and ŵS(p).
To estimate r̂S(p), CAC uses two measured read counts: rS(p) and rD(p). (In the following, we will drop the explicit page reference from our notation as long as the page is clear from context.) In general, p may spend some time on the SSD and some time not on the SSD. rS is the count of the number of physical reads experienced by p while p is on the SSD. rD is the number of physical reads experienced by p while it is not on the SSD. To estimate what p's physical read count would be if it were on the SSD full time (r̂S), CAC uses

r̂S = rS + αrD    (3)

In this expression, the number of physical reads experienced by p while it was not on the SSD (rD) is multiplied by a scaling factor α to account for the fact that it would have experienced more physical reads during that period if it had been on the SSD. We refer to the scaling factor α as the miss rate expansion factor, and we will discuss it further in Section 5.2. CAC estimates the values of r̂D, ŵS, and ŵD in a similar fashion:

r̂D = rD + rS/α    (4)
ŵS = wS + αwD    (5)
ŵD = wD + wS/α    (6)

The notation used in these formulas is summarized in Figure 5.

Figure 5: Summary of Notation
Symbol    Description
rD, wD    Measured physical read/write count while not on the SSD
rS, wS    Measured physical read/write count while on the SSD
r̂D, ŵD    Estimated physical read/write count if never on the SSD
r̂S, ŵS    Estimated physical read/write count if always on the SSD
mS        Buffer cache miss rate for pages on the SSD
mD        Buffer cache miss rate for pages not on the SSD
α         Miss rate expansion factor

An alternative approach to estimating r̂S would use only the observed read count while the page was on the SSD (rS), scaling it up to account for any time in which the page was not on the SSD. While this might be effective, it will work only if the page has actually spent time on the SSD so that rS can be observed. We still require a way to estimate r̂S for pages that have not been observed on the SSD. In contrast, estimation using Equation 3 will work even if rS or rD are zero due to lack of observations.
We track reference counts for all pages in the buffer pool and all pages in the SSD. In addition, we maintain an outqueue to track reference counts for a fixed number (Noutq) of additional pages. When a page is evicted from the SSD, an entry for the page is inserted into the outqueue. Entries are also placed in the outqueue for pages that are evicted from the buffer pool but not placed into the SSD. Each entry in the outqueue records only the page statistics. When the outqueue is full, the least-recently inserted entry is evicted to make room for a new entry.
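Equations 2-6 combine into a single benefit computation, sketched below. The device costs and statistics are illustrative values, and the function name is ours; the point is that a page never observed on the SSD (rS = wS = 0) still gets a usable estimate.

```python
R_D, R_S = 10.0, 1.0   # illustrative read service times (HDD, SSD)
W_D, W_S = 12.0, 2.0   # illustrative write service times (HDD, SSD)

def cac_benefit(rD, rS, wD, wS, alpha):
    """Estimated benefit of placing a page on the SSD (Equation 2).

    rD/wD: measured physical reads/writes while NOT on the SSD
    rS/wS: measured physical reads/writes while on the SSD
    alpha: miss rate expansion factor for the page's group (Section 5.2)
    """
    # Equations 3-6: scale each measured count up (or down) to estimate
    # the full-time on-SSD and full-time off-SSD workloads.
    rS_hat = rS + alpha * rD
    rD_hat = rD + rS / alpha
    wS_hat = wS + alpha * wD
    wD_hat = wD + wS / alpha
    # Equation 2: HDD cost avoided minus SSD cost incurred.
    return (rD_hat * R_D - rS_hat * R_S) + (wD_hat * W_D - wS_hat * W_S)

# A page observed only off the SSD still yields an estimate:
b = cac_benefit(rD=100, rS=0, wD=20, wS=0, alpha=3.0)
```

The SSD manager would rank resident pages by this value and evict the minimum when a candidate with a higher estimate arrives.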
5.2 The Miss Rate Expansion Factor
The purpose of the miss rate expansion factor (α) is to estimate how much a page's physical read and write rates will change if the page is admitted to the SSD. A simple way to estimate α is to compare the overall miss rate of pages on the SSD to that of pages that are not on the SSD. Suppose that mS represents the overall miss rate of logical read requests for pages that are on the SSD, i.e., the total number of physical reads from the SSD divided by the total number of logical reads of pages on the SSD. Similarly, let mD represent the overall miss rate of logical read requests for pages that are not located on the SSD. Both mS and mD are easily measured. Using mS and mD, we can define the miss rate expansion factor as:

α = mS / mD    (7)

For example, α = 3 means that the miss rate is three times higher for pages on the SSD than for pages that are not on the SSD.
While Equation 7 captures our intuition about increased miss rates for pages on the SSD, we have found that it is too coarse. In Equation 7, α is calculated using the buffer pool miss rates of all database pages, meaning that all pages will be assumed to have the same expansion factor. However, since different tables may have different access patterns and the distribution of page requests is not uniform, this may not be true. As an example, Figure 6 illustrates miss rate expansion factors of pages grouped by table and by logical read rate. The three lines represent pages holding the TPC-C STOCK, CUSTOMER, and ORDERLINE tables.

Figure 6: Miss rate expansion factor for pages from three TPC-C tables.
Since different pages may have substantially different miss rate expansion factors, we use different expansion factors for different groups of pages. Specifically, we group database pages based on the database object (e.g., table) for which they store data, and on their logical read rate, and we track a different expansion factor for each group. We divide the range of possible logical read rates into subranges of equal size. We define a group as pages that store data for the same database object and whose logical read rates fall in the same subrange. For example, in our experiments, we defined the subrange width as one logical read per minute. If the maximum logical read rate of a table was 1000, this table might have 1000 groups. For each page group g, we define the miss rate expansion factor as in Equation 7:

α(g) = mS(g) / mD(g)    (8)

where mS(g) is the overall miss rate for pages in g while they are in the SSD, and mD(g) is the overall miss rate for pages in g while they are not in the SSD.
We track logical and physical read counts for each individual page, as well as miss rates for each group. Page read counts are updated with each logical or physical read request to the page. Group miss rates are updated lazily, when certain events occur, using the per-page statistics of the group's pages. Specifically, we update group miss rates when a page is evicted from the buffer pool, when a dirty page is flushed from the buffer pool to the SSD, and when a page is evicted from the SSD. Because pages are grouped in part based on their logical read rates, which can fluctuate, the group to which a page belongs may also change over time. If this occurs, we subtract the page's read counts from those of its old group and add them to the new group.
It is possible that mS(g) or mD(g) will be undefined for some groups. For example, a group's mD(g) may be undefined because pages in the group have never been evicted from the buffer pool. We assume that mD(g) = 0 for such groups. Similarly, mS(g) may be undefined because no pages in the group have been admitted to the SSD. We set α(g) = 1 for such groups, which gives a better opportunity for them to be admitted to the SSD, giving us a better chance to collect statistics for them.
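The grouping scheme can be sketched as follows. The subrange width of one logical read per minute follows the experiments above; the names (`group_key`, `GroupStats`) and the guard against dividing by a zero mD(g) are our own additions, labeled as such.

```python
SUBRANGE_WIDTH = 1.0  # logical reads per minute, as in the experiments

def group_key(table_name, logical_reads_per_min):
    """Pages of the same table whose logical read rates fall in the
    same subrange share one miss rate expansion factor."""
    return (table_name, int(logical_reads_per_min // SUBRANGE_WIDTH))

class GroupStats:
    def __init__(self):
        # Per group: [physical reads on SSD, logical reads on SSD,
        #             physical reads off SSD, logical reads off SSD]
        self.counts = {}

    def add(self, key, phys_s, log_s, phys_d, log_d):
        c = self.counts.setdefault(key, [0, 0, 0, 0])
        for i, v in enumerate((phys_s, log_s, phys_d, log_d)):
            c[i] += v

    def alpha(self, key):
        """Equation 8 with the fallbacks described above."""
        c = self.counts.get(key)
        if c is None or c[1] == 0:        # mS(g) undefined: no SSD observations
            return 1.0                    # optimistic default to gather statistics
        mS = c[0] / c[1]
        mD = c[2] / c[3] if c[3] else 0.0 # mD(g) undefined: assume 0
        # Our guard (not from the text): fall back to 1.0 rather than
        # divide by a zero mD(g).
        return mS / mD if mD > 0 else 1.0
```

When a page's read rate drifts into a different subrange, its counts would be subtracted from the old group's entry and added to the new one, as described above.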
A potential efficiency threat is the number of possible groups for which the system must maintain statistics. The number of possible groups depends on the size of the subrange. If we do not set the subrange size, there is only one group in each table. A smaller subrange size leads to more accurate α(g) at a cost of more space for collecting statistics. In our evaluation (Section 6) we ignore this cost because the space requirement for tracking group statistics was less than 0.01% of the buffer pool size.
5.3 Sequential I/O
Hard disks have substantially better performance for sequential reads than for random reads. To account for this, CAC considers only random reads when estimating the benefit of placing a page in the SSD. In particular, the measured values rS and rD used in Equation 3 count only random reads. This requires that the SSD manager classify read requests as sequential or random. Two classification approaches have been proposed in recent work. Canim et al. [2] classify a page request as sequential if the page is within 64 pages of the preceding request. Do et al. [5] exploit the existing DBMS read-ahead (prefetch) mechanism: a page is marked as sequential if it is read from the disk via the read-ahead mechanism; otherwise, the page is marked as random. Do et al. [5] indicate that leveraging the read-ahead mechanism was much more effective. CAC adopts this approach for identifying sequential reads, using the read-ahead in the InnoDB buffer manager.
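The two classification rules contrasted above are simple enough to state directly. The 64-page window follows Canim et al. [2]; the boolean flag stands in for InnoDB's read-ahead mechanism, and the function names are ours.

```python
WINDOW = 64  # pages, per Canim et al. [2]

def is_sequential_canim(page_no, prev_page_no):
    # Canim et al.: sequential if within 64 pages of the preceding request.
    return prev_page_no is not None and abs(page_no - prev_page_no) <= WINDOW

def is_sequential_do(read_via_read_ahead):
    # Do et al.: sequential iff the page was fetched by the DBMS's
    # read-ahead (prefetch) mechanism; otherwise random.
    return read_via_read_ahead
```

The second rule is the one CAC adopts: it needs no per-request distance bookkeeping and reuses a signal the buffer manager already produces.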
5.4 Failure Handling
Since data present in the SSD may be more recent than that in the HDD, the system needs to ensure that it can identify the pages in the SSD after a system failure. TAC [2] does not have such a problem because it writes dirty pages to both the SSD and the HDD. Lazy cleaning, as proposed by Do et al. [5], handles this issue by flushing all dirty pages in the SSD to the HDD when taking a checkpoint. Neither of these approaches exploits the persistence of the SSD. In contrast, CAC assumes that the contents of the SSD will survive a failure, and it will read the latest version of a page from the SSD after a failure if the page was located there.
The challenge with this approach is that the SSD manager's in-memory hash map indicating which pages are in the SSD is lost during a failure. Debnath et al. [4] address this problem by checkpointing the hash map and logging all writes to the SSD. During recovery, the hash map can be rebuilt based on the last written hash map and the log. CAC's approach is also based on checkpointing, but it does not require logging of changes to the hash map. As each page header includes a page identifier, the hash map can be rebuilt without causing any runtime overhead by scanning all pages in the SSD during the failure recovery process. However, this may substantially increase recovery time. For example, based on the read service time of our SSD, to scan a 32GB SSD requires about three minutes. Larger SSDs would introduce proportionally larger delays during recovery.
To achieve faster recovery, CAC checkpoints the hash map periodically and also identifies a group of k low priority pages as an eviction zone on the SSD. Until the next checkpoint, CAC will evict only pages that fall into the eviction zone. After a failure, CAC initializes its hash map using the most recently checkpointed copy, and then checks the k SSD slots where the eviction candidates were located to identify what is actually there, updating the hash map if necessary. The eviction zone size (k) controls a trade-off between operational overhead and recovery time. CAC will checkpoint its hash map when all of the eviction candidates in the eviction zone have been evicted from the SSD. Thus, smaller values of k result in more frequent hash map checkpoints, but faster recovery.
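The recovery step can be sketched as follows. This is a toy model under stated assumptions: the SSD is a list of slots whose occupants can be read back from page headers, and all names are illustrative.

```python
def recover_map(checkpointed_map, ssd_slots, eviction_zone):
    """Rebuild the SSD hash map after a failure.

    checkpointed_map: page_id -> slot, as of the last checkpoint
    ssd_slots: slot -> page_id actually stored there now (from page headers)
    eviction_zone: the k slots whose contents may have changed since
                   the checkpoint (only these need to be re-read)
    """
    hash_map = dict(checkpointed_map)
    for slot in eviction_zone:
        # Drop stale entries that pointed at this slot at checkpoint time...
        for pid in [p for p, s in hash_map.items() if s == slot]:
            del hash_map[pid]
        # ...and record whatever page actually occupies the slot now.
        pid = ssd_slots[slot]
        if pid is not None:
            hash_map[pid] = slot
    return hash_map
```

Since only eviction-zone slots can change between checkpoints, recovery reads k slots instead of scanning the whole device, which is where the trade-off in k comes from.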
6.
EVALUATIONInthissection,wepresentanexperimentalevaluationofGD2LandCAC.
Ourrstobjectiveistoprovidesomein-sightintothebehaviorofourproposedalgorithm,whichisthecombinationofGD2L(forthebuerpool)andCAC(fortheSSD).
Specically,wewishtoaddresstwoques-tions.
First,howeectiveisGD2Lrelativetonon-cost-awarebuermanagementSecond,whenGD2Lisusedtomanagethebuerpool,howimportantisittouseanantic-ipatorySSDmanager,likeCAC,thatrecognizesthatpageaccesspatternschangewhenthepageismovedbetweentheSSDandtheHDDOursecondobjectiveistocomparetheperformanceofourproposedalgorithms(GD2LwithCAC)tothatofother,recentlyproposedtechniquesformanagingSSDsindatabasesystems.
To answer these questions, we have implemented a variety of algorithms in MySQL's InnoDB storage manager. For the DBMS buffer pool, we have two alternatives: the original buffer pool policies of InnoDB, which we refer to as LRU, and our implementation of GD2L. For SSD management we have implemented CAC as well as three alternatives, which we refer to as CC, MV-FIFO, and LRU2:

CC: CC is cost-based, like CAC, but it is not anticipatory. That is, unlike CAC it does not attempt to predict how a page's I/O pattern will change if that page is moved between the SSD and HDD. It uses Equation 1 to estimate the benefit of placing a page in the SSD, and evicts the page with the lowest benefit from the SSD when necessary. CC's approach for estimating the benefit of placing a page in the SSD is similar to the approach used by TAC [2], although TAC tracks statistics on a region basis, rather than a page basis. However, CC differs from TAC in that it considers pages for admission to the SSD when they are cleaned or evicted from the buffer pool, while TAC admits pages on read. Also, TAC manages the SSD as a write-through cache, while CC, like CAC, is write-back.
LRU2: LRU2 manages the SSD using the LRU2 replacement policy, as recently proposed by Do et al. [5] for their lazy cleaning (LC) technique. LRU2 is neither cost-based nor anticipatory. Our implementation of LRU2 is similar to LC. Both consider pages for admission when they are cleaned or evicted from the database buffer pool, and both treat the SSD as a write-back cache. Our LRU2 implementation cleans pages in the SSD only when they are evicted, which corresponds to the least aggressive (and best performing) version of LC implemented by Do et al. in SQL Server. The invalidation procedure used for our implementation of LRU2 differs slightly from LC's in that our implementation invalidates an SSD page only if that page is identical to the version of the page on the HDD.

MV-FIFO: MV-FIFO manages the SSD as a FIFO queue of pages. Pages are admitted to the SSD when they are cleaned or evicted from the database buffer pool. If the page being cleaned or evicted already exists in the SSD and the existing version is older, the existing version is invalidated. MV-FIFO is neither cost-based nor anticipatory. It was proposed for SSD management by Kang et al. as the basis of their FaCE algorithm [10]. The FIFO organization of the SSD ensures that all writes to the SSD are sequential and hence fast; this is the chief advantage of FaCE.
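The MV-FIFO organization described above can be sketched as follows. This is an illustrative model with hypothetical names, not FaCE's actual code:

```python
from collections import deque

class MVFifoSSD:
    """Sketch of MV-FIFO: the SSD as an append-only FIFO of page versions."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()    # (page_id, version); head = oldest slot
        self.valid = {}         # page_id -> newest version on the SSD
        self.written_back = []  # pages copied to the HDD on eviction

    def admit(self, page_id, version):
        # Always append at the tail, so every SSD write is sequential.
        # Any older copy of the page stays in place but becomes invalid.
        self.valid[page_id] = version
        self.queue.append((page_id, version))
        while len(self.queue) > self.capacity:
            pid, ver = self.queue.popleft()   # recycle the oldest slot
            if self.valid.get(pid) == ver:
                # Evicting the newest copy: write it back to the HDD first.
                self.written_back.append(pid)
                del self.valid[pid]
            # else: a stale version, dropped with no I/O
```

The write-back on eviction of a current version is the source of the extra HDD write traffic discussed in Section 6.4.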
Either buffer pool technique can be combined with any of the SSD managers, and we will use the notation X+Y to refer to the combination of buffer pool manager X with the SSD manager Y. For example, LRU+MV-FIFO refers to the original InnoDB buffer manager combined with SSD management using MV-FIFO.
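As a rough illustration of the cost-based eviction performed by CC (and, with anticipated rates, by CAC): the paper's Equation 1 is not reproduced in this section, so the benefit formula below is an assumed form built from the per-device read/write costs calibrated in Section 6.2.

```python
# Normalized device costs from Section 6.2.
R_S, W_S = 1, 3      # SSD read / write
R_D, W_D = 70, 50    # HDD read / write

def ssd_benefit(r, w):
    """Assumed benefit of keeping a page on the SSD: the I/O time saved
    per unit time, given the page's physical read rate r and write rate w."""
    return r * (R_D - R_S) + w * (W_D - W_S)

# A cost-based SSD manager evicts the page with the lowest estimated
# benefit (hypothetical per-page rates for illustration).
pages = {"p1": (10, 0), "p2": (1, 5), "p3": (0, 1)}
victim = min(pages, key=lambda p: ssd_benefit(*pages[p]))
```

Under these rates, the rarely written, never-read page p3 is the eviction victim, since serving it from the HDD costs the least.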
6.1 Methodology

We used the MySQL database management system, version 5.1.45, with the InnoDB storage engine modified to implement our buffer management and SSD management techniques. MySQL ran on a server with six 2.5GHz Intel Xeon cores and 4GB of main memory, running Ubuntu 10.10 Linux with kernel version 2.6.35-22-generic. The server has two 500GB RPM SCSI hard disks. One disk holds all system software, including MySQL, and the test database. The second disk holds the transaction logs. In addition, the server has a 32GB Intel X25-E SATA SSD. The database SSD cache is implemented as a single file on the SSD. All files in InnoDB use unbuffered I/O.
All of our experiments were performed using TPC-C workloads [17]. Each of our experiments involved measuring performance under a TPC-C workload for a given system configuration, TPC-C scale factor, and a combined buffer pool and SSD algorithm. Our primary performance metric is TPC-C throughput, measured as the number of TPC-C New-Order transactions that are processed per minute (tpmC). Throughput is measured after the system has warmed up and reached its steady state performance. We also collected a wide variety of secondary metrics, including device utilizations and I/O counts measured at both the database and operating system levels. Experiment durations varied from four to seven hours, largely because the amount of time required to achieve a steady state varied with the system configuration and the TPC-C scale factor. After each run, we restarted the DBMS to clean up the buffer pool and replaced the database with a clean copy.

Like Do et al. [5], we have focused our experiments on three representative scenarios: a database much larger than the SSD cache, a database somewhat larger than the SSD cache, and a database smaller than the SSD cache. To achieve this, we fixed the SSD size at 10GB and varied the TPC-C scale factor to control the database size. We used TPC-C scale factors of 80, 150, and 300 warehouses, corresponding to initial database sizes of approximately 8GB, 15GB, and 30GB, respectively. The size of a TPC-C database grows as the workload runs. The number of TPC-C client terminals was set to twice the number of warehouses. For each of these scenarios, we tested database buffer pool sizes of 10%, 20%, and 40% of the SSD size (1GB, 2GB, and 4GB, respectively). For experiments involving CAC or CC, the maximum number of entries in the outqueue was set to be the same as the number of database pages that fit into the SSD cache. We subtracted the space required for the outqueue from the available buffer space when using CAC and CC, so that all comparisons would be on an equal space basis. Unless otherwise stated, all experiments involving CAC used an eviction zone of 10% of the SSD cache size.
6.2 Cost Parameter Calibration

As introduced in Sections 3 and 5, both GD2L and CAC rely on device read and write cost parameters (listed in Table 1) when making replacement decisions. One characteristic of an SSD is its I/O asymmetry: its reads are faster than its writes because a write operation may involve an erasing delay. We measure RS and WS separately. To measure these access costs, we ran a TPC-C workload using MySQL and used diskstats, a Linux tool for recording disk statistics, to collect the total I/O service time. We also used InnoDB to track the total number of read and write requests it made. As diskstats does not separate the total service time of read requests from that of write requests, we measured the devices' read service time using a read-only workload. The read-only workload was created by converting all TPC-C updates to queries with the same search constraint and deleting all insertions and deletions. Thus, the modified workload has a disk block access pattern similar to that of the unmodified TPC-C workload. First, we stored the entire database on the SSD and ran the read-only workload, for which we found that 99.97% of the physical I/O requests were reads. Dividing the total I/O service time (from diskstats) by the total number of read requests (from InnoDB), we calculated RS = 0.11ms. Then, we ran an unmodified TPC-C workload, and measured the total I/O service time, the total number of reads, and the total number of writes. Using the total number of reads and the value of RS obtained from the read-only experiment, we estimated the total I/O service time of the read requests. Deducting that from the total I/O service time, we obtained the total I/O service time spent on write requests. Dividing the total I/O service time on writes by the total number of write requests, we calculated WS = 0.27ms. Similarly, we stored the database on the HDD and repeated this process to determine RD = 7.2ms and WD = 4.96ms. For the purpose of our experiments, we normalized these values: RS = 1, RD = 70, WS = 3, and WD = 50.
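The calibration arithmetic above can be reproduced in a few lines; the request counts and service-time totals below are hypothetical stand-ins chosen to yield the reported per-request costs, since the real values came from diskstats and InnoDB counters:

```python
# Hypothetical request counts and diskstats totals (milliseconds).
reads, writes = 1_000_000, 500_000

# Read-only run: essentially all requests are reads, so the per-read
# cost is total service time divided by the number of reads.
R_S = 110_000 / reads                 # SSD read cost, ms

# Mixed run: subtract the estimated read time from the total service
# time, and divide the remainder by the number of writes.
total_ms = 245_000
write_ms = total_ms - reads * R_S
W_S = write_ms / writes               # SSD write cost, ms
```

With these stand-in totals, R_S comes out to 0.11ms and W_S to 0.27ms, matching the values reported above.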
6.3 Analysis of GD2L and CAC

To understand the performance of GD2L and CAC, we ran experiments using three algorithm combinations: LRU+CC, GD2L+CC, and GD2L+CAC. By comparing LRU+CC and GD2L+CC, we can focus on the impact of switching from a cost-oblivious buffer manager (LRU) to a cost-aware buffer manager (GD2L). By comparing the results of GD2L+CC and GD2L+CAC, we can focus on the effect of switching from a non-anticipatory SSD manager to an anticipatory one. Figure 7 shows the TPC-C throughput of each of these algorithm combinations for each test database size and InnoDB buffer pool size.

6.3.1 GD2L vs. LRU
LRUBycomparingLRU+CCwithGD2L+CCinFigure7,weseethatGD2LoutperformsLRUwhenthedatabaseismuch548Figure7:TPC-CThroughputUnderLRU+CC,GD2L+CC,andGD2L+CACForVariousDatabaseandDBMSBuerPoolSizes.
largerthantheSSD.
Thetwoalgorithmshavesimilarper-formanceforthetwosmallerdatabasesizes.
Forthelargedatabase,GD2LprovidesTPC-Cthroughputimprovementsofabout40%-75%relativetoLRU.
Figures 8 and 9 show the HDD and SSD device utilizations, buffer pool miss rates, and normalized total I/O cost on each device for the experiments with the 30GB and 15GB databases. The normalized I/O cost for a device is the device utilization divided by the NewOrder transaction throughput. It can be interpreted as the number of milliseconds of device time consumed, on average, per completed NewOrder transaction. The normalized total I/O cost is the sum of the normalized costs on the HDD and SSD.
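The normalized I/O cost defined above is a simple ratio; the following computes it for one device, with a hypothetical throughput value chosen for illustration:

```python
def normalized_io_cost(utilization, tpm):
    """Device milliseconds consumed per NewOrder transaction.

    utilization: fraction of wall-clock time the device is busy.
    tpm: NewOrder throughput in transactions per minute.
    """
    busy_ms_per_minute = utilization * 60_000
    return busy_ms_per_minute / tpm

# E.g., a device that is 93% busy while the system completes a
# (hypothetical) 630 NewOrder transactions per minute consumes
# roughly 88.6 ms of device time per transaction.
hdd_cost = normalized_io_cost(0.93, tpm=630)
```

This makes clear why a policy can win on total I/O cost even with a higher buffer pool miss rate: what matters is device time per transaction, not request counts.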
For the 30GB experiments, in which GD2L+CC outperforms LRU+CC, Figure 8 shows that GD2L results in a much lower total I/O cost (per transaction) than LRU, despite the fact that GD2L had a higher miss rate in the InnoDB buffer pool. GD2L's higher miss rate is not surprising, since it considers replacement cost in addition to recency of use when making eviction decisions.
Alg & BP size | HDD util (%) | HDD I/O (ms) | SSD util (%) | SSD I/O (ms) | total I/O (ms) | BP miss rate (%)
LRU+CC 1G     | 93 | 88.6 | 12 | 11.1 |  99.7 | 6.6
LRU+CC 2G     | 93 | 72.8 |  8 |  6.0 |  78.8 | 4.4
LRU+CC 4G     | 94 | 55.1 |  6 |  3.3 |  58.4 | 2.4
GD2L+CC 1G    | 92 | 53.1 | 21 | 12.1 |  65.2 | 8.8
GD2L+CC 2G    | 90 | 44.3 | 20 |  9.7 |  54.0 | 7.4
GD2L+CC 4G    | 90 | 36.3 | 14 |  5.8 |  42.1 | 4.7
GD2L+CAC 1G   | 85 | 39.6 | 19 |  8.8 |  48.4 | 7.4
GD2L+CAC 2G   | 83 | 31.8 | 20 |  7.8 |  39.6 | 6.3
GD2L+CAC 4G   | 82 | 23.5 | 20 |  5.8 |  29.3 | 4.8

Figure 8: Device Utilizations, Buffer Pool Miss Rate, and Normalized I/O Time (DB size = 30GB). I/O is reported as ms per NewOrder transaction.

Although the total number of I/O operations performed by GD2L+CC is higher than that of LRU+CC, GD2L+CC results in less I/O time
Alg & BP size | HDD util (%) | HDD I/O (ms) | SSD util (%) | SSD I/O (ms) | total I/O (ms) | BP miss rate (%)
LRU+CC 1G     | 79 | 11.1 | 38 |  5.4 | 16.5 | 4.2
LRU+CC 2G     | 68 |  5.7 | 47 |  4.0 |  9.7 | 2.7
LRU+CC 4G     | 73 |  4.4 | 43 |  2.6 |  7.0 | 1.3
GD2L+CC 1G    | 21 |  2.5 | 68 |  8.0 | 10.4 | 6.1
GD2L+CC 2G    | 18 |  1.5 | 62 |  5.3 |  6.8 | 3.7
GD2L+CC 4G    | 14 |  0.9 | 61 |  3.8 |  4.7 | 2.3
GD2L+CAC 1G   | 30 |  3.2 | 73 |  7.8 | 11.0 | 5.7
GD2L+CAC 2G   | 21 |  1.6 | 78 |  6.2 |  7.8 | 4.0
GD2L+CAC 4G   | 48 |  2.3 | 60 |  2.9 |  5.3 | 2.0

Figure 9: Device Utilizations, Buffer Pool Miss Rate, and Normalized I/O Time (DB size = 15GB). I/O is reported as ms per NewOrder transaction.
per transaction because it does more of its I/O on the SSD and less on the HDD, compared to LRU+CC. This reflects GD2L's preference for evicting SSD pages, since they are cheaper to reload than HDD pages. In the case of the 30GB database, GD2L's shifting of I/O activity from the HDD to the SSD results in significantly higher throughput (relative to LRU+CC) since the HDD is the performance bottleneck in our test environment. This can be seen from the very high HDD utilizations shown in Figure 8.

For the 15GB experiments, Figure 9 shows that GD2L+CC again has lower total I/O cost per transaction than LRU+CC, and shifts I/O activity from the HDD to the SSD. However, the effect is not as pronounced as it was for the larger database. Furthermore, as can be seen from Figure 7, this behavior does not lead to a significant TPC-C throughput advantage relative to LRU+CC, as it does for the 30GB database. This is because the SSD on our test server begins to saturate under the increased load induced by GD2L. (The SSD saturates here, and not in the 30GB case, because most of the database hot spot can fit in the SSD.) In a system with greater SSD bandwidth, we would expect to see a TPC-C throughput improvement similar to what we observed with the 30GB database.

For the experiments with the 8GB database, both LRU+CC and GD2L+CC have very similar performance. In those experiments, the entire database can fit into the SSD. As more of the database becomes SSD-resident, the behavior of GD2L degenerates to that of LRU, since one of its two queues (QD) will be nearly empty.
6.3.2 CAC vs. CC

Next, we consider the impact of switching from a non-anticipatory cost-based SSD manager (CC) to an anticipatory one (CAC).
Figure 7 shows that GD2L+CAC provides additional performance gains above and beyond those achieved by GD2L+CC in the case of the large (30GB) database. Together, GD2L and CAC provide a TPC-C performance improvement of about a factor of two relative to the LRU+CC baseline in our 30GB tests. The performance gain was less significant in the 15GB database tests and non-existent in the 8GB database tests.

Figure 8 shows that GD2L+CAC results in lower total I/O costs on both the SSD and HDD devices, relative to GD2L+CC, in the 30GB experiments. Both policies result in similar buffer pool hit ratios, so the lower I/O cost achieved by GD2L+CAC is attributable to better decisions about which pages to retain on the SSD. To better understand the reasons for the lower total I/O cost achieved by CAC, we analyzed logs of system activity to try to identify specific situations in which GD2L+CC and GD2L+CAC make different placement decisions. One interesting situation we encountered is one in which a very hot page that is in the buffer pool is placed in the SSD. This may occur, for example, when the page is cleaned by the buffer manager and there is free space in the SSD, either during cold start or because of invalidations. When this occurs, I/O activity for the hot page will spike because GD2L will consider the page to be a good eviction candidate. Under the CC policy, such a page will tend to remain in the SSD because CC prefers to keep pages with high I/O activity in the SSD. In contrast, CAC is much more likely to evict such a page from the SSD, since it can (correctly) estimate that moving the page will result in a substantial drop in I/O activity. Thus, we find that GD2L+CAC tends to keep very hot pages in the buffer pool and out of the SSD, while with GD2L+CC such pages tend to remain in the SSD and bounce into and out of the buffer pool. Such dynamics illustrate why it is important to use an anticipatory SSD manager (like CAC) if the buffer pool manager is cost-aware.
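The difference between the two estimates in this situation can be sketched as follows; the rates and the anticipation step are illustrative, not the paper's exact formulas:

```python
R_S, R_D = 1, 70   # normalized read costs (Section 6.2)

def cc_benefit(observed_reads):
    # CC extrapolates the page's currently observed physical read rate.
    return observed_reads * (R_D - R_S)

def cac_benefit(anticipated_reads):
    # CAC instead uses the rate it expects *after* a move: a hot page
    # evicted from the SSD to the HDD would simply stay in the buffer
    # pool, so its physical read rate would collapse.
    return anticipated_reads * (R_D - R_S)

# A hot buffered page shows heavy SSD I/O only because GD2L prefers to
# evict cheap-to-reload SSD pages; CC sees a high benefit, CAC a low one.
cc_score = cc_benefit(observed_reads=100)
cac_score = cac_benefit(anticipated_reads=2)
```

Under these hypothetical rates, CC scores the page as highly worth keeping on the SSD, while CAC correctly scores it as a good SSD eviction candidate.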
For the experiments with smaller databases (15GB and 8GB), there is little difference in performance between GD2L+CC and GD2L+CAC. Both policies result in similar per-transaction I/O costs and similar TPC-C throughput. This is not surprising, since in these settings most or all of the hot part of the database can fit into the SSD, i.e., there is no need to be smart about SSD placement decisions. The SSD manager matters most when the database is large relative to the SSD.
6.4 Comparison with LRU2 and MV-FIFO

In this section we compare GD2L+CAC to other recently proposed techniques for managing the SSD, namely lazy cleaning (LC) and FaCE. More precisely, we compare GD2L+CAC against LRU2 and MV-FIFO, which are similar to LC and FaCE but implemented in InnoDB to allow for side-by-side comparison. We combine LRU2 and MV-FIFO with InnoDB's default buffer manager, which results in the combined algorithms LRU+LRU2 and LRU+MV-FIFO. We also tested GD2L+LRU2. Figure 10 shows the TPC-C throughput achieved by all four algorithms on our test system for all three of the database sizes that we tested. In summary, we found that GD2L+CAC significantly outperformed the other algorithms in the case of the 30GB database, achieving the greatest advantage over its closest competitor (GD2L+LRU2) for larger buffer pool sizes. For the 15GB database, GD2L+CAC was only marginally faster than LRU+LRU2, and for the smallest database (8GB) they were essentially indistinguishable. LRU+MV-FIFO performed much worse than the other two algorithms in all of the scenarios we tested.

Figure 10: TPC-C Throughput Under LRU+MV-FIFO, LRU+LRU2, GD2L+LRU2, and GD2L+CAC for Various Database and DBMS Buffer Pool Sizes.
LRU+MV-FIFO performs poorly in our environment because the performance bottleneck in our test system is the HDD. The goal of MV-FIFO is to increase the efficiency of the SSD by writing sequentially. Although it succeeds in doing this, the SSD is relatively lightly utilized in our test environment, so MV-FIFO's optimizations do not increase overall TPC-C performance. Interestingly, LRU+MV-FIFO performs poorly even in our tests with the 8GB database, and remains limited by the performance of the HDD. There are two reasons for this. The first is that MV-FIFO makes poorer use of the available space on the SSD than LRU2 and CAC because of versioning. The second is disk writes due to evictions as SSD space is recycled by MV-FIFO.
LRU+LRU2performedworsethanGD2L+CAC550Alg&HDDHDDSSDSSDtotalBPmissBPsizeutilI/OutilI/OI/Orate(GB)(%)(ms)(%)(ms)(ms)(%)GD2L+CAC1G8539.
6198.
848.
47.
42G8331.
8207.
839.
66.
34G8223.
5205.
829.
34.
8LRU+LRU21G8549.
4158.
658.
06.
72G8750.
3126.
757.
04.
44G9048.
594.
753.
22.
4GD2L+LRU21G7341.
14324.
665.
810.
72G7946.
53218.
965.
38.
84G7934.
43213.
848.
27.
6LRU+FIFO1G91101.
389.
2110.
56.
62G9292.
365.
898.
14.
44G9262.
953.
766.
62.
5Figure11:DeviceUtilizations,BuerPoolMissRate,andNormalizedI/OTime(DBsize=30GB)I/Oisreportedasms.
perNewOrdertransaction.
in the 30GB database test because it had higher total I/O cost (per transaction) than GD2L+CAC. Furthermore, the additional cost fell primarily on the HDD, which is the performance bottleneck in our setting. Although it is not shown in Figure 11, GD2L+CAC did fewer reads per transaction on the HDD and more reads per transaction on the SSD than did LRU+LRU2. This may be due partly to CAC's SSD placement decisions and partly to GD2L's preference for evicting SSD pages. In the 30GB tests, the performance of LRU+LRU2 remained relatively flat as the size of the database buffer pool was increased. We observed that the SSD read hit ratio of LRU2 dropped as the buffer pool size increased. One potential reason is that the effectiveness of LRU2 (which is recency-based) is reduced as the buffer pool gets larger, because the temporal locality in the request stream experienced by the SSD is reduced. Another reason is that LRU+LRU2 generated more write traffic to the HDD because of SSD evictions than did GD2L+CAC.

GD2L+LRU2 had worse performance than LRU+LRU2 and GD2L+CAC in most cases. In Figure 11, GD2L+LRU2 shows higher total I/O cost, which is caused by the higher I/O cost on the SSD. Unlike CC and CAC, LRU2 flushes all dirty pages to the SSD. When hot dirty pages are flushed to the SSD, GD2L evicts them from the buffer pool, and they are then quickly reloaded into the buffer pool. As a result, GD2L+LRU2 tends to retain hot pages on the SSD and keeps bouncing them into and out of the buffer pool. With a cost-aware buffer policy, hot pages cause more I/O cost when on the SSD than on the HDD.

When the database size is 15GB, GD2L+CAC's advantage disappears. In this setting, both algorithms had similar per-transaction I/O costs. GD2L+CAC directed slightly more of the I/O traffic to the SSD than did LRU+LRU2, but the difference was small. For the 8GB database there was no significant difference in performance between the two algorithms.

6.5 Impact of the Eviction Zone

To evaluate the impact of the eviction zone, we ran experiments with GD2L+CAC using different eviction zone sizes.
In these experiments, the database size was 1GB, the buffer pool size was set to 200MB and the SSD cache size was set to 400MB. We tested k set to 1%, 2%, 5% and 10% of the SSD size. Our results showed that k values in this range had no impact on TPC-C throughput. In InnoDB, the page identifier is eight bytes and the size of each page is 16KB. Thus, the hash map for a 400MB SSD fits into ten pages. We measured the rate at which the SSD hash map was flushed, and found that even with k = 1%, the highest rate of checkpointing the hash map experienced by any of the three SSD management algorithms (CAC, CC, and LRU2) was less than three per second. Thus, the overhead imposed by checkpointing the hash map is negligible.
7. RELATED WORK

Placing hot data in fast storage (e.g. hard disks) and cold data in slow storage (e.g. tapes) is not a new idea. Hierarchical storage management (HSM) is a data storage technique which automatically moves data between high-cost and low-cost storage media. It uses fast storage as a cache for slow storage. The performance and price of SSDs suggest that "Tape is dead, disk is tape, flash is disk" [9] seems to have come true.

Some research has focused on how to partially replace hard disks with SSDs in database systems. Unlike DRAM, flash memory is non-volatile, i.e., data stored on flash memory will not be lost in case of a loss of power. Koltsidas et al. [11] assume that random reads to SSD are ten times faster than random writes to HDD, while random writes to SSD are ten times slower than random writes to HDD. They design an algorithm to place read-intensive pages on the SSD and write-intensive pages on the HDD. Canim et al. [1] introduce an object placement advisor for DB2. Using run-time statistics about I/O behavior gathered by the buffer manager, the advisor helps the database administrator make decisions about SSD sizing, and about which database objects, such as tables or indices, should be placed on limited SSD storage. Ozmen et al. [16] present a database layout optimizer which places database objects to balance workload and to avoid interference. It can generate layouts for heterogeneous storage configurations that include SSDs.
Flash memory has also been used as a lower-tier cache in various settings. In Sun ZFS [12], flash memory has been used as an extension of the main memory. The system populates the flash memory cache as entries are evicted from the main memory cache. FlashCache [6], a product of Facebook, sits between the main memory and the disks and is managed using a LRU/FIFO policy. FlashStore [4] uses SSDs as a write-back cache between RAM and the HDD. It organizes data as key-value pairs, and writes the pairs in a log structure on flash to improve the write performance. Canim et al. [2] investigate the use of the SSD as a second-tier write-through cache. The most-frequently read pages, identified by run-time I/O statistics gathering, are moved into the SSD. Reads are served from the SSD if the page is in the SSD, but writes need to go to the hard disks immediately. Do et al. [5] propose lazy cleaning, an eviction-based mechanism for managing an SSD as a second-tier write-back cache for database systems. Kang et al. [10] propose FaCE, an alternative write-back design. FaCE is based on the FIFO replacement algorithm. FaCE invalidates stale pages on the SSD and writes new versions to the end of the FIFO queue. Therefore, FaCE always writes pages sequentially to the SSD, and improves performance by avoiding random writes. hStorage-DB [13] extracts semantic information from the query optimizer and query planner and passes it with I/O requests to the SSD manager. The semantic information includes hints about the request type, e.g., random or sequential. hStorage-DB associates a priority with each request type, and the SSD manager uses these priorities when making placement and replacement decisions.
Most replacement policies for the buffer cache, such as LRU and ARC, are cost-oblivious. Existing cost-aware algorithms for heterogeneous storage systems, e.g. the balance algorithm [15] and GreedyDual, were proposed for file caching. Cao et al. [3] extend GreedyDual to handle cached objects of varying size, with application to web caching. Forney et al. [7] revisit caching policies for heterogeneous storage systems. They suggest partitioning the cache for different classes of storage according to the workload and performance of each class. Lv et al. [14] design another "cost-aware" replacement algorithm for the buffer pool. However, it is designed for storage systems that only have SSDs, not for heterogeneous storage systems. The algorithm is aware that SSD read costs and write costs are different and tries to reduce the number of writes to the SSD. Thus, it is "cost-aware" in a different sense than GD2L.

Our work builds upon previous studies [15, 3, 2, 5]. The GD2L algorithm for managing the buffer pool is a restricted version of GreedyDual [15], which we have adapted for use in database systems. Our CAC algorithm for managing the SSD is related to the previous cost-based algorithm of Canim et al. [2]. CAC is aware that the buffer pool is managed by a cost-aware algorithm and adjusts its cost analysis accordingly when making replacement decisions.
8. CONCLUSION

In this paper we present two new algorithms, GD2L and CAC, for managing the buffer pool and the SSD in a database management system. Both algorithms are cost-based and their goal is to minimize the overall access time cost of the workload. We implemented the two algorithms in the InnoDB storage engine and evaluated them using a TPC-C workload. We compared the performance of GD2L and CAC with other existing algorithms. For databases that are large relative to the size of the SSD, our algorithm provided substantial performance improvements over alternative approaches in our tests. Our results also suggest that the performance of GD2L and CAC and other algorithms for managing SSD caches in database systems will depend strongly on the system configuration, and in particular on the balance between available HDD and SSD bandwidth. In our test environment, performance was usually limited by HDD bandwidth. Other algorithms, like FaCE, are better suited to settings in which the SSD is the limiting factor.
9. REFERENCES

[1] M. Canim, G. A. Mihaila, B. Bhattacharjee, K. A. Ross, and C. A. Lang. An object placement advisor for DB2 using solid state storage. Proc. VLDB Endow., 2:1318–1329, August 2009.
[2] M. Canim, G. A. Mihaila, B. Bhattacharjee, K. A. Ross, and C. A. Lang. SSD bufferpool extensions for database systems. Proc. VLDB Endow., 3:1435–1446, September 2010.
[3] P. Cao and S. Irani. Cost-aware WWW proxy caching algorithms. In Proc. USENIX Symp. on Internet Technologies and Systems, pages 193–206, 1997.
[4] B. Debnath, S. Sengupta, and J. Li. FlashStore: high throughput persistent key-value store. Proc. VLDB Endow., 3:1414–1425, September 2010.
[5] J. Do, D. Zhang, J. M. Patel, D. J. DeWitt, J. F. Naughton, and A. Halverson. Turbocharging DBMS buffer pool using SSDs. In Proc. SIGMOD Int'l Conf. on Management of Data, pages 1113–1124, 2011.
[6] Facebook. Facebook: FlashCache, 2012. http://assets.en.oreilly.com/1/event/45/Flashcache%20Presentation.pdf.
[7] B. C. Forney, A. C. Arpaci-Dusseau, and R. H. Arpaci-Dusseau. Storage-aware caching: Revisiting caching for heterogeneous storage systems. In Proc. the 1st USENIX FAST, pages 61–74, 2002.
[8] G. Graefe. The five-minute rule 20 years later: and how flash memory changes the rules. Queue, 6:40–52, July 2008.
[9] J. Gray and B. Fitzgerald. Flash disk opportunity for server applications. Queue, 6(4):18–23, July 2008.
[10] W.-H. Kang, S.-W. Lee, and B. Moon. Flash-based extended cache for higher throughput and faster recovery. Proc. VLDB Endow., 5(11):1615–1626, July 2012.
[11] I. Koltsidas and S. D. Viglas. Flashing up the storage layer. Proc. VLDB Endow., 1:514–525, August 2008.
[12] A. Leventhal. Flash storage memory. Commun. ACM, 51:47–51, July 2008.
[13] T. Luo, R. Lee, M. Mesnier, F. Chen, and X. Zhang. hStorage-DB: heterogeneity-aware data management to exploit the full capability of hybrid storage systems. Proc. VLDB Endow., 5(10):1076–1087, June 2012.
[14] Y. Lv, B. Cui, B. He, and X. Chen. Operation-aware buffer management in flash-based systems. In Proc. ACM SIGMOD Int'l Conf. on Management of Data, pages 13–24, 2011.
[15] M. S. Manasse, L. A. McGeoch, and D. D. Sleator. Competitive algorithms for server problems. J. Algorithms, 11:208–230, May 1990.
[16] O. Ozmen, K. Salem, J. Schindler, and S. Daniel. Workload-aware storage layout for database systems. In Proc. ACM SIGMOD Int'l Conf. on Management of Data, pages 939–950, 2010.
[17] The TPC-C Benchmark. http://www.tpc.org/tpcc/.
[18] T. M. Wong and J. Wilkes. My cache or yours? Making storage more exclusive. In Proc. USENIX ATC, pages 161–175, 2002.
[19] N. Young. The k-server dual and loose competitiveness for paging. Algorithmica, 11:525–541, 1994.