SmoothCache: HTTP-Live Streaming Goes Peer-to-Peer

Roberto Roverso¹·², Sameh El-Ansary¹, and Seif Haridi²

¹ Peerialism Inc., Stockholm, Sweden
² The Royal Institute of Technology (KTH), Stockholm, Sweden
{roberto,sameh}@peerialism.com, haridi@kth.se

Abstract.
In this paper, we present SmoothCache, a peer-to-peer live video streaming (P2PLS) system. The novelty of SmoothCache is three-fold: i) it is the first P2PLS system that is built to support the relatively-new approach of using HTTP as the transport protocol for live content, ii) the system supports both single and multi-bitrate streaming modes of operation, and iii) in SmoothCache, we make use of recent advances in application-layer dynamic congestion control to manage priorities of transfers according to their urgency. We start by explaining why the HTTP live streaming semantics render many of the existing assumptions used in P2PLS protocols obsolete. Afterwards, we present our design, starting with a baseline P2P caching model. We then show a number of optimizations related to aspects such as neighborhood management, uploader selection and proactive caching. Finally, we present our evaluation conducted on a real yet instrumented test network. Our results show that we can achieve substantial traffic savings on the source of the stream without major degradation in user experience.

Keywords: HTTP live streaming, peer-to-peer, caching, CDN.
1 Introduction

Peer-to-peer live streaming (P2PLS) is a problem in the Peer-To-Peer (P2P) networking field that has been tackled for quite some time on both the academic and industrial fronts. The typical goal is to utilize the upload bandwidth of hosts consuming a certain live content to offload the bandwidth of the broadcasting origin. On the industrial front, we find successful large deployments where knowledge about their technical approaches is rather limited. Exceptions include systems described by their authors like CoolStreaming [16] or inferred by reverse engineering like PPLive [4] and TVAnts [12]. On the academic front, there have been several attempts to estimate theoretical limits in terms of optimality of bandwidth utilization [3][7] or delay [15].

Traditionally, HTTP has been utilized for progressive download streaming, championed by popular Video-On-Demand (VoD) solutions such as Netflix [1] and Apple's iTunes movie store. However, lately, adaptive HTTP-based streaming protocols have become the main technology for live streaming as well.
All companies who have a major say in the market, including Microsoft, Adobe and Apple, have adopted HTTP streaming as the main approach for live broadcasting.

R. Bestak et al. (Eds.): NETWORKING 2012, Part II, LNCS 7290, pp. 29–43, 2012. © IFIP International Federation for Information Processing 2012
This shift to HTTP has been driven by a number of advantages such as the following: i) routers and firewalls are more permissive to HTTP traffic compared to RTSP/RTP, ii) HTTP caching for real-time generated media is straightforward like any traditional web content, and iii) the Content Distribution Network (CDN) business is much cheaper when dealing with HTTP downloads [5].
The first goal of this paper is to describe the shift from the RTSP/RTP model to the HTTP-live model (Section 2), in order to detail its impact on the design of P2P live streaming protocols (Section 3), a point which we find rather neglected in the research community (Section 4). We argue that this shift has rendered many of the classical assumptions made in the current state-of-the-art literature obsolete. For all practical purposes, any new P2PLS algorithm, irrespective of its theoretical soundness, won't be deployable if it does not take into account the realities of the mainstream broadcasting ecosystem around it. The issue becomes even more topical as we observe a trend in standardizing HTTP live streaming [8] and embedding it in all browsers together with HTML5, which is already the case in browsers like Apple's Safari.

The second goal of this paper is to present a P2PLS protocol that is compatible with HTTP live streaming not only for one bitrate but that is fully compatible with the concept of adaptive bitrate, where a stream is broadcast with multiple bitrates simultaneously to make it available for a range of viewers with variable download capacities (Section 5).

The last goal of this paper is to describe a number of optimizations of our P2PLS protocol concerning neighborhood management, uploader selection and peer transfer, which can deliver a significant amount of traffic savings on the source of the stream (Sections 6 and 7). Experimental results of our approach show that this result comes at almost no cost in terms of quality of user experience (Section 8).
2 The Shift from RTP/RTSP to HTTP

In the traditional RTSP/RTP model, the player uses RTSP as the signalling protocol to request the playing of the stream from a streaming server. The player enters a receive loop while the server enters a send loop where stream fragments are delivered to the receiver using the RTP protocol over UDP. The interaction between the server and player is stateful. The server makes decisions about which fragment is sent next based on acknowledgements or error information previously sent by the client. This model makes the player rather passive, having the mere role of rendering the stream fragments which the server provides.

In the HTTP live streaming model, instead, it is the player which controls the content delivery by periodically pulling from the server parts of the content at the time and pace it deems suitable. The server, in turn, is entrusted with the task of encoding the stream in real time with different encoding rates, or qualities, and storing it in data fragments which appear on the server as simple resources.
When a player first contacts the streaming server, it is presented with a metadata file (Manifest) containing the latest stream fragments available at the server at the time of the request. Each fragment is uniquely identified by a timestamp and a bitrate. If a stream is available in n different bitrates, then this means that for each timestamp, there exist n versions of it, one for each bitrate. After reading the manifest, the player starts to request fragments from the server. The burden of keeping the timeliness of the live stream is totally upon the player. The server, in contrast, is stateless and merely serves fragments like any other HTTP server after encoding them in the format advertised in the manifest.

Fig. 1. (a) Sample Smooth Streaming Manifest, (b) Client-Server interactions in Microsoft Smooth Streaming
Manifest Contents. To give an example, we use Microsoft's Smooth Streaming manifest. In Figure 1a, we show the relevant details of a manifest for a live stream with 3 video bitrates (331, 688, 1470 Kbps) and 1 audio bitrate (64 Kbps). By inspecting one of the streams, we find the first (the most recent) fragment containing a d value, which is the time duration of the fragment in a unit of 100 nanoseconds, and a t value, which is the timestamp of the fragment. The fragment underneath (the older fragment) has only a d value because the timestamp is inferred by adding the duration to the timestamp of the one above. The streams each have a template for forming a request URL for fragments of that stream. The template has placeholders for substitution with an actual bitrate and timestamp. For a definition of the manifest's format, see [5].
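The timestamp-inference rule can be sketched in Python as follows (the d/t keys mirror the manifest fields described above; the URL template string is an illustrative stand-in rather than the exact Smooth Streaming schema):

```python
# Sketch of timestamp inference from a Smooth-Streaming-style manifest:
# only the first listed fragment carries a "t" (timestamp); each following
# entry derives its timestamp by adding its duration "d" (in 100 ns units)
# to the timestamp of the entry above, per the rule quoted in the text.

def infer_timestamps(entries):
    out, prev_t = [], None
    for e in entries:
        t = e["t"] if "t" in e else prev_t + e["d"]
        out.append({"d": e["d"], "t": t})
        prev_t = t
    return out

def fragment_url(template, bitrate, timestamp):
    # Fill the per-stream template's placeholders (illustrative format).
    return template.format(bitrate=bitrate, start_time=timestamp)

manifest = [{"d": 20_000_000, "t": 1_234_000_000},  # 2 s fragments, 100 ns units
            {"d": 20_000_000},
            {"d": 20_000_000}]
fragments = infer_timestamps(manifest)
url = fragment_url("QualityLevels({bitrate})/Fragments(video={start_time})",
                   331_000, fragments[0]["t"])
```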
Adaptive Streaming Protocol. In Figure 1b, we show an example interaction sequence between a Smooth Streaming client and server [5]. The client first issues an HTTP GET request to retrieve the manifest from the streaming server. After interpreting the manifest, the player requests a video fragment from the lowest available bitrate (331 Kbps). The timestamp of the first request is not predictable, but in most cases we have observed that it is an amount equal to 10 seconds backward from the most recent fragment in the manifest. This is probably the only predictable part of the player's behavior. In fact, without detailed knowledge of the player's internal algorithm, and given that different players may implement different algorithms, it is difficult to make assumptions about the period between consecutive fragment requests, the time at which the player will switch rates, or how the audio and video are interleaved. For example, when a fragment is delayed, it could get re-requested at the same bitrate or at a lower rate. The timeout before taking such action is one thing that we found slightly more predictable, and it was most of the time around 4 seconds. That is a subset of many details about the pull behavior of the player.
Implications of Unpredictability. The point of mentioning these details is to explain that the behavior of the player, how it buffers and climbs up and down the bitrates, is rather unpredictable. In fact, we have seen it change in different versions of the same player. Moreover, different adopters of the approach have minor variations on the interaction sequence. For instance, Apple HTTP-live [8] dictates that the player requests a manifest every time before requesting a new fragment and packs audio and video fragments together. As a consequence of what we described above, we believe that a P2PLS protocol for HTTP live streaming should operate as if receiving random requests in terms of timing and size, and has to make this the main principle. This filters out the details of the different players and technologies.
3 Impact of the Shift on P2PLS Algorithms

Traditionally, the typical setup for a P2PLS agent is to sit between the streaming server and the player as a local proxy offering the player the same protocol as the streaming server. In such a setup, the P2PLS agent would do its best, exploiting the peer-to-peer overlay, to deliver pieces in time and in the right order for the player. Thus, the P2PLS agent is the one driving the streaming process and keeping an active state about which video or audio fragment should be delivered next, whereas the player blindly renders what it is supplied with. Given the assumption of a passive player, it is easy to envisage the P2PLS algorithm skipping, for instance, fragments according to the playback deadline, i.e. discarding data that comes too late for rendering. In this kind of situation, the player is expected to skip the missing data by fast-forwarding or blocking for a few instants and then starting the playback again. This type of behavior towards the player is an intrinsic property of many of the most mature P2PLS system designs and analyses such as [13,15,16].

In contrast to that, a P2PLS agent for HTTP live streaming cannot rely on the same operational principles. There is no freedom in skipping pieces and deciding what is to be delivered to the player. The P2PLS agent has to obey the player's request for fragments from the P2P network, and the speed at which this is accomplished affects the player's next action. From our experience, delving into the path of trying to reverse engineer the player behavior and integrating that into the P2P protocol is some kind of black art based on trial-and-error and will result in very complicated and extremely version-specific customizations. Essentially, any P2PLS scheduling algorithm that assumes that it has control over which data should be delivered to the player is rather inapplicable to HTTP live streaming.
4 Related Work

We are not aware of any work that has explicitly articulated the impact of the shift to HTTP on P2P live streaming algorithms. However, a more relevant topic to look at is the behavior of the HTTP-based live players. Akhshabi et al. [2] provide a recent dissection of the behavior of three such players under different bandwidth variation scenarios. It is however clear from their analysis that the bitrate switching mechanics of the considered players are still in early stages of development. In particular, it is shown that throughput fluctuations still cause either significant buffering or unnecessary bitrate reductions. On top of that, it is shown how all the logic implemented in the HTTP-live players is tailored to TCP's behavior, like the one suggested in [6], in order to compensate for throughput variations caused by TCP's congestion control and potentially large retransmission delays. In the case of a P2PLS agent acting as a proxy, it is then of paramount importance not to interfere with such adaptation patterns. We believe, given the presented approaches, the most related work is the P2P caching network LiveSky [14]. We share in common the fact of trying to establish a P2P CDN. However, LiveSky does not present any solution for supporting HTTP live streaming.
5 P2PLS as a Caching Problem

We will describe here our baseline design to tackle the new realities of the HTTP-based players. We treat the problem of reducing the load on the source of the stream the same way it would be treated by a Content Distribution Network (CDN): as a caching problem. The design of the streaming protocol was made such that every fragment is fetched as an independent HTTP request that could be easily scheduled on CDN nodes. The difference is that in our case, the caching nodes are consumer machines and not dedicated nodes. The player is directed to order from our local P2PLS agent, which acts as an HTTP proxy. All traffic to/from the source of the stream as well as other peers passes by the agent.

Baseline Caching. The policy is as follows: any request for manifest files (metadata) is fetched from the source as is and not cached. That is due to the fact that the manifest changes over time to contain the newly generated fragments. Content fragments requested by the player are looked up in a local index of the peer which keeps track of which fragment is available on which peer. If information about the fragment is not in the index, then we are in the case of a P2P cache miss and we have to retrieve it from the source. In case of a cache hit, the fragment is requested from the P2P network, and any error or slowness in the process results, again, in a fallback to the source of the content. Once a fragment is downloaded, a number of other peers are immediately informed in order for them to update their indices accordingly.
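As a rough illustration, the baseline policy can be sketched as follows (the fetch callbacks, the index structure and the partner-notification interface are hypothetical stand-ins, not the actual implementation):

```python
# Sketch of the baseline caching policy: manifests bypass the cache, fragment
# requests are looked up in the peer index, P2P failures fall back to the
# source, and successful downloads are advertised to out-partners.

class BaselineCache:
    def __init__(self, fetch_from_source, fetch_from_peer):
        self.index = {}            # fragment url -> set of peers holding it
        self.from_source = fetch_from_source
        self.from_peer = fetch_from_peer
        self.partners = []         # notify callbacks of out-partners

    def handle_request(self, url, is_manifest):
        if is_manifest:            # manifests are never cached
            return self.from_source(url)
        holders = self.index.get(url)
        data = self.from_peer(url, holders) if holders else None  # P2P hit
        if data is None:           # cache miss, or error/slowness: fall back
            data = self.from_source(url)
        for notify in self.partners:  # advertise the newly cached fragment
            notify(url)
        return data

# Tiny demo with stub fetch callbacks.
log = []
cache = BaselineCache(lambda u: ("src", u), lambda u, holders: ("p2p", u))
cache.partners = [log.append]
manifest = cache.handle_request("Manifest", is_manifest=True)
miss = cache.handle_request("frag-1", is_manifest=False)
cache.index["frag-2"] = {"peerA"}
hit = cache.handle_request("frag-2", is_manifest=False)
```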
Achieving Savings. The main point is thus to increase the cache hit ratio as much as possible while the timeliness of the stream is preserved. The cache hit ratio is our main metric because it represents savings from the load on the source of the live stream.

Table 1. Summary of baseline and improved strategies

Strategy                        Baseline   Improved
Manifest Trimming (MT)          Off        On
Partnership Construction (PC)   Random     Request-point-aware
Partnership Maintenance (PM)    Random     Bitrate-aware
Uploader Selection (US)         Random     Throughput-based
Proactive Caching (PR)          Off        On

Having explained the baseline idea, we can see that, in theory, if all peers started to download the same uncached manifest simultaneously, they would also all start requesting fragments at exactly the same time, in perfect alignment. This scenario would leave no time for the peers to advertise and exchange useful fragments with each other. Consequently, a perfect alignment would result in no savings. In reality, we have always seen that there is an amount of intrinsic asynchrony in the streaming process that causes some peers to be ahead of others, hence making savings possible. However, the larger the number of peers, the higher the probability of more peers being aligned. We will show that, given the aforementioned asynchrony, even the previously described baseline design can achieve significant savings. Our target savings are relative to the number of peers. That is, we do not target achieving a constant load on the source of the stream irrespective of the number of users, which would lead to loss of timeliness. Instead, we aim to save a substantial percentage of all source traffic by offloading that percentage to the P2P network. The attractiveness of that model from a business perspective has been verified with content owners who nowadays buy CDN services.
6 Beyond Baseline Caching

We give here a description of some of the important techniques that are crucial to the operation of the P2PLS agent. For each such technique, we provide what we think is the simplest way to realize it, as well as improvements where we were able to identify any. The techniques are summarized in Table 1.

Manifest Manipulation. One improvement particularly applicable to Microsoft's Smooth Streaming, but that could be extended to all other technologies, is manifest manipulation. As explained in Section 2, the server sends a manifest containing a list of the most recent fragments available at the streaming server. The point of that is to avail to the player some data in case the user decides to jump back in time. Minor trimming to hide the most recent fragments from some peers places them behind others. We use that technique to push peers with high upload bandwidth slightly ahead of others because they can be more useful to the network. We are careful not to abuse this too much, otherwise peers would suffer a high delay from the live playing point, so we limit it to a maximum of 4 seconds. It is worth noting here that we do a quick bandwidth measurement for peers upon admission to the network, mainly for statistical purposes, but we do not depend on this measurement except during the optional trimming process.
Neighborhood and Partnership Construction. We use a tracker as well as gossiping for introducing peers to each other. Any two peers who can establish bi-directional communication are considered neighbors. Each peer probes its neighbors periodically to remove dead peers and update information about their last requested fragments. Neighborhood construction is in essence a process to create a random undirected graph with high node degree. A subset of the edges in the neighborhood graph is selected to form a directed subgraph to establish partnership between peers. Unlike the neighborhood graph, which is updated lazily, the edges of the partnership graph are used frequently. After each successful download of a fragment, the set of out-partners is informed about the newly downloaded fragment. From the opposite perspective, it is crucial for a peer to wisely pick its in-partners because they are the providers of fragments from the P2P network. For this decision, we experiment with two different strategies: i) random picking, and ii) request-point-aware picking, where the in-partners include only peers who are relatively ahead in the stream, because only such peers can have future fragments.
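The two picking strategies can be sketched as follows (the neighbor records and their field names are illustrative assumptions; the probing described above would keep the `last_request` values fresh):

```python
# Sketch of the two in-partner picking strategies: uniformly random, versus
# request-point-aware, which keeps only peers ahead of us in the stream.
import random

def pick_random(neighbors, k):
    return random.sample(neighbors, min(k, len(neighbors)))

def pick_request_point_aware(neighbors, my_request_point, k):
    # Only peers ahead of us in the stream can supply our future fragments.
    ahead = [n for n in neighbors if n["last_request"] > my_request_point]
    ahead.sort(key=lambda n: n["last_request"], reverse=True)
    return ahead[:k]
```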
Partnership Maintenance. Each peer strives to continuously find better in-partners using periodic maintenance. The maintenance process could be limited to replacement of dead peers by randomly-picked peers from the neighborhood. Our improved maintenance strategy is to score the in-partners according to a certain metric and replace low-scoring partners with new peers from the neighborhood. The metric we use for scoring peers is a composite one based on: i) favoring the peers with a higher percentage of successfully transferred data, and ii) favoring peers who happen to be on the same bitrate. Note that while favoring peers on the same bitrate, having all partners from a single bitrate is very dangerous, because once a bitrate change occurs the peer is isolated. That is, all the received updates about the presence of fragments at other peers would be for the old bitrate. That is why, upon replacement, we make sure that the resulting in-partner set covers all bitrates with a Gaussian distribution centered around the current bitrate. That is, most in-partners are from the current bitrate, fewer partners from the immediately higher and lower bitrates, and many fewer partners from other bitrates, and so forth. Once the bitrate changes, the maintenance re-centers the distribution around the new bitrate.
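A minimal sketch of this scoring and bitrate-quota idea follows; the score weights, the field names and the Gaussian's width are our assumptions, since the paper does not give the exact metric or parameters:

```python
# Sketch of bitrate-aware partnership maintenance: a composite partner score,
# and a Gaussian allocation of in-partner slots centered on the current
# bitrate, re-centered whenever the player switches rates.
import math

def score(partner):
    # Composite metric: transfer success ratio plus a same-bitrate bonus
    # (the 0.5 weight is an illustrative choice).
    return partner["success_ratio"] + (0.5 if partner["same_bitrate"] else 0.0)

def bitrate_quota(bitrates, current, slots, sigma=1.0):
    """Distribute in-partner slots over bitrates with Gaussian weights
    centered on the index of the current bitrate."""
    c = bitrates.index(current)
    w = [math.exp(-((i - c) ** 2) / (2 * sigma ** 2))
         for i in range(len(bitrates))]
    total = sum(w)
    return {b: round(slots * wi / total) for b, wi in zip(bitrates, w)}
```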
Uploader Selection. In the case of a cache hit, it happens quite often that a peer finds multiple uploaders who can supply the desired fragment. In that case, we need to pick one. The simplest strategy would be to pick a random uploader. Our improved strategy here is to keep track of the observed historical throughput of the downloads and pick the fastest uploader.
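A minimal sketch of throughput-based uploader selection; the exponential moving average is our assumption, as the paper only states that historical throughput is tracked:

```python
# Sketch of throughput-based uploader selection: track observed download
# rates per peer and pick the historically fastest candidate, falling back
# to random choice for peers never seen before.
import random

class UploaderSelector:
    def __init__(self):
        self.throughput = {}   # peer id -> smoothed rate estimate (bytes/s)

    def record(self, peer, bytes_done, seconds, alpha=0.3):
        rate = bytes_done / seconds
        old = self.throughput.get(peer)
        self.throughput[peer] = rate if old is None else \
            (1 - alpha) * old + alpha * rate

    def pick(self, candidates):
        known = [p for p in candidates if p in self.throughput]
        if not known:
            return random.choice(candidates)
        return max(known, key=self.throughput.get)
```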
Sub-fragments. Up to this point, we have always used in our explanation the fragment as advertised by the streaming server as the unit of transport, for simplicity of presentation. In practice, this is not the case. The sizes of the fragments vary from one bitrate to the other. Larger fragments would result in waiting for a longer time before informing other peers, which would directly entail lower savings because of the slowness of disseminating information about fragment presence in the P2P network. To handle that, our unit of transport and advertising is a sub-fragment of a fixed size. That said, the reality of the uploader selection process is that it always picks a set of uploaders for each fragment rather than a single uploader. This parallelization applies to both the random and throughput-based uploader selection strategies.
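A sketch of fixed-size sub-fragmenting and assignment across a set of uploaders; the round-robin assignment is an assumption, since the paper does not specify the assignment policy:

```python
# Sketch: split a fragment into fixed-size sub-fragments (byte ranges) and
# spread them over the selected uploaders round-robin.

def subfragments(frag_size, unit):
    """Byte ranges [(start, end), ...] covering the whole fragment."""
    return [(off, min(off + unit, frag_size))
            for off in range(0, frag_size, unit)]

def assign(ranges, uploaders):
    # Round-robin assignment of sub-fragments to uploaders (illustrative).
    return {r: uploaders[i % len(uploaders)] for i, r in enumerate(ranges)}
```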
Fallbacks. While downloading a fragment from another peer, it is of critical importance to detect problems as soon as possible. The timeout before falling back to the source is thus one of the major parameters while tuning the system. We put an upper bound (Tp2p) on the time needed for any P2P operation, computed as: Tp2p = Tplayer − S · Tf, where Tplayer is the maximum amount of time after which the player considers a request for a fragment expired, S is the size of the fragment and Tf is the expected time to retrieve a unit of data from the fallback. Based on our experience, Tplayer is player-specific and constant; for instance, Microsoft's Smooth Streaming waits 4 seconds before timing out. A longer Tp2p translates into a higher P2P transfer success ratio, hence higher savings. Since Tplayer and S are outside of our control, it is extremely important to estimate Tf correctly, in particular in the presence of congestion and fluctuating throughput towards the source. As a further optimization, we recalculate the timeout for a fragment while a P2P transfer is happening, depending on the amount of data already downloaded, to allow more time for the outstanding part of the transfer. Finally, upon fallback, only the part of the fragment that failed to be downloaded from the overlay network is retrieved from the source, i.e. through a partial HTTP request on the range of missing data.
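The timeout rule can be sketched as follows; the 4 s Tplayer is the value reported above for Smooth Streaming, while the fragment size and per-byte retrieval estimate are purely illustrative numbers:

```python
# Sketch of the fallback timing rule: T_p2p = T_player - S * T_f, plus the
# mid-transfer recalculation that accounts for bytes already downloaded.

def p2p_timeout(t_player, frag_bytes, t_per_byte):
    """Time budget for the P2P attempt, leaving enough of the player's
    timeout to still fetch the whole fragment from the source."""
    return t_player - frag_bytes * t_per_byte

def remaining_timeout(t_player, frag_bytes, downloaded, t_per_byte):
    # On fallback only the missing byte range is fetched from the source
    # (partial HTTP request), so the outstanding P2P transfer can be
    # granted more time as data accumulates.
    return t_player - (frag_bytes - downloaded) * t_per_byte

T_PLAYER = 4.0   # seconds (Smooth Streaming, per the text)
budget = p2p_timeout(T_PLAYER, 500_000, 2e-6)                 # ~3.0 s
later = remaining_timeout(T_PLAYER, 500_000, 300_000, 2e-6)   # ~3.6 s
```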
7 Proactive Caching

The baseline caching process is in essence reactive, i.e. the attempt to fetch a fragment starts after the player requests it. However, when a peer is informed about the presence of a fragment in the P2P network, it can trivially see that this is a future fragment that will eventually be requested. Starting to prefetch it early, before it is requested, increases the utilization of the P2P network and decreases the risk of failing to fetch it in time when requested. That said, we do not guarantee that this fragment will be requested at the same bitrate when the time comes. Therefore, we endure a bit of risk that we might have to discard it if the bitrate changes. In practice, we measured that the prefetcher successfully requests the right fragment with 98.5% probability.
Traffic Prioritization. To implement this proactive strategy, we have taken advantage of our dynamic runtime-prioritization transport library DTL [9], which exposes to the application layer the ability to prioritize individual transfers relative to each other and to change the priority of each individual transfer at run-time. Upon starting to fetch a fragment proactively, it is assigned a very low priority. The rationale is to avoid contending with the transfer process of fragments that are reactively requested and under a deadline, both on the uploading and downloading ends.
Successful Prefetching. One possibility is that a low-priority prefetching process completes before the player requests the fragment; in that case, the only option is to hold the data and wait for the player's request. More importantly, when that time comes, careful delivery from the local machine is very important, because extremely fast delivery might make the adaptive streaming player mistakenly think that there is an abundance of download bandwidth and start to request the following fragments at a higher bitrate beyond the actual download bandwidth of the peer. Therefore, we schedule the delivery from the local machine to be no faster than the already-observed average download rate. We have to stress here that this is not an attempt to control the player to do something in particular; we just maintain transparency by not delivering prefetched fragments faster than non-prefetched ones.
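The paced local delivery can be sketched as follows (the chunk size and the generator interface are our assumptions):

```python
# Sketch: serve an already-prefetched fragment to the player no faster than
# the observed average download rate, so a local cache hit is not mistaken
# for abundant network bandwidth by the adaptive player.

def delivery_schedule(frag_bytes, avg_rate, chunk=64 * 1024):
    """Yield (delay_seconds, chunk_bytes) pairs pacing the local delivery."""
    sent = 0
    while sent < frag_bytes:
        n = min(chunk, frag_bytes - sent)
        yield n / avg_rate, n   # sleep this long before writing n bytes
        sent += n
```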
Interrupted Prefetching. Another possibility is that the prefetching process gets interrupted by the player in 3 possible ways: i) The player requests the fragment being prefetched: in that case, the transport layer is dynamically instructed to raise the priority, and Tplayer is set accordingly based on the remaining amount of data as described in the previous section. ii) The player requests the same fragment being prefetched but at a higher rate, which means we have to discard any prefetched data and treat the request like any other reactively fetched fragment. iii) The player decides to skip some fragments to catch up and is no longer in need of the fragment being prefetched. In this case, we have to discard it as well.
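The three interruption cases can be sketched as a small decision routine; the priority values and the returned action names are illustrative, since the real DTL interface is not shown in the paper:

```python
# Sketch of handling player interruptions of an in-flight prefetch:
# i) same fragment, same bitrate  -> raise priority (becomes reactive),
# ii) same fragment, other rate   -> discard prefetched data,
# iii) player skipped ahead       -> discard (fragment no longer needed).
LOW, HIGH = 0, 10

def on_player_request(prefetch, requested_ts, requested_bitrate):
    """prefetch: dict describing the in-flight proactive transfer."""
    if requested_ts == prefetch["ts"]:
        if requested_bitrate == prefetch["bitrate"]:
            prefetch["priority"] = HIGH   # case i): promote to reactive
            return "promoted"
        return "discard"                  # case ii): different rate
    if requested_ts > prefetch["ts"]:
        return "discard"                  # case iii): player skipped ahead
    return "keep"                         # still a future fragment
```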
8 Evaluation

Methodology. Due to the non-representative behaviour of PlanetLab and the difficulty of doing parameter exploration in a publicly-deployed production network, we tried another approach, which is to develop a version of our P2P agent that is remotely controlled and ask for volunteers who are aware that we will conduct experiments on their machines. Needless to say, this functionality is removed from any publicly-deployable version of the agent.

Test Network. The test network contained around 1350 peers. However, the maximum, minimum and average number of peers simultaneously online were 770, 620 and 680 respectively. The network included peers mostly from Sweden (89%) but also some from Europe (6%) and the US (4%). The upload bandwidth distribution of the network was as follows: 15%: 0.5 Mbps, 42%: 1 Mbps, 17%: 2.5 Mbps, 15%: 10 Mbps, 11%: 20 Mbps. In general, one can see that there is enough bandwidth capacity in the network; however, the majority of the peers are on the lower end of the bandwidth distribution. For connectivity, 82% of the peers were behind NAT and 12% were on the open Internet.
We have used our NAT-Cracker traversal scheme as described in [11] and were able to establish bi-directional communication between 89% of all peer pairs. The number of unique NAT types encountered was 18. Apart from the tracker used for introducing clients to each other, our network infrastructure contained a logging server, a bandwidth measurement server, a STUN-like server for helping peers with NAT traversal and a controller to launch tests remotely.

Fig. 2. (a) Comparison of traffic savings with different algorithm improvements, (b) Comparison of cumulative buffering time for source only and improvements
Stream Properties. We used a production-quality continuous live stream with 3 video bitrates (331, 688, 1470 Kbps) and 1 audio bitrate (64 Kbps), and we let peers watch 20 minutes of this stream in each test. The stream was published using the traditional Microsoft Smooth Streaming toolchain. The bandwidth of the source stream was provided by a commercial CDN, and we made sure that it had enough capacity to serve the maximum number of peers in our test network. This setup gave us the ability to compare the quality of the streaming process in the presence and absence of P2P caching, in order to have a fair assessment of the effect of our agent on the overall quality of user experience. We stress that, in a real deployment, P2P caching is not intended to eliminate the need for a CDN but to reduce the total amount of paid-for traffic that is provided by the CDN. One of the issues that we faced regarding realistic testing was making sure that we were using the actual player that would be used in production, in our case the Microsoft Silverlight player. The problem is that the normal mode of operation of all video players is through a graphical user interface. Naturally, we did not want to tell our volunteers to click the "Play" button every time we wanted to start a test. Luckily, we were able to find a rather unconventional way to run the Silverlight player in a headless mode as a background process that does not render any video and does not need any user intervention.
Reproducibility. Each test to collect one data point in the test network happens in real time, and exploring all parameter combinations of interest is not feasible. Therefore, we did a major parameter combinations study on our simulation platform [10] first, to get a set of worth-trying experiments that we launched on the test network. Another problem is the fluctuation of network conditions and number of peers. We repeated each data point a number of times before gaining confidence that this is the average performance of a certain parameter combination.

Fig. 3. Breakdown of traffic quantities per bitrate for: (a) A network with P2P caching, source & P2P traffic summed together, (b) The same P2P network with source & P2P traffic reported separately, and (c) A network with no P2P
Evaluation Metrics. The main metric that we use is traffic savings, defined as the percentage of the amount of data served from the P2P network out of the total amount of data consumed by the peers. Every peer reports the amount of data served from the P2P network and the streaming source every 30 seconds to the logging server. In our bookkeeping, we keep track of how much of the traffic was due to fragments of a certain bitrate. The second important metric is buffering delay. The Silverlight player can be instrumented to send debug information every time it goes in/out of buffering mode, i.e. whenever the player finds that the amount of internally-buffered data is not enough for playback, it sends a debug message to the server, which in our case is intercepted by the agent. Using this method, a peer can report the lengths of the periods it entered into buffering mode in the 30-second snapshots as well. At the end of the stream, we calculate the sum of all the periods the player of a certain peer spent buffering.
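The two metrics can be computed directly from the report snapshots, sketched as follows (the report field names are illustrative):

```python
# Sketch of the two evaluation metrics: traffic savings (share of bytes
# served from the P2P network) and cumulative buffering time, both computed
# from a peer's 30-second report snapshots.

def traffic_savings(reports):
    p2p = sum(r["p2p_bytes"] for r in reports)
    total = p2p + sum(r["source_bytes"] for r in reports)
    return 100.0 * p2p / total if total else 0.0

def total_buffering(reports):
    # Sum of all buffering-mode periods reported across the snapshots.
    return sum(sum(r["buffering_periods"]) for r in reports)
```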
8.1 Deployment Results

Step-by-Step Towards Savings. The first investigation we made was to start from the baseline design with all the strategies set to the simplest possible. In fact, during the development cycle we used this baseline version repeatedly until we obtained a stable product with a predictable and consistent savings level before we started to enable all the other improvements. Figure 2a shows the evolution of savings in time for all strategies. The naive baseline caching was able to save a total of 44% of the source traffic. After that, we worked on pushing the higher-bandwidth peers ahead and making each peer select partners that are useful using the request-point-aware partnership, which moved the savings to a level of 56%. So far, the partnership maintenance was random. Turning on bitrate-aware maintenance added only another 5% of savings, but we believe that this is a key strategy that deserves more focus, because it directly affects the effective partnership size of other peers from each bitrate, which directly affects savings. For the uploader selection, running the throughput-based picking achieved 68% of savings. Finally, we got our best savings by adding proactive caching, which gave us 77% savings.

Fig. 4. (a) Breakdown of traffic quantities per bitrate using baseline, (b) Comparison of savings between different in-partner counts
User Experience. Getting savings alone is not a good result unless we have provided a good user experience. To evaluate the user experience, we use two metrics: first, the percentage of peers who experienced a total buffering time of less than 5 seconds, i.e. they enjoyed performance that did not really deviate much from live. Second, showing that our P2P agent did not achieve this level of savings by forcing the adaptive streaming to move everyone to the lowest bitrate. For the first metric, Figure 2b shows that with all the improvements, we can make 87% of the network watch the stream with less than 5 seconds of buffering delay. For the second metric, Figure 3a shows that 88% of all consumed traffic was on the highest bitrate, with P2P alone shouldering 75% (Figure 3b), an indication that, for the most part, peers have seen the video at the highest bitrate with a major contribution from the P2P network.
P2P-less as a Reference. We take one more step beyond showing that the system offers substantial savings with reasonable user experience, namely to understand what the user experience would be in case all the peers streamed directly from the CDN. Therefore, we ran the system with P2P caching disabled. Figure 2b shows that without P2P, only 3% more (90%) of all viewers would have less than 5 seconds of buffering. On top of that, Figure 3c shows that only 2% more (90%) of all consumed traffic is on the highest bitrate; that is the small price we paid for saving 77% of the source traffic. Figure 4a instead describes the lower performance of our baseline caching scenario, which falls 13% short of the P2P-less scenario (77%). This is mainly due to the lack of bitrate-aware maintenance, which turns out to play a very significant role in terms of user experience.
Partnership Size. There are many parameters to tweak in the protocol but, in our experience, the number of in-partners is by far the parameter with the most significant effect. Throughout the evaluation presented here, we use 50 in-partners. Figure 4b shows that more peers result in more savings, albeit with diminishing returns. We have selected 50 peers as a high-enough number, at a point where increasing the number of peers does not result in much more savings.

Fig. 5. (a) Savings for single bitrate runs, (b) Buffering time for single bitrate runs
Single Bitrate. Another evaluation worth presenting as well is the case of a single bitrate. In this experiment, we get 84%, 81% and 69% savings for the low, medium and high bitrate respectively (Figure 5a). As for the user experience compared with the same single bitrates in a P2P-less test, we find that the user experience expressed as delays is much closer to the P2P-less network (Figure 5b). We explain the relatively better experience in the single bitrate case by the fact that all the in-partners are from the same bitrate, while in the multi-bitrate case, each peer has in his partnership the majority of the in-partners from a single bitrate but some of them are from other bitrates, which renders the effective partnership size smaller. We can also observe that the user experience improves as the bitrate becomes smaller.
9ConclusionInthispaper,wehaveshownanovelapproachinbuildingapeer-to-peerlivestreamingsystemthatiscompatiblewiththenewrealitiesoftheHTTP-live.
ThesenewrealitiesrevolvearoundthepointthatunlikeRTSP/RTPstreaming,thevideoplayerisdrivingthestreamingprocess.
TheP2Pagentwillhavealimitedabilitytocontrolwhatgetsrenderedontheplayerandmuchlimitedabilitytopredictitsbehaviour.
OurapproachwastostartwithbaselineP2PcachingwhereaP2PagentactsasanHTTPproxythatreceivesrequestsfromtheHTTPliveplayerandattemptstofetchitfromtheP2Pnetworkratherthesourceifitcandosoinareasonabletime.
Beyondbaselinecaching,wepresentedseveralimprovementsthatincluded:a)Request-point-awarepartnershipconstructionwherepeersfocusonestablishingrelationshipswithpeerswhoareaheadoftheminthestream,b)Bit-rate-aware42R.
Roverso,S.
El-Ansary,andS.
Haridipartnershipmaintenancethroughwhichacontinuousupdatingofthepartner-shipsetisaccomplishedbothfavoringpeerswithhighsuccessfultransfersrateandpeerswhoareonthesamebitrateofthemaintainingpeer,c)Manifesttrimmingwhichisatechniqueformanipulatingthemetadatapresentedtothepeeratthebeginningofthestreamingprocesstopushhigh-bandwidthpeersaheadofothers,d)Throughput-baseduploaderselectionwhichisapolicyusedtopickthebestuploaderforacertainfragmentifmanyexist,e)Carefultim-ingforfallingbacktothesourcewherethepreviousexperienceisusedtotunetimingoutonP2Ptransfersearlyenoughthuskeepingthetimelinessoftheliveplayback.
Our most advanced optimization was the introduction of proactive caching, where a peer requests fragments ahead of time. To accomplish this without disrupting already-ongoing transfers, we used our application-layer congestion control [9] to give pre-fetching activities lower priority and to dynamically raise this priority in case the piece being pre-fetched is requested by the player.
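The priority-escalation idea can be sketched as below. This is only an illustration in the spirit of the DTL congestion control [9]; the class and method names are hypothetical, and real priority enforcement would happen inside the transport layer rather than in a dictionary of in-flight transfers.

```python
LOW, HIGH = 0, 1  # transfer priorities (illustrative values)

class Transfer:
    def __init__(self, frag_id, priority):
        self.frag_id, self.priority = frag_id, priority

class TransferScheduler:
    """Sketch: prefetches start at LOW priority; when the player asks for
    a piece that is already in flight, its priority is raised instead of
    opening a competing transfer."""
    def __init__(self):
        self.inflight = {}

    def prefetch(self, frag_id):
        self.inflight[frag_id] = Transfer(frag_id, LOW)

    def player_request(self, frag_id):
        t = self.inflight.get(frag_id)
        if t is not None:
            t.priority = HIGH        # escalate the ongoing prefetch
            return t
        t = Transfer(frag_id, HIGH)  # not prefetched: fetch urgently
        self.inflight[frag_id] = t
        return t

sched = TransferScheduler()
sched.prefetch(42)
print(sched.player_request(42).priority)  # -> 1 (escalated)
```

This matches the stated goal: speculative traffic never competes with urgent traffic, yet a correct speculation loses no work when the player catches up to it.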
We evaluated our system using a test network of real volunteering clients of about 700 concurrent nodes, where we instrumented the P2P agents to run tests under different configurations. The tests have shown that we could achieve around 77% savings for a multi-bitrate stream, with around 87% of the peers experiencing a total buffering delay of less than 5 seconds and almost all of the peers watching the data at the highest bitrate. We compared these results with the same network operating in P2P-less mode and found that only 3% of the viewers had a better experience without P2P, which we judge to be a very limited degradation in quality compared to the substantial amount of savings.
References
1. Netflix Inc., www.netflix.com
2. Akhshabi, S., Begen, A.C., Dovrolis, C.: An Experimental Evaluation of Rate-Adaptation Algorithms in Adaptive Streaming over HTTP. In: Proceedings of the Second Annual ACM Conference on Multimedia Systems, MMSys (2011)
3. Guo, Y., Liang, C., Liu, Y.: AQCS: Adaptive Queue-Based Chunk Scheduling for P2P Live Streaming. In: Proceedings of the 7th IFIP-TC6 NETWORKING (2008)
4. Hei, X., Liang, C., Liang, J., Liu, Y., Ross, K.W.: Insights into PPLive: A Measurement Study of a Large-Scale P2P IPTV System. In: Proc. of IPTV Workshop, International World Wide Web Conference (2006)
5. Microsoft Inc.: Smooth Streaming, http://www.iis.net/download/SmoothStreaming
6. Liu, C., Bouazizi, I., Gabbouj, M.: Parallel Adaptive HTTP Media Streaming. In: Proc. of 20th International Conference on Computer Communications and Networks (ICCCN), July 31-August 4, pp. 1-6 (2011)
7. Massoulie, L., Twigg, A., Gkantsidis, C., Rodriguez, P.: Randomized Decentralized Broadcasting Algorithms. In: 26th IEEE International Conference on Computer Communications, INFOCOM 2007, pp. 1073-1081 (May 2007)
8. Pantos, R.: HTTP Live Streaming (December 2009), http://tools.ietf.org/html/draft-pantos-http-live-streaming-01
9. Reale, R., Roverso, R., El-Ansary, S., Haridi, S.: DTL: Dynamic Transport Library for Peer-to-Peer Applications. In: Bononi, L., Datta, A.K., Devismes, S., Misra, A. (eds.) ICDCN 2012. LNCS, vol. 7129, pp. 428-442. Springer, Heidelberg (2012)
10. Roverso, R., El-Ansary, S., Gkogkas, A., Haridi, S.: Mesmerizer: An Effective Tool for a Complete Peer-to-Peer Software Development Life-Cycle. In: Proceedings of SIMUTOOLS (March 2011)
11. Roverso, R., El-Ansary, S., Haridi, S.: NATCracker: NAT Combinations Matter. In: Proc. of 18th International Conference on Computer Communications and Networks, ICCCN 2009. IEEE Computer Society, SF (2009)
12. Silverston, T., Fourmaux, O.: P2P IPTV Measurement: A Case Study of TVants. In: Proceedings of the 2006 ACM CoNEXT Conference, CoNEXT 2006, pp. 45:1-45:2. ACM, New York (2006), http://doi.acm.org/10.1145/1368436.1368490
13. Vlavianos, A., Iliofotou, M., Faloutsos, M.: BiToS: Enhancing BitTorrent for Supporting Streaming Applications. In: Proceedings of the 25th IEEE International Conference on Computer Communications, INFOCOM 2006, pp. 1-6 (April 2006)
14. Yin, H., Liu, X., Zhan, T., Sekar, V., Qiu, F., Lin, C., Zhang, H., Li, B.: LiveSky: Enhancing CDN with P2P. ACM Trans. Multimedia Comput. Commun. Appl. 6, 16:1-16:19 (2010), http://doi.acm.org/10.1145/1823746.1823750
15. Zhang, M., Zhang, Q., Sun, L., Yang, S.: Understanding the Power of Pull-Based Streaming Protocol: Can We Do Better? IEEE Journal on Selected Areas in Communications 25, 1678-1694 (2007)
16. Zhang, X., Liu, J., Li, B., Yum, Y.S.P.: CoolStreaming/DONet: A Data-Driven Overlay Network for Peer-to-Peer Live Media Streaming. In: 24th Annual Joint Conference of the IEEE Computer and Communications Societies, INFOCOM 2005 (2005)