SmoothCache: HTTP-Live Streaming Goes Peer-to-Peer

Roberto Roverso 1,2, Sameh El-Ansary 1, and Seif Haridi 2

1 Peerialism Inc., Stockholm, Sweden
2 The Royal Institute of Technology (KTH), Stockholm, Sweden
{roberto,sameh}@peerialism.com, haridi@kth.se

Abstract. In this paper, we present SmoothCache, a peer-to-peer live video streaming (P2PLS) system. The novelty of SmoothCache is three-fold: i) It is the first P2PLS system that is built to support the relatively-new approach of using HTTP as the transport protocol for live content, ii) The system supports both single and multi-bitrate streaming modes of operation, and iii) In SmoothCache, we make use of recent advances in application-layer dynamic congestion control to manage priorities of transfers according to their urgency. We start by explaining why the HTTP live streaming semantics render many of the existing assumptions used in P2PLS protocols obsolete. Afterwards, we present our design, starting with a baseline P2P caching model. We then show a number of optimizations related to aspects such as neighborhood management, uploader selection and proactive caching. Finally, we present our evaluation conducted on a real yet instrumented test network. Our results show that we can achieve substantial traffic savings on the source of the stream without major degradation in user experience.

Keywords: HTTP-Live streaming, peer-to-peer, caching, CDN.
1 Introduction

Peer-to-peer live streaming (P2PLS) is a problem in the Peer-To-Peer (P2P) networking field that has been tackled for quite some time on both the academic and industrial fronts. The typical goal is to utilize the upload bandwidth of hosts consuming a certain live content to offload the bandwidth of the broadcasting origin. On the industrial front, we find successful large deployments where knowledge about their technical approaches is rather limited. Exceptions include systems described by their authors like Coolstreaming [16] or inferred by reverse engineering like PPLive [4] and TVAnts [12]. On the academic front, there have been several attempts to estimate theoretical limits in terms of optimality of bandwidth utilization [3][7] or delay [15].

Traditionally, HTTP has been utilized for progressive download streaming, championed by popular Video-On-Demand (VoD) solutions such as Netflix [1] and Apple's iTunes movie store. Lately, however, adaptive HTTP-based streaming protocols have become the main technology for live streaming as well. All companies who have a major say in the market, including Microsoft, Adobe and Apple, have adopted HTTP streaming as the main approach for live broadcasting. This shift to HTTP has been driven by a number of advantages such as the following: i) Routers and firewalls are more permissive to HTTP traffic compared to RTSP/RTP, ii) HTTP caching for real-time generated media is straightforward like any traditional web content, iii) The Content Distribution Networks (CDNs) business is much cheaper when dealing with HTTP downloads [5].
The first goal of this paper is to describe the shift from the RTSP/RTP model to the HTTP-live model (Section 2) and to detail its impact on the design of P2P live streaming protocols (Section 3), a point which we find rather neglected in the research community (Section 4). We argue that this shift has rendered many of the classical assumptions made in the current state-of-the-art literature obsolete. For all practical purposes, any new P2PLS algorithm, irrespective of its theoretical soundness, won't be deployable if it does not take into account the realities of the mainstream broadcasting ecosystem around it. The issue becomes even more topical as we observe a trend in standardizing HTTP live streaming [8] and embedding it in all browsers together with HTML5, which is already the case in browsers like Apple's Safari.

The second goal of this paper is to present a P2PLS protocol that is compatible with HTTP live streaming not only for one bitrate but that is fully compatible with the concept of adaptive bitrate, where a stream is broadcast with multiple bitrates simultaneously to make it available for a range of viewers with variable download capacities (Section 5).

The last goal of this paper is to describe a number of optimizations of our P2PLS protocol concerning neighborhood management, uploader selection and peer transfer which can deliver a significant amount of traffic savings on the source of the stream (Sections 6 and 7). Experimental results of our approach show that this result comes at almost no cost in terms of quality of user experience (Section 8).
2 The Shift from RTP/RTSP to HTTP

In the traditional RTSP/RTP model, the player uses RTSP as the signalling protocol to request the playing of the stream from a streaming server. The player enters a receive loop while the server enters a send loop where stream fragments are delivered to the receiver using the RTP protocol over UDP. The interaction between the server and player is stateful. The server makes decisions about which fragment is sent next based on acknowledgements or error information previously sent by the client. This model makes the player rather passive, having the mere role of rendering the stream fragments which the server provides.

In the HTTP live streaming model, instead, it is the player which controls the content delivery by periodically pulling from the server parts of the content at the time and pace it deems suitable. The server is instead entitled with the task of encoding the stream in real time with different encoding rates, or qualities, and storing it in data fragments which appear on the server as simple resources.

When a player first contacts the streaming server, it is presented with a metadata file (Manifest) containing the latest stream fragments available at the server at the time of the request. Each fragment is uniquely identified by a timestamp and a bitrate. If a stream is available in n different bitrates, this means that for each timestamp there exist n versions of it, one for each bitrate. After reading the manifest, the player starts to request fragments from the server. The burden of keeping the timeliness of the live stream is totally upon the player. The server, in contrast, is stateless and merely serves fragments like any other HTTP server after encoding them in the format advertised in the manifest.

Fig. 1. a) Sample Smooth Streaming Manifest, b) Client-Server interactions in Microsoft Smooth Streaming
Manifest Contents. To give an example, we use Microsoft's Smooth Streaming manifest. In Figure 1a, we show the relevant details of a manifest for a live stream with 3 video bitrates (331, 688, 1470 Kbps) and 1 audio bitrate (64 Kbps). By inspecting one of the streams, we find the first (the most recent) fragment containing a d value, which is the time duration of the fragment in a unit of 100 nanoseconds, and a t value, which is the timestamp of the fragment. The fragment underneath (the older fragment) has only a d value because its timestamp is inferred by adding the duration to the timestamp of the one above. The streams each have a template for forming a request url for fragments of that stream. The template has placeholders for substitution with an actual bitrate and timestamp. For a definition of the manifest's format, see [5].
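To make the d/t bookkeeping concrete, here is a minimal Python sketch (our illustration, not code from the system) that reconstructs fragment timestamps from a fragment list in which only the first entry carries an explicit t value, following the inference rule just described:

    # Sketch: infer timestamps for fragment entries, where each entry has a
    # duration d (in 100 ns ticks) and only the first one is guaranteed to
    # carry an explicit timestamp t (Section 2).
    def infer_timestamps(fragments):
        result = []
        t = fragments[0]["t"]
        for frag in fragments:
            t = frag.get("t", t)      # an explicit timestamp wins if present
            result.append({"t": t, "d": frag["d"]})
            t += frag["d"]            # next timestamp = this one plus duration
        return result

    # Example: three 2-second fragments (2 s = 20,000,000 ticks of 100 ns).
    frags = [{"t": 0, "d": 20000000}, {"d": 20000000}, {"d": 20000000}]
    print(infer_timestamps(frags))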
Adaptive Streaming Protocol. In Figure 1b, we show an example interaction sequence between a Smooth Streaming client and server [5]. The client first issues an HTTP GET request to retrieve the manifest from the streaming server. After interpreting the manifest, the player requests a video fragment from the lowest available bitrate (331 Kbps). The timestamp of the first request is not predictable, but in most cases we have observed that it is an amount equal to 10 seconds backward from the most recent fragment in the manifest. This is probably the only predictable part of the player's behavior. In fact, without detailed knowledge of the player's internal algorithm, and given that different players may implement different algorithms, it is difficult to make assumptions about the period between consecutive fragment requests, the time at which the player will switch rates, or how the audio and video are interleaved. For example, when a fragment is delayed, it could get re-requested at the same bitrate or at a lower rate. The timeout before taking such action is one thing that we found slightly more predictable; it was most of the time around 4 seconds. That is a subset of many details about the pull behavior of the player.
Implications of Unpredictability. The point of mentioning these details is to explain that the behavior of the player, how it buffers and climbs up and down the bitrates, is rather unpredictable. In fact, we have seen it change in different versions of the same player. Moreover, different adopters of the approach have minor variations on the interaction sequence. For instance, Apple HTTP-live [8] dictates that the player requests a manifest every time before requesting a new fragment, and packs audio and video fragments together. As a consequence of what we described above, we believe that a P2PLS protocol for HTTP live streaming should operate as if receiving random requests in terms of timing and size, and has to make this the main principle. This filters out the details of the different players and technologies.
3 Impact of the Shift on P2PLS Algorithms

Traditionally, the typical setup for a P2PLS agent is to sit between the streaming server and the player as a local proxy offering the player the same protocol as the streaming server. In such a setup, the P2PLS agent would do its best, exploiting the peer-to-peer overlay, to deliver pieces in time and in the right order for the player. Thus, the P2PLS agent is the one driving the streaming process and keeping an active state about which video or audio fragment should be delivered next, whereas the player blindly renders what it is supplied with. Given the assumption of a passive player, it is easy to envisage the P2PLS algorithm skipping, for instance, fragments according to the playback deadline, i.e. discarding data that comes too late for rendering. In this kind of situation, the player is expected to skip the missing data by fast-forwarding or blocking for a few instants and then starting the playback again. This type of behavior towards the player is an intrinsic property of many of the most mature P2PLS system designs and analyses such as [13,15,16].

In contrast to that, a P2PLS agent for HTTP live streaming cannot rely on the same operational principles. There is no freedom in skipping pieces and deciding what is to be delivered to the player. The P2PLS agent has to obey the player's requests for fragments from the P2P network, and the speed at which this is accomplished affects the player's next action. From our experience, delving into the path of trying to reverse engineer the player behavior and integrating that in the P2P protocol is some kind of black art based on trial-and-error, and will result in very complicated and extremely version-specific customizations. Essentially, any P2PLS scheduling algorithm that assumes it has control over which data should be delivered to the player is rather inapplicable to HTTP live streaming.
4 Related Work

We are not aware of any work that has explicitly articulated the impact of the shift to HTTP on P2P live streaming algorithms. However, a more relevant topic to look at is the behavior of the HTTP-based live players. Akhshabi et al. [2] provide a recent dissection of the behavior of three such players under different bandwidth variation scenarios. It is however clear from their analysis that the bitrate switching mechanics of the considered players are still in early stages of development. In particular, it is shown that throughput fluctuations still cause either significant buffering or unnecessary bitrate reductions. On top of that, it is shown how all the logic implemented in the HTTP-live players is tailored to TCP's behavior, such as the one suggested in [6], in order to compensate for throughput variations caused by TCP's congestion control and potentially large retransmission delays. In the case of a P2PLS agent acting as proxy, it is then of paramount importance not to interfere with such adaptation patterns.

We believe, given the presented approaches, that the most related work is the P2P caching network LiveSky [14]. We share in common the fact of trying to establish a P2P CDN. However, LiveSky does not present any solution for supporting HTTP live streaming.
5 P2PLS as a Caching Problem

We will describe here our baseline design to tackle the new realities of the HTTP-based players. We treat the problem of reducing the load on the source of the stream the same way it would be treated by a Content Distribution Network (CDN): as a caching problem. The design of the streaming protocol was made such that every fragment is fetched as an independent HTTP request that could be easily scheduled on CDN nodes. The difference is that in our case, the caching nodes are consumer machines and not dedicated nodes. The player is directed to order from our local P2PLS agent, which acts as an HTTP proxy. All traffic to/from the source of the stream, as well as other peers, passes by the agent.

Baseline Caching. The policy is as follows: any request for manifest files (metadata) is fetched from the source as is and not cached. That is due to the fact that the manifest changes over time to contain the newly generated fragments. Content fragments requested by the player are looked up in a local index of the peer which keeps track of which fragment is available on which peer. If information about the fragment is not in the index, then we are in the case of a P2P cache miss and we have to retrieve it from the source. In case of a cache hit, the fragment is requested from the P2P network, and any error or slowness in the process results, again, in a fallback to the source of the content. Once a fragment is downloaded, a number of other peers are immediately informed in order for them to update their indices accordingly.
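The decision flow above fits in a few lines. The following Python sketch is our paraphrase of the policy, not the actual implementation; the index object and the fetch/advertise callables are hypothetical stand-ins for the agent's internals:

    class PeerError(Exception):
        """Raised by a peer transfer that fails or is too slow."""

    def handle_player_request(url, index, fetch_from_source, fetch_from_peer,
                              advertise_to_partners, timeout):
        if url.endswith("/Manifest"):
            return fetch_from_source(url)   # manifests are never cached
        holders = index.lookup(url)         # peers believed to hold the fragment
        if not holders:
            return fetch_from_source(url)   # P2P cache miss
        try:
            data = fetch_from_peer(holders, url, timeout)
        except (PeerError, TimeoutError):
            return fetch_from_source(url)   # fallback on error or slowness
        advertise_to_partners(url)          # out-partners update their indices
        return data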
Achieving Savings. The main point is thus to increase the cache hit ratio as much as possible while the timeliness of the movie is preserved. The cache hit ratio is our main metric because it represents savings from the load on the source of the live stream. Having explained the baseline idea, we can see that, in theory, if all peers started to download the same uncached manifest simultaneously, they would also all start requesting fragments exactly at the same time in perfect alignment. This scenario would leave no time for the peers to advertise and exchange useful fragments between each other. Consequently, a perfect alignment would result in no savings. In reality, we have always seen that there is an amount of intrinsic asynchrony in the streaming process that causes some peers to be ahead of others, hence making savings possible. However, the larger the number of peers, the higher the probability of more peers being aligned. We will show that, given the aforementioned asynchrony, even the previously described baseline design can achieve significant savings.

Our target savings are relative to the number of peers. That is, we do not target achieving a constant load on the source of the stream irrespective of the number of users, which would lead to loss of timeliness. Instead, we aim to save a substantial percentage of all source traffic by offloading that percentage to the P2P network. The attractiveness of that model from a business perspective has been verified with content owners who nowadays buy CDN services.

Table 1. Summary of baseline and improved strategies

    Strategy                         Baseline   Improved
    Manifest Trimming (MT)           Off        On
    Partnership Construction (PC)    Random     Request-point-aware
    Partnership Maintenance (PM)     Random     Bitrate-aware
    Uploader Selection (US)          Random     Throughput-based
    Proactive Caching (PR)           Off        On
6 Beyond Baseline Caching

We give here a description of some of the important techniques that are crucial to the operation of the P2PLS agent. For each such technique, we provide what we think is the simplest way to realize it, as well as improvements if we were able to identify any. The techniques are summarized in Table 1.

Manifest Manipulation. One improvement particularly applicable in Microsoft's Smooth Streaming, but that could be extended to all other technologies, is manifest manipulation. As explained in Section 2, the server sends a manifest containing a list of the most recent fragments available at the streaming server. The point of that is to avail to the player some data in case the user decides to jump back in time. Minor trimming to hide the most recent fragments from some peers places them behind others. We use that technique to push peers with high upload bandwidth slightly ahead of others because they can be more useful to the network. We are careful not to abuse this too much, otherwise peers would suffer a high delay from the live playing point, so we limit it to a maximum of 4 seconds. It is worth noting here that we do a quick bandwidth measurement for peers upon admission to the network, mainly for statistical purposes, but we do not depend on this measurement except during the optional trimming process.
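For illustration, the trimming step might be sketched in Python as follows, assuming fragments are listed newest-first as (timestamp, duration) pairs in 100 ns ticks; only the 4-second cap comes from the text above, the rest is our own illustrative choice:

    MAX_TRIM_TICKS = 4 * 10_000_000    # 4-second cap, in 100 ns ticks

    def trim_manifest(fragments, high_bandwidth_peer):
        """fragments: newest-first list of (timestamp, duration) pairs.
        High-bandwidth peers see the full manifest and so start slightly
        ahead; the others get the newest fragments hidden."""
        if high_bandwidth_peer:
            return fragments
        trimmed, hidden = list(fragments), 0
        while trimmed and hidden + trimmed[0][1] <= MAX_TRIM_TICKS:
            hidden += trimmed.pop(0)[1]    # drop the newest fragment
        return trimmed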
Neighborhood and Partnership Construction. We use a tracker as well as gossiping for introducing peers to each other. Any two peers who can establish bi-directional communication are considered neighbors. Each peer probes his neighbors periodically to remove dead peers and update information about their last requested fragments. Neighborhood construction is in essence a process to create a random undirected graph with high node arity. A subset of the edges in the neighborhood graph is selected to form a directed subgraph that establishes partnership between peers. Unlike the neighborhood graph, which is updated lazily, the edges of the partnership graph are used frequently. After each successful download of a fragment, the set of out-partners is informed about the newly downloaded fragment. From the opposite perspective, it is crucial for a peer to wisely pick his in-partners because they are the providers of fragments from the P2P network. For this decision, we experiment with two different strategies: i) Random picking, ii) Request-point-aware picking, where the in-partners include only peers who are relatively ahead in the stream, because only such peers can have future fragments.
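Request-point-aware picking amounts to a filter over the neighborhood using the last-requested-fragment information gathered by the periodic probing. A minimal sketch, with field names of our own choosing:

    import random

    def pick_in_partners(neighbors, my_request_point, count):
        """Keep only neighbors whose last requested fragment is ahead of
        ours; only such peers can supply our future fragments (Section 6)."""
        ahead = [n for n in neighbors if n.last_request_point > my_request_point]
        random.shuffle(ahead)
        return ahead[:count]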
Partnership Maintenance. Each peer strives to continuously find better in-partners using periodic maintenance. The maintenance process could be limited to replacement of dead peers by randomly-picked peers from the neighborhood. Our improved maintenance strategy is to score the in-partners according to a certain metric and replace low-scoring partners with new peers from the neighborhood. The metric we use for scoring peers is a composite one based on: i) favoring the peers with a higher percentage of successfully transferred data, ii) favoring peers who happen to be on the same bitrate. Note that while favoring peers on the same bitrate, having all partners from a single bitrate is very dangerous, because once a bitrate change occurs the peer is isolated. That is, all the received updates about presence of fragments from other peers would be from the old bitrate. That is why, upon replacement, we make sure that the resulting in-partner set covers all bitrates with a Gaussian distribution centered around the current bitrate. That is, most in-partners are from the current bitrate, fewer partners from the immediately higher and lower bitrates, and much fewer partners from the other bitrates, and so forth. Once the bitrate changes, the maintenance re-centers the distribution around the new bitrate.
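The text states the two scoring criteria and the shape of the target distribution but not the exact weights, so the following Python sketch fills those in with illustrative choices (the same-bitrate boost and the Gaussian width are assumptions of ours):

    import math

    def score(partner, my_bitrate):
        """Composite score: success ratio, boosted for same-bitrate partners."""
        success = partner.transferred_ok / max(1, partner.transfer_attempts)
        return success * (2.0 if partner.bitrate == my_bitrate else 1.0)

    def bitrate_quota(bitrates, my_bitrate, total, sigma=1.0):
        """Target in-partner count per bitrate: a Gaussian centered on the
        current bitrate, re-centered whenever the player switches rates."""
        center = bitrates.index(my_bitrate)
        w = [math.exp(-((i - center) ** 2) / (2 * sigma ** 2))
             for i in range(len(bitrates))]
        return {b: round(total * wi / sum(w)) for b, wi in zip(bitrates, w)}

    # Example with the paper's video bitrates and 50 in-partners:
    print(bitrate_quota([331, 688, 1470], 688, 50))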
Uploader Selection. In the case of a cache hit, it happens quite often that a peer finds multiple uploaders who can supply the desired fragment. In that case, we need to pick one. The simplest strategy would be to pick a random uploader. Our improved strategy here is to keep track of the observed historical throughput of the downloads and pick the fastest uploader.

Sub-fragments. Up to this point, we have always used in our explanation the fragment as advertised by the streaming server as the unit of transport, for simplicity of presentation. In practice, this is not the case. The sizes of the fragments vary from one bitrate to the other. Larger fragments would result in waiting for a longer time before informing other peers, which would directly entail lower savings because of the slowness of disseminating information about fragment presence in the P2P network. To handle that, our unit of transport and advertising is a sub-fragment of a fixed size. That said, the reality of the uploader selection process is that it always picks a set of uploaders for each fragment rather than a single uploader. This parallelization applies for both the random and the throughput-based uploader selection strategies.
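As an illustration of the two previous paragraphs together, the sketch below splits a fragment into fixed-size sub-fragments and assigns them round-robin over the selected uploaders; the sub-fragment size and the assignment order are our assumptions, not values from the paper:

    SUB_FRAGMENT_SIZE = 16 * 1024      # illustrative fixed size, in bytes

    def plan_downloads(fragment_size, uploaders):
        """Assign each sub-fragment (offset, end) to one uploader.
        `uploaders` may be random picks, or sorted fastest-first when the
        throughput-based strategy is enabled."""
        plan, offset, i = [], 0, 0
        while offset < fragment_size:
            end = min(offset + SUB_FRAGMENT_SIZE, fragment_size)
            plan.append((offset, end, uploaders[i % len(uploaders)]))
            offset, i = end, i + 1
        return plan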
Fallbacks. While downloading a fragment from another peer, it is of critical importance to detect problems as soon as possible. The timeout before falling back to the source is thus one of the major parameters while tuning the system. We put an upper bound (T_p2p) on the time needed for any P2P operation, computed as T_p2p = T_player - S * T_f, where T_player is the maximum amount of time after which the player considers a request for a fragment expired, S is the size of the fragment and T_f is the expected time to retrieve a unit of data from the fallback. Based on our experience, T_player is player-specific and constant; for instance, Microsoft's Smooth Streaming waits 4 seconds before timing out. A longer T_p2p translates into a higher P2P transfer success ratio, hence higher savings. Since T_player and S are outside of our control, it is extremely important to estimate T_f correctly, in particular in the presence of congestion and fluctuating throughput towards the source. As a further optimization, we recalculate the timeout for a fragment while a P2P transfer is happening, depending on the amount of data already downloaded, to allow more time for the outstanding part of the transfer. Finally, upon fallback, only the amount of the fragment that failed to be downloaded from the overlay network is retrieved from the source, i.e. through a partial HTTP request on the range of missing data.
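Under these definitions the timeout budget is simple arithmetic. A small sketch with made-up numbers (the 4 s player timeout is the one observed for Smooth Streaming; the T_f value, expressed here in seconds per kilobyte, is an assumed estimate):

    def p2p_timeout(t_player, size, t_f):
        """T_p2p = T_player - S * T_f: abandon the P2P transfer early enough
        that the missing data can still be fetched from the source in time."""
        return max(0.0, t_player - size * t_f)

    def remaining_timeout(t_player, missing, t_f):
        """Recomputed mid-transfer: only the still-missing byte range would
        have to be fetched from the source via a partial HTTP request."""
        return max(0.0, t_player - missing * t_f)

    # Example: 4 s player timeout, 500 KB fragment, 2 ms per KB from the
    # source: 4 - 500 * 0.002 = 3 s available for the P2P attempt.
    print(p2p_timeout(4.0, 500, 0.002))
    # Halfway through, only 250 KB are missing, so 3.5 s are acceptable.
    print(remaining_timeout(4.0, 250, 0.002))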
7 Proactive Caching

The baseline caching process is in essence reactive, i.e. the attempt to fetch a fragment starts after the player requests it. However, when a peer is informed about the presence of a fragment in the P2P network, he can trivially see that this is a future fragment that would eventually be requested. Starting to prefetch it early, before it is requested, increases the utilization of the P2P network and decreases the risk of failing to fetch it in time when requested. That said, we do not guarantee that this fragment would be requested in the same bitrate when the time comes. Therefore, we endure a bit of risk that we might have to discard it if the bitrate changes. In practice, we measured that the prefetcher successfully requests the right fragment with a probability of 98.5%.

Traffic Prioritization. To implement this proactive strategy, we have taken advantage of our dynamic runtime-prioritization transport library DTL [9], which exposes to the application layer the ability to prioritize individual transfers relative to each other and to change the priority of each individual transfer at run-time. Upon starting to fetch a fragment proactively, it is assigned a very low priority. The rationale is to avoid contending with the transfer process of fragments that are reactively requested and under a deadline, both on the uploading and downloading ends.
Successful Prefetching. One possibility is that a low-priority prefetching process completes before a player's request. There is no way to deliver the fragment to the player before that happens; the only option is to wait for a player request. More importantly, when that time comes, careful delivery from the local machine is very important, because extremely fast delivery might make the adaptive streaming player mistakenly think that there is an abundance of download bandwidth and start to request the following fragments at a higher bitrate, beyond the actual download bandwidth of the peer. Therefore, we schedule the delivery from the local machine to be not faster than the already-observed average download rate. We have to stress here that this is not an attempt to control the player to do something in particular; we just maintain transparency by not delivering prefetched fragments faster than non-prefetched ones.
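A sketch of that paced local delivery, assuming a chunked write loop; the chunk size is arbitrary and `write` is a placeholder for handing bytes to the player connection:

    import time

    def deliver_paced(data, write, avg_download_rate, chunk=64 * 1024):
        """Hand a prefetched fragment to the player no faster than the
        already-observed average download rate (bytes per second), so the
        rate-adaptation logic sees no artificial bandwidth abundance."""
        for i in range(0, len(data), chunk):
            piece = data[i:i + chunk]
            write(piece)
            time.sleep(len(piece) / avg_download_rate)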
Interrupted Prefetching. Another possibility is that the prefetching process gets interrupted by the player in 3 possible ways: i) The player requests the fragment being prefetched: in that case, the transport layer is dynamically instructed to raise the priority, and T_player is set accordingly based on the remaining amount of data, as described in the previous section. ii) The player requests the same fragment being prefetched but at a higher rate, which means we have to discard any prefetched data and treat the request like any other reactively fetched fragment. iii) The player decides to skip some fragments to catch up and is no longer in need of the fragment being prefetched. In this case, we have to discard it as well.
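The three cases map onto a small dispatch. In this sketch, `prefetch.transfer.set_priority` stands in for DTL's run-time priority change [9]; the object shapes and the reactive-fetch callable are our own placeholders:

    HIGH_PRIORITY = 1   # illustrative priority level for deadline transfers

    def on_player_request(request, prefetch, start_reactive_fetch):
        """`prefetch` is the in-flight low-priority transfer, or None."""
        if prefetch is None or request.timestamp != prefetch.timestamp:
            if prefetch is not None:
                prefetch.cancel()            # iii) player skipped ahead
            return start_reactive_fetch(request)
        if request.bitrate != prefetch.bitrate:
            prefetch.cancel()                # ii) bitrate changed: discard
            return start_reactive_fetch(request)
        # i) exact hit: raise the priority and re-derive the timeout from
        # the amount of data still outstanding (cf. Fallbacks, Section 6).
        prefetch.transfer.set_priority(HIGH_PRIORITY)
        prefetch.update_timeout_from_remaining()
        return prefetch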
8 Evaluation

Methodology. Due to the non-representative behaviour of PlanetLab and the difficulty of doing parameter exploration in a publicly-deployed production network, we tried another approach, which is to develop a version of our P2P agent that is remotely controlled, and to ask for volunteers who are aware that we will conduct experiments on their machines. Needless to say, this functionality is removed from any publicly-deployable version of the agent.

Test Network. The test network contained around 1350 peers. However, the maximum, minimum and average numbers of peers simultaneously online were 770, 620 and 680 respectively. The network included peers mostly from Sweden (89%) but also some from Europe (6%) and the US (4%). The upload bandwidth distribution of the network was as follows: 15%: 0.5 Mbps, 42%: 1 Mbps, 17%: 2.5 Mbps, 15%: 10 Mbps, 11%: 20 Mbps. In general, one can see that there is enough bandwidth capacity in the network; however, the majority of the peers are on the lower end of the bandwidth distribution. For connectivity, 82% of the peers were behind NAT and 12% were on the open Internet. We have used our NAT-Cracker traversal scheme as described in [11] and were able to establish bi-directional communication between 89% of all peer pairs. The number of unique NAT types encountered was 18. Apart from the tracker used for introducing clients to each other, our network infrastructure contained a logging server, a bandwidth measurement server, a STUN-like server for helping peers with NAT traversal, and a controller to launch tests remotely.

Fig. 2. (a) Comparison of traffic savings with different algorithm improvements, (b) Comparison of cumulative buffering time for source only and improvements
Stream Properties. We used a production-quality continuous live stream with 3 video bitrates (331, 688, 1470 Kbps) and 1 audio bitrate (64 Kbps), and we let peers watch 20 minutes of this stream in each test. The stream was published using the traditional Microsoft Smooth Streaming toolchain. The bandwidth of the source stream was provided by a commercial CDN, and we made sure that it had enough capacity to serve the maximum number of peers in our test network. This setup gave us the ability to compare the quality of the streaming process in the presence and absence of P2P caching, in order to have a fair assessment of the effect of our agent on the overall quality of user experience. We stress that, in a real deployment, P2P caching is not intended to eliminate the need for a CDN but to reduce the total amount of paid-for traffic that is provided by the CDN.

One of the issues that we faced regarding realistic testing was making sure that we were using the actual player that would be used in production; in our case that was the Microsoft Silverlight player. The problem is that the normal mode of operation of all video players is through a graphical user interface. Naturally, we did not want to tell our volunteers to click the "Play" button every time we wanted to start a test. Luckily, we were able to find a rather unconventional way to run the Silverlight player in a headless mode as a background process that does not render any video and does not need any user intervention.
Reproducibility. Each test to collect one data point in the test network happens in real time, and exploring all parameter combinations of interest is not feasible. Therefore, we did a major parameter combinations study on our simulation platform [10] first, to get a set of worth-trying experiments that we launched on the test network. Another problem is the fluctuation of network conditions and number of peers. We repeated each data point a number of times before gaining confidence that this is the average performance of a certain parameter combination.

Fig. 3. Breakdown of traffic quantities per bitrate for: (a) A network with P2P caching, source & P2P traffic summed together, (b) The same P2P network with source & P2P traffic reported separately, and (c) A network with no P2P
Evaluation Metrics. The main metric that we use is traffic savings, defined as the percentage of the amount of data served from the P2P network out of the total amount of data consumed by the peers. Every peer reports the amount of data served from the P2P network and streaming source every 30 seconds to the logging server. In our bookkeeping, we keep track of how much of the traffic was due to fragments of a certain bitrate. The second important metric is buffering delay. The Silverlight player can be instrumented to send debug information every time it goes in/out of buffering mode, i.e. whenever the player finds that the amount of internally-buffered data is not enough for playback, it sends a debug message to the server, which in our case is intercepted by the agent. Using this method, a peer can report the lengths of the periods it entered into buffering mode in the 30-second snapshots as well. At the end of the stream, we calculate the sum of all the periods the player of a certain peer spent buffering.
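In symbols (our notation, not the paper's): if peer i reports B_i^p2p bytes served from the overlay and B_i^src bytes served from the source over a run, the savings are

    \text{savings} = \frac{\sum_i B_i^{\mathrm{p2p}}}{\sum_i \left( B_i^{\mathrm{p2p}} + B_i^{\mathrm{src}} \right)} \times 100\%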
8.1 Deployment Results

Step-by-Step Towards Savings. The first investigation we made was to start from the baseline design with all the strategies set to the simplest possible. In fact, during the development cycle we used this baseline version repeatedly until we obtained a stable product with a predictable and consistent savings level, before we started to enable all the other improvements. Figure 2a shows the evolution of savings in time for all strategies. The naive baseline caching was able to save a total of 44% of the source traffic. After that, we worked on pushing the higher-bandwidth peers ahead and making each partner select peers that are useful using the request-point-aware partnership, which moved the savings to a level of 56%. So far, the partnership maintenance was random. Turning on bitrate-aware maintenance added only another 5% of savings, but we believe that this is a key strategy that deserves more focus, because it directly affects the effective partnership size of other peers from each bitrate, which directly affects savings. For the uploader selection, running the throughput-based picking achieved 68% of savings. Finally, we got our best savings by adding proactive caching, which gave us 77% savings.

Fig. 4. (a) Breakdown of traffic quantities per bitrate using baseline, (b) Comparison of savings between different numbers of in-partners
Gettingsavingsaloneisnotagoodresultunlesswehaveprovidedagooduserexperience.
Toevaluatetheuserexperience,weusetwometrics:First,thepercentageofpeerswhoexperiencedatotalbueringtimeoflessthan5seconds,i.
e.
theyenjoyedperformancethatdidnotreallydeviatemuchfromlive.
Second,showingthatourP2Pagentdidnotachievethislevelofsavingsbyforcingtheadaptivestreamingtomoveeveryonetothelowestbitrate.
Fortherstmetric,Figure2bshowsthatwithalltheimprovements,wecanmake87%ofthenetworkwatchthestreamwithlessthan5secondsofbueringdelay.
Forthesecondmetric,Figure3ashowsalsothat88%ofallconsumedtracwasonthehighestbitrateandP2Paloneshouldering75%(Figure3b),anindicationthat,forthemostpart,peershaveseenthevideoatthehighestbitratewithamajorcontributionfromtheP2Pnetwork.
P2P-less as a Reference. We take one more step beyond showing that the system offers substantial savings with reasonable user experience, namely to understand what the user experience would be in case all the peers streamed directly from the CDN. Therefore, we ran the system with P2P caching disabled. Figure 2b shows that without P2P, only 3% more (90%) of all viewers would have less than 5 seconds of buffering. On top of that, Figure 3c shows that only 2% more (90%) of all consumed traffic is on the highest bitrate; that is the small price we paid for saving 77% of source traffic. Figure 4a instead describes the lower performance of our baseline caching scenario, which falls 13% short of the P2P-less scenario (77%). This is mainly due to the lack of bitrate-aware maintenance, which turns out to play a very significant role in terms of user experience.
Partnership Size. There are many parameters to tweak in the protocol but, in our experience, the number of in-partners is by far the parameter with the most significant effect. Throughout the evaluation presented here, we use 50 in-partners. Figure 4b shows that more peers result in more savings, albeit with diminishing returns. We have selected 50 peers as a high-enough number, at a point where increasing the peers does not result in much more savings.

Fig. 5. (a) Savings for single bitrate runs, (b) Buffering time for single bitrate runs
Single Bitrate. Another evaluation worth presenting as well is the case of a single bitrate. In this experiment, we get 84%, 81% and 69% savings for the low, medium and high bitrate respectively (Figure 5a). As for the user experience compared with the same single bitrates in a P2P-less test, we find that the user experience expressed as delays is much closer to the P2P-less network (Figure 5b). We explain the relatively better experience in the single bitrate case by the fact that all the in-partners are from the same bitrate, while in the multi-bitrate case each peer has in his partnership the majority of the in-partners from a single bitrate but some of them are from other bitrates, which renders the effective partnership size smaller. We can also observe that the user experience improves as the bitrate becomes smaller.
9 Conclusion

In this paper, we have shown a novel approach in building a peer-to-peer live streaming system that is compatible with the new realities of HTTP-live. These new realities revolve around the point that, unlike RTSP/RTP streaming, the video player is driving the streaming process. The P2P agent has a limited ability to control what gets rendered on the player and a much more limited ability to predict its behaviour. Our approach was to start with baseline P2P caching, where a P2P agent acts as an HTTP proxy that receives requests from the HTTP live player and attempts to fetch them from the P2P network rather than the source if it can do so in a reasonable time.

Beyond baseline caching, we presented several improvements that included: a) Request-point-aware partnership construction, where peers focus on establishing relationships with peers who are ahead of them in the stream, b) Bitrate-aware partnership maintenance, through which a continuous updating of the partnership set is accomplished, both favoring peers with high successful transfer rates and peers who are on the same bitrate as the maintaining peer, c) Manifest trimming, which is a technique for manipulating the metadata presented to the peer at the beginning of the streaming process to push high-bandwidth peers ahead of others, d) Throughput-based uploader selection, which is a policy used to pick the best uploader for a certain fragment if many exist, e) Careful timing for falling back to the source, where the previous experience is used to tune timing out on P2P transfers early enough, thus keeping the timeliness of the live playback. Our most advanced optimization was the introduction of proactive caching, where a peer requests fragments ahead of time. To accomplish this feature without disrupting the already-ongoing transfers, we used our application-layer congestion control [9] to give pre-fetching activities less priority and dynamically raise this priority in case the piece being pre-fetched got requested by the player.

We evaluated our system using a test network of real volunteering clients of about 700 concurrent nodes, where we instrumented the P2P agents to run tests under different configurations. The tests have shown that we could achieve around 77% savings for a multi-bitrate stream, with around 87% of the peers experiencing a total buffering delay of less than 5 seconds and almost all of the peers watching the data on the highest bitrate. We compared these results with the same network operating in P2P-less mode and found that only 3% of the viewers had a better experience without P2P, which we judge as a very limited degradation in quality compared to the substantial amount of savings.
References

1. Netflix Inc., www.netflix.com
2. Akhshabi, S., Begen, A.C., Dovrolis, C.: An experimental evaluation of rate-adaptation algorithms in adaptive streaming over HTTP. In: Proceedings of the Second Annual ACM Conference on Multimedia Systems, MMSys (2011)
3. Guo, Y., Liang, C., Liu, Y.: AQCS: adaptive queue-based chunk scheduling for P2P live streaming. In: Proceedings of the 7th IFIP-TC6 NETWORKING (2008)
4. Hei, X., Liang, C., Liang, J., Liu, Y., Ross, K.W.: Insights into PPLive: A Measurement Study of a Large-Scale P2P IPTV System. In: Proc. of IPTV Workshop, International World Wide Web Conference (2006)
5. Microsoft Inc.: Smooth Streaming, http://www.iis.net/download/SmoothStreaming
6. Liu, C., Bouazizi, I., Gabbouj, M.: Parallel Adaptive HTTP Media Streaming. In: Proc. of 20th International Conference on Computer Communications and Networks (ICCCN), July 31-August 4, pp. 1-6 (2011)
7. Massoulie, L., Twigg, A., Gkantsidis, C., Rodriguez, P.: Randomized Decentralized Broadcasting Algorithms. In: 26th IEEE International Conference on Computer Communications, INFOCOM 2007, pp. 1073-1081 (May 2007)
8. Pantos, R.: HTTP Live Streaming (December 2009), http://tools.ietf.org/html/draft-pantos-http-live-streaming-01
9. Reale, R., Roverso, R., El-Ansary, S., Haridi, S.: DTL: Dynamic Transport Library for Peer-to-Peer Applications. In: Bononi, L., Datta, A.K., Devismes, S., Misra, A. (eds.) ICDCN 2012. LNCS, vol. 7129, pp. 428-442. Springer, Heidelberg (2012)
10. Roverso, R., El-Ansary, S., Gkogkas, A., Haridi, S.: Mesmerizer: An effective tool for a complete peer-to-peer software development life-cycle. In: Proceedings of SIMUTOOLS (March 2011)
11. Roverso, R., El-Ansary, S., Haridi, S.: NATCracker: NAT Combinations Matter. In: Proc. of 18th International Conference on Computer Communications and Networks, ICCCN 2009. IEEE Computer Society, SF (2009)
12. Silverston, T., Fourmaux, O.: P2P IPTV measurement: a case study of TVants. In: Proceedings of the 2006 ACM CoNEXT Conference, CoNEXT 2006, pp. 45:1-45:2. ACM, New York (2006), http://doi.acm.org/10.1145/1368436.1368490
13. Vlavianos, A., Iliofotou, M., Faloutsos, M.: BiToS: Enhancing BitTorrent for Supporting Streaming Applications. In: Proceedings of the 25th IEEE International Conference on Computer Communications, INFOCOM 2006, pp. 1-6 (April 2006)
14. Yin, H., Liu, X., Zhan, T., Sekar, V., Qiu, F., Lin, C., Zhang, H., Li, B.: LiveSky: Enhancing CDN with P2P. ACM Trans. Multimedia Comput. Commun. Appl. 6, 16:1-16:19 (2010), http://doi.acm.org/10.1145/1823746.1823750
15. Zhang, M., Zhang, Q., Sun, L., Yang, S.: Understanding the Power of Pull-Based Streaming Protocol: Can We Do Better? IEEE Journal on Selected Areas in Communications 25, 1678-1694 (2007)
16. Zhang, X., Liu, J., Li, B., Yum, T.-S.P.: CoolStreaming/DONet: a data-driven overlay network for peer-to-peer live media streaming. In: 24th Annual Joint Conference of the IEEE Computer and Communications Societies, INFOCOM 2005 (2005)
