>> SBE @ PBS Techcon 2019 - Section 6: Scheduler [four warm ascending brass notes]
>> Chriss Scherer: April 6, 2019. ♪ ♪
Launching ATSC 3.0 Next-Gen Broadcasts: A Tutorial.
♪ ♪ The SBE and PBS TechCon 2019 was made possible
through financial support from Acrodyne Services, Comark Communications, Dielectric, Enensys,
Gates Air, Technical Broadcast Solutions, Triveni Digital.
♪ ♪ [strong confident keyboard instrumental]
♪ ♪ ♪ Dot, dot, dot, dah, dot, dot, dah, daaah ♪
♪ ♪
[inspirational music]
>> Fred Baumgartner: So, the platform section of this is really kind of the curious part. Winston and his group are going to talk about the station of the future, and, obviously, it's more than linear video. Those little guys off in the corner who have been doing digital are, by the natural flow of technology, eventually going to take over the station. Now, we're going to talk about the central portion of the system, which is the scheduler. The hard part to get your head around here is that the exciter is not just a dumb thing where the video goes in and goes out; the scheduler is the part that makes all the decisions. Make yourself at home, guys! Now, I have to apologize. We have a really bright guy who does nothing but schedulers, and he was going to moderate this session, but he's over with their portable transmitter at the moment, trying to make it go. So, out of 35 speakers, I've lost exactly one. I'm going to try to fill in here, and I know just enough about schedulers to be terrified. [chuckling] That's really about the way it goes. So, I am literally going to throw the microphone over to these guys, and we'll give it 45 minutes. Our goal here is to give you a feel for what this looks like, see what the user interface is and the decision points, and you'll see how this ties in to what Luke and Madeline were talking about this morning and all the pieces that go into making ATSC 3.0 go.
[engineers cheer and applaud]
>> Richard Lhermitte, Enensys: So, before going into the gateway itself, I will start with a short introduction of what a gateway is and where you find it in the delivery process. This is also a summary of what you saw this morning, especially in the presentations by Luke and by Madeline, just to give you the key features you will find in the gateway: a short diagram of the architecture, what a PLP is, the protocol structure, and the signaling. I will try to do everything in about 10 or 15 minutes.
So, where is the broadcast gateway in the delivery process? You have the generic studio side, and at the end you arrive at the classical encoding process, multiplexing, and so on. The broadcast gateway sits here, close to the encoding on the studio side, and you will see that it is the last piece of equipment you have before going to the transmitter. If you look at this general picture, there are a lot of blocks, but just to let you know that the broadcast gateway is really here: it is the last product and the last function you have before going over the STL link.
It takes content from all the different kinds of services: live services, emergency alerting, the ESG, interactive applications. Every service and every piece of content you need to broadcast over the air goes, in the end, through the broadcast gateway, and all of the final packaging and encapsulation, which we will explain later, happens in this broadcast gateway.
Sometimes you hear the word scheduler. To summarize, the scheduler includes the broadcast gateway, because inside the broadcast gateway you have the scheduling mechanisms that place the different content into the different pipes. So sometimes you hear about the scheduler, sometimes you hear about the broadcast gateway, but let's say they are the same thing.
This slide was also presented this morning by Luke, and it shows the differences between ATSC 1.0 and ATSC 3.0 on the modulation side. A key point for ATSC 3.0 is that you have a lot of possibilities for the modulation scheme, typically depending on whether you want a very robust signal or very large coverage. It is always a trade-off: more bandwidth or less bandwidth, more robustness, more coverage. So you have a lot of parameters, and all of them are prepared by the broadcast gateway.
So, the broadcast gateway does all the encapsulation and processing needed for the exciter to deliver the content, especially when you are doing SFN, but I will not explain that here. The concept that has been developed in ATSC 3.0 is what we call the physical layer pipe. In ATSC 1.0, we have a 6-Megahertz channel with one transport stream inside it. Inside this transport stream you have different content, but there is one modulation and one transport stream, with a single set of characteristics in terms of error protection across the 6-Megahertz channel. The concept of the physical layer pipe is to say: within each 6-Megahertz channel, I can divide the channel into different pipes and give each pipe different characteristics. Typically, I might divide my 6-Megahertz channel into, let's say, four pipes with different modulation parameters, which allows different robustness, different coverage, and different bitrates, and then I decide which content I put on which pipe. This is the goal of the gateway: to fill the pipes with the correct content.
The classical example: I have a UHD service on one pipe, and I give it a lot of bandwidth but less coverage, typically to cover a very dense area. At the opposite end, for my non-realtime data services that I want to push to every kind of device, including cars, I want very large coverage, so I change the modulation parameters on that pipe accordingly. It is a very useful tool for addressing different kinds of receivers within one 6-Megahertz channel: you can target different kinds of reception and different kinds of coverage in one 6-Megahertz channel.
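To make the pipe idea concrete, here is a minimal Python sketch of the kind of PLP plan Richard describes. The modulation and code-rate values, the field names, and the service names are purely illustrative, not the configuration fields of any real gateway.

```python
from dataclasses import dataclass

@dataclass
class PLP:
    """One physical layer pipe inside the 6 MHz channel (illustrative fields)."""
    plp_id: int
    modulation: str   # constellation, e.g. "QPSK" or "256QAM"
    code_rate: str    # LDPC code rate, e.g. "4/15" or "12/15"
    purpose: str      # what kind of reception this pipe targets

# Four pipes sharing one 6 MHz channel, each trading bitrate against robustness.
channel_plan = [
    PLP(0, "QPSK",   "4/15",  "deep-indoor / mobile, very robust, low bitrate"),
    PLP(1, "16QAM",  "8/15",  "portable receivers"),
    PLP(2, "64QAM",  "10/15", "fixed HD services"),
    PLP(3, "256QAM", "12/15", "UHD for rooftop antennas, high bitrate, small margin"),
]

# The gateway's job is then to decide which service feeds which pipe.
service_to_plp = {
    "UHD main service": 3,
    "HD simulcast": 2,
    "Mobile/handheld service": 0,
    "Non-realtime data push (cars, datacasting)": 0,
}

for name, plp_id in service_to_plp.items():
    plp = channel_plan[plp_id]
    print(f"{name} -> PLP {plp.plp_id} ({plp.modulation}, rate {plp.code_rate})")
```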
Another very important thing to think about in ATSC 3.0 is what we call signaling. Summarized very basically, you have two tables in ATSC 3.0: the SLT and the SLS. To make a very basic comparison with ATSC 1.0, the SLT gives you the list of services you have in your network; it is roughly equivalent to the PAT table in ATSC 1.0. It gives the list of services, and then the SLS is similar to the PMT table: for each service, you know how many components you have, what the resolution is, and the description of each component. You also have the name of the service, and so on. These two tables, which I have over-summarized here, are very important because they give you, in your ATSC 3.0 signal, the list of services and the description of all of these services.
So, the SLT is sent, and then you have as many SLS as you have services, like the PAT and PMT tables. And if you come back to my previous slide regarding PLPs, you have to decide where to put these tables among your different PLPs. That decision is also made at the broadcast gateway level.
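As a rough code analogy, the SLT is one list of services and each service then has its own SLS describing its components, much like PAT and PMT. The real SLT and SLS are XML tables defined in ATSC A/331; the dictionary fields and values below are only illustrative.

```python
# Illustrative, simplified model of the two signaling layers. Only the
# PAT/PMT-like relationship is shown; the real tables carry much more.

slt = {
    "bsid": 1,                      # broadcast stream id (hypothetical value)
    "services": [
        {"service_id": 101, "short_name": "MAIN-HD"},
        {"service_id": 102, "short_name": "MOBILE"},
    ],
}

# One SLS per service, like one PMT per program in ATSC 1.0.
sls_by_service = {
    101: {"components": [{"type": "video", "resolution": "1920x1080"},
                         {"type": "audio", "codec": "AC-4"}]},
    102: {"components": [{"type": "video", "resolution": "1280x720"},
                         {"type": "audio", "codec": "AC-4"}]},
}

# The operator also decides which PLP carries which table; placing the SLT
# in the most robust PLP means every receiver can at least find the service list.
table_to_plp = {"SLT": 0, "SLS:101": 2, "SLS:102": 1}
print(table_to_plp)
```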
I will not go through this in detail, but this is the description of those tables, the SLT and the different SLS. If I look at an example with four different PLPs, each containing a different number of services, I have to decide where I put my tables. In this example, I choose to put the SLT, which is the equivalent of the PAT, on this PLP, perhaps because it is the most robust PLP and I am sure that every receiver will receive it. So, for example, I put my list of services in this PLP, and then the SLS linked to each service is spread across the other PLPs. That is one example of how you could deliver these tables across the different PLPs, and this decision is made by you inside the broadcast gateway configuration.
This is also, let's say, an encapsulation process done at the broadcast gateway level. You have the content coming from the different encoders, ROUTE servers, and so on, and you need to encapsulate everything into PLPs, baseband packets, and so on, and to deliver the signaling as well.
Everything is done in the gateway. At this point you have a lot of IP streams, but in the end you need to deliver everything to the transmitter. So there is an encapsulation, which Merrill explained this morning, called STLTP, which takes all of this content, tunnels it, and delivers everything to the transmitter on a single IP multicast address at the output of the gateway, containing all of the content. That is what you have here: you have the STL, but in the end there is only one multicast address going to the transmitter.
Typically, inside this STLTP tunnel you have all the different types of information, all the different IP multicasts, the ESG, and so on. So you can say that STLTP is IP inside IP, because it is an IP stream which contains IP packets. On top of that, you also have timing information, typically for SFN, to be sure that all transmitters deliver the same content at the same time, and you have preamble data to configure the transmitters, and so on.
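Conceptually, the STLTP output is IP inside IP: the gateway wraps the tunneled streams, the preamble information, and the timing information into one outer multicast toward the exciters. The Python sketch below only mimics that nesting with plain objects; it is not the real A/324 packet format, and the addresses and field names are invented.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InnerStream:
    """One of the IP streams being tunneled (service data, signaling, ESG...)."""
    description: str
    dst_ip: str
    plp_id: int

@dataclass
class STLTPTunnel:
    """Very rough stand-in for the single multicast the exciters receive."""
    outer_dst_ip: str          # the one multicast address on the STL
    payload: List[InnerStream] = field(default_factory=list)
    timing_info: str = "per-frame emission time for SFN alignment"
    preamble_info: str = "modulation/coding parameters the exciter must apply"

tunnel = STLTPTunnel(outer_dst_ip="239.0.0.1")   # hypothetical address
tunnel.payload += [
    InnerStream("ROUTE session, main HD service", "239.10.10.1", plp_id=2),
    InnerStream("Mobile service",                 "239.10.10.2", plp_id=0),
    InnerStream("SLT signaling",                  "239.10.10.3", plp_id=0),
]

for s in tunnel.payload:
    print(f"{tunnel.outer_dst_ip} tunnels {s.dst_ip} ({s.description}) -> PLP {s.plp_id}")
```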
So, this is the goal of the gateway: to generate this STLTP, which goes over the STL and up to the transmitter. As a conclusion, there are some technical posters over there which describe all of this, so don't hesitate to pick one up. All of these different steps will now be shown by the following presenters, who will show you the actual gateways: how to allocate PLPs, how to put the tables in these PLPs, how to generate the STLTP, and so on. [applause]
>> Sang Jin Yoon, DigiCAP: I'm Sang Jin Yoon with DigiCAP. Nice to meet you all! So far, Richard has explained all the concepts and the technical specs, so we're going to show you how we get down and dirty and actually create the PLPs. We're going to talk a little bit about the ATSC 3.0 frames, though I think you've already seen the pictures, and about transmission capacity: how do we calculate it? We'll also cover the example setup that we did in Korea during the Paralympic Games, and show you the settings UI of the scheduler, with an example of eight fixed channels and one mobile channel.
I think you've all seen this picture this morning as well. It's a picture of the bootstrap and the whole frame. The important concept here is the subframe: within each subframe, you can have multiple PLPs. There are two ways you can create multiple PLPs. One is to have two or more PLPs within a subframe; the other is to create one PLP per subframe and create multiple subframes. We'll show you the differences between the two. The example we're going to show creates multiple PLPs by way of multiple subframes. The reason we show it that way is that when you create a subframe, you also get to set the robustness of the signal you transmit, not just the data rate.
This is a very simplistic view of how you calculate the data rate you're going to get at TOV (threshold of visibility). The code rate and modulation order are pretty much what you set at the PLP level. What also determines the data rate are the guard interval and the pilot pattern, and those have more to do with the robustness of the signal you transmit.
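As a back-of-envelope illustration of what those knobs do, here is a deliberately simplified Python estimate. The exact capacity computation in A/322 also depends on FFT size, guard interval, pilot density and LDPC/BCH framing, so the numbers below are hypothetical and only show the direction of the trade-off.

```python
def rough_plp_bitrate(modulation_bits, code_rate, data_cells_per_frame,
                      frame_duration_s, overhead_fraction=0.10):
    """Very rough PLP throughput estimate in bits per second.

    modulation_bits      bits per cell (QPSK=2, 16QAM=4, 64QAM=6, 256QAM=8)
    code_rate            LDPC code rate, e.g. 8/15
    data_cells_per_frame how many OFDM data cells the subframe gives this PLP
                         (in reality set by FFT size, guard interval, pilots)
    overhead_fraction    lump-sum allowance for framing/signaling overhead
    """
    raw = modulation_bits * code_rate * data_cells_per_frame / frame_duration_s
    return raw * (1.0 - overhead_fraction)

# Hypothetical numbers, just to show how robustness trades against capacity:
robust = rough_plp_bitrate(2, 4 / 15, 2_000_000, 0.25)     # QPSK, low code rate
capacity = rough_plp_bitrate(8, 12 / 15, 2_000_000, 0.25)  # 256QAM, high code rate
print(f"robust pipe   ~{robust / 1e6:.1f} Mbit/s")
print(f"capacity pipe ~{capacity / 1e6:.1f} Mbit/s")
```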
This is the example that we did in Korea during the Pyeongchang Olympics. You can see we created two subframes: one 1080p for mobile, and one 4K for fixed devices. These are the settings we used: the same preamble settings, and then a different FFT mode for each subframe. For the mobile 1080p service it's 8K, and subframe 1 for fixed is 32K. So we get a different data rate for the fixed and mobile services, and also different robustness.
Now we're going to show the example of creating a scheduler configuration for one HD service for mobile and eight HD services for fixed. We consider a mobile 720p HD service at about 2 Megabits per second, plus eight 720p HD services for fixed. In total we're going to carry about 19.6 Megabits per second, with the frame time set at 250 milliseconds.
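That one-mobile-plus-eight-fixed example might be captured by a configuration along these lines. The field names and structure are invented for illustration; an actual scheduler UI exposes many more parameters.

```python
# Hypothetical scheduler configuration mirroring the demo: one 720p mobile
# service in a robust subframe, eight HD services in a high-capacity subframe,
# 250 ms frames, roughly 19.6 Mbit/s in total.
config = {
    "frame_duration_ms": 250,
    "subframes": [
        {
            "id": 0,
            "fft_size": "8K",          # shorter symbols, friendlier to mobile
            "plps": [{"id": 0, "services": ["Mobile 720p (~2 Mbit/s)"]}],
        },
        {
            "id": 1,
            "fft_size": "32K",         # longer symbols, more capacity for fixed
            "plps": [{"id": 1,
                      "services": [f"Fixed HD #{n} 720p" for n in range(1, 9)]}],
        },
    ],
}

total_services = sum(len(p["services"])
                     for sf in config["subframes"] for p in sf["plps"])
print(f"{total_services} services across {len(config['subframes'])} subframes")
```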
That's the settings screen. We preset the configuration ahead of time to save time. The white spaces here are the changes that we made for this particular service. This is the preamble parameter, FFT set at 8K, and you can see that we created two subframes, with one PLP per subframe, and several inputs that you can assign for the fixed services. Each input to a PLP is a stream: first HD, second HD, third HD, and this one goes to the mobile. Here you can set the characteristics of each subframe, so subframe number 1 has FFT 8K, guard interval G8-- I can't quite read it-- [laughs] Thank you! And here is what is set for the PLPs that belong to each subframe, and also the destination IPs for the inputs. With that, I'm going to conclude this simple demo and hand it over to Enensys. [applause]
>> Thank you.
>> Floch Jérôme, Enensys: Alright, so let's conclude this presentation by having a look at what Enensys can provide in terms of a scheduler. First, I'm just going to sum up what we presented this morning with Merrill and Luke on the physical layer part. The point is that, for ATSC 3.0, the scheduler is definitely the central element of the ATSC 3.0 infrastructure, as this product manages all the services you want to deliver to your users: I'm talking about linear services and non-realtime services. The scheduler also manages all the ATSC 3.0 signaling, to enable proper decoding on the receiver side. The scheduler builds the physical layer, and this physical layer will be applied by the exciters in the field.
The point is that the scheduler, of course, encapsulates all this content, coming in as ROUTE or MMTP streams, into an STLTP stream: a new level of encapsulation. And finally, the scheduler is the heart of the network, as it pilots the exciters, in terms of configuration and also in terms of synchronization, to ensure the SFN works. If we look at the key features and all the processing that has to be done by the gateway, there are many, many things to manage and many, many things to set up. So let me tell you that, in some cases, it could be a nightmare. [laughs] But don't worry! We are here to help you. [laughs] I'm going to present the workflow for my product.
I hope that your Internet connection will work, Fred. [laughing skeptically] We will see. So, to begin with the gateway: the product is configured in three steps. First, you select the services you want to broadcast in your network, and you check the SLT table, which will be used by the receivers to correctly decode the services. That is the first step. The second step is to build the physical frame: your channel, your subframes, your PLPs. One PLP lets you define a level of data rate and robustness for one or several services. The last step is to configure the output for your distribution networks: you set up one or several STLTP output streams, with different levels of FEC, to ensure the robustness of the transmission.
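Put together, the three steps amount to something like the following pseudo-configuration. This is a sketch only: the field names and values are invented and are not the SmartGate API.

```python
# Step 1: pick which of the scanned input services go on air; this fixes
#         the SLT that receivers will see.
selected_services = ["Service A (ROUTE)", "Service B (ROUTE)", "Service C (MMTP)"]

# Step 2: build the physical frame: channel -> subframes -> PLPs, each PLP
#         setting a bitrate/robustness level for the services it carries.
physical_frame = {
    "subframes": [{
        "fft_size": "32K",
        "plps": [
            {"id": 0, "modulation": "QPSK",   "code_rate": "6/15",
             "services": ["Service C (MMTP)"]},
            {"id": 1, "modulation": "256QAM", "code_rate": "10/15",
             "services": ["Service A (ROUTE)", "Service B (ROUTE)"]},
        ],
    }],
}

# Step 3: define the STLTP output(s), with enough FEC that the stream survives
#         the studio-to-transmitter link(s).
stl_outputs = [
    {"dst": "239.0.0.1:30000", "fec": "high"},   # e.g. a microwave STL
    {"dst": "239.0.0.2:30000", "fec": "low"},    # e.g. fiber to another site
]

print(len(selected_services), "services,",
      sum(len(p["services"]) for sf in physical_frame["subframes"] for p in sf["plps"]),
      "mapped to PLPs,", len(stl_outputs), "STL outputs")
```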
Do you feel confident? Great! Okay, so now it's time for the demo. I'm going to connect to Amazon Web Services, where my broadcast gateway is running. This is SmartGate from Enensys. At first you are connected to the dashboard of the product: when the product is running, it shows you all the KPIs and the streams delivered by the product. Here, in my configuration, I have one STL output that is carrying two PLPs: PLP 0 with this constellation, PLP 1 with another constellation, and five services, which are linear services, currently broadcast in this ATSC 3.0 transmission.
For example, here on PLP 0: in green I have the payload of the PLP, and in yellow I have the padding. That means my PLP is not completely used, and if I want, I can add new services to deliver in this PLP. I can also open this PLP and see that it is currently broadcasting content from Nexstar TV and WRAL. This is just a demo, not a realistic case, but you can look in real time at what is happening on your network.
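The payload-versus-padding view on the dashboard is essentially occupancy accounting: whatever the services in a PLP do not use is padding, and is room for additional services. A toy version of that computation, with hypothetical numbers:

```python
def plp_occupancy(capacity_bps, service_bitrates_bps):
    """Return (payload_share, padding_share) of a PLP as fractions of capacity."""
    used = sum(service_bitrates_bps)
    if used > capacity_bps:
        raise ValueError("PLP overflow: services exceed the pipe's capacity")
    return used / capacity_bps, 1.0 - used / capacity_bps

# Hypothetical PLP of 10 Mbit/s carrying two HD services.
payload, padding = plp_occupancy(10_000_000, [3_800_000, 4_100_000])
print(f"payload {payload:.0%}, padding {padding:.0%}")  # padding = room for more services
```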
In terms of configuration, here I have my three steps. First, the service management: SmartGate automatically scans the input ROUTE and MMTP streams to help you select which services you want to broadcast. Here, everything is checked, and I can have a look at the SLT table to verify the signaling information, which must be fully validated to be sure that on the receiver side the services will decode correctly. That is the first step. The second step is the physical frame. I won't go into detail, but here you set up your channel information, your subframe information, and your PLPs. Here I have only one subframe and two PLPs, but if I want, I can add another subframe, another PLP. It's very flexible.
If I look at PLP 1, I can see that in my configuration I have selected these three services into PLP 1, with this modulation, this code rate for the robustness, and this mode of FEC and interleaving of the cells.
The last step is the output. There is only one STL output here, configured with some FEC to be sure that all the packets are correctly delivered along the path to all the exciters. The STL is flexible: you can add several STL outputs if you have several distribution networks, microwave, optical fiber, and so on. So, it's time to finish, I think. I hope that it's now clear for you what a scheduler is for your network. There are many possibilities. [applause]
>> Fred Baumgartner: Are there questions? Really, this is, like, the scariest part of this whole system. [laughter]
>> Engineer: How do you get the bitrate information?
>> How do you get the bitrate information?
>> Is there a sampler?
>> So, the bitrates, no, no-- the bitrates here are internal KPIs. We calculate them directly inside SmartGate. What you see here is the output bitrate going into the STL tunnel. We calculate these metrics per service and per PLP.
>> Richard: If a service is delivered over IP multicast, the gateway receives this multicast and knows its IP bitrate, and it computes this IP bitrate including all the overhead. It is mostly coming from the ROUTE server and the MMTP server as IP multicast, and each service has its own bit--
>> [inaudible comment from audience]
>> Yeah, because on the network it sees the bitrate arriving, the packets arriving, and from that it knows the IP bitrate incoming at the gateway.
>> How do you know what bitrate the encoder or packager has to put out? I mean, does it tell you somewhere what the capacity of the PLP is?
>> It's a good question. For now, you define your PLPs with the maximum bitrate you can fill, and, on the other side, on the encoders, you define the bitrate you want per service. For now there is no coupling between the two. You can imagine an encoder providing too much data, and then you have an overflow in your PLP. So, today, it's completely independent: you set up your encoding system with a bitrate per service, you define your PLP, which gives you a maximum bitrate, and you mostly cross your fingers that you will never have an overflow. [audience laughs] That is for today, but we are working with the encoder manufacturers to avoid that: to be able, from the broadcast gateway, to have feedback to the encoder, so that if there is too much data, the encoder decreases the bitrate a little, and, on the opposite side, if there is headroom, why not increase the bitrate a little? The goal is to be able to use 99.9% of the capacity. This protocol is under definition between us and the different encoder manufacturers.
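The gateway-to-encoder feedback being discussed is essentially a closed control loop per PLP. Since the protocol is still being defined, the sketch below only illustrates the control idea, with invented names and thresholds:

```python
def adjust_encoder_bitrate(current_encoder_bps, plp_capacity_bps, measured_input_bps,
                           headroom=0.02, step=0.05):
    """Conceptual per-PLP feedback: nudge the encoder pool up or down so the
    PLP stays almost full without overflowing. Not a real protocol."""
    target = plp_capacity_bps * (1.0 - headroom)
    if measured_input_bps > plp_capacity_bps:
        # Overflow imminent: ask the encoder pool to back off.
        return current_encoder_bps * (1.0 - step)
    if measured_input_bps < target:
        # Capacity going to waste as padding: allow a slightly higher bitrate.
        return current_encoder_bps * (1.0 + step)
    return current_encoder_bps

# Hypothetical numbers: a 10 Mbit/s PLP, encoders currently asked for 9.0 Mbit/s.
print(adjust_encoder_bitrate(9_000_000, 10_000_000, 9_100_000))
```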
>> So, it sounds like what you're saying is that, right now, you don't have the equivalent of statistical multiplexing; you do not have feedback enabled for--
>> Yeah, exactly. So--
>> My question about that is: is that feedback going to be across the entire scheduler or on an individual-PLP basis?
>> Per PLP.
>> It's per PLP?
>> Yeah.
>> So, for each PLP that I set up, that is its own statmux with feedback to a group of encoders? It's not across multiple PLPs?
>> And you can imagine having two different statmux pools from two different encoder manufacturers, addressing two different PLPs, each with this protocol.
>> On the subject of subframes, are there advantages or tradeoffs to assigning multiple PLPs to a single subframe versus a single PLP per subframe?
>> Richard: That's a really good question. The idea of subframes is mostly for when you want to address two different kinds of reception, typically mobile reception versus fixed reception. If the target devices you want to address are mostly the same, there is no real advantage to separating them into different subframes. But if you need two different FFT sizes, for example, then you need two different subframes: one might be for fixed and one for mobile. That is mostly my answer to your question.
>> One specific question about timing: you need very precise timing through the complete delivery chain, and on the STL you want to be sure there is not too much jitter or packet loss, because if there is, that becomes an issue. [portion inaudible]
>> Richard: The scheduler and the broadcast gateway are mostly the same thing. For people who know Enensys: when South Korea decided to launch ATSC 3.0, to be ready in time we used the hardware platform we had and implemented the broadcast gateway on it, in hardware. That is the product that has been deployed in South Korea for more than a year and a half now, and we are now moving everything to software; in the US we are deploying mostly the software version. Just to add to what was said: SmartGate will also bring new features that the scheduler product, which is mainly dedicated to the South Korean market, does not have, especially on STLTP security and other advanced key features, like emergency alerting. That's why, for the USA, the key broadcast gateway for broadcasters will be SmartGate, to avoid any confusion.
>> So, this is a software platform: we can run it on Amazon, but you can also run it on a physical server. It's up to you: you can have it as an appliance, or run it on a classical off-the-shelf server in your facility.
>> Fred Baumgartner: I'm gonna say thank you and go to the next one.
[applause]