New Bill Would Force C-Band Auction
A bipartisan quartet of House members want to force the FCC to auction C-Band spectrum rather than repurpose it via free-market deals between satellite operators and wireless carriers, as those operators prefer.
The FCC wants to free up as much of that midband (3.7–4.2 GHz) spectrum for 5G as possible, likely at least 300 MHz. Satellite carriers (most as part of the C-Band Alliance) want to be able to strike deals to free up the spectrum. But many in Congress have argued that the money for the public spectrum — to which satellite operators have licenses — should instead go to the Treasury to help fund rural broadband buildouts among other things.
[Read: C-Band Hearing Scheduled for the House]
That definitely includes the four House members who introduced the Clearing Broad Airwaves for New Deployment (C-BAND) Act Thursday (Oct. 24). They are Rep. Mike Doyle (D-Pa.), chairman of the Communications Subcommittee; Rep. Doris Matsui (D-Calif.), subcommittee vice-chair; and Reps. Bill Johnson (R-Ohio) and Greg Gianforte (R-Mont.).
“I am pleased to introduce the bipartisan C-Band Act, which would require the FCC to promptly conduct a public auction to provide more much-needed midband spectrum,” said Doyle. “This bill would ensure a transparent and fair process that would generate billions of dollars in revenue to address the urgent needs of millions of Americans such as building out broadband internet service in rural America while protecting users of incumbent services.”
The FCC would have a September 2022 deadline for auctioning the spectrum.
The act:
- “Requires the FCC to hold a public auction of C-Band spectrum;”
- “Allows for no less than 200 megahertz and no more than 300 megahertz of C-band spectrum [with 20 MHz set aside for guard bands];”
- “Ensures that incumbent C-Band users will be protected” by mandating that they get as good or better service than before. Cable operators, who are also eyeing the C-Band spectrum for 5G, have signaled they could support freeing up as much of that spectrum for 5G as is practicable, perhaps even all of it, replacing the satellite feed with fiber. Broadcasters are concerned that fiber would put their must-have programming at the mercy of an errant backhoe that failed to miss the utility, as it were.
The C-Band Alliance initially proposed private sales of 200 MHz, but is likely willing to boost that to 300 MHz if they can be private sales rather than an auction.
Incumbent users include broadcasters and cable operators, who receive their programming network feeds via the satellite spectrum.
The bill will definitely be a topic of conversation at the subcommittee’s C-Band hearing next week.
“ACA Connects salutes the House subcommittee for its introduction of this bipartisan bill,” said ACA Connects President Matt Polka. “The bill appropriately recognizes that any repurposing of C-Band spectrum for 5G must ensure the same or better service for existing users of the band, including the cable operators that rely on the band to deliver video programming to millions of households across the nation. If cable operators encounter any reduction in reliability, capability or quality of that service, or any increase in costs, it is competition and consumers that will ultimately suffer, especially in rural America. To head off these concerns, it is important that any C-Band transition fully compensate cable operators for any costs they incur in opening up the band for 5G, and that receiving programming via fiber instead of satellite is an option. We applaud the subcommittee for its leadership and look forward to continuing to work together on this critical public policy issue.”
The post New Bill Would Force C-Band Auction appeared first on Radio World.
Community Broadcaster: Facebook Needs Community Radio
The author is membership program director of the National Federation of Community Broadcasters. NFCB commentaries are featured regularly at www.radioworld.com.
By the time you read this, Facebook will have relaunched its News tab. The Oct. 25 rollout is the social media giant’s return to aggregating journalism. It comes at one of media’s more curious moments, in a period of curiosities aplenty.
Mark Zuckerberg testified before Congress this week, as the House Financial Services Committee inquired about the company’s plans to get into the cryptocurrency business. Facebook had bowed out of news curation after being pelted with accusations of propping up misinformation in its old news feeds during the 2016 elections. Facebook promised to refocus on personal streams. Many media outlets’ fortunes plummeted in the process.
[Read: Community Broadcaster: A Cautionary Tale]
The reentry into news rekindles what has to be a love-hate relationship between journalism and Facebook. No one doubts Facebook’s power to generate audiences or conversation in news. But the power lies in Facebook’s hands and news organizations have minimal influence in what the company’s priorities may be. After Facebook changed its tools to deemphasize news stories, media organizations that had come to depend on Facebook traffic saw stock plunges and layoffs.
Will it be different this time around? Hard to know. Facebook’s newfound interest in local news is encouraging. Given the local lens, for all the criticism Facebook receives, rightly or wrongly, the News tab could represent a benefit and opportunity for local journalism hubs like community radio.
Facebook would be wise to tap into the vast network of community radio stations providing coverage to their towns, and giving a local perspective to national stories all of us have our eyes on. Whether it’s the excellent coverage by Marfa Public Radio of the El Paso mass shooting or immigration issues, WRFI’s coverage of housing in New York state, or KZMU’s coverage of the complex environmental issues in Utah, there is no shortage of essential stories being told. They’re stories told not from the viewpoint of a parachuting journalist from the coasts, but from reporters who live and work in these communities. It is an authenticity that is rare in journalism. It is refreshing. And local news from community radio is needed now more than ever, by Facebook and the nation.
Beyond making community news more prominent in feeds, Facebook could build trust by investing financially in community radio journalism and by giving training and access to the slate of new features. Not every community radio station may be able to take advantage of such support, but for those willing and able, a powerful ally can only lift up local voices. Facebook has a unique power it can wield for the betterment of community media.
While details for independent publishers remain sketchy, a process for publishers to submit feeds and stories is expected. One can hope Facebook has learned from the firestorm during its last foray into news. It has launched an initiative to improve news delivered on its platform, and community radio stations should be active in getting themselves listed. Facebook should also take this journalism seriously. I love the Washington Post, Fox News and the like as much as anyone, but Americans deserve the richness community media offers.
Phasing Quadrature Amplification
Two things have been overlooked with phasing amplitude modulation. One is the importance of pulse modulation; the other is that logic gates can be used for analog signal processing. Both were new areas to explore, along with the question of how far it was possible to go with these ideas.
There are three types of pulse modulation that can be used to build other waveforms: pulse width modulation (PWM), pulse phase modulation (PPM) and pulse location modulation (PLM). Whereas PPM can be used to generate the other two forms, PWM carries only amplitude information and PLM only phase information. So the choice of form depends on what is required: amplitude modulation, frequency modulation or both.
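As a rough sketch of the three pulse forms (the edge-timing conventions, unit period and function names below are my own simplifications for illustration, not from the article):

```python
def pwm_edges(sample, period=1.0):
    """PWM: a centered pulse whose width tracks the sample amplitude (0..1)."""
    half = sample * period / 2
    mid = period / 2
    return (mid - half, mid + half)

def ppm_edge(sample, period=1.0):
    """PPM: a single edge; its timing carries amplitude and phase together."""
    return sample * period

def plm_edges(sample, period=1.0, width=0.1):
    """PLM: a fixed-width pulse whose location carries phase information only."""
    start = sample * (period - width)
    return (start, start + width)
```

Note how only PWM moves both edges, which is why it retains amplitude but not phase.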
By using PPM, we can do both phase and amplitude modulation, as demonstrated with this new phasing modulator amplifier design. This technique is able to work with both radio frequencies (RF) and with light at optical wavelengths.
Over the three years I have been working on this phase modulation technique, each build has served as a test case to prove and evaluate the findings. Once that is done, the process is repeated again and again, making small improvements with each iteration.
Throughout this research process, it was not possible to go on the internet and see how this technology should work. By being first, there is no limitation on what can and cannot be done. The downside is that it takes a large amount of time to make small amounts of progress.
I also had ongoing support from Stephen Nitikman at our local college electronic labs, working through many different ideas throughout this process.
Once everything was working in class D, this amplifier was pushed into class E and I replaced the low-pass filter with a bandpass filter. The negative was the carrier frequency of 65.536 kHz; it was too low to be received on an AM broadcast radio.
The next step was to increase the frequency again to go above 150 kHz, to fall within the longwave band. I found that PWM was a limiting factor to modulating the carrier, so it was time to move away from using PWM and to try again with PPM. This is how the two unknown classes of switching amplification were found. At this stage, I needed to do more research into other forms of switching amplification and could not find any match to what I was working with.
After these experiments, I added in a field-programmable gate array (FPGA) into the circuit and used its PPM waveforms as a starting point. It was then possible to modify these pulses with logic gates to build a phasing modulator.
Looking at what was done with the Tayloe mixer and taking a new approach is where the Taylor modulator came from. It is far more than a simple switching RF mixer. The Taylor modulator takes the analog building blocks and converts their analog stages into logic equivalents.
This is an interesting area of discovery that falls between both analog and digital technologies, letting us take the best parts of both to work with. Once I worked out the required logic blocks and how they would go together to build the analog digital modulator (ADM), I soon found it was possible to use it with in-phase and quadrature (I & Q) inputs.
REQUIREMENTS
There is, in my view, a need for a transmitter that has lower total harmonic distortion (THD) and higher amplifier efficiency than any broadcast transmitter that is in production. What is the best way of going about designing such a device?
With AM, there are a number of stages that present problems in reaching these aims. The way to move forward is to look at other ways to generate the desired type of modulation to eliminate many of these shortcomings.
The power amplifier would need to operate in a switching configuration for the highest level of conversion efficiency from the DC input to the RF output stage. The only way this could be done would be by using some form of phasing modulator in combination with a switching amplification process.
Let’s take a look at a couple of current I and Q mixer designs.
Phasing Modulator Version 1
Fig. 1: Basic phase modulator
The most common type of phasing modulator is made up of two balanced mixers offset by a 90° phase-shift network. The oscillator is fed into this phase-shift network, and each of the two inputs is driven via a low-pass filter. The two outputs are then combined and fed through a bandpass filter, leaving only the desired frequency. See Fig. 1.
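The arithmetic behind Fig. 1 can be checked numerically. In this hedged sketch (single-tone inputs and variable names are my own), feeding the two mixers with I = cos(ωa·t) and its 90°-shifted copy Q = sin(ωa·t) collapses the combined output to a single sideband at ωc + ωa:

```python
import math

def phasing_modulator(i_in, q_in, wc, t):
    """Two balanced mixers driven 90 degrees apart, outputs combined."""
    return i_in * math.cos(wc * t) - q_in * math.sin(wc * t)

wa = 2 * math.pi * 1_000      # 1 kHz audio tone
wc = 2 * math.pi * 100_000    # 100 kHz carrier

for t in (0.0, 1.3e-5, 4.7e-5, 9.9e-5):
    out = phasing_modulator(math.cos(wa * t), math.sin(wa * t), wc, t)
    # cos(a)cos(b) - sin(a)sin(b) = cos(a + b): only the upper sideband remains
    assert abs(out - math.cos((wc + wa) * t)) < 1e-9
```

Swapping the sign of the Q term would select the lower sideband instead.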
Tayloe Mixer Version 2
Fig. 2: Tayloe mixer.
The other common type of phasing modulator is the Tayloe mixer, whereby the phase offset is done in logic by a divide-by-four, generating in this case four phase angles: 0°, 90°, 180° and 270°. The mixing is done with an analog switch, rebuilding the desired output frequency; as with the other type, this is then run through a bandpass filter. See Fig. 2.
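A digital caricature of that divide-by-four commutation (a toy model of my own, not the analog switch itself): at four times the carrier rate, routing successive samples to the four phase legs is equivalent to multiplying by quarter-rate cosine and sine sequences:

```python
I_SEQ = (1, 0, -1, 0)   # cos at fs/4: the 0, 90, 180, 270 degree positions
Q_SEQ = (0, 1, 0, -1)   # sin at fs/4

def tayloe_commutate(samples):
    """Route each sample to the I and Q legs selected by a divide-by-four."""
    i_leg = [s * I_SEQ[n % 4] for n, s in enumerate(samples)]
    q_leg = [s * Q_SEQ[n % 4] for n, s in enumerate(samples)]
    return i_leg, q_leg
```

Because the multipliers are only 0 and ±1, the "mixing" reduces to switching and sign inversion, which is what makes the hardware so simple.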
BACKGROUND
The phasing modulator has been around since the 1940s. In its early form, it was used to generate SSB as a more efficient transmission format than the AM that was widely used at that time. This is when we started working with in-phase and quadrature inputs to represent each part of the waveform as phase and amplitude.
While AM radio broadcasts have been around for more than 100 years now, the basic idea remains the same, with many improvements made over time. AM radio sound quality has also changed over time. The biggest impact came with the invention of the superheterodyne receiver and its limited bandwidth, a design feature to increase selectivity and reduce adjacent-channel interference, which has limited the audio frequency response to below 7 kHz. This is only one of the factors that have an impact. The others are the overprocessing of modulating audio and the poor linearity of modulator and RF amplifier stages.
Up until now, we have been generating various waveforms and measuring the effects of the pulse widths to work out the minimum required bandwidth. The process described herein works the opposite way and uses pulses to generate various waveforms. This technique is able to work both ways.
This type of quadrature amplification was invented in 2017. After experimenting with an optical road safety system called the Electronic Eye Project, it was soon discovered that the same process could be modified to work at radio frequencies. This form of switching amplification is made up of two parts, one being a phasing modulator using in-phase and quadrature inputs, the other a switching output stage that acts as the amplifier. For this process to work, it must have a minimum of four pulses: two for the in-phase component, positive- and negative-going, and the same for the quadrature component.
Classes of Amplification
From the beginning of electronic amplification devices, there was a need to classify how the amplification process is done. In the analog classes, this was worked out by using angles to specify the on time in degrees. So you had Class A, which conducts for all 360° of the cycle; Class B, where each of two devices conducts for 180° of the cycle; and Class C, which conducts for just a few degrees of the cycle and uses an L-C tuned circuit to restore the full cycle. With switching amplification, classification is based on the type of switching and the way the output filtering is done.
Types of Pulse Modulation
Fig. 3: Basic pulse waveforms.
PWM has the same information on both sides of the pulse, but mirrored, or 180° out of phase, so the phase information is canceled out, leaving just amplitude information. By converting PWM to PPM by removing one side, we keep all the encoded information as well as the all-important phase information. This is, in a way, like amplitude modulation, where sidebands appear on each side of the carrier but just one sideband is needed to convey the information.
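The claim that double-edge PWM cancels phase, while a single retained edge keeps it, can be shown with a toy model (normalized unit-period timing and names are my own):

```python
def pwm_edges(amplitude, center=0.5):
    """Symmetric PWM: both edges move equally about a fixed center."""
    return center - amplitude / 2, center + amplitude / 2

def to_ppm(edges):
    """'Removing one side': keep only the trailing edge as a PPM event."""
    return edges[1]

for a in (0.1, 0.3, 0.6):
    rise, fall = pwm_edges(a)
    assert abs((rise + fall) / 2 - 0.5) < 1e-12  # midpoint fixed: no phase info
    assert abs((fall - rise) - a) < 1e-12        # width still carries amplitude
    assert abs(to_ppm((rise, fall)) - (0.5 + a / 2)) < 1e-12  # one edge keeps both
```

The fixed midpoint is the pulse-domain analog of the redundant second sideband.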
PWM Amplification
Class D and I are switching amplifiers. Class D uses PWM. This process chops the sine wave into wide or narrow pulses. The widest point of the pulse is at the peak of the sine wave and the opposite at the minimum point. With Class I, there are two in-phase PWM carriers that are connected to a common clock, using a differential process where one input is offset to the other by 180°. This means the audio input needs to be phase-shifted by 0° and 180° to drive each PWM input.
Both classes of amplification, therefore, are linear. What goes in comes out with very good efficiency. These are known as switching classes, and all require filtering after their output stages to remove unwanted harmonics. In class D and I, a low-pass filter is used.
The efficiency of these classes comes from the output device being turned hard on and off, minimizing power being dissipated within the switching device.
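As a rough illustration of the Class D chopping described above (the triangle carrier and 64-step resolution are my own simplifications, not from the article), naturally sampled PWM can be sketched as a comparison against a triangle wave:

```python
def class_d_pwm(sample, steps=64):
    """One carrier period of Class D switching: the output is 'on' while the
    audio sample exceeds a triangle carrier sweeping +1 -> -1 -> +1."""
    out = []
    for k in range(steps):
        phase = k / steps
        tri = 4 * abs(phase - 0.5) - 1   # triangle wave, -1..+1
        out.append(1 if sample > tri else 0)
    return out

def duty(sample):
    pulses = class_d_pwm(sample)
    return sum(pulses) / len(pulses)

# Widest pulse at the sine peak, narrowest at the minimum:
assert duty(1.0) > duty(0.0) > duty(-1.0)
```

The low-pass filter after the output stage then recovers the average, i.e. the duty cycle, as the analog waveform.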
Quadrature Amplification
Quadrature amplification starts out with two signals that have the same frequency and are offset by 90°, which is expanded out to four phase angles with 90° offsets (0°, 90°, 180° and 270°). Unlike Class D, quadrature amplification works at a minimum of four times the highest frequency, where Class D works at a minimum of two times the highest frequency.
Another difference from the other switching classes is that quadrature amplification uses PPM and not PWM. The latter has no phase information and is therefore used to vary only the amplitude. However, if you remove one side, you end up with both the phase and amplitude components. In quadrature amplification, the amplitude part is not used.
Phase information is processed within logic gates and by adding I and Q pulses together, and with that, it is possible to rebuild any type of analog waveform. This is where Nyquist is very misleading, stating you only need two pulses to regenerate a sine wave. This is not true for phase integrity, where you need a minimum of four. This is the key difference between quadrature amplification and what happens in Class D and many other switching classes.
Class P and Q
Fig. 4: Class Q on the left with Class P on the right, where the sine and cosine swap based on what sideband information is required. Both of these classes are based on pulse phase modulation.
Class P and Q are unique in that they are based on phasing principles, so you will have sine and cosine parts to the step waveform. These amplifiers employ four pulses as parts of the generated analog waveform: two positive-going and two negative-going. This approach is used in these new forms of amplification, moving on from the limitations of Class D and the two-times-clock technique.
There are two forms of quadrature amplification, which I will call Class P and Class Q. In Class P (pulse), you have four PPM pulses that are offset by 90° from each other. In Class Q (quadrature), each side of the pulse has the in-phase and quadrature information.
Fig. 5: Prototype testing with an oscilloscope.
In Class P, each pulse must have less than 25% on time, and there is a gating window that the pulse must fall within. The PPM, therefore, is between 0% and a maximum of 25%, and it must be triggered to start at 0°, 90°, 180° and 270°. With PLM, the pulse just needs to be within the gating window. The output waveform, therefore, has 0° and 90° positive-going pulses, while 180° and 270° are negative-going. Possible uses for Class P would be applications where you need an extra level of processing between input and output stages, whereas Class Q has the higher efficiency of the two.
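A timing sketch of that Class P scheme over one carrier cycle, in degrees (the data layout and function are my own invention for illustration; pulse widths are expressed as a fraction of the full cycle and must stay under the 25% window limit):

```python
def class_p_cycle(i_width, q_width):
    """Four gating windows per cycle; pulses are triggered at 0/90/180/270 deg.
    Widths are fractions of the full cycle and must be below 25% (90 deg)."""
    assert 0.0 <= i_width < 0.25 and 0.0 <= q_width < 0.25
    pulses = []
    for start, polarity, width in ((0, +1, i_width), (90, +1, q_width),
                                   (180, -1, i_width), (270, -1, q_width)):
        end = start + width * 360
        assert end <= start + 90        # pulse stays inside its gating window
        pulses.append((start, end, polarity))
    return pulses

cycle = class_p_cycle(0.10, 0.15)
assert [p[2] for p in cycle] == [1, 1, -1, -1]   # 0/90 positive, 180/270 negative
```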
With Class Q, the maximum on time is 50%, at which point you end up moving into Class E (square wave). Therefore, when amplifying a modulated signal, you will always be at less than the maximum of 50% on the positive- and negative-going cycles to provide room for modulation. There is a sharp cutoff between the linear and nonlinear zones: nonlinearity starts to have an impact above a 25% average pulse width, and by 30% the behavior is mostly nonlinear. Another way to look at Class Q is that it provides the linearity of Class A with the efficiency of Class E, making it ideal for many forms of analog and digital modulation systems.
Quadrature amplification can also be used for audio applications, but there is no real advantage over existing classes like D and I, so my focus has been on RF applications where I and Q inputs are used.
The phasing technique used is the same for both classes. The only difference is in the pulse processing stage of the modulator. In Class P, you have four time slots, one for each of the angles, where one side of each pulse is modulated (PPM). The location within that time slot can also vary, using pulse position modulation. In Class Q, each side is modulated: the positive side carries two parts of the information and the negative side the other two. With Class Q, the 0° and 90° phases are set in the pulse processing stage but are not so important in the pulse converter stage.
As with Class D and all the other switching classes, output filtering becomes very important to rebuild the analog waveform. Both Class P and Q use low-pass, bandpass or a combination of both.
The table shows amplifier grouping types.
Table 1: Amplifier Grouping Types
A Class Q AM Broadcast Transmitter
Using one dual device and doubling the frequency, I then did a divide-by-two in logic, bringing the operating frequency back down to 660 kHz from 1.32 MHz. This version modulated both digital and analog waveforms with very good linearity. For analog testing, I used AM Stereo (C-QUAM); for digital, Digital Radio Mondiale (DRM) at 64QAM.
Fig. 6: ADM design version 4, modulating QPSK. Fig. 7: ADM design version 4, modulating 16QAM.
The current prototype has elements of all the other versions as well as new ideas for the first fully working AM transmitter. In this configuration, you are able to operate up to a maximum frequency of 10 MHz, which comfortably covers the AM broadcast band of 540 to 1700 kHz. All the testing was done on three frequencies, 660, 1110 and 1500 kHz, where gaps were found between local radio stations. For output power, I was getting a maximum of 200 watts at 100% modulation, using an LDMOS switching device.
Operating in I and Q Mode
Operating in I and Q mode with DRM using an offset of 12 kHz, there is no issue generating any waveform type, regardless of the type of information being sent, analog or digital. A waveform as complex as COFDM can easily be modulated. The only limitation is the linearity of the phase modulators used. This is why the THD is so important.
Fig. 8: ADM design version 5, modulating QPSK. Fig. 9: ADM design version 5, modulating 16QAM.
Unlike other types of analog RF amplifiers, this configuration uses very nonlinear amplification, with just two states: on and off. So phase noise and phase distortion effects need to be minimized wherever possible. This is why so much negative feedback was used in various parts of this circuit.
Fig. 10: Two-tone test.
From Fig. 10, you can see that the version using two phase modulators with better switching output devices was an improvement over version 4, which used the four-phase modulator design. This was due to version 4's switching power MOSFETs, which had a higher amount of phase distortion.
In version 5, a newer design was used for the negative feedback path, using two PWM signals through a low-pass filter driving back into each of the inputs of the phase modulators. Quadrature amplification is an ultralinear process, where most of the distortion takes place in the phase modulator stages (converting the analog inputs to PPM or PWM). With ongoing improvements, I am sure it is possible to bring this level of distortion down, closer to 0.5% at 100% modulation.
NEXT VERSION
Fig. 11: Two-tone test at 750 Hz and 1 kHz.
I now have my first working product based on the experimental work done in all the previous versions. The next version is a 100-watt model that operates in Class Q with a small number of improvements, such as a new type of phase modulator design providing a wider frequency range from LF all the way up to HF. It also has a built-in audio compander stage just before the preemphasis to provide improved signal-to-noise performance on the receive side. The plan is to go with this design for on-air testing here in Toronto this year.
Fig. 12: 99% modulation at 1 kHz.
The 1 kW model uses the NXP MRFX1K80H device. For higher efficiency, I am working with a switched DC power supply rail to operate in Class G and Q, using a combination of techniques from the older versions with the flexibility of a common hardware layout. Quadrature amplification is a fully scalable process, making power levels well above 1 kW possible with minimal design changes.
This transmitter design is lost on your average AM receiver. I am using a Denon TU-680NAB receiver connected to a Pioneer EX-9000 expander, which is providing good off-air performance. With renewed interest in AM stereo, I hope manufacturers will soon get the message and start making receivers again; it is not that hard to do in a single DSP chip these days.
Grant Taylor started experimenting with a simple FM transmitter in high school. He spent the next few years experimenting with home-made television equipment within amateur radio. From there, he worked repairing and installing outside broadcast links in New Zealand, which led to working on local radio and television infrastructure projects. He experiments with new technologies that have applications in broadcasting.
WorldCast Supplies Audio Transmission for Purbeck Coast FM
Purbeck Coast FM, which began broadcasting earlier in 2019 in Dorset, United Kingdom, has chosen WorldCast Systems as the supplier for its audio transmission system. WorldCast Systems’ U.K. distributor Baudion managed the project.
Needing a system to link its studios with the transmitter site 4 km away, Purbeck went with WorldCast’s studio-to-transmitter-link codecs and an Ecreso FM 300 W transmitter. An APT IP Silver encoder was installed at the studio, while an APT IP Silver decoder was put at the transmitter site. The connectivity uses two IP paths, one through a microwave radio link and a second via internet VPN.
Purbeck shares its transmission site with other FM stations, so in order to comply with U.K. regulator Ofcom’s requirements, WorldCast Systems is supplying an additional custom, tuned filter to remove unwanted intermodulation products.
The station is using the Ecreso transmitter’s backup audio players as a program source, which allows the STL codec configuration to be monitored and optimized prior to the commencement of broadcasting from the studios.
Make the Most of Your Uncompressed Opportunities
The authors are the founders, respectively, of StreamGuys and Barix AG.
Reliable urban performance is particularly important for competing with satellite, which often has dropouts even in cities with terrestrial repeaters.
As with most things in the broadcast universe, the transition from legacy to IP workflows has been gradual. In radio, this is perhaps best represented in the STL category.
For one thing, IP networks were uncharted, unproven territory for audio transport. The less reliable nature of IP as a transport medium versus tried-and-true T1/E1 lines was an immediate concern for broadcasters. From dropped packets to network outages, time spent off the air is money and listeners lost.
But there were other concerns as well. Working with IP meant learning an entirely new operation; configuration processes often required IT specialists to open firewalls and establish IP addresses on send (encode) and receive (decode) devices — a starting point that caused major frustration and confusion for many. This would grow even more complex for broadcasters seeking to adopt IP for point-to-multipoint architectures such as program syndication.
Once operational with live, local area connections, these send and receive devices, along with other boxes in the architecture that began to speak digital, required a great deal of local management and monitoring to ensure consistent reliability. That required being on-premise to manage all of these systems on the network.
Architecture of an uncompressed reflector and remote encoder system.
Security was also a concern. It remains one, but security continues to grow stronger thanks to more secure solutions and a better understanding of how broadcasters should protect their networks.
Early innovations like the Barix Reflector Service aimed to change these dynamics by providing a plug-and-play solution that simplified configuration, enhanced security and established a future foundation for cloud management. As these challenges have been addressed more strongly and broadcasters transition to IP more aggressively, the next logical question was how to optimize audio quality and support new media services over the network.
Radio has often been an industry of compromise; and with IP transport that compromise has been to the detriment of great-sounding audio. For radio studios and content owners in the adjacent audio production landscape, the focus is on creating high-quality, impactful audio. On the internet, the industry begrudgingly has accepted compressed formats — albeit for good reasons.
MP3 compression was widely accepted when the internet was slow; and in terms of compressed formats, it remains the most reliable when it comes to managing program-associated metadata. Nowadays, connections of 10 Mbps, 100 Mbps and even 5 Gbps are supporting 4K video to consumers, along with more efficient metadata management. It is now possible to send uncompressed streams over once-unthinkable 4G connections, for example, where T1 or better was traditionally necessary.
The question remains: With upload bandwidth no longer a concern, why compromise a radio station’s audio quality with compression?
Compressed formats still have a role in content networking and distribution, but when packaged for last-mile delivery to the consumer, the concept of “no compromise” in the signal chain is enormously important. This, along with a desire to support new media services and business models, makes an increasingly stronger case for broadcasters to move to an uncompressed IP transport service.
MOVING TOWARD GREATNESS
Similar to how broadcasters grew comfortable with IP, operating within the cloud is no longer a technical uncertainty. The transition has been similarly gradual, but the evidence exists that moving to the cloud is operationally sound while also simplifying systems management. It also reduces exposure to security risks, as the devices within the architecture phone home to the CDN or service providers, versus living inside the broadcaster’s network.
For example, there is no longer a need to run encoders on-premise for an uncompressed service. In most cases, the in-studio overhead is reduced to a stable desktop solution — typically well under $1,000.
Today’s premium encoders no longer need to sit inside the studio environment, and instead will reliably take in an input signal and its associated metadata in the cloud. In addition to reducing equipment costs and maintenance, operationally this cloud-based architecture unlocks the potential to mix and match digital signage processors, as well as codecs. The latter provides the flexibility to repackage program audio in HLS or other segmented formats required for the radio affiliate, tower and/or consumer.
The metadata component unlocks a lot of this potential and flexibility at the final production stage. In addition to simplifying encoding into several formats, the presence of metadata provides more information to the listener to visualize and enhance the user experience. That same information also simplifies royalty reporting for the artists.
Enabling the service comes down to a stable, dedicated connection on the WAN interface — the same configuration that an ISP would embrace — that can support a bandwidth payload of 1.4 Mbps. A 1.4 Mbps payload will support uncompressed PCM audio at 44.1 kHz, which reproduces frequencies up to 20 kHz — the limit of human hearing and the standard for compact disc audio. This follows from the Nyquist theorem: a sampled signal can faithfully deliver frequencies up to approximately half of the sampling rate.
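As a quick sanity check, the 1.4 Mbps figure falls straight out of the PCM parameters (16-bit stereo at 44.1 kHz — the CD-audio parameters assumed here):

```python
# Back-of-the-envelope check of the uncompressed PCM payload.
# CD-quality parameters: 44.1 kHz sampling, 16 bits per sample, stereo.
sample_rate_hz = 44_100
bit_depth = 16
channels = 2

bitrate_bps = sample_rate_hz * bit_depth * channels  # raw PCM, no container overhead
nyquist_hz = sample_rate_hz / 2                      # highest reproducible frequency

print(f"PCM bitrate: {bitrate_bps / 1e6:.4f} Mbps")      # 1.4112 Mbps
print(f"Nyquist frequency: {nyquist_hz / 1e3:.2f} kHz")  # 22.05 kHz
```

The raw rate of 1.4112 Mbps sits just under the 1.4–1.5 Mbps provisioning figures cited throughout this article, and the 22.05 kHz Nyquist limit comfortably covers the 20 kHz range of human hearing.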
PCM audio, which represents the starting point of the uncompressed chain, remains a more reliable format for the external IP distribution landscape. While AES67 has come to fruition inside the studio, PCM is still better equipped to tolerate the latencies and network-condition variables of long-haul IP transport; our tests and real-world deployments show latency below one second, with very minimal packet loss.
With more data moving across the network in an uncompressed format, packet loss or slight bandwidth interruptions will have minimal impact on the resulting audio quality.
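To put a single lost packet in perspective, a short calculation (the 1,316-byte payload is an illustrative figure for a typical audio packet, not a measured value from any specific product) shows how little audio each packet carries:

```python
# How much audio one lost packet represents in an uncompressed PCM stream.
# The 1,316-byte payload is an illustrative assumption for a typical
# RTP-style audio packet.
PCM_BITRATE_BPS = 44_100 * 16 * 2   # 1,411,200 bps, CD-quality stereo
payload_bits = 1_316 * 8

packet_duration_ms = payload_bits / PCM_BITRATE_BPS * 1_000
print(f"One packet carries about {packet_duration_ms:.2f} ms of audio")
```

A dropout of a few milliseconds of raw samples is far easier to conceal than a lost compressed frame, which can corrupt a much longer stretch of decoded audio.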
There will come a time when 192 kHz resolution will be more reliable to manage over long distances, but for now, 44.1 kHz PCM provides the high fidelity of an uncompressed audio service with optimal reliability on today’s networks.
SOLVING PROBLEMS
While understanding the path to uncompressed transport is necessary, what matters most to broadcasters is solving problems and supporting new services. Let’s outline some of these scenarios and the value that an uncompressed transport platform delivers in each.
Quality Sourcing
Operating within a cloud workflow requires that the broadcaster send the program audio into the cloud. While this can be achieved with a compressed stream, that signal will require further compression from downstream transcoding or transrating, among other processes. The more the audio is encoded and compressed, the greater the likelihood of stream latency, undesirable audio artifacts and other quality-of-experience issues.
With uncompressed source audio, a single encoding stage will support a varied bouquet of codecs and bitrates required for many consumer formats. And, with one device accommodating all encoding, the outputs are more tightly aligned from a latency perspective. This remains true when outputting different protocols, such as RTMP and HLS, at the encoding stage.
Therefore, working with uncompressed source audio — in addition to enhancing sound quality for audiences — will deliver a wide array of tightly aligned outputs encoded once from the master quality source.
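The generational-loss argument can be sketched with a toy model (the profile names and bitrates below are illustrative examples, not any product’s actual output set): each output derived from the uncompressed master incurs exactly one lossy encode, while outputs derived from an already-compressed contribution feed incur at least two.

```python
# Toy model of lossy-encode generations per output profile.
# Profiles are illustrative examples, not a specific product's output set.
profiles = [
    ("AAC-LC 128k / HLS",   "aac-lc",   128),
    ("HE-AAC v2 48k / HLS", "he-aacv2",  48),
    ("AAC-LC 128k / RTMP",  "aac-lc",   128),
    ("MP3 256k / Icecast",  "mp3",      256),
]

def generations(source_is_compressed: bool) -> list:
    # One lossy encode per output, plus one more generation for the
    # contribution feed itself if the source arrived already compressed.
    base = 1 if source_is_compressed else 0
    return [base + 1 for _ in profiles]

print(generations(source_is_compressed=False))  # uncompressed master: one generation each
print(generations(source_is_compressed=True))   # compressed source: two generations each
```

Every extra generation is a fresh opportunity for artifacts and added latency, which is why encoding once from the master source matters.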
Encoder Upgrades
As referenced earlier, moving encoders to the cloud introduces several new operational efficiencies, in terms of both upgrades and network growth.
On-premises encoders come in two flavors: a hardware device with fixed, limited CPU and RAM resources, and a software solution that typically runs on a PC or Mac. Both have limitations that are amplified when working within an uncompressed environment.
The built-in capabilities of a hardware encoder are typically finite, and upgrades are often limited by what the vendor makes available. Any significant changes, such as adding a new codec or an increase in CPU processing, will likely require replacement of the encoder, with a potentially lengthy configuration process to bring the new system online.
While a software encoder is typically easier to replace, the supporting computer infrastructure hosting the software may require an upgrade. Over the long term, the management of that software, computer hardware and operating system will escalate costs and labor — and potentially put more stress on an already overburdened IT department.
Cloud encoders offer a simpler upgrade path. Most can be resized on the fly to expand computing capacity without wasting resources and power, while also eliminating the need to replace the OS or software. An increase in available CPU, RAM and/or disk resources can be executed through a simple reboot.
Scaling the infrastructure is also much easier in the cloud environment, with greater flexibility to increase the number of encoders efficiently without burdensome integration costs and labor.
Systems Management
The audio contribution and distribution pool continues to broaden, and broadcasters are finding themselves more limited by the locations of their on-premises encoders. For example, a remote contribution application may be limited by the resources and gear of the corresponding studio. Perhaps the content has been supplied to an affiliate that has no control over the master studio.
More specifically, an on-premises encoder increases the challenge of encoding at the right point in the signal chain. If the encoder is not at the precise location the broadcaster desires, encoding at the distribution point to the end user or desired application may not be possible — potentially introducing more than one encoding stage into the workflow.
Encoding in the cloud solves this problem by offering the option to insert the encoding output at any relevant place in the signal chain. If the broadcaster wants to condition and process a signal prior to sending to an affiliate, that affiliate could use an uncompressed master signal to feed their headend. From there, the uncompressed feed can be transported without any encoding required. Instead, a decoder can be supplied that can pass through the unmodified source at very low latency.
Using a cloud encoder also enables the broadcaster to send high- and low-bitrate signals in two formats, such as HE-AAC v2 and AAC-LC — and then output them as RTMP, HLS and Icecast audio sources. A single uncompressed signal at the studio, with a fixed bandwidth rate of 1.4 Mbps, is all that is required — much less than the combined total of sending high and low bitrates for each protocol.
The overarching benefit here is that the management burden at the studio is reduced to one output to support a wide array of audio contribution and distribution requirements.
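A rough sketch makes the trade-off concrete (all compressed bitrates below are assumptions for illustration): the studio uplink stays fixed at 1.4 Mbps no matter how many outputs the cloud produces, while encoding locally adds upstream bandwidth with every new tier or protocol.

```python
# Studio uplink: one fixed uncompressed feed vs. uploading every encoded variant.
# All compressed bitrates below are assumed purely for illustration.
UNCOMPRESSED_UPLINK_KBPS = 1_411.2  # 44.1 kHz / 16-bit stereo PCM

protocols = ["RTMP", "HLS", "Icecast"]

def local_uplink_kbps(tiers_kbps):
    # Encoding at the studio: every protocol/tier combination goes upstream.
    return sum(tiers_kbps) * len(protocols)

two_tiers = local_uplink_kbps([320, 128])         # high/low only
three_tiers = local_uplink_kbps([320, 192, 128])  # add one mid tier

print(f"cloud uplink : {UNCOMPRESSED_UPLINK_KBPS} kbps, fixed for any output count")
print(f"local uplink : {two_tiers} kbps with two tiers, {three_tiers} kbps with three")
```

The key property is not any single total but the trend: the cloud uplink is a constant, while the local-encoding uplink grows with every output added and quickly overtakes it.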
OUT IN THE REAL WORLD
Philadelphia-based WXPN, the public radio service of the University of Pennsylvania, is one example of a major broadcaster that has embraced the benefits of uncompressed audio over IP for program syndication. The broadcaster set out to develop a more sustainable distribution model for its XPoNential Radio channel, leveraging the Reflector Service from StreamGuys and Barix.
XPoNential Radio was originally distributed to affiliates via satellite and offered only for use on HD2 or HD3 channels. WXPN wanted to widen the usage of the channel to include primary broadcast, and while it continues to use satellite for national programming, the station sought an alternative, sustainable distribution model for the smaller-scale XPoNential Radio. Despite cost-effectiveness being one of the station’s motivations, quality and reliability were also key criteria.
The WXPN architecture leverages uncompressed PCM audio, which is transported between the encoding and decoding endpoints across the CDN infrastructure, while link management is simplified through a cloud-based portal. The station has achieved lossless, CD-quality audio enabled by uncompressed delivery to affiliates as far away as Alaska. New affiliates plug in Ethernet, power and audio cables to receive XPoNential Radio programming. Affiliates connect using 1.4 to 1.5 Mbps of bandwidth, which is plenty to receive the uncompressed signal and deliver it to consumers.
Moving the service to the cloud simplifies management, with station personnel able to access the portal to confirm that all clients are connected and streaming. The portal also allows operators to start, stop and configure delivery to each affiliate. Service can also be terminated for any client directly through the management portal.
Affiliates also don’t need any “special” internet connectivity to use the service. A very modest 1.5 Mbps of bandwidth is enough to receive the uncompressed signal, and most consumer-level internet connections are sufficiently reliable and stable. Even WXPN does not require hefty bandwidth regardless of how many affiliates it serves, as the Barix Reflector service takes a single feed from the origin (a Barix codec), with StreamGuys’ delivery network scaling out the bandwidth to reach recipients.
BRINGING IT TOGETHER
As we look deeper into the future, the enhanced reliability and flexibility of an uncompressed IP service will provide a strong value proposition that will be hard to deny. Uncompressed STL will simply deliver T1-like audio quality over IP unhindered by downstream processes like transcoding, while syndicators will save a great deal of money and labor in the transition from satellite to IP for contribution and distribution.
Moving encoders into the cloud will support more formats and services while reducing the systems management burden, both at the studio and elsewhere in the audio contribution and distribution chain. The opportunity to better manage metadata alongside the uncompressed program audio stream will strengthen business opportunities and the consumer experience. And, the adaptability to accommodate even high-resolution formats as network conditions evolve will surely open new doors from both a service provision and listener experience perspective.
Kiriki Delany, a musician, computer geek and multimedia specialist, founded StreamGuys in 2000. Johannes Rietschel, a communications engineer by heart, founded Barix AG in 2000 and serves as CTO.
The post Make the Most of Your Uncompressed Opportunities appeared first on Radio World.
WorldDAB to Spotlight DAB+ Progress at General Assembly
The author is president of WorldDAB.
LONDON — The last 12 months have been an exciting period for DAB digital radio. At the end of last year, the European Union adopted the European Electronic Communications Code (EECC), which will require all new car radios in the EU to be capable of receiving digital terrestrial radio. Shortly afterwards, France confirmed the launch of national DAB+ with the support of all of its major broadcasters.
Patrick Hannon

DEVELOPMENTS
Progress has continued throughout 2019 — in May, Austria launched national DAB+ services and in the summer, Sweden saw the launch of national commercial DAB+.
More established markets have maintained their momentum in driving DAB+ digital radio forward. Following Norway’s switch-off in 2017, Switzerland has confirmed the switch-off of national FM services by the end of 2024; Germany and the Netherlands continue to make strong and steady progress, and the United Kingdom is seeing record levels of digital listening.
Belgium, the country hosting this year’s General Assembly, is also seeing high levels of activity, with both the Flemish (Dutch-speaking) and Wallonia (French-speaking) regions demonstrating their commitment to the growth of DAB+.
[Read: EuroDAB Italia Begins Airing BBC World Service]
A further important development in Europe is the introduction of regulation requiring consumer receivers to include DAB+. Such laws will come into force in Italy and France in 2020, while a similar law — coming into effect in December 2020 — has just been passed in Germany. For WorldDAB, encouraging the adoption of such rules in other markets will be a priority in 2020 and beyond.
Joan Warner, CEO of Commercial Radio Australia, addresses the audience at the 2018 WorldDAB General Assembly.

We are also seeing interesting developments outside of Europe, with numerous markets pursuing trials in the Middle East, North and South Africa as well as Southeast Asia, and more significant developments in Australia and Tunisia. The former is now seeing its highest-ever levels of DAB+ radio being fitted in new cars, while the latter — a potential gateway to the wider Arabic-speaking region — has recently launched the first regular services in North Africa.
PROTECTING RADIO BROADCASTERS
Against this positive background, it’s increasingly clear that broadcasters and policy makers are concerned about the growing power of the tech giants in relation to national, regional and local content providers. This is likely to be a key topic of discussion at this year’s General Assembly. As WorldDAB, our focus will be on highlighting the contribution which DAB+ radio makes toward promoting and protecting the interests of national and local radio broadcasters.
Of course, the digital radio listening experience is evolving, and DAB+ is not the only digital platform. The key to long-term success is to position DAB+ at the heart of broadcasters’ digital strategies, and ensure its unique characteristics are preserved as the radio industry moves forward.
All of the above topics will be covered over the two days of the event held in Brussels, Belgium, and we look forward to seeing as many of you as possible there.
The post WorldDAB to Spotlight DAB+ Progress at General Assembly appeared first on Radio World.