Abstract: First, the MPEG-2 system layer protocol is analyzed, and the application fields and key technical points of DirectShow are introduced. A multi-channel video and audio playback technique based on DirectShow is then proposed, the design of its main modules is given, and a solution to the DirectShow link deadlock problem is described. The method is applied to the real-time playback of multi-channel TV programs and to the collection and collation of program material. Compared with traditional single-channel program playback and collection, it saves hardware cost and improves work efficiency.
Keywords : DVB-S, DirectShow, MPEG-2, demultiplexing
1 Introduction
DVB (Digital Video Broadcasting) is a television broadcasting system proposed by the European standards organization. At present, many provincial TV stations in China have adopted the DVB-S (DVB satellite transmission) standard. DVB-S signal transmission follows the typical multiple channels per carrier (MCPC) model, in which each carrier can carry several different television programs. A traditional digital satellite receiving system can only play and collect one program in the DVB-S signal. This paper proposes a design scheme based on DirectShow for demultiplexing the DVB-S transport stream, extracting and playing multiple video programs, and thereby realizing multi-channel TV program playback on a single monitor. The technology discussed in this paper also has reference value for multimedia applications such as program material collection, network video on demand, and hard disk recorders.
2 MPEG-2 system layer protocol analysis
DVB-S uses MPEG-2 as its multiplex transmission and video coding protocol. The MPEG-2 standard (ISO/IEC 13818) is a coding standard introduced by the Moving Picture Experts Group (MPEG) in 1994. MPEG-2 compression achieves high compression ratios while maintaining high-quality moving images, so MPEG-2 is widely used in digital video broadcasting and digital multimedia.
The MPEG-2 protocol is divided into three main parts: system, video and audio. The system layer protocol describes how multiple data streams are multiplexed and specifies the format for data transmission.
2.1 Transport stream structure
The MPEG-2 system layer defines two data transmission methods: the transport stream (TS) and the program stream (PS). The former is designed for error-prone environments, such as DVB-S transmission over satellite channels, while the latter is designed for environments with few errors, such as DVD discs. The transport stream is a packet-oriented multiplexed stream: each elementary stream (ES) is packetized into a PES stream, the PES streams are multiplexed by the system layer into TS data, and the result is finally packed into fixed-length (188-byte) TS packets for transmission. Each TS packet carries only one type of ES (compressed video, audio, IP data, etc.). The system layer assigns each packet a 13-bit identifier called the PID; within one transport stream, PIDs and elementary streams are in one-to-one correspondence. The frame structure of a TS packet is shown in Figure 1.
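For illustration, the C++ sketch below parses the 4-byte header of a 188-byte TS packet and extracts the fields described above. The struct and function names are chosen here for illustration and are not part of any particular SDK.

```cpp
#include <cstdint>
#include <cstddef>

// Minimal TS packet header fields (ISO/IEC 13818-1, fixed 188-byte packets).
struct TsHeader {
    bool     payloadUnitStart;  // set on the first packet of a PES packet or PSI section
    uint16_t pid;               // 13-bit packet identifier
    uint8_t  adaptationCtrl;    // 01 = payload only, 10 = adaptation only, 11 = both
    uint8_t  continuity;        // 4-bit continuity counter
};

// Parse the 4-byte header of a 188-byte TS packet; returns false on sync loss.
bool ParseTsHeader(const uint8_t* pkt, size_t len, TsHeader& h)
{
    if (len < 188 || pkt[0] != 0x47)          // every packet starts with sync byte 0x47
        return false;
    h.payloadUnitStart = (pkt[1] & 0x40) != 0;
    h.pid              = ((pkt[1] & 0x1F) << 8) | pkt[2];   // 13-bit PID
    h.adaptationCtrl   = (pkt[3] >> 4) & 0x03;
    h.continuity       =  pkt[3] & 0x0F;
    return true;
}
```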
Figure 1
2.2 PSI information and PID mapping
PSI (Program Specific Information) contains multiplexing information of multiple programs, which is the basis for demultiplexing operations. The PSI includes a Program Association Table (PAT), a Program Map Table (PMT), a Network Information Table (NIT), and a Conditional Access Table (CAT).
The PID of the PAT is fixed at 0x0000. The PAT defines the correspondence between each program number in the transport stream and the transport stream packets associated with that program. The PMT provides the mapping between a program number and the PIDs of the elementary streams that make up the program. The NIT is a private section and usually carries information such as the services available to the user, channel frequencies, and the provider and program names. The system layer does not fix the PID of the NIT, so in principle any valid PID value may be used for it. The PID of the CAT is fixed at 0x0001; the CAT appears when the transport stream contains encrypted data and describes the type of conditional access system and other private user information.
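As an illustration of how the PID mapping is bootstrapped, the sketch below walks the program loop of one PAT section and records the program-number-to-PMT-PID pairs. CRC_32 verification and multi-section tables are omitted, and the function name is hypothetical.

```cpp
#include <cstdint>
#include <cstddef>
#include <map>

// Build a program_number -> PMT PID map from one PAT section (table_id 0x00,
// carried on PID 0x0000). 'sec' points at table_id, i.e. just after the
// pointer_field. A simplified sketch without CRC_32 checking.
bool ParsePat(const uint8_t* sec, size_t len, std::map<uint16_t, uint16_t>& pmtPids)
{
    if (len < 12 || sec[0] != 0x00)                     // table_id must be 0x00 for a PAT
        return false;
    uint16_t sectionLen = ((sec[1] & 0x0F) << 8) | sec[2];
    // The program loop starts at offset 8 and ends 4 bytes (CRC_32) before the section end.
    size_t end = 3 + sectionLen - 4;
    for (size_t i = 8; i + 4 <= end && i + 4 <= len; i += 4) {
        uint16_t progNum = (sec[i] << 8) | sec[i + 1];
        uint16_t pid     = ((sec[i + 2] & 0x1F) << 8) | sec[i + 3];
        if (progNum != 0)          // program_number 0 points to the NIT, not a PMT
            pmtPids[progNum] = pid;
    }
    return true;
}
```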
3 Introduction to DirectShow
DirectShow is part of the Microsoft DirectX framework. Its implementation is based on COM (Component Object Model), which gives it good extensibility and reusability.
DirectShow focuses on the processing of multimedia data, which is characterized by large data volumes, strict audio and video synchronization requirements, and numerous media formats. The DirectShow framework completely encapsulates hardware such as graphics cards and sound cards, so developers do not have to worry about how the hardware works or about the details of driver programming. On the other hand, the relative independence between DirectShow components allows developers to focus on the implementation of their algorithms without thinking too much about data transfer between components, so complex multimedia processing can be done efficiently with relatively simple code.
3.1 Filter link
The Filter is the most basic building block of DirectShow; it is a COM component that performs a specific function. Filters are connected in sequence to form a Filter link. DirectShow manages the entire Filter link through a COM object called the Filter Graph Manager. The application uses the Filter Graph Manager to control the state of the link, such as play, pause or stop. By function, Filters can be divided into three types: Source Filter, Transform Filter and Rendering Filter.
Source Filter is used to get data. Data can come from files or real-time data sources such as networks, data acquisition cards, and more.
The Transform Filter receives the data transmitted by the Source Filter and processes it, such as demultiplexing operation, separation of audio and video data, or encoding/decoding.
The main function of Rendering Filter is to send data to the graphics card, sound card for multimedia presentation or output to file for storage.
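As a minimal illustration of how an application drives a Filter link through the Filter Graph Manager, the C++ sketch below creates the graph object and controls it through IMediaControl; building the actual chain of Filters is left as a placeholder. Error handling is reduced to early returns.

```cpp
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

// Create the Filter Graph Manager (itself a COM object) and use it to
// control the state of the link.
HRESULT RunGraph()
{
    HRESULT hr = CoInitialize(NULL);
    if (FAILED(hr))
        return hr;

    IGraphBuilder* pGraph = NULL;
    hr = CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                          IID_IGraphBuilder, (void**)&pGraph);
    if (FAILED(hr)) {
        CoUninitialize();
        return hr;
    }

    // ... add and connect the Source, demultiplexing, decoder and
    //     rendering Filters here (see section 4) ...

    IMediaControl* pControl = NULL;
    hr = pGraph->QueryInterface(IID_IMediaControl, (void**)&pControl);
    if (SUCCEEDED(hr)) {
        pControl->Run();     // the application may also call Pause() or Stop()
        // ... wait for user action or completion ...
        pControl->Stop();
        pControl->Release();
    }

    pGraph->Release();
    CoUninitialize();
    return hr;
}
```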
3.2 Working modes
The DirectShow framework defines two working modes: Push Mode and Pull Mode.
In push mode, data is actively pushed by the Source Filter to the Transform Filter connected to it, which in turn pushes the processed data to the downstream Filter. Push mode is usually used with real-time data. For real-time data sources the data rate may not be constant (as with network media transmission or a video capture card), so in push mode the Source Filter can decide how to pass data to the downstream Filter according to the actual state of the data source.
In pull mode, the Source Filter provides data passively, and the Transform Filter connected to it creates a data thread that actively requests ("pulls") data from the Source Filter. Filters working in pull mode generally read data asynchronously. Pull mode is usually applied to local file playback and media editing.
4 Design
4.1 Principles and processes
The satellite signal is received, amplified and down-converted by the antenna unit and sent to a general-purpose receiver as an intermediate-frequency signal. The receiver demodulates the signal and performs channel decoding. The output transport stream first undergoes PSI analysis to obtain the complete PID mapping; the video and audio data are then separated according to the program association information and sent to the decoders, and finally played on the display terminal. The workflow is shown in Figure 2.
Figure 2
Everything from data acquisition and separation to display can be implemented in one Filter link: the Source Filter obtains the transport stream data output by the receiver; the demultiplexing Filter analyzes the PSI, performs the demultiplexing, and sends the video and audio data to the decoder Filters.
The complete Filter link diagram is shown in Figure 3.
Figure 3
Each box in the figure represents a Filter. The Source Filter has no input and only one output. Connected to it is the demultiplexing Filter, which has one input and multiple video and audio outputs. The video data is sent to the MPEG-2 video decoder Filter and the audio data to the audio decoder Filter; the outputs of the decoders are connected to the Render Filters.
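A possible way to assemble such a link in code is sketched below. It assumes the individual Filters have already been created (for example with CoCreateInstance), only builds the first video branch, and omits error handling and pin Release() calls for brevity.

```cpp
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

// Return the first pin of the given direction on a filter (the caller
// should Release() it; omitted below for brevity).
IPin* GetPin(IBaseFilter* pFilter, PIN_DIRECTION dir)
{
    IEnumPins* pEnum = NULL;
    IPin* pPin = NULL;
    if (FAILED(pFilter->EnumPins(&pEnum)))
        return NULL;
    while (pEnum->Next(1, &pPin, NULL) == S_OK) {
        PIN_DIRECTION thisDir;
        pPin->QueryDirection(&thisDir);
        if (thisDir == dir)
            break;
        pPin->Release();
        pPin = NULL;
    }
    pEnum->Release();
    return pPin;
}

// Assemble the video branch of the link in Figure 3:
// source -> demultiplexer -> MPEG-2 decoder -> renderer.
// The audio branch is built the same way from the demultiplexer's audio pins.
HRESULT BuildVideoBranch(IGraphBuilder* pGraph,
                         IBaseFilter* pSrc, IBaseFilter* pDemux,
                         IBaseFilter* pVideoDec, IBaseFilter* pVideoRender)
{
    pGraph->AddFilter(pSrc, L"TS Source");
    pGraph->AddFilter(pDemux, L"TS Demultiplexer");
    pGraph->AddFilter(pVideoDec, L"MPEG-2 Video Decoder");
    pGraph->AddFilter(pVideoRender, L"Video Renderer");

    // Connect() lets the Filter Graph Manager insert intermediate filters
    // if the media types do not match directly.
    HRESULT hr = pGraph->Connect(GetPin(pSrc, PINDIR_OUTPUT),
                                 GetPin(pDemux, PINDIR_INPUT));
    if (FAILED(hr)) return hr;
    hr = pGraph->Connect(GetPin(pDemux, PINDIR_OUTPUT),      // first video output pin
                         GetPin(pVideoDec, PINDIR_INPUT));
    if (FAILED(hr)) return hr;
    return pGraph->Connect(GetPin(pVideoDec, PINDIR_OUTPUT),
                           GetPin(pVideoRender, PINDIR_INPUT));
}
```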
4.2 Filter working mode selection
The processing PC obtains the output of the general-purpose receiver through a high-speed data acquisition card, so from the point of view of the Filter link the data source is a real-time source. Push mode is therefore chosen as the working mode of the entire Filter link.
5 Main module design and implementation difficulties
5.1 Source Filter
The Source Filter encapsulates the interface functions of the capture card and uses double buffering, polling to check whether a buffer is full. When a buffer is full, its data is sent to the demultiplexing Filter connected to the Source Filter.
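A minimal sketch of such a Source Filter output pin is shown below, based on the CSource/CSourceStream classes from the DirectShow base class library. The capture-card functions CaptureBufferFull() and CaptureReadBuffer() are hypothetical stand-ins for the acquisition card's driver interface.

```cpp
#include <streams.h>   // DirectShow base classes (CSource, CSourceStream)

// Hypothetical capture-card driver calls, assumed for illustration only.
extern bool CaptureBufferFull(int idx);                   // poll: is buffer idx full?
extern void CaptureReadBuffer(int idx, BYTE* dst, long len);

// Output pin of the Source Filter: fills each media sample with transport
// stream data taken from one of two alternating (double) capture buffers.
class CTsSourcePin : public CSourceStream
{
    int m_curBuf;   // index of the buffer currently being polled (0 or 1)
public:
    CTsSourcePin(HRESULT* phr, CSource* pFilter)
        : CSourceStream(NAME("TS Out"), phr, pFilter, L"Out"), m_curBuf(0) {}

    // Called repeatedly on the streaming thread created by CSourceStream.
    HRESULT FillBuffer(IMediaSample* pSample)
    {
        // Poll until the current buffer is full; the capture card keeps
        // writing into the other buffer in the meantime.
        while (!CaptureBufferFull(m_curBuf))
            Sleep(1);

        BYTE* pData = NULL;
        pSample->GetPointer(&pData);
        long len = pSample->GetSize();
        CaptureReadBuffer(m_curBuf, pData, len);   // copy the TS data out
        pSample->SetActualDataLength(len);

        m_curBuf ^= 1;                             // switch to the other buffer
        return S_OK;
    }

    // Media type negotiation and allocator setup are omitted from this sketch.
    HRESULT GetMediaType(CMediaType* pmt);
    HRESULT DecideBufferSize(IMemAllocator* pAlloc, ALLOCATOR_PROPERTIES* pProp);
};
```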
5.2 Demultiplexing Filter
The demultiplexing Filter is the core of the entire Filter link. Its tasks are to analyze the PSI of the transport stream and establish the complete PID mapping; to separate the video and audio data of each program from the transport stream and send them to the corresponding video and audio decoders; and to receive control information from the application. The processing of each transport stream packet by the demultiplexing Filter is shown in Figure 4.
Figure 4
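The sketch below illustrates the per-packet logic of Figure 4 in simplified form, using the TsHeader/ParseTsHeader helpers from the section 2.1 sketch and std:: containers in place of the Filter's real PSI state. UpdatePatTables(), UpdatePmtTables() and DeliverToDecoder() are placeholders for the corresponding processing steps.

```cpp
#include <cstdint>
#include <cstddef>
#include <map>
#include <set>

// Simplified per-packet demultiplexing step. DeliverToDecoder() stands in for
// pushing a sample down the matching output pin of the demultiplexing Filter.
class TsDemux {
public:
    void ProcessPacket(const uint8_t* pkt)
    {
        TsHeader h;
        if (!ParseTsHeader(pkt, 188, h))
            return;                               // sync lost: drop the packet

        if (h.pid == 0x0000) {
            UpdatePatTables(pkt, h);              // PAT: refresh program -> PMT PID map
        } else if (m_pmtPids.count(h.pid)) {
            UpdatePmtTables(pkt, h);              // PMT: refresh ES PID -> output mapping
        } else {
            auto it = m_pidToOutput.find(h.pid);  // ES data of a selected program?
            if (it != m_pidToOutput.end())
                DeliverToDecoder(it->second, pkt, 188);
        }
    }
private:
    std::set<uint16_t>      m_pmtPids;            // PMT PIDs learned from the PAT
    std::map<uint16_t, int> m_pidToOutput;        // ES PID -> output pin index
    void UpdatePatTables(const uint8_t* pkt, const TsHeader& h);
    void UpdatePmtTables(const uint8_t* pkt, const TsHeader& h);
    void DeliverToDecoder(int outputIndex, const uint8_t* pkt, size_t len);
};
```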
5.3 Solving the link deadlock problem
A Filter link usually requires a transfer thread. In push mode this thread is usually created by the Source Filter; it pushes the data to the demultiplexing Filter, the processed video and audio data are sent on to the decoders, and the result is finally played on the terminal. The whole process runs in a single thread.
With multiple outputs, this single-threaded push model causes the link to deadlock. The key to solving the problem is to create a dedicated transfer thread for each video or audio output, and the threads should be created in the demultiplexing Filter rather than in the Source Filter (because the Source Filter has only one transport stream output).
In the DirectShow SDK, such multi-threaded delivery can be implemented with the COutputQueue class, which resolves the deadlock problem. The method is to declare a COutputQueue object for each output in the demultiplexing Filter and to call its Receive(IMediaSample* pSample) function whenever a sample is to be delivered. The object maintains its own delivery thread and sample queue: Receive() only enqueues the sample, and the queue's thread pushes it downstream and removes it from the queue once delivery completes.
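A possible use of COutputQueue in the demultiplexing Filter's output pins is sketched below. The function names are illustrative; the constructor arguments shown (bAuto = FALSE, bQueue = TRUE) request a dedicated worker thread for the pin, which is what prevents one blocked downstream branch from stalling the others.

```cpp
#include <streams.h>   // DirectShow base classes (COutputQueue)

// Create a COutputQueue for one output pin of the demultiplexing Filter once
// the pin is connected to a downstream input pin.
COutputQueue* CreatePinQueue(IPin* pConnectedInputPin, HRESULT* phr)
{
    return new COutputQueue(pConnectedInputPin, phr,
                            FALSE /*bAuto*/, TRUE /*bQueue*/);
}

// Deliver one demultiplexed sample: Receive() only enqueues the sample and
// returns immediately; the queue's own thread pushes it to the decoder.
HRESULT DeliverSample(COutputQueue* pQueue, IMediaSample* pSample)
{
    return pQueue->Receive(pSample);
}
```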
5.4 Video Mixing Renderer 9
The Video Mixing Renderer 9 (VMR9) is a component added in DirectX 9. It is built on Direct3D 9 and makes full use of the graphics card's processing capability, so video mixing and display consume very little CPU time, and multi-channel video playback can be performed efficiently. With the traditional Video Renderer Filter, multiple Render Filters would have to be added to the link to display several videos; the VMR9, by contrast, can accept up to sixteen video inputs. During multi-channel playback each video can be displayed in its own region of the window, and several videos or pictures can be superimposed, for example to add dynamic subtitles or a logo to a program.
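A minimal sketch of adding and configuring one VMR9 instance for several mixed inputs is given below. It uses windowless mode, reduces error handling to early returns, and leaves per-stream placement to a later call.

```cpp
#include <dshow.h>
#include <d3d9.h>
#include <vmr9.h>
#pragma comment(lib, "strmiids.lib")

// Add one VMR9 instance to the graph and configure it to mix several
// video inputs into sub-areas of a single application window.
HRESULT AddVmr9(IGraphBuilder* pGraph, HWND hwnd, DWORD nStreams,
                IBaseFilter** ppVmr9)
{
    HRESULT hr = CoCreateInstance(CLSID_VideoMixingRenderer9, NULL,
                                  CLSCTX_INPROC_SERVER, IID_IBaseFilter,
                                  (void**)ppVmr9);
    if (FAILED(hr)) return hr;

    hr = pGraph->AddFilter(*ppVmr9, L"VMR9");
    if (FAILED(hr)) return hr;

    IVMRFilterConfig9* pConfig = NULL;
    hr = (*ppVmr9)->QueryInterface(IID_IVMRFilterConfig9, (void**)&pConfig);
    if (FAILED(hr)) return hr;
    pConfig->SetRenderingMode(VMR9Mode_Windowless); // draw into the app window
    pConfig->SetNumberOfStreams(nStreams);          // >1 enables mixing, up to 16
    pConfig->Release();

    IVMRWindowlessControl9* pWc = NULL;
    hr = (*ppVmr9)->QueryInterface(IID_IVMRWindowlessControl9, (void**)&pWc);
    if (FAILED(hr)) return hr;
    hr = pWc->SetVideoClippingWindow(hwnd);
    pWc->Release();
    // Each connected stream can later be placed in its own rectangle with
    // IVMRMixerControl9::SetOutputRect().
    return hr;
}
```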
6 Experimental results
PC configuration:
- Operating system: Windows 2000 Professional
- CPU: Pentium 4, 2.4 GHz
- Memory: 1 GB
- Graphics card: integrated on the motherboard, 128 MB video memory
Signal parameters:
- Satellite: PanAmSat 8, 166°E
- Polarization: vertical
- Downlink frequency: 3836 MHz
- Symbol rate: 22000 kBd
- Program provider: TVBS
- Number of programs: 9 (6 encrypted, 3 unencrypted)
Figure 5 shows the display interface with the three unencrypted programs playing simultaneously.
Figure 5
7 Conclusion
The author's innovation: using DirectShow technology combined with a general-purpose receiver, the problem that a single-monitor digital satellite receiving system cannot play multiple programs is solved, reducing hardware cost and making the playback (monitoring) of multimedia programs and the collection of material practical under non-professional conditions. In addition, other functions can be implemented with DirectShow, such as non-linear editing of multimedia files with DES (DirectShow Editing Services). In principle, the design in this paper can also be applied in a network environment: network clients could select, through the PSI information, the TV programs they want to play or record, realizing a network video-on-demand function.
References
1. ISO/IEC 13818-1 (MPEG-2 Systems), ISO/IEC 13818-2 (MPEG-2 Video), ISO/IEC 13818-3 (MPEG-2 Audio).
2. Microsoft. DirectX 9.0 Programmer's Reference, 2002.
3. Lu Qiming. DirectShow Development Guide. Tsinghua University Press, 2003.
4. Shi Jingling, Liu Wangkai, Bai Tao. Development of monitoring software flow chart interface in VC environment. Microcomputer Information, 2004, Vol. 20, No. 4.