I need a feature extension for my bash script, which I use to create HLS (HTTP Live Streaming) streams with ffmpeg. In its current state the script can only output a single subtitle and a single audio track beside the video, even if the source has multiple audio/subtitle languages or channels. So far so good ...
What I need is a script that simply processes all subtitle and all audio tracks from my mp4 input, creating an m3u8 playlist plus snippets for each stream. In the end, all generated m3u8 files (one for each stream) must be referenced in a master m3u8 playlist file so that the user can simply select a track to play using [login to view URL] or shaka-player. From my understanding the workflow is straightforward and everything can be done using ffmpeg/ffprobe:
1. Create an HLS stream for just the video inside the mp4.
2. For each audio track found within the mp4 container, create an HLS stream accordingly.
3. For each subtitle track found within the mp4 container, create an HLS stream accordingly.
Steps 1 and 2 will each output many snippets in .m4s format, while step 3 outputs WebVTT snippets; each step generates a single m3u8 file that references its generated snippets.
Once all that is done, the generated m3u8 playlists (one for each stream, as explained above) need to be referenced in a master m3u8 file.
Please see my current work attached.
The script can be written either as a single Python function or as a bash script (Python is preferred, as it will run inside Celery as a task).
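As a starting point, the track enumeration could look like the sketch below. It parses the JSON that `ffprobe -print_format json -show_streams` emits and groups the streams by type; the sample input here is hypothetical, and in the real task the JSON would come from invoking ffprobe on the mp4.

```python
import json

def classify_streams(ffprobe_json: str):
    """Split ffprobe -show_streams JSON into video/audio/subtitle lists."""
    streams = json.loads(ffprobe_json).get("streams", [])
    by_type = {"video": [], "audio": [], "subtitle": []}
    for s in streams:
        kind = s.get("codec_type")
        if kind in by_type:
            by_type[kind].append(s)
    return by_type

# In the real script, the JSON would come from:
#   ffprobe -v quiet -print_format json -show_streams input.mp4
sample = json.dumps({"streams": [
    {"index": 0, "codec_type": "video", "codec_name": "h264"},
    {"index": 1, "codec_type": "audio", "codec_name": "aac",
     "channels": 2, "tags": {"language": "eng"}},
    {"index": 2, "codec_type": "subtitle", "codec_name": "mov_text",
     "tags": {"language": "ger"}},
]})
info = classify_streams(sample)
```

Each entry keeps the full ffprobe stream dict, so language tags, channel counts, and bitrates stay available for the later per-track steps.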
Video Stream (Single - No Adaptive Streaming needed):
1. HLS video output must be in .m4s format (fragmented mp4 format)
2. HLS video output must have the same resolution and bitrate as the mp4 input (passthrough).
3. HLS video output must use the same codec as the input (h264 or h265), as we also process 10-bit footage.
Simply keep the input file's codec as long as it is h264/h265; otherwise convert it.
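The video requirements above could translate into an ffmpeg invocation like this sketch, which builds the argv rather than running it. The segment length of 6 seconds and the libx264 fallback encoder are assumptions to be adjusted; ffprobe reports h265 as `hevc`, which is why that name appears in the passthrough check.

```python
def video_hls_cmd(src: str, out_dir: str, codec_name: str):
    """Build an ffmpeg argv for a video-only fMP4 HLS rendition.

    Passthrough (-c:v copy) when the source is already h264/hevc;
    otherwise re-encode (libx264 here is an assumed fallback).
    """
    vcodec = (["-c:v", "copy"] if codec_name in ("h264", "hevc")
              else ["-c:v", "libx264"])
    return ["ffmpeg", "-i", src,
            "-map", "0:v:0", "-an", "-sn",     # video only
            *vcodec,
            "-hls_segment_type", "fmp4",        # .m4s fragments
            "-hls_time", "6",                   # assumed segment length
            "-hls_playlist_type", "vod",
            "-hls_segment_filename", f"{out_dir}/video_%04d.m4s",
            f"{out_dir}/video.m3u8"]
```

The command would then be run with e.g. `subprocess.run(video_hls_cmd(...), check=True)` from the Celery task.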
Audio Streams (Multiple):
1. All audio languages need to be processed (ffprobe can be used for parsing).
2. If an audio language is available in 5.1 Dolby Surround, process only Dolby 5.1 Surround (AC3/E-AC3). Otherwise use AAC/HE-AAC stereo.
3. A max bitrate of 384 kbps is allowed for 5.1 Dolby Surround (check the source bitrate).
4. A max bitrate of 256 kbps is allowed for stereo (check the source bitrate).
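The codec/bitrate decision in rules 2-4 can be isolated into a small helper like this sketch. It assumes "5.1" means six or more channels as reported by ffprobe, and that a 5.1 source in a non-Dolby codec should be encoded to E-AC-3; both are assumptions the author may want to adjust.

```python
def pick_audio_encoding(codec_name: str, channels: int, bit_rate):
    """Return ffmpeg audio args per the posting's rules.

    bit_rate is the source bitrate in bps (ffprobe's bit_rate field),
    or None when ffprobe does not report it.
    """
    if channels >= 6:
        cap = 384_000  # 5.1 Dolby Surround limit
        target = min(bit_rate or cap, cap)
        # keep AC3/E-AC3 as-is, otherwise assume E-AC-3 as target codec
        codec = codec_name if codec_name in ("ac3", "eac3") else "eac3"
        return ["-c:a", codec, "-b:a", str(target)]
    cap = 256_000      # stereo limit
    target = min(bit_rate or cap, cap)
    return ["-c:a", "aac", "-ac", "2", "-b:a", str(target)]
```

These args would be combined with `-map 0:a:N` and the same fMP4 HLS flags used for video, one ffmpeg run per audio track.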
Subtitle Streams (Multiple):
1. All subtitle languages need to be processed (the source always comes as tx3g embedded with proper metadata inside the mp4).
2. Output format must be WebVTT (ffmpeg default).
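For the subtitles, one workable sketch is to convert each tx3g (`mov_text`) track to a single WebVTT file; HLS allows a subtitle media playlist with one segment spanning the whole duration, so the per-track m3u8 can be written by hand afterwards. The output naming scheme here is an assumption.

```python
def subtitle_vtt_cmd(src: str, out_dir: str, sub_index: int, lang: str):
    """Build an ffmpeg argv that extracts one subtitle track as WebVTT.

    sub_index is the 0-based subtitle-stream index (ffmpeg's 0:s:N),
    lang the language tag reported by ffprobe (e.g. "eng").
    """
    return ["ffmpeg", "-i", src,
            "-map", f"0:s:{sub_index}",
            "-c:s", "webvtt",
            f"{out_dir}/sub_{lang}.vtt"]
```

A single-segment subtitle playlist then only needs `#EXTM3U`, a `#EXT-X-TARGETDURATION` matching the video duration, one `#EXTINF` entry pointing at the .vtt file, and `#EXT-X-ENDLIST`.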
Playlist (.m3u8) Processing:
- Each track found inside the mp4 file (video, audio, subtitle) must be processed as a single stream with a dedicated m3u8 playlist file and m4s/WebVTT snippets.
- After all tracks have been processed, create a master .m3u8 playlist file and reference all other m3u8 files within that master playlist.
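The master playlist step could be sketched as plain string rendering: audio and subtitle renditions become `#EXT-X-MEDIA` entries in named groups, and the single video rendition becomes one `#EXT-X-STREAM-INF` entry referencing those groups. The group IDs, BANDWIDTH, and CODECS values shown are placeholders the real script would fill from ffprobe data.

```python
def write_master(video_uri, audio_tracks, sub_tracks, bandwidth, codecs):
    """Render a master .m3u8 referencing the per-track playlists.

    audio_tracks / sub_tracks: lists of (language, playlist_uri) pairs.
    bandwidth / codecs describe the single video rendition (no ABR ladder).
    """
    lines = ["#EXTM3U", "#EXT-X-VERSION:7"]
    for i, (lang, uri) in enumerate(audio_tracks):
        lines.append(
            f'#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aud",NAME="{lang}",'
            f'LANGUAGE="{lang}",DEFAULT={"YES" if i == 0 else "NO"},'
            f'URI="{uri}"')
    for lang, uri in sub_tracks:
        lines.append(
            f'#EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",NAME="{lang}",'
            f'LANGUAGE="{lang}",DEFAULT=NO,URI="{uri}"')
    lines.append(
        f'#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},CODECS="{codecs}",'
        f'AUDIO="aud",SUBTITLES="subs"')
    lines.append(video_uri)
    return "\n".join(lines) + "\n"
```

Writing the returned string to `master.m3u8` next to the per-track playlists gives players like shaka-player the track menu to select from.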