How to force YouTube to play 5.1 surround sound PC->HDMI->Receiver - HotUKDeals
Expired

How to force YouTube to play 5.1 surround sound PC->HDMI->Receiver

£0.00 @ Youtube
kester76
Posted 2 years, 4 months ago
I can play full 5.1 surround sound over HDMI to my receiver with all the HD audio extensions. This works with games/Blu-rays/etc in Windows 8.1 & 7 Ultimate, but I can't get YouTube to output anything but stereo through Windows: 5.1 over analogue, only stereo over HDMI. Is this a Flash problem, or can you force YouTube to play 5.1 through HDMI?
Best Answer
YouTube videos are encoded in stereo. Set your receiver to Pro Logic or All Channel Stereo (or something along those lines). You won't get discrete 5.1 from YouTube.

Edited By: rev6 on Nov 22, 2014 21:47

All Responses

40 responses (thread locked)
#1
YouTube videos are encoded in stereo. Set your receiver to Pro Logic or All Channel Stereo (or something along those lines). You won't get discrete 5.1 from YouTube.

Edited By: rev6 on Nov 22, 2014 21:47
#2
rev6
YouTube videos are encoded in stereo. Set your receiver to Pro Logic or All Channel Stereo (or something along those lines). You won't get discrete 5.1 from YouTube.
https://support.google.com/youtube/answer/1722171?hl=en-GB
AAC-LC supports 5.1; do I need a codec to decode it before output over HDMI?
#3
If YouTube did output 5.1, it would be PCM. You'd really want to bitstream to the AV receiver so it can decode it there.
#4
rev6
If YouTube did output 5.1, it would be PCM. You'd really want to bitstream to the AV receiver so it can decode it there.

It could be decoding it with a QuickTime codec, I guess, and this is bypassed if playback is HDMI PCM.
My receiver doesn't support 6-channel AAC-LC. I'll keep looking, thanks for the reply.
#6
But remember it will only be upmixed, as your AV receiver would do; it's not discrete 5.1.
#7
kester76
rev6
YouTube videos are encoded in stereo. Set your receiver to Pro Logic or All Channel Stereo (or something along those lines). You won't get discrete 5.1 from YouTube.
https://support.google.com/youtube/answer/1722171?hl=en-GB
AAC-LC supports 5.1; do I need a codec to decode it before output over HDMI?

That appears to be a guide to which formats YouTube lets you upload; that's not necessarily the same as what it'll output on any given platform.

You don't say whether you're using Flash or browser-specific (HTML5) playback, but I'd be surprised if either of those supported bitstreaming, so it's likely you're dealing with PCM output.

Looking around, there's a test page here that may be worth trying:
https://www2.iis.fraunhofer.de/AAC/multichannel.html
#8
EndlessWaves
You don't say whether you're using Flash or browser-specific (HTML5) playback, but I'd be surprised if either of those supported bitstreaming, so it's likely you're dealing with PCM output.

Looking around, there's a test page here that may be worth trying:
https://www2.iis.fraunhofer.de/AAC/multichannel.html
Looks like HTML5. Thanks, will try now.
#9
Fine playing 5.1 over HDMI, but still no joy in YouTube. Must be a problem with the way it checks for surround. Thanks again.
#10
To find out the audio codec being used, download the video and inspect it with a tool such as MediaInfo, which will report the codec and the number of channels.

The default audio codec used by YouTube is 2-channel AAC.
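
For anyone who prefers the command line, ffprobe (part of the FFmpeg suite) reports the same details as MediaInfo. A rough sketch in Python, assuming ffprobe is installed and "downloaded_video.mp4" stands in for whatever file you saved:

import subprocess

# Ask ffprobe for the codec name and channel count of the first audio stream.
# "downloaded_video.mp4" is only a placeholder for a locally saved copy of the video.
result = subprocess.run(
    [
        "ffprobe", "-v", "error",
        "-select_streams", "a:0",
        "-show_entries", "stream=codec_name,channels",
        "-of", "default=noprint_wrappers=1",
        "downloaded_video.mp4",
    ],
    capture_output=True,
    text=True,
    check=True,
)

# A typical YouTube download prints: codec_name=aac and channels=2
print(result.stdout)

A stereo AAC track shows channels=2; a genuinely discrete surround track would show channels=6.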
#11
Open the YouTube link in VLC.

You're welcome.
#12
MadeDixonsCry
Open the YouTube link in VLC.

You're welcome.

This is actually a very good idea.
#13
ElliottC
MadeDixonsCry
Open the YouTube link in VLC.
You're welcome.
This is actually a very good idea.

That won't change the fact that the videos are encoded with 2.0 audio. If you have an AV receiver and would like to simulate surround sound with something like Dolby Pro Logic, that's fine, but it'll never be discrete 5.1.
#14
rev6

That won't change the fact that the videos are encoded with 2.0 audio. If you have an AV receiver and would like to simulate surround sound with something like Dolby Pro Logic, that's fine, but it'll never be discrete 5.1.

The point is to help Kester understand the fundamental issue, as he/she appears to be confused (see the quotes below). By demonstrating that it is the PC that is decoding via an AAC decoder, and NOT the A/V receiver, the VLC suggestion may help the OP see that the signal sent to the A/V receiver is an already-decoded signal, not a bitstream that requires further decoding. Furthermore, using VLC for investigative work into the codecs in use will show that the decoded signal being sent is stereo (in most cases). In fact, you even mentioned in the first post that stereo encoding is used, and that should have been the end of it.

It could be decoding it with a QuickTime codec, I guess, and this is bypassed if playback is HDMI PCM. My receiver doesn't support 6-channel AAC-LC

Fine playing 5.1 over HDMI, but still no joy in YouTube. Must be a problem with the way it checks for surround


Edited By: ElliottC on Nov 26, 2014 07:41: .
#15
Yep, just double checked.
ElliottC

Thanks for the tip. Checked and everyone is right. The PC auto-decodes the stereo to Dolby Pro Logic II / Dolby Digital Live!, and both amps are set to AFD, which prevents the amps upmixing the sound to 5.1. Thanks for everyone's patience with me over this. I still don't understand why YouTube's advanced encoding settings page gives:
Advanced encoding settings

Audio codec: AAC-LC
• Channels: Stereo or Stereo + 5.1
• Sample rate: 96 kHz or 48 kHz

I guess it must be a feature they're going to implement at a later date, but it seems daft that you can do 1080p but not 5.1 sound.
#16
Thanks ElliottC. I guess I got my head wrapped around the idea that just because it could support 5.1, it would, and I had to see it for myself. It's strange that all these so-called 5.1 calibration videos are just in stereo. It's like uploading a 3D TV calibration video in 2D and then emulating 3D through the TV.
#17
Well, browser tech doesn't have much call for surround sound, so it doesn't surprise me that browser-based playback defaults to 2 channels. I would guess that things like smart TV and set-top box YouTube apps can play the 5.1 tracks.
#18
EndlessWaves
Well, browser tech doesn't have much call for surround sound, so it doesn't surprise me that browser-based playback defaults to 2 channels. I would guess that things like smart TV and set-top box YouTube apps can play the 5.1 tracks.

There aren't any 5.1 tracks on YouTube. I can see the need for 5.1 in a browser, for things like Netflix and iPlayer.
#19
It's plain wrong that I can get 1080p and SBS 3D but no discrete 5.1 surround. It's like when I got the Xbox 360 and found it only supported DD, DTS and WMA Pro (which exists somewhere in the world), which was like the original Xbox's sound chip. Damn you YouTube. Rant over :)
#20
kester76
Thanks ElliottC. I guess I got my head wrapped around the idea that just because it could support 5.1, it would, and I had to see it for myself. It's strange that all these so-called 5.1 calibration videos are just in stereo. It's like uploading a 3D TV calibration video in 2D and then emulating 3D through the TV.

The people who uploaded the 5.1 sound calibration tests to YouTube don't know any better.

You seem surprised that 1080p is supported yet 5.1 audio has been left out. The two are separate things: the video uses one codec and the audio uses another; they are separate entities. It is not plain wrong, as you are not watching a Blu-ray movie. The 1080p and 4K videos have reduced bitrates in comparison to broadcast-quality 1080p and 4K video. Do we criticise YouTube for the reduction of video quality as well as the reduction of audio? Of course not, because YouTube has to cater for most users, and that means keeping bandwidth usage at an acceptable level.
#21
ElliottC

The difference is stereo at 384 kbps vs 5.1 at 512 kbps = 16 KB a second. The extra 16 KB/s isn't going to stress YouTube. I'm not asking for lossless DTS-MA, just a discrete 5.1 AAC audio option. Most smart TVs have a YouTube app and allow 5.1 return over either ARC or optical.
#22
kester76
The difference is stereo at 384 kbps vs 5.1 at 512 kbps = 16 KB a second. The extra 16 KB/s isn't going to stress YouTube. I'm not asking for lossless DTS-MA, just a discrete 5.1 AAC audio option. Most smart TVs have a YouTube app and allow 5.1 return over either ARC or optical.

Those figures are based on reducing the bitrate per channel for a 5.1 signal, giving roughly 85 kbps per channel (OK, a little higher than 85, since the LFE channel is a little different); re-encoded to stereo, the net effect is about 170 kbps. I think those figures were taken from another user in another forum, right?

Assuming constant bitrate and keeping the same 384 kbps for stereo, 5.1 audio at the same bitrate per channel (except the LFE channel) results in a difference that far exceeds 16 KB/s. That is certainly more taxing for those on lower-speed networks.

Of course, there are more compression techniques available for multi-channel audio. Where 2 or more channels sound similar, it is possible to select one base channel containing the full bit pattern at a certain interval and use it as a reference to compress the other channels (that is how video coding works, by the way, using something called an I-frame as a reference to compress other frames called P- and B-frames). But even assuming that method is used, bandwidth may still be stretched. Furthermore, on machines with only stereo speakers (laptops that are not connected to surround sound systems, for example), a 5.1 track gives quieter and less distinct sound on the left and right speakers, since the content of the other 4 channels is missing, although of course all the channels can be mixed down into 2. Hence, we have 2 issues already.

So, a workaround is to heavily reduce the bitrate, as in your example of 512 kbps, but then many users will complain that the quality is too poor. Where do we draw the line on bitrates? Do we give the power users the bitrates and quality they want and leave other users behind?
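
To put rough numbers on both sides of this (a back-of-envelope sketch only, assuming constant bitrate and ignoring the joint-channel coding gains mentioned above; the 64 kbps LFE allowance is just an assumption):

# Where the "16 KB a second" figure comes from: comparing whole-track bitrates.
stereo_kbps = 384            # stereo AAC track
reduced_5_1_kbps = 512       # a 5.1 track at a much lower per-channel rate
print((reduced_5_1_kbps - stereo_kbps) / 8)     # 16.0 KB/s extra

# Keeping the stereo track's per-channel rate instead, as argued above.
per_channel_kbps = stereo_kbps / 2              # 192 kbps per full-range channel
full_rate_5_1_kbps = per_channel_kbps * 5 + 64  # five full channels plus an assumed 64 kbps LFE
print(full_rate_5_1_kbps)                       # 1024 kbps
print((full_rate_5_1_kbps - stereo_kbps) / 8)   # 80.0 KB/s extra, far more than 16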
#23
kester76
Most smart TVs have a YouTube app and allow 5.1 return over either ARC or optical.

If YouTube is sending a 5.1 signal via the smart TV then it should do the same via VLC. Obviously you have to configure VLC to pass through the audio.
#24
MadeDixonsCry
kester76
Most smart TVs have a YouTube app and allow 5.1 return over either ARC or optical.

If YouTube is sending a 5.1 signal via the smart TV then it should do the same via VLC. Obviously you have to configure VLC to pass through the audio.
Sorry, I meant smart TVs have the ability to return 5.1 to an amp. YouTube seems fixed to stereo output only.
#25
I think those figures were taken from another user in another forum, right?

https://support.google.com/youtube/answer/1722171?hl=en <= It's from this link. I assume Google know what they're talking about, but they could be wrong, I guess. Dolby Digital on DVD-Video is 5.1 at 448 kbit/s; I'm pretty sure AAC is able to do a better job.

Not sure what you mean about the stereo/5.1 quality bit, as you'd upload both audio tracks with your video, and I assume the YouTube app would automatically pick the track matching the viewer's audio setup (how many channels they have), or ask for the viewer's preference, as with the display size.

Edited By: kester76 on Nov 27, 2014 12:58: i
#26
kester76
https://support.google.com/youtube/answer/1722171?hl=en <= It's from this link. I assume Google know what they're talking about, but they could be wrong, I guess. Dolby Digital on DVD-Video is 5.1 at 448 kbit/s; I'm pretty sure AAC is able to do a better job.
Ah, you were referring to Google transcoding to 512 kbps 5.1 audio rather than speaking generally. That is a big deficit in audio quality compared with the higher per-channel bitrates of their stereo streams. It would be done to limit network bandwidth, since maintaining the same per-channel audio quality as their stereo streams would require over a megabit of bandwidth.

As the documentation lists 5.1 support, you could try encoding a test clip with 5.1 discrete audio channels (preferably using AAC, so that Google's transcode can maintain the same number of channels) and use VLC to verify whether playback from the YouTube site still results in 5.1 channels.
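
If anyone wants to run that test, something along these lines would do it. A sketch only, assuming ffmpeg and ffprobe are installed and that "source_5_1.mkv" is a clip you already have with genuine discrete 5.1 audio; the output name is arbitrary:

import subprocess

# Re-encode to H.264 video with 6-channel AAC audio, matching the formats listed on
# YouTube's advanced encoding settings page. Filenames here are placeholders.
subprocess.run(
    [
        "ffmpeg", "-i", "source_5_1.mkv",
        "-c:v", "libx264",
        "-c:a", "aac", "-ac", "6", "-b:a", "512k",
        "upload_test.mp4",
    ],
    check=True,
)

# Confirm the file really carries six audio channels before uploading it.
subprocess.run(
    [
        "ffprobe", "-v", "error",
        "-select_streams", "a:0",
        "-show_entries", "stream=codec_name,channels",
        "-of", "default=noprint_wrappers=1",
        "upload_test.mp4",
    ],
    check=True,
)

After uploading, the same check on a downloaded copy (or the VLC test described above) would show whether YouTube's transcode kept the six channels or folded them down to two.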
#27
kester76
Not sure what you mean about the stereo/5.1 quality bit, as you'd upload both audio tracks with your video, and I assume the YouTube app would automatically pick the track matching the viewer's audio setup (how many channels they have), or ask for the viewer's preference, as with the display size.
No, YouTube does not automatically select the number of audio channels. It doesn't work that way. Audio quality can be degraded for lower bandwidth (using variable bitrate streaming), but the number of channels cannot be downgraded from 5.1 to 2 on the fly. That requires separate storage of a 2-channel copy of the clip - don't confuse bitrate with number of channels. If a clip is encoded with 5.1 discrete channels, it will always be decoded as such; the clip cannot be dynamically changed to 2 channels without mixing the channels down. You may ask: why not mix down into 2 channels? I'll leave that for you to ponder.
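
For what it's worth, producing that separate 2-channel copy is a single transcode step; a sketch with hypothetical filenames, using ffmpeg's default downmix:

import subprocess

# Fold a 5.1 audio track down to stereo; the video stream is copied untouched.
subprocess.run(
    [
        "ffmpeg", "-i", "clip_5_1.mp4",
        "-c:v", "copy",
        "-c:a", "aac", "-ac", "2",
        "clip_stereo.mp4",
    ],
    check=True,
)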
#28
I think the means and method would easily be handled by Google; I'm sure it has been considered more than once, and it may eventually get to us. They'd have to overhaul most things, but I'm sure they're capable of working that out.
#29
rev6
I think the means and method would easily be handled by Google; I'm sure it has been considered more than once, and it may eventually get to us. They'd have to overhaul most things, but I'm sure they're capable of working that out.

Google don't develop the means for encoding, decoding and transcoding video for the YouTube service. They primarily use FFMPEG under the LGPL license. FFMPEG is worked on by many private individuals and organisations, and it is something we have contributed towards and also use in our own products.

Edited By: ElliottC on Nov 28, 2014 07:20: .
#30
ElliottC
Google don't develop the means for encoding, decoding and transcoding video for the YouTube service. They primarily use FFMPEG under the LGPL license. FFMPEG is worked on by many private individuals and organisations, and it is something we have contributed towards and also use in our own products.

I said handled, not created.
#31
rev6
I said handled, not created.

Yes, you did, and you said "easily handled". They make use of code that was not developed in-house, and consequently this is not handled by Google, let alone easily. FFMPEG is "handled" by a community of developers.

Don't get upset if you can't handle being wrong. Go ahead and have the last say.

Edited By: ElliottC on Nov 29, 2014 01:27
#32
ElliottC
They make use of code that was not developed in-house, and consequently this is not handled by Google, let alone easily. FFMPEG is "handled" by a community of developers.

And again, even if they use code from other places, the actual method is handled by Google. Do you think they install some DLLs and click "Go" to have YouTube function with 5.1?
#33
rev6
And again, even if they use code from other places, the actual method is handled by Google. Do you think they install some DLLs and click "Go" to have YouTube function with 5.1?

5.1 decoding is NOT handled by Google at all. The decoder is built into LIBAVCODEC, which ships as a lib file and a DLL. Google's YouTube server-side code links against the LIBAVCODEC lib file, and LIBAVCODEC.DLL is called to perform the decoding. There is nothing here that Google handled or developed; they simply call a function in the library to carry out the decoding. What is the point of Google handling their own 5.1 decoding when FFMPEG already contains the requisite functionality, which is used by many applications? It is the same for VLC and many others that are capable of network streaming.

I am also aware that they approach other developers for extra functionality, such as CEA-708 closed captioning, which is a task they have asked us to quote for. They do not have the knowledge to carry out or handle everything themselves.

By the way, back in 2011 a developer recognised his code being used for YouTube transcoding: http://multimedia.cx/eggs/googles-youtube-uses-ffmpeg

EDIT: I can even tell you the function call. It is avcodec_decode_audio3, which takes an audio stream packet from the original file and returns a buffer containing the decoded PCM.

Edited By: ElliottC on Nov 29, 2014 02:17: .
#34
ElliottC
5.1 decoding is NOT handled by Google at all. The decoder is built into LIBAVCODEC, which ships as a lib file and a DLL. There is nothing here that Google handled or developed; they simply call a function in the library to carry out the decoding.

The whole reason for Google and many other developers to use these libraries is to speed up the entire process, and as they are freely available with the right open-source license, why re-create what has already been created? What is the point?
Again, Google will be using their own code to piece everything together. Every application you use, right now or tomorrow, is the result of others' hard work, making the author's life easier.
I never said Google would develop the whole process; I said they would handle it. Instead of us in some forum trying to create the method ourselves, I said Google would easily be able to handle this themselves, whether they use open-source or proprietary code.
You have got far too emotional about the subject.


Edited By: rev6 on Nov 29, 2014 02:16
#35
rev6
I never said Google would develop the whole process; I said they would handle it. Instead of us in some forum trying to create the method ourselves, I said Google would easily be able to handle this themselves, whether they use open-source or proprietary code.
You have got far too emotional about the subject.

I still don't know what you are alluding to with regard to "handling".

Decoding YouTube audio is a simple call into the library. You asked me if they install some DLLs and click "Go", and in essence that is it, in a nutshell. The 5.1 audio decoding, encoding and transcoding is all carried out within FFMPEG. Yes, Google will write a framework to tie the code together, but how is that a case of Google "handling" it? It's the same for VLC, MPlayer and even our "Atlas" player. We write a framework to tie code together, but we make function calls into FFMPEG because FFMPEG is a proven product.
#36
ElliottC
Yes, Google will write a framework to tie the code together, but how is that a case of Google "handling" it? We write a framework to tie code together, but we make function calls into FFMPEG because FFMPEG is a proven product.

Do you think FFMPEG is written purely from code its authors wrote? Did they also write the libraries it uses, or C itself? "Handle" just means the author creates the method that pieces everything together; otherwise we would never really pin down who wrote anything.
To integrate 5.1 into YouTube would take much more than just the encoding and decoding of the media.

*Grabs a spade* "Dig us a hole." "I'll handle it." "Won't the manufacturer of the carbon steel really be handling it?"

In the end, my use of "handling it" is the logical one. Google plans the implementation, the means and methods, and then writes the code using open-source code/libraries, whatever makes it easier, while writing their own code at the same time, as it's not Lego.
Google could patch others' code while doing this, improving libraries/code at the same time; it happens. If they need a function that doesn't currently exist, they could patch it themselves and get it added to the source. Now you see why my "handle it" makes things much easier; there was no need to go off on one and try to look more intelligent. Credit is given where credit is due; there's no need to be a walking open-source credits guy.

Edited By: rev6 on Nov 29, 2014 02:49: edit
#37
rev6
If they need a function that doesn't currently exist, they could patch it themselves and get it added to the source. Now you see why my "handle it" makes things much easier; there was no need to go off on one and try to look more intelligent.

Look, the function does exist: decoding and encoding 5.1 audio with the AAC codec. Google do nothing with regard to patching that function within FFMPEG. In an earlier post you asked, "Do you think they install some dll's and click 'Go' to have YouTube function with 5.1?". So what would the programming steps be for streaming a video in the correct format? I say it is a case of passing a buffer to a function in FFMPEG and outputting the resulting buffer as a packet stream. You seem to suggest there is more involved. Can you be more specific? I am not "going off on one" as you suggested, merely taking a professional interest in what you know that I do not.
#38
ElliottC
Google do nothing with regard to patching that function within FFMPEG. I say it is a case of passing a buffer to a function in FFMPEG and outputting the resulting buffer as a packet stream. You seem to suggest there is more involved. Can you be more specific?

Google can patch functions, and so can you, or me, or anyone. That's the magic of open source.
You're acting like, for 5.1 to exist on YouTube, Google just click their fingers and it's done. It's more than just the stream; there's the site implementation as well.
#39
rev6
Google can patch functions, and so can you, or me, or anyone. That's the magic of open source.
You're acting like, for 5.1 to exist on YouTube, Google just click their fingers and it's done. It's more than just the stream; there's the site implementation as well.

Yes, I am aware that anyone can contribute to FFMPEG - we have done so ourselves, as mentioned earlier. I couldn't work out why you've been so vague and unspecific and yet so vehement, so I wondered whether you have been taking in incorrect information from here: http://www.quora.com/What-does-YouTube-use-for-encoding-video ?

No, Google have not patched the codebase. Take a look at the FFMPEG source repository's commits and tell me where you see Google having contributed patches. Use the search option if necessary, as it is a long list.

We can continue by PM, as this has digressed too far from the original topic.
#40
ElliottC
No, Google have not patched the codebase. Take a look at the FFMPEG source repository's commits and tell me where you see Google having contributed patches.

No, I didn't mean Google have contributed, but they can, that's what I was saying.
