I am basing my code on Apple's GLCameraRipple sample.
The result on my iPhone screen looks like this: iPhone Screen
I need to know what I am doing wrong.
Below are the parts of the code where I suspect the error lies.
ffmpeg frame setup:
ctx->p_sws_ctx = sws_getContext(ctx->p_video_ctx->width, ctx->p_video_ctx->height,
                                ctx->p_video_ctx->pix_fmt,
                                ctx->p_video_ctx->width, ctx->p_video_ctx->height,
                                PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);

// Frame buffer for the converted data
ctx->p_frame_buffer = malloc(avpicture_get_size(PIX_FMT_YUV420P,
                                                ctx->p_video_ctx->width,
                                                ctx->p_video_ctx->height));
avpicture_fill((AVPicture *)ctx->p_picture_rgb, ctx->p_frame_buffer,
               PIX_FMT_YUV420P,
               ctx->p_video_ctx->width, ctx->p_video_ctx->height);
My render method:
if (videoTextureCache == NULL) {
    NSLog(@"displayPixelBuffer error");
    return;
}

CVPixelBufferRef pixelBuffer;
CVPixelBufferCreateWithBytes(kCFAllocatorDefault, mTexW, mTexH,
                             kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                             buffer, mFrameW * 3,
                             NULL, NULL, NULL, &pixelBuffer);
CVReturn err;

// Y-plane
glActiveTexture(GL_TEXTURE0);
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                   videoTextureCache,
                                                   pixelBuffer,
                                                   NULL,
                                                   GL_TEXTURE_2D,
                                                   GL_RED_EXT,
                                                   mTexW,
                                                   mTexH,
                                                   GL_RED_EXT,
                                                   GL_UNSIGNED_BYTE,
                                                   0,
                                                   &_lumaTexture);
if (err) {
    NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
}
glBindTexture(CVOpenGLESTextureGetTarget(_lumaTexture),
              CVOpenGLESTextureGetName(_lumaTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// UV-plane
glActiveTexture(GL_TEXTURE1);
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                   videoTextureCache,
                                                   pixelBuffer,
                                                   NULL,
                                                   GL_TEXTURE_2D,
                                                   GL_RG_EXT,
                                                   mTexW / 2,
                                                   mTexH / 2,
                                                   GL_RG_EXT,
                                                   GL_UNSIGNED_BYTE,
                                                   1,
                                                   &_chromaTexture);
if (err) {
    NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
}
glBindTexture(CVOpenGLESTextureGetTarget(_chromaTexture),
              CVOpenGLESTextureGetName(_chromaTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);

// Set the view port to the entire view
glViewport(0, 0, backingWidth, backingHeight);

static const GLfloat squareVertices[] = {
    -1.0f, -1.0f,
     1.0f, -1.0f,
    -1.0f,  1.0f,
     1.0f,  1.0f,
};
GLfloat textureVertices[] = {
    0, 0,
    1, 0,
    0, 1,
    1, 1,
};

// Draw the texture on the screen with OpenGL ES 2
[self renderWithSquareVertices:squareVertices textureVertices:textureVertices];

// Flush the CVOpenGLESTexture cache and release the texture
CVOpenGLESTextureCacheFlush(videoTextureCache, 0);
CVPixelBufferRelease(pixelBuffer);

[moviePlayerDelegate bufferDone];
The renderWithSquareVertices method:
- (void)renderWithSquareVertices:(const GLfloat *)squareVertices textureVertices:(const GLfloat *)textureVertices
{
    // Use shader program.
    glUseProgram(shader.program);

    // Update attribute values.
    glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
    glEnableVertexAttribArray(ATTRIB_VERTEX);
    glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, textureVertices);
    glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    // Present
    glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER];
}
My fragment shader:
uniform sampler2D SamplerY;
uniform sampler2D SamplerUV;

varying highp vec2 _texcoord;

void main()
{
    mediump vec3 yuv;
    lowp vec3 rgb;

    yuv.x  = texture2D(SamplerY, _texcoord).r;
    yuv.yz = texture2D(SamplerUV, _texcoord).rg - vec2(0.5, 0.5);

    // BT.601, which is the standard for SDTV, is provided as a reference
    /*
    rgb = mat3(    1,       1,     1,
                   0, -.34413, 1.772,
               1.402, -.71414,     0) * yuv;
    */

    // Using BT.709, which is the standard for HDTV
    rgb = mat3(      1,       1,      1,
                     0, -.18732, 1.8556,
               1.57481, -.46813,      0) * yuv;

    gl_FragColor = vec4(rgb, 1);
}
Thanks in advance.
Solution: I think the problem is that YUV420 (or I420) is a tri-planar image format: I420 is an 8-bit Y plane followed by 8-bit, 2x2-subsampled U and V planes. The code from GLCameraRipple expects NV12: an 8-bit Y plane followed by an interleaved U/V plane with 2x2 subsampling. Given that, I would expect you to need three textures: luma_tex, u_chroma_tex, v_chroma_tex. Also note that GLCameraRipple may expect "video range" data; in other words, planar values of luma = [16, 235] and chroma = [16, 240].