GPUImage Source Code Reading (23)
GPUImageFilter is one of the most important and most fundamental classes in GPUImage. It handles the input and output of framebuffer objects but applies no effect to the texture; it simply lets the texture pass through. It mainly serves as the base class for other filters, with concrete effects implemented by its subclasses. It also only accepts a single input framebuffer; handling multiple input framebuffers is likewise left to subclasses.
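To make this pass-through role concrete, here is a minimal sketch of GPUImageFilter sitting in a live camera chain. The camera, passthrough, and previewView names are illustrative; previewView is assumed to be a GPUImageView already added to the view hierarchy.
GPUImageVideoCamera *camera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
camera.outputImageOrientation = UIInterfaceOrientationPortrait;
// A bare GPUImageFilter renders the input texture unchanged
GPUImageFilter *passthrough = [[GPUImageFilter alloc] init];
[camera addTarget:passthrough];
[passthrough addTarget:previewView];
[camera startCameraCapture];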
Vector Definitions
GPUImage mainly uses 3-component vectors, 4-component vectors, 4x4 matrices, and 3x3 matrices, corresponding to vec3, vec4, mat4, and mat3 in OpenGL. These vector and matrix types exist to make passing values to shaders convenient: GPUImageFilter defines a set of setter methods that take them, so handing a value to a shader is straightforward (see the sketch after the definitions below).
struct GPUVector4 {
GLfloat one;
GLfloat two;
GLfloat three;
GLfloat four;
};
typedef struct GPUVector4 GPUVector4;
struct GPUVector3 {
GLfloat one;
GLfloat two;
GLfloat three;
};
typedef struct GPUVector3 GPUVector3;
struct GPUMatrix4x4 {
GPUVector4 one;
GPUVector4 two;
GPUVector4 three;
GPUVector4 four;
};
typedef struct GPUMatrix4x4 GPUMatrix4x4;
struct GPUMatrix3x3 {
GPUVector3 one;
GPUVector3 two;
GPUVector3 three;
};
typedef struct GPUMatrix3x3 GPUMatrix3x3;
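As a quick illustration of why these types exist, the sketch below passes a GPUVector4 and a plain float to a filter through the setter methods declared later in this class. The filter instance customFilter and the uniform names overlayColor and intensity are hypothetical; the uniforms would have to be declared in the filter's fragment shader.
// Hypothetical uniforms in the fragment shader:
// uniform lowp vec4 overlayColor; uniform lowp float intensity;
GPUVector4 overlayColor = {1.0f, 0.0f, 0.0f, 0.5f}; // one/two/three/four map to x/y/z/w
[customFilter setFloatVec4:overlayColor forUniform:@"overlayColor"];
[customFilter setFloat:0.8f forUniformName:@"intensity"];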
Shaders
The shader program is what matters most in a filter: it determines the filter's visual effect. The shaders in GPUImageFilter are simple; the fragment shader just samples the texture and does not modify the pixel data. When writing a custom filter, we usually only need to replace the fragment shader (a sketch follows the shader listing below). If multiple texture inputs are involved, the multi-input filters introduced earlier can be used (they are also subclasses of GPUImageFilter, extended to accept several input framebuffers). GPUImageFilter's shaders are as follows:
NSString *const kGPUImageVertexShaderString = SHADER_STRING
(
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
varying vec2 textureCoordinate;
void main()
{
gl_Position = position;
textureCoordinate = inputTextureCoordinate.xy;
}
);
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImagePassthroughFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
gl_FragColor = texture2D(inputImageTexture, textureCoordinate);
}
);
#else
NSString *const kGPUImagePassthroughFragmentShaderString = SHADER_STRING
(
varying vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
gl_FragColor = texture2D(inputImageTexture, textureCoordinate);
}
);
#endif
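As an example of how a custom effect usually only swaps in a new fragment shader, here is a sketch of a simple brightness-style filter. The shader string and the brightness uniform are illustrative (modeled on the stock filters); note that the precision qualifiers are required on iOS.
NSString *const kSimpleBrightnessFragmentShaderString = SHADER_STRING
(
 varying highp vec2 textureCoordinate;
 uniform sampler2D inputImageTexture;
 uniform lowp float brightness;
 void main()
 {
     lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
     // Shift every color channel by the brightness value, leaving alpha untouched
     gl_FragColor = vec4(textureColor.rgb + vec3(brightness), textureColor.a);
 }
);

GPUImageFilter *brightnessFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromString:kSimpleBrightnessFragmentShaderString];
[brightnessFilter setFloat:0.2 forUniformName:@"brightness"];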
Instance Variables
@interface GPUImageFilter : GPUImageOutput <GPUImageInput>
{
// Input framebuffer object
GPUImageFramebuffer *firstInputFramebuffer;
// GL program
GLProgram *filterProgram;
// Attribute locations
GLint filterPositionAttribute, filterTextureCoordinateAttribute;
// Input texture uniform location
GLint filterInputTextureUniform;
// GL clear color
GLfloat backgroundColorRed, backgroundColorGreen, backgroundColorBlue, backgroundColorAlpha;
// Whether processing has ended
BOOL isEndProcessing;
// Used by subclasses
CGSize currentFilterSize;
// Input rotation mode
GPUImageRotationMode inputRotation;
BOOL currentlyReceivingMonochromeInput;
// Dictionary holding uniform-state restoration blocks
NSMutableDictionary *uniformStateRestorationBlocks;
// Semaphore used to synchronize image capture
dispatch_semaphore_t imageCaptureSemaphore;
}
Initializers
GPUImageFilter's designated initializer takes a vertex shader and a fragment shader, though in most cases we only need to supply the fragment shader. Initialization can be summarized in the following steps:
1. Initialize the relevant instance variables;
2. Set up the GL context;
3. Fetch (or create) the GL program;
4. Bind the attributes and link the program if it has not been linked yet;
5. Retrieve the attribute and uniform locations and enable the vertex attribute arrays.
- (id)initWithVertexShaderFromString:(NSString *)vertexShaderString fragmentShaderFromString:(NSString *)fragmentShaderString;
{
if (!(self = [super init]))
{
return nil;
}
// Initialize the instance variables
uniformStateRestorationBlocks = [NSMutableDictionary dictionaryWithCapacity:10];
_preventRendering = NO;
currentlyReceivingMonochromeInput = NO;
inputRotation = kGPUImageNoRotation;
backgroundColorRed = 0.0;
backgroundColorGreen = 0.0;
backgroundColorBlue = 0.0;
backgroundColorAlpha = 0.0;
imageCaptureSemaphore = dispatch_semaphore_create(0);
dispatch_semaphore_signal(imageCaptureSemaphore);
runSynchronouslyOnVideoProcessingQueue(^{
// Set up the GL context
[GPUImageContext useImageProcessingContext];
// Create (or fetch a cached) GL program
filterProgram = [[GPUImageContext sharedImageProcessingContext] programForVertexShaderString:vertexShaderString fragmentShaderString:fragmentShaderString];
if (!filterProgram.initialized)
{
// Bind the attribute variables
[self initializeAttributes];
// Link the shader program
if (![filterProgram link])
{
// Log the link and compile errors
NSString *progLog = [filterProgram programLog];
NSLog(@"Program link log: %@", progLog);
NSString *fragLog = [filterProgram fragmentShaderLog];
NSLog(@"Fragment shader compile log: %@", fragLog);
NSString *vertLog = [filterProgram vertexShaderLog];
NSLog(@"Vertex shader compile log: %@", vertLog);
filterProgram = nil;
NSAssert(NO, @"Filter shader link failed");
}
}
// Get the vertex position attribute location
filterPositionAttribute = [filterProgram attributeIndex:@"position"];
// Get the texture coordinate attribute location
filterTextureCoordinateAttribute = [filterProgram attributeIndex:@"inputTextureCoordinate"];
// Get the input texture uniform location
filterInputTextureUniform = [filterProgram uniformIndex:@"inputImageTexture"]; // This does assume a name of "inputImageTexture" for the fragment shader
// Make this the active GL program
[GPUImageContext setActiveShaderProgram:filterProgram];
// Enable the vertex attribute arrays
glEnableVertexAttribArray(filterPositionAttribute);
glEnableVertexAttribArray(filterTextureCoordinateAttribute);
});
return self;
}
- (id)initWithFragmentShaderFromString:(NSString *)fragmentShaderString;
{
if (!(self = [self initWithVertexShaderFromString:kGPUImageVertexShaderString fragmentShaderFromString:fragmentShaderString]))
{
return nil;
}
return self;
}
- (id)initWithFragmentShaderFromFile:(NSString *)fragmentShaderFilename;
{
NSString *fragmentShaderPathname = [[NSBundle mainBundle] pathForResource:fragmentShaderFilename ofType:@"fsh"];
NSString *fragmentShaderString = [NSString stringWithContentsOfFile:fragmentShaderPathname encoding:NSUTF8StringEncoding error:nil];
if (!(self = [self initWithFragmentShaderFromString:fragmentShaderString]))
{
return nil;
}
return self;
}
- (id)init;
{
if (!(self = [self initWithFragmentShaderFromString:kGPUImagePassthroughFragmentShaderString]))
{
return nil;
}
return self;
}
Other Methods
Among GPUImageFilter's methods, the ones that pass values to the shaders are the most numerous, because shaders accept values of many different types: GLint, GLfloat, vec2, vec3, mat3, and so on (a sketch of how a subclass typically uses these setters follows the declaration list below). Three methods are particularly important:
- (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex;
- (void)renderToTextureWithVertices:(const GLfloat *)vertices textureCoordinates:(const GLfloat *)textureCoordinates;
- (void)informTargetsAboutNewFrameAtTime:(CMTime)frameTime;
These three methods are closely tied to the filter (response) chain. GPUImageFilter renders the framebuffer it receives, through its particular fragment shader, into the framebuffer it is about to output, then hands its own output framebuffer to all of its targets and notifies them to process it. The methods are invoked in this order:
1. A new frame arrives and a new output framebuffer is produced
- (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex;
2. The GL draw is performed
- (void)renderToTextureWithVertices:(const GLfloat *)vertices textureCoordinates:(const GLfloat *)textureCoordinates;
3. When drawing is done, all targets are notified to process the result
- (void)informTargetsAboutNewFrameAtTime:(CMTime)frameTime;
// Transform methods
- (void)setupFilterForSize:(CGSize)filterFrameSize;
- (CGSize)rotatedSize:(CGSize)sizeToRotate forIndex:(NSInteger)textureIndex;
- (CGPoint)rotatedPoint:(CGPoint)pointToRotate forRotation:(GPUImageRotationMode)rotation;
// Query methods
- (CGSize)sizeOfFBO;
+ (const GLfloat *)textureCoordinatesForRotation:(GPUImageRotationMode)rotationMode;
- (CGSize)outputFrameSize;
// Rendering methods
- (void)renderToTextureWithVertices:(const GLfloat *)vertices textureCoordinates:(const GLfloat *)textureCoordinates;
- (void)informTargetsAboutNewFrameAtTime:(CMTime)frameTime;
// Set the clear (background) color
- (void)setBackgroundColorRed:(GLfloat)redComponent green:(GLfloat)greenComponent blue:(GLfloat)blueComponent alpha:(GLfloat)alphaComponent;
// Uniform setter methods
- (void)setInteger:(GLint)newInteger forUniformName:(NSString *)uniformName;
- (void)setFloat:(GLfloat)newFloat forUniformName:(NSString *)uniformName;
- (void)setSize:(CGSize)newSize forUniformName:(NSString *)uniformName;
- (void)setPoint:(CGPoint)newPoint forUniformName:(NSString *)uniformName;
- (void)setFloatVec3:(GPUVector3)newVec3 forUniformName:(NSString *)uniformName;
- (void)setFloatVec4:(GPUVector4)newVec4 forUniform:(NSString *)uniformName;
- (void)setFloatArray:(GLfloat *)array length:(GLsizei)count forUniform:(NSString*)uniformName;
- (void)setMatrix3f:(GPUMatrix3x3)matrix forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
- (void)setMatrix4f:(GPUMatrix4x4)matrix forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
- (void)setFloat:(GLfloat)floatValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
- (void)setPoint:(CGPoint)pointValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
- (void)setSize:(CGSize)sizeValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
- (void)setVec3:(GPUVector3)vectorValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
- (void)setVec4:(GPUVector4)vectorValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
- (void)setFloatArray:(GLfloat *)arrayValue length:(GLsizei)arrayLength forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
- (void)setInteger:(GLint)intValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
- (void)setAndExecuteUniformStateCallbackAtIndex:(GLint)uniform forProgram:(GLProgram *)shaderProgram toBlock:(dispatch_block_t)uniformStateBlock;
- (void)setUniformsForProgramAtIndex:(NSUInteger)programIndex;
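To show how a subclass typically drives these setters, here is a sketch of the common pattern used by the stock filters: look up the uniform location once in the initializer, then forward new values from a property setter. The subclass, its brightness property, the brightnessUniform instance variable, and the shader string (reused from the earlier sketch) are all illustrative.
// Inside a hypothetical GPUImageFilter subclass with a declared `brightness` property
// and a GLint instance variable `brightnessUniform`:
- (id)init
{
    if (!(self = [super initWithFragmentShaderFromString:kSimpleBrightnessFragmentShaderString]))
    {
        return nil;
    }
    // Look up the uniform location once, after the program has been linked
    brightnessUniform = [filterProgram uniformIndex:@"brightness"];
    self.brightness = 0.0;
    return self;
}

- (void)setBrightness:(CGFloat)newValue
{
    _brightness = newValue;
    // Queue the new value onto the video processing queue for the next render pass
    [self setFloat:_brightness forUniform:brightnessUniform program:filterProgram];
}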
/******************* Method implementations ************************************/
// Return the texture coordinates for a given rotation mode
+ (const GLfloat *)textureCoordinatesForRotation:(GPUImageRotationMode)rotationMode;
{
static const GLfloat noRotationTextureCoordinates[] = {
0.0f, 0.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
};
static const GLfloat rotateLeftTextureCoordinates[] = {
1.0f, 0.0f,
1.0f, 1.0f,
0.0f, 0.0f,
0.0f, 1.0f,
};
static const GLfloat rotateRightTextureCoordinates[] = {
0.0f, 1.0f,
0.0f, 0.0f,
1.0f, 1.0f,
1.0f, 0.0f,
};
static const GLfloat verticalFlipTextureCoordinates[] = {
0.0f, 1.0f,
1.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
};
static const GLfloat horizontalFlipTextureCoordinates[] = {
1.0f, 0.0f,
0.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f,
};
static const GLfloat rotateRightVerticalFlipTextureCoordinates[] = {
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 0.0f,
1.0f, 1.0f,
};
static const GLfloat rotateRightHorizontalFlipTextureCoordinates[] = {
1.0f, 1.0f,
1.0f, 0.0f,
0.0f, 1.0f,
0.0f, 0.0f,
};
static const GLfloat rotate180TextureCoordinates[] = {
1.0f, 1.0f,
0.0f, 1.0f,
1.0f, 0.0f,
0.0f, 0.0f,
};
switch(rotationMode)
{
case kGPUImageNoRotation: return noRotationTextureCoordinates;
case kGPUImageRotateLeft: return rotateLeftTextureCoordinates;
case kGPUImageRotateRight: return rotateRightTextureCoordinates;
case kGPUImageFlipVertical: return verticalFlipTextureCoordinates;
case kGPUImageFlipHorizonal: return horizontalFlipTextureCoordinates;
case kGPUImageRotateRightFlipVertical: return rotateRightVerticalFlipTextureCoordinates;
case kGPUImageRotateRightFlipHorizontal: return rotateRightHorizontalFlipTextureCoordinates;
case kGPUImageRotate180: return rotate180TextureCoordinates;
}
}
// A new frame is ready: render it and pass it downstream
- (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex;
{
static const GLfloat imageVertices[] = {
-1.0f, -1.0f,
1.0f, -1.0f,
-1.0f, 1.0f,
1.0f, 1.0f,
};
// Render into the output framebuffer first
[self renderToTextureWithVertices:imageVertices textureCoordinates:[[self class] textureCoordinatesForRotation:inputRotation]];
// Then notify all targets
[self informTargetsAboutNewFrameAtTime:frameTime];
}
// Render into the output framebuffer
- (void)renderToTextureWithVertices:(const GLfloat *)vertices textureCoordinates:(const GLfloat *)textureCoordinates;
{
if (self.preventRendering)
{
[firstInputFramebuffer unlock];
return;
}
[GPUImageContext setActiveShaderProgram:filterProgram];
outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:[self sizeOfFBO] textureOptions:self.outputTextureOptions onlyTexture:NO];
[outputFramebuffer activateFramebuffer];
if (usingNextFrameForImageCapture)
{
[outputFramebuffer lock];
}
[self setUniformsForProgramAtIndex:0];
// GL drawing: clear with the background color, bind the input texture to unit 2, then draw
glClearColor(backgroundColorRed, backgroundColorGreen, backgroundColorBlue, backgroundColorAlpha);
glClear(GL_COLOR_BUFFER_BIT);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, [firstInputFramebuffer texture]);
glUniform1i(filterInputTextureUniform, 2);
glVertexAttribPointer(filterPositionAttribute, 2, GL_FLOAT, 0, 0, vertices);
glVertexAttribPointer(filterTextureCoordinateAttribute, 2, GL_FLOAT, 0, 0, textureCoordinates);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Unlock the input framebuffer
[firstInputFramebuffer unlock];
// Image capture must wait until drawing has finished
if (usingNextFrameForImageCapture)
{
// Signal that rendering is complete
dispatch_semaphore_signal(imageCaptureSemaphore);
}
}
// Notify all targets
- (void)informTargetsAboutNewFrameAtTime:(CMTime)frameTime;
{
if (self.frameProcessingCompletionBlock != NULL)
{
self.frameProcessingCompletionBlock(self, frameTime);
}
// Hand the output framebuffer to every target
for (id<GPUImageInput> currentTarget in targets)
{
if (currentTarget != self.targetToIgnoreForUpdates)
{
NSInteger indexOfObject = [targets indexOfObject:currentTarget];
NSInteger textureIndex = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];
[self setInputFramebufferForTarget:currentTarget atIndex:textureIndex];
[currentTarget setInputSize:[self outputFrameSize] atIndex:textureIndex];
}
}
// Release our hold so it can return to the cache immediately upon processing
[[self framebufferForOutput] unlock];
if (usingNextFrameForImageCapture)
{
// usingNextFrameForImageCapture = NO;
}
else
{
[self removeOutputFramebuffer];
}
// Notify every target that a new frame is ready
for (id<GPUImageInput> currentTarget in targets)
{
if (currentTarget != self.targetToIgnoreForUpdates)
{
NSInteger indexOfObject = [targets indexOfObject:currentTarget];
NSInteger textureIndex = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];
// Have each target produce its own new frame
[currentTarget newFrameReadyAtTime:frameTime atIndex:textureIndex];
}
}
}
// When an image capture is requested, consume the semaphore first so the image is only produced after GL drawing has completed
- (void)useNextFrameForImageCapture;
{
usingNextFrameForImageCapture = YES;
// Consume the semaphore
if (dispatch_semaphore_wait(imageCaptureSemaphore, DISPATCH_TIME_NOW) != 0)
{
return;
}
}
// Wait for the rendering-complete signal, then produce the image
- (CGImageRef)newCGImageFromCurrentlyProcessedOutput
{
// Give it three seconds to process, then abort if they forgot to set up the image capture properly
double timeoutForImageCapture = 3.0;
dispatch_time_t convertedTimeout = dispatch_time(DISPATCH_TIME_NOW, timeoutForImageCapture * NSEC_PER_SEC);
// Wait for GL drawing to finish, up to the timeout
if (dispatch_semaphore_wait(imageCaptureSemaphore, convertedTimeout) != 0)
{
return NULL;
}
usingNextFrameForImageCapture = NO;
dispatch_semaphore_signal(imageCaptureSemaphore);
// All image output is now managed by the framebuffer itself
return [[self framebufferForOutput] newCGImageFromFramebufferContents];
}
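Putting the semaphore pieces together, a typical still-image capture from the caller's side looks like the sketch below. useNextFrameForImageCapture must be called before processing starts, otherwise newCGImageFromCurrentlyProcessedOutput (and imageFromCurrentFramebuffer, which builds on it) will time out after three seconds. The inputImage variable is an assumed UIImage.
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:inputImage];
GPUImageFilter *filter = [[GPUImageFilter alloc] init];
[stillImageSource addTarget:filter];
// Hold on to the next output framebuffer before rendering starts
[filter useNextFrameForImageCapture];
[stillImageSource processImage];
// Blocks (for up to three seconds) until renderToTextureWithVertices: signals completion
UIImage *result = [filter imageFromCurrentFramebuffer];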