GPUImage Source Code Walkthrough (Part 22)



GPUImageFourInputFilter accepts input from four framebuffer objects and merges them into a single framebuffer output. It inherits from GPUImageThreeInputFilter and, much like GPUImageTwoInputFilter and GPUImageThreeInputFilter before it, mainly adds the operations needed to handle the extra (fourth) input framebuffer. Note that the philm fork of GPUImage does not include this class.

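To make the data flow concrete, here is a minimal usage sketch (my own illustration, not code from this project): four GPUImagePicture sources feed a single filter instance, and the filter's fragment shader decides how the four samples combine. The images imageA through imageD and the shader string kFourInputBlendFragmentShaderString are assumed to exist; the shader must follow the inputImageTexture…inputImageTexture4 naming convention shown further below.

#import "GPUImage.h"

// Assumed: a fragment shader string sampling inputImageTexture..inputImageTexture4.
GPUImagePicture *sourceA = [[GPUImagePicture alloc] initWithImage:imageA];
GPUImagePicture *sourceB = [[GPUImagePicture alloc] initWithImage:imageB];
GPUImagePicture *sourceC = [[GPUImagePicture alloc] initWithImage:imageC];
GPUImagePicture *sourceD = [[GPUImagePicture alloc] initWithImage:imageD];

GPUImageFourInputFilter *blendFilter =
    [[GPUImageFourInputFilter alloc] initWithFragmentShaderFromString:kFourInputBlendFragmentShaderString];

// addTarget: assigns input slots 0-3 in order via nextAvailableTextureIndex.
[sourceA addTarget:blendFilter];
[sourceB addTarget:blendFilter];
[sourceC addTarget:blendFilter];
[sourceD addTarget:blendFilter];

[blendFilter useNextFrameForImageCapture];
[sourceA processImage];
[sourceB processImage];
[sourceC processImage];
[sourceD processImage];

// newFrameReadyAtTime:atIndex: triggers the actual render only after
// all four inputs have delivered a frame.
UIImage *result = [blendFilter imageFromCurrentFramebuffer];
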
The complete code is also included below.

  • Instance variables. The additions are mostly the variables needed to track the fourth input framebuffer: its framebuffer reference, texture-coordinate attribute, sampler uniform, rotation mode, frame time, and frame-tracking flags.

#import "GPUImageThreeInputFilter.h"

extern NSString *const kGPUImageFourInputTextureVertexShaderString;

@interface GPUImageFourInputFilter : GPUImageThreeInputFilter
{
    GPUImageFramebuffer *fourthInputFramebuffer;

    GLint filterFourthTextureCoordinateAttribute;
    GLint filterInputTextureUniform4;
    GPUImageRotationMode inputRotation4;
    GLuint filterSourceTexture4;
    CMTime fourthFrameTime;

    BOOL hasSetThirdTexture, hasReceivedFourthFrame, fourthFrameWasVideo;
    BOOL fourthFrameCheckDisabled;
}

- (void)disableFourthFrameCheck;

@end
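
The uniform lookup in the implementation assumes the fragment shader names its samplers inputImageTexture, inputImageTexture2, inputImageTexture3 and inputImageTexture4. As a hypothetical example (not part of GPUImage), a fragment shader that simply averages the four inputs, written in the library's SHADER_STRING style, could look like this:

NSString *const kFourInputAverageFragmentShaderString = SHADER_STRING
(
 varying highp vec2 textureCoordinate;
 varying highp vec2 textureCoordinate2;
 varying highp vec2 textureCoordinate3;
 varying highp vec2 textureCoordinate4;

 uniform sampler2D inputImageTexture;
 uniform sampler2D inputImageTexture2;
 uniform sampler2D inputImageTexture3;
 uniform sampler2D inputImageTexture4;

 void main()
 {
     lowp vec4 c1 = texture2D(inputImageTexture, textureCoordinate);
     lowp vec4 c2 = texture2D(inputImageTexture2, textureCoordinate2);
     lowp vec4 c3 = texture2D(inputImageTexture3, textureCoordinate3);
     lowp vec4 c4 = texture2D(inputImageTexture4, textureCoordinate4);

     // Average the four samples into the output pixel.
     gl_FragColor = (c1 + c2 + c3 + c4) / 4.0;
 }
);
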
  • Method list. Compared with GPUImageTwoInputFilter and GPUImageThreeInputFilter, the key methods of GPUImageFourInputFilter follow largely the same logic, extended to the fourth input.

NSString *const kGPUImageFourInputTextureVertexShaderString = SHADER_STRING
(
 attribute vec4 position;
 attribute vec4 inputTextureCoordinate;
 attribute vec4 inputTextureCoordinate2;
 attribute vec4 inputTextureCoordinate3;
 attribute vec4 inputTextureCoordinate4;

 varying vec2 textureCoordinate;
 varying vec2 textureCoordinate2;
 varying vec2 textureCoordinate3;
 varying vec2 textureCoordinate4;

 void main()
 {
     gl_Position = position;
     textureCoordinate = inputTextureCoordinate.xy;
     textureCoordinate2 = inputTextureCoordinate2.xy;
     textureCoordinate3 = inputTextureCoordinate3.xy;
     textureCoordinate4 = inputTextureCoordinate4.xy;
 }
);

@implementation GPUImageFourInputFilter

- (id)initWithFragmentShaderFromString:(NSString *)fragmentShaderString;
{
    if (!(self = [self initWithVertexShaderFromString:kGPUImageFourInputTextureVertexShaderString fragmentShaderFromString:fragmentShaderString]))
    {
        return nil;
    }

    return self;
}

- (id)initWithVertexShaderFromString:(NSString *)vertexShaderString fragmentShaderFromString:(NSString *)fragmentShaderString;
{
    if (!(self = [super initWithVertexShaderFromString:vertexShaderString fragmentShaderFromString:fragmentShaderString]))
    {
        return nil;
    }

    inputRotation4 = kGPUImageNoRotation;

    hasSetThirdTexture = NO;

    hasReceivedFourthFrame = NO;
    fourthFrameWasVideo = NO;
    fourthFrameCheckDisabled = NO;

    fourthFrameTime = kCMTimeInvalid;

    runSynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext useImageProcessingContext];
        filterFourthTextureCoordinateAttribute = [filterProgram attributeIndex:@"inputTextureCoordinate4"];

        filterInputTextureUniform4 = [filterProgram uniformIndex:@"inputImageTexture4"]; // This does assume a name of "inputImageTexture4" for the fourth input texture in the fragment shader
        glEnableVertexAttribArray(filterFourthTextureCoordinateAttribute);
    });

    return self;
}

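// Registers the fourth texture-coordinate attribute so that it is
// bound before the shader program links.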
- (void)initializeAttributes;
{
    [super initializeAttributes];
    [filterProgram addAttribute:@"inputTextureCoordinate4"];
}
// Added method
- (void)disableFourthFrameCheck;
{
    fourthFrameCheckDisabled = YES;
}
// Overridden method
- (void)renderToTextureWithVertices:(const GLfloat *)vertices textureCoordinates:(const GLfloat *)textureCoordinates;
{
    if (self.preventRendering)
    {
        [firstInputFramebuffer unlock];
        [secondInputFramebuffer unlock];
        [thirdInputFramebuffer unlock];
        [fourthInputFramebuffer unlock];
        return;
    }

    [GPUImageContext setActiveShaderProgram:filterProgram];
    outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:[self sizeOfFBO] textureOptions:self.outputTextureOptions onlyTexture:NO];
    [outputFramebuffer activateFramebuffer];
    if (usingNextFrameForImageCapture)
    {
        [outputFramebuffer lock];
    }

    [self setUniformsForProgramAtIndex:0];

    glClearColor(backgroundColorRed, backgroundColorGreen, backgroundColorBlue, backgroundColorAlpha);
    glClear(GL_COLOR_BUFFER_BIT);

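    // Bind each input framebuffer's texture to its own texture unit (2-5,
    // following GPUImage's convention of starting filter inputs at unit 2)
    // and point the matching sampler uniform at that unit.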
    glActiveTexture(GL_TEXTURE2);
    glBindTexture(GL_TEXTURE_2D, [firstInputFramebuffer texture]);
    glUniform1i(filterInputTextureUniform, 2);

     // Activate the texture unit for the second input
    glActiveTexture(GL_TEXTURE3);
    glBindTexture(GL_TEXTURE_2D, [secondInputFramebuffer texture]);
    glUniform1i(filterInputTextureUniform2, 3);

    glActiveTexture(GL_TEXTURE4);
    glBindTexture(GL_TEXTURE_2D, [thirdInputFramebuffer texture]);
    glUniform1i(filterInputTextureUniform3, 4);

    glActiveTexture(GL_TEXTURE5);
    glBindTexture(GL_TEXTURE_2D, [fourthInputFramebuffer texture]);
    glUniform1i(filterInputTextureUniform4, 5);

    glVertexAttribPointer(filterPositionAttribute, 2, GL_FLOAT, 0, 0, vertices);
    glVertexAttribPointer(filterTextureCoordinateAttribute, 2, GL_FLOAT, 0, 0, textureCoordinates);
    glVertexAttribPointer(filterSecondTextureCoordinateAttribute, 2, GL_FLOAT, 0, 0, [[self class] textureCoordinatesForRotation:inputRotation2]);
    glVertexAttribPointer(filterThirdTextureCoordinateAttribute, 2, GL_FLOAT, 0, 0, [[self class] textureCoordinatesForRotation:inputRotation3]);
    glVertexAttribPointer(filterFourthTextureCoordinateAttribute, 2, GL_FLOAT, 0, 0, [[self class] textureCoordinatesForRotation:inputRotation4]);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    [firstInputFramebuffer unlock];
    [secondInputFramebuffer unlock];
    [thirdInputFramebuffer unlock];
    [fourthInputFramebuffer unlock];
    if (usingNextFrameForImageCapture)
    {
        dispatch_semaphore_signal(imageCaptureSemaphore);
    }
}

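// Reports the next free input slot so that addTarget: can assign
// incoming sources to texture indices 0-3 in order.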
- (NSInteger)nextAvailableTextureIndex;
{
    if (hasSetThirdTexture)
    {
        return 3;
    }
    else if (hasSetSecondTexture)
    {
        return 2;
    }
    else if (hasSetFirstTexture)
    {
        return 1;
    }
    else
    {
        return 0;
    }
}

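// Stores and locks the incoming framebuffer in the slot matching its
// texture index; anything past index 2 falls through to the fourth slot.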
- (void)setInputFramebuffer:(GPUImageFramebuffer *)newInputFramebuffer atIndex:(NSInteger)textureIndex;
{
    if (textureIndex == 0)
    {
        firstInputFramebuffer = newInputFramebuffer;
        hasSetFirstTexture = YES;
        [firstInputFramebuffer lock];
    }
    else if (textureIndex == 1)
    {
        secondInputFramebuffer = newInputFramebuffer;
        hasSetSecondTexture = YES;
        [secondInputFramebuffer lock];
    }
    else if (textureIndex == 2)
    {
        thirdInputFramebuffer = newInputFramebuffer;
        hasSetThirdTexture = YES;
        [thirdInputFramebuffer lock];
    }
    else
    {
        fourthInputFramebuffer = newInputFramebuffer;
        [fourthInputFramebuffer lock];
    }
}

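// Only the first input's size is forwarded to the superclass; a zero size
// clears the corresponding hasSet...Texture flag so the slot can be reused.
// (As in the upstream source, there is no case for the fourth slot here.)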
- (void)setInputSize:(CGSize)newSize atIndex:(NSInteger)textureIndex;
{
    if (textureIndex == 0)
    {
        [super setInputSize:newSize atIndex:textureIndex];

        if (CGSizeEqualToSize(newSize, CGSizeZero))
        {
            hasSetFirstTexture = NO;
        }
    }
    else if (textureIndex == 1)
    {
        if (CGSizeEqualToSize(newSize, CGSizeZero))
        {
            hasSetSecondTexture = NO;
        }
    }
    else if (textureIndex == 2)
    {
        if (CGSizeEqualToSize(newSize, CGSizeZero))
        {
            hasSetThirdTexture = NO;
        }
    }
}

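// Records the rotation mode of each input; these feed
// textureCoordinatesForRotation: when the quad is drawn.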
- (void)setInputRotation:(GPUImageRotationMode)newInputRotation atIndex:(NSInteger)textureIndex;
{
    if (textureIndex == 0)
    {
        inputRotation = newInputRotation;
    }
    else if (textureIndex == 1)
    {
        inputRotation2 = newInputRotation;
    }
    else if (textureIndex == 2)
    {
        inputRotation3 = newInputRotation;
    }
    else
    {
        inputRotation4 = newInputRotation;
    }
}

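// Swaps width and height when the given input's rotation mode
// turns the image by 90 degrees.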
- (CGSize)rotatedSize:(CGSize)sizeToRotate forIndex:(NSInteger)textureIndex;
{
    CGSize rotatedSize = sizeToRotate;

    GPUImageRotationMode rotationToCheck;
    if (textureIndex == 0)
    {
        rotationToCheck = inputRotation;
    }
    else if (textureIndex == 1)
    {
        rotationToCheck = inputRotation2;
    }
    else if (textureIndex == 2)
    {
        rotationToCheck = inputRotation3;
    }
    else
    {
        rotationToCheck = inputRotation4;
    }

    if (GPUImageRotationSwapsWidthAndHeight(rotationToCheck))
    {
        rotatedSize.width = sizeToRotate.height;
        rotatedSize.height = sizeToRotate.width;
    }

    return rotatedSize;
}

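// Tracks which inputs have delivered a frame and kicks off rendering once
// all four slots are satisfied (a disabled check counts as satisfied).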
- (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex;
{
    // You can set up infinite update loops, so this helps to short circuit them
    if (hasReceivedFirstFrame && hasReceivedSecondFrame && hasReceivedThirdFrame && hasReceivedFourthFrame)
    {
        return;
    }

    BOOL updatedMovieFrameOppositeStillImage = NO;

    if (textureIndex == 0)
    {
        hasReceivedFirstFrame = YES;
        firstFrameTime = frameTime;
        if (secondFrameCheckDisabled)
        {
            hasReceivedSecondFrame = YES;
        }
        if (thirdFrameCheckDisabled)
        {
            hasReceivedThirdFrame = YES;
        }
        if (fourthFrameCheckDisabled)
        {
            hasReceivedFourthFrame = YES;
        }

        if (!CMTIME_IS_INDEFINITE(frameTime))
        {
            if CMTIME_IS_INDEFINITE(secondFrameTime)
            {
                updatedMovieFrameOppositeStillImage = YES;
            }
        }
    }
    else if (textureIndex == 1)
    {
        hasReceivedSecondFrame = YES;
        secondFrameTime = frameTime;
        if (firstFrameCheckDisabled)
        {
            hasReceivedFirstFrame = YES;
        }
        if (thirdFrameCheckDisabled)
        {
            hasReceivedThirdFrame = YES;
        }
        if (fourthFrameCheckDisabled)
        {
            hasReceivedFourthFrame = YES;
        }

        if (!CMTIME_IS_INDEFINITE(frameTime))
        {
            if CMTIME_IS_INDEFINITE(firstFrameTime)
            {
                updatedMovieFrameOppositeStillImage = YES;
            }
        }
    }
    else if (textureIndex == 2)
    {
        hasReceivedThirdFrame = YES;
        thirdFrameTime = frameTime;
        if (firstFrameCheckDisabled)
        {
            hasReceivedFirstFrame = YES;
        }
        if (secondFrameCheckDisabled)
        {
            hasReceivedSecondFrame = YES;
        }
        if (fourthFrameCheckDisabled)
        {
            hasReceivedFourthFrame = YES;
        }

        if (!CMTIME_IS_INDEFINITE(frameTime))
        {
            if CMTIME_IS_INDEFINITE(firstFrameTime)
            {
                updatedMovieFrameOppositeStillImage = YES;
            }
        }
    }
    else
    {
        hasReceivedFourthFrame = YES;
        fourthFrameTime = frameTime;
        if (firstFrameCheckDisabled)
        {
            hasReceivedFirstFrame = YES;
        }
        if (secondFrameCheckDisabled)
        {
            hasReceivedSecondFrame = YES;
        }
        if (thirdFrameCheckDisabled)
        {
            hasReceivedThirdFrame = YES;
        }

        if (!CMTIME_IS_INDEFINITE(frameTime))
        {
            if CMTIME_IS_INDEFINITE(firstFrameTime)
            {
                updatedMovieFrameOppositeStillImage = YES;
            }
        }
    }

    // || (hasReceivedFirstFrame && secondFrameCheckDisabled) || (hasReceivedSecondFrame && firstFrameCheckDisabled)
    // Render once all four inputs have received a frame (or a disabled check has stood in for one), or when a movie frame updates opposite still images
    if ((hasReceivedFirstFrame && hasReceivedSecondFrame && hasReceivedThirdFrame && hasReceivedFourthFrame) || updatedMovieFrameOppositeStillImage)
    {
        static const GLfloat imageVertices[] = {
            -1.0f, -1.0f,
            1.0f, -1.0f,
            -1.0f,  1.0f,
            1.0f,  1.0f,
        };

        [self renderToTextureWithVertices:imageVertices textureCoordinates:[[self class] textureCoordinatesForRotation:inputRotation]];

        [self informTargetsAboutNewFrameAtTime:frameTime];

        hasReceivedFirstFrame = NO;
        hasReceivedSecondFrame = NO;
        hasReceivedThirdFrame = NO;
        hasReceivedFourthFrame = NO;
    }
}

@end
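
The ...FrameCheckDisabled flags matter when the inputs update at different rates. Below is a sketch of the typical case (assumed setup; fourInputFilter is a subclass instance created as shown earlier): live camera video on slot 0 blended with three still overlays. The stills deliver exactly one frame each, so their checks are disabled and every new camera frame can trigger a render on its own. disableSecondFrameCheck and disableThirdFrameCheck are inherited from GPUImageTwoInputFilter and GPUImageThreeInputFilter.

GPUImageVideoCamera *camera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset1280x720
                                        cameraPosition:AVCaptureDevicePositionBack];
GPUImagePicture *overlay1 = [[GPUImagePicture alloc] initWithImage:stillImage1];
GPUImagePicture *overlay2 = [[GPUImagePicture alloc] initWithImage:stillImage2];
GPUImagePicture *overlay3 = [[GPUImagePicture alloc] initWithImage:stillImage3];

[camera addTarget:fourInputFilter];   // slot 0: video
[overlay1 addTarget:fourInputFilter]; // slot 1: still
[overlay2 addTarget:fourInputFilter]; // slot 2: still
[overlay3 addTarget:fourInputFilter]; // slot 3: still

// The stills never send another frame, so stop waiting for them:
[fourInputFilter disableSecondFrameCheck];
[fourInputFilter disableThirdFrameCheck];
[fourInputFilter disableFourthFrameCheck];

[overlay1 processImage];
[overlay2 processImage];
[overlay3 processImage];
[camera startCameraCapture];
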
GPUImageFourInputFilter on GitHub