GPUImage Source Code Walkthrough (14)

GPUImageRawDataInput inherits from GPUImageOutput. It accepts raw pixel data as input (RGB, RGBA, BGRA, or LUMINANCE) and generates a framebuffer object from it.

  • Initializers. Construction takes a pointer to the raw data, the image size, and the pixel format and type.

    ```

    // Initialization and teardown
    - (id)initWithBytes:(GLubyte *)bytesToUpload size:(CGSize)imageSize;
    - (id)initWithBytes:(GLubyte *)bytesToUpload size:(CGSize)imageSize pixelFormat:(GPUPixelFormat)pixelFormat;
    - (id)initWithBytes:(GLubyte *)bytesToUpload size:(CGSize)imageSize pixelFormat:(GPUPixelFormat)pixelFormat type:(GPUPixelType)pixelType;

    ```

  • If the pixel format and data type are not specified, GPUPixelFormatBGRA and GPUPixelTypeUByte are used by default. Construction proceeds in three steps: 1) initialize the instance variables; 2) create a GPUImageFramebuffer object that holds only a texture; 3) bind the raw data to that texture. Every initializer ultimately calls the designated one below.

```
- (id)initWithBytes:(GLubyte *)bytesToUpload size:(CGSize)imageSize pixelFormat:(GPUPixelFormat)pixelFormat type:(GPUPixelType)pixelType;
{
    if (!(self = [super init]))
    {
        return nil;
    }

    // Semaphore that gates data updates (initial count 1)
    dataUpdateSemaphore = dispatch_semaphore_create(1);

    uploadedImageSize = imageSize;
    self.pixelFormat = pixelFormat;
    self.pixelType = pixelType;

    // Create the texture-only framebuffer and bind the raw bytes to it
    [self uploadBytes:bytesToUpload];

    return self;
}
```
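
To make construction concrete, here is a hedged usage sketch. The buffer contents and dimensions are invented for illustration; `GPUImageRawDataInput`, `GPUPixelFormatRGBA`, and `GPUPixelTypeUByte` are GPUImage's own names.

```
// Illustrative only: a 2x2 RGBA image supplied as a raw byte buffer.
GLubyte pixels[2 * 2 * 4] = {
    255, 0,   0,   255,    0,   255, 0,   255,   // red, green
    0,   0,   255, 255,    255, 255, 255, 255,   // blue, white
};

// The shorter initializers assume GPUPixelFormatBGRA / GPUPixelTypeUByte,
// so for RGBA bytes the format must be passed explicitly.
GPUImageRawDataInput *rawInput =
    [[GPUImageRawDataInput alloc] initWithBytes:pixels
                                           size:CGSizeMake(2.0, 2.0)
                                    pixelFormat:GPUPixelFormatRGBA
                                           type:GPUPixelTypeUByte];
```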
  • Other methods.

```
// Image rendering
// Upload new raw data
- (void)updateDataFromBytes:(GLubyte *)bytesToUpload size:(CGSize)imageSize;
// Process the data
- (void)processData;
- (void)processDataForTimestamp:(CMTime)frameTime;
// Size of the output texture
- (CGSize)outputImageSize;
```
  • The most important of these are the data-processing methods. They drive the response chain: the uploaded raw data is handed on to the downstream targets for processing.

```
// Replace the raw data and re-upload it
- (void)updateDataFromBytes:(GLubyte *)bytesToUpload size:(CGSize)imageSize;
{
    uploadedImageSize = imageSize;
    // Upload the new bytes to the texture
    [self uploadBytes:bytesToUpload];
}

// Process the data
- (void)processData;
{
    // Non-blocking wait: bail out if the previous update
    // is still being processed
    if (dispatch_semaphore_wait(dataUpdateSemaphore, DISPATCH_TIME_NOW) != 0)
    {
        return;
    }

    runAsynchronouslyOnVideoProcessingQueue(^{

        CGSize pixelSizeOfImage = [self outputImageSize];

        // Iterate over all targets and hand outputFramebuffer to each of them
        for (id<GPUImageInput> currentTarget in targets)
        {
            NSInteger indexOfObject = [targets indexOfObject:currentTarget];
            NSInteger textureIndexOfTarget = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];

            [currentTarget setInputSize:pixelSizeOfImage atIndex:textureIndexOfTarget];
            [currentTarget setInputFramebuffer:outputFramebuffer atIndex:textureIndexOfTarget];
            [currentTarget newFrameReadyAtTime:kCMTimeInvalid atIndex:textureIndexOfTarget];
        }

        dispatch_semaphore_signal(dataUpdateSemaphore);
    });
}
```
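
The `dispatch_semaphore_wait(..., DISPATCH_TIME_NOW)` guard implements a drop-if-busy policy: when the previous frame is still on the video-processing queue, the new call returns immediately instead of queueing up behind it. Stripped of the GPUImage specifics, the pattern looks like this (a sketch only; `frameArrived` and `processingQueue` are invented names):

```
// Created once, e.g. in -init, with an initial count of 1 ("gate open"):
// dataUpdateSemaphore = dispatch_semaphore_create(1);

- (void)frameArrived
{
    // Non-blocking wait: returns non-zero immediately while a previous
    // frame is still in flight, so this frame is simply dropped.
    if (dispatch_semaphore_wait(dataUpdateSemaphore, DISPATCH_TIME_NOW) != 0)
    {
        return;
    }

    dispatch_async(processingQueue, ^{
        // ... expensive per-frame work ...

        // Reopen the gate only after the work has finished.
        dispatch_semaphore_signal(dataUpdateSemaphore);
    });
}
```

Dropping frames rather than queueing them keeps latency bounded when the producer outruns the GPU.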

```
// Process the data with an explicit timestamp
- (void)processDataForTimestamp:(CMTime)frameTime;
{
    if (dispatch_semaphore_wait(dataUpdateSemaphore, DISPATCH_TIME_NOW) != 0)
    {
        return;
    }

    runAsynchronouslyOnVideoProcessingQueue(^{

        CGSize pixelSizeOfImage = [self outputImageSize];
        // Iterate over all targets and notify each with the given timestamp
        for (id<GPUImageInput> currentTarget in targets)
        {
            NSInteger indexOfObject = [targets indexOfObject:currentTarget];
            NSInteger textureIndexOfTarget = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];

            [currentTarget setInputSize:pixelSizeOfImage atIndex:textureIndexOfTarget];
            [currentTarget newFrameReadyAtTime:frameTime atIndex:textureIndexOfTarget];
        }

        dispatch_semaphore_signal(dataUpdateSemaphore);
    });
}
```

```
// Size of the output texture
- (CGSize)outputImageSize;
{
    return uploadedImageSize;
}
```
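
Putting the pieces together, a hedged end-to-end sketch might look like the following. The filter choice, buffer variable, and view instance are invented for illustration; `GPUImageSepiaFilter`, `GPUImageView`, and `addTarget:` are standard GPUImage API.

```
// Hypothetical wiring: raw bytes -> sepia filter -> on-screen view.
GPUImageRawDataInput *rawInput =
    [[GPUImageRawDataInput alloc] initWithBytes:frameBytes
                                           size:CGSizeMake(640.0, 480.0)];
GPUImageSepiaFilter *sepia = [[GPUImageSepiaFilter alloc] init];
[rawInput addTarget:sepia];
[sepia addTarget:gpuImageView];   // some GPUImageView already on screen

// For each new frame: replace the bytes, then drive the chain.
[rawInput updateDataFromBytes:frameBytes size:CGSizeMake(640.0, 480.0)];
[rawInput processData];
```

Because `processData` notifies every target on the video-processing queue, the call returns immediately; the rendering itself happens asynchronously.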
