iOS App Dev: Image Filter and Analysis with Core Image

Whenever you plan to create an app or service related to image or photo editing, Core Image is the framework to rely on. Let's talk about the Core Image concept first.

Core Image

 

Core Image is a framework that provides support for image processing and analysis. It operates on images from Core Graphics, Core Video, and Image I/O, using either a GPU or CPU rendering path. As a developer, you can work with a high-level framework like Core Image without knowing the low-level processing details of OpenGL, Metal, the GPU, or even GCD multicore processing.

 

[Figure: the interaction of Core Image with running apps and the operating systems.]

Overall, Core Image provides developers with features such as:

 

  1. Access to built-in image processing filters
  2. Feature detection capability
  3. Support for automatic image enhancement
  4. The ability to chain multiple filters together to create custom effects (see the sketch after this list)
  5. Support for creating custom filters that run on a GPU
  6. Feedback-based image processing capabilities
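
To make the filter-chaining idea (point 4) concrete, here is a minimal sketch that feeds a sepia filter's output into a vignette filter. The filter names are built in; the helper function and its parameter values are just illustrative:


import CoreImage
import UIKit

// A minimal sketch of chaining two built-in filters. Core Image defers
// rendering, so chaining only wires the filter "recipes" together.
func sepiaWithVignette(_ input: UIImage) -> CIImage? {
    guard let ciInput = CIImage(image: input),
          let sepia = CIFilter(name: "CISepiaTone"),
          let vignette = CIFilter(name: "CIVignette") else { return nil }

    sepia.setValue(ciInput, forKey: kCIInputImageKey)
    sepia.setValue(0.8, forKey: kCIInputIntensityKey)

    // The first filter's output becomes the second filter's input.
    vignette.setValue(sepia.outputImage, forKey: kCIInputImageKey)
    vignette.setValue(1.0, forKey: kCIInputIntensityKey)

    return vignette.outputImage
}


Nothing is rendered until the resulting CIImage is drawn through a CIContext, which is what makes chaining cheap.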

 

While using Core Image, you will mainly work with three basic classes from this framework:

 

  1. CIImage: A representation of an image to be processed or produced by Core Image filters.

  2. CIFilter: An image processor that produces an image by manipulating one or more input images or by generating new image data.

  3. CIContext: An evaluation context for rendering image processing results and performing image analysis.
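
As a rough sketch of how these three classes fit together in a single pipeline (the asset name "image" and the filter choice are placeholders):


import CoreImage
import UIKit

// CIImage -> CIFilter -> CIContext: the basic Core Image pipeline.
let context = CIContext(options: nil)                    // CIContext
if let uiImage = UIImage(named: "image"),
   let ciImage = CIImage(image: uiImage),                // CIImage
   let filter = CIFilter(name: "CIPhotoEffectMono") {    // CIFilter
    filter.setValue(ciImage, forKey: kCIInputImageKey)
    if let output = filter.outputImage,
       let cgImage = context.createCGImage(output, from: output.extent) {
        // Use the result, for example by assigning it to an image view.
        let result = UIImage(cgImage: cgImage)
        _ = result
    }
}


We will see this same pattern in the filtering example below.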

 

Core Image Filter

Let's implement built-in image filtering using the Core Image framework. Assume that you have a running Xcode project and that one of the storyboard scenes contains an image view and a couple of buttons named after image filters, say Sepia Tone, Comic Effect, and/or Invert Color. You may apply other effects as well; the list of image effects provided by the Core Image framework can be accessed through this link.

Core Image Filters
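
Besides the documentation, you can also list the available filters at runtime. A small sketch using CIFilter's built-in category query:


import CoreImage

// Print the names of all built-in Core Image filters.
for name in CIFilter.filterNames(inCategory: kCICategoryBuiltIn) {
    print(name)
}


This is handy for discovering filter names such as "CISepiaTone" or "CIComicEffect" without leaving Xcode.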

A common action for all these buttons can look like this:


@IBAction func filterButtonTapped(_ sender: UIButton) {
    // Pass the tapped button's title on to pick the matching filter.
    applyEffect(title: sender.currentTitle ?? "")
}


And the implementation of the applyEffect(title:) function would be:


func applyEffect(title: String) {
    let image = UIImage(named: "image")!
    let context = CIContext(options: nil)

    // Map the button title to a built-in Core Image filter name.
    let filterName: String
    switch title {
    case "Sepia Tone": filterName = "CISepiaTone"
    case "Comic Effect": filterName = "CIComicEffect"
    default: filterName = "CIColorInvert"
    }

    if let filter = CIFilter(name: filterName) {
        // CIFilter works on CIImage, so convert the UIImage first.
        let workingImage = CIImage(image: image)
        filter.setValue(workingImage, forKey: kCIInputImageKey)

        if filterName == "CISepiaTone" {
            filter.setValue(0.5, forKey: kCIInputIntensityKey) // Intensity range: 0 to 1
        }

        // Render the filter's output and convert it back to a UIImage.
        if let output = filter.outputImage,
           let cgImg = context.createCGImage(output, from: output.extent) {
            let changedImage = UIImage(cgImage: cgImg)
            imageView.image = changedImage
        }
    }
}


Remember, the images we use in an image view are of type UIImage by default, but CIFilter works on CIImage. Therefore, we convert the image into a CIImage named “workingImage”, assign it to the filter as its input image, and, for the sepia filter, set the intensity of the effect.

Now it is time to render the image with the effect applied, using the CIContext called “context”. With this context, we create a CGImage from the CIFilter’s output image. Once that succeeds, we convert the CGImage, here “cgImg”, back to a UIImage, here “changedImage”, and assign it to the image view.
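
One practical note: creating a CIContext is relatively expensive, so in a real app you would typically create it once and reuse it instead of building a new one on every call. A minimal sketch, with a hypothetical view controller name:


import UIKit

class FilterViewController: UIViewController {
    @IBOutlet weak var imageView: UIImageView!

    // Create the context once and reuse it for every filter application.
    let context = CIContext(options: nil)
}


With this in place, applyEffect(title:) can drop its local context and use the shared property.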

That’s it. Run the app in the simulator and tap the desired filter button to see the effect, such as the Comic effect, applied to the image.

Core Image Analysis

Core Image also offers image attribute detection features, which are helpful for analyzing a particular image. For example, you can check whether an image contains a face and, if so, find the position of the face, the mouth, the eyes, and so on, or check whether the image contains text. Apple has a document on the different CIDetector types; I urge you to go through it via the link below.

Core Image Analysis

Let's implement this concept. Inside a new button action, you can add the code below.


@IBAction func imageAnalysis(_ sender: Any) {
    var msg = ""

    if let inputImage = UIImage(named: "steve"),
       let ciImage = CIImage(image: inputImage) {

        let options = [CIDetectorAccuracy: CIDetectorAccuracyHigh]

        // Detect a face and its attributes.
        let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: options)!
        let faces = faceDetector.features(in: ciImage)

        if let face = faces.first as? CIFaceFeature {
            msg += "Found face at: \(face.bounds)\n"

            if face.hasLeftEyePosition {
                msg += "Found left eye at: \(face.leftEyePosition)\n"
            }

            if face.hasRightEyePosition {
                msg += "Found right eye at: \(face.rightEyePosition)\n"
            }

            if face.hasMouthPosition {
                msg += "Found mouth at: \(face.mouthPosition)\n"
            }
        }

        // Detect text regions.
        let textDetector = CIDetector(ofType: CIDetectorTypeText, context: nil, options: options)!
        let texts = textDetector.features(in: ciImage)

        if let text = texts.first as? CITextFeature {
            msg += "Found text at: \(text.bounds)"
        }
    }

    // Display the collected report as an alert.
    let alert = UIAlertController(title: "Image Analysis", message: msg, preferredStyle: .alert)
    let action = UIAlertAction(title: "Done", style: .cancel, handler: nil)
    alert.addAction(action)

    present(alert, animated: true, completion: nil)
}


There are two parts to this code: one detects a face along with its attributes, and the other detects text. As mentioned earlier, Core Image work is done on a CIImage, so we convert the UIImage to a CIImage for processing. We use CIDetectorTypeFace for face analysis and CIDetectorTypeText for text analysis. The collected report is then displayed as an alert view.
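
The face detector also accepts per-call options. As a small extension of the snippet above (it assumes the faceDetector and ciImage already defined there), passing CIDetectorSmile and CIDetectorEyeBlink lets you query smile and blink state through CIFaceFeature:


// Ask the detector to also evaluate smile and eye-blink state.
let faceOptions: [String: Any] = [CIDetectorSmile: true, CIDetectorEyeBlink: true]
let facesWithState = faceDetector.features(in: ciImage, options: faceOptions)

if let face = facesWithState.first as? CIFaceFeature {
    print("Smiling: \(face.hasSmile)")
    print("Eyes closed: left \(face.leftEyeClosed), right \(face.rightEyeClosed)")
}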


Summary

Try the different filter and analysis options available in Apple's developer documentation.


You can access the sample project from the GitHub repository: Core Image Example

Happy coding!

#apple #iOS #macOS #swift #app #development #custom #framework

