I have three camera apps on the App Store under my name, and I’m tinkering with two others that haven’t reached the store yet. I thought it was time to create a framework that wraps all the camera features and opens up enough APIs so that:

  • I can easily deploy another camera app that takes photos;

  • I don’t have to interact with Apple’s AVFoundation framework which, while providing all the control capabilities, is cumbersome for an app that captures still images;

  • I can have all the device control logic in one place, and only update them once.

That’s why I created this TLCameraFramework. I’ll call it The Framework.

What’s in it?

The Framework is a wrapper that helps the developer avoid cumbersome device property checks when performing simple configurations. There are delegate methods that output CIImage objects for preview frames and for the final captured image, so that the developer can connect the images with their own logic.

Let’s take the example of locking camera exposure. There are seven steps to do this:

  1. Stop any current AVCaptureSession
  2. Find the currently active AVCaptureDevice a.k.a. a camera
  3. Check if the device supports custom exposure mode
  4. Lock the device for configuration
  5. Configure the settings with developer-provided parameters
  6. Unlock the device from configuration
  7. Restart the capture session

In each of these steps, something can go wrong: the device is not found because camera access was denied; the current device is a dual-camera matrix and doesn’t support custom exposure mode; the device is already locked for configuration by another application; the developer provided illegal parameters…
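For reference, here is roughly what those steps look like against AVFoundation directly. This is a sketch, not the Framework’s internal code: the function name, the error value, and the assumption that the caller has already found the active device (step 2) are all illustrative.

```swift
import AVFoundation

// Manually locking exposure, assuming `session` is a running AVCaptureSession
// and `device` is the currently active AVCaptureDevice (step 2 done by the caller).
func lockExposure(on session: AVCaptureSession,
                  device: AVCaptureDevice,
                  duration: CMTime,
                  iso: Float) throws {
    session.stopRunning()                                  // 1. stop the session

    guard device.isExposureModeSupported(.custom) else {   // 3. capability check
        throw NSError(domain: "CameraError", code: -1)     //    (illustrative error)
    }

    try device.lockForConfiguration()                      // 4. lock for configuration
    device.setExposureModeCustom(duration: duration,       // 5. apply the parameters
                                 iso: iso,
                                 completionHandler: nil)
    device.unlockForConfiguration()                        // 6. unlock

    session.startRunning()                                 // 7. restart the session
}
```

Note that `duration` and `iso` must fall within the ranges exposed by the device’s active format, which is yet another check the caller would normally have to make.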

Instead of asking the developer (you) to do all the safety checks, The Framework wraps everything in a switchCameraTo(_:duration:iso:completion:) method. The completion handler then reports the success or failure of the operation, along with an error if there is one.
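A call site might look like the following. The first argument and the exact parameter types are assumptions based on the method signature above; check the README for the real ones.

```swift
import AVFoundation

// Hypothetical usage of the Framework's wrapper method; `.back`,
// the CMTime value, and the completion signature are assumptions.
camera.switchCameraTo(.back,
                      duration: CMTime(value: 1, timescale: 60),
                      iso: 100) { success, error in
    if success {
        // Safe to update the capture UI here.
    } else if let error = error {
        print("Exposure configuration failed: \(error)")
    }
}
```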

Completion handlers abound in The Framework, and I’ve found them helpful for controlling important UI flows that affect the user experience.

See the README file for a quick-start guide, as well as full documentation for each of the open APIs.

Using it with SwiftUI

To use the Framework with SwiftUI, you will need to create a coordinator that holds an instance of TLCamera and publishes important UI-facing variables. The coordinator should also conform to the TLCameraDelegate protocol to receive preview images and handle the logic for captured still images.
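A minimal coordinator could be sketched like this. TLCamera and TLCameraDelegate come from the Framework, but the delegate method names below are placeholders, since the real protocol requirements are documented in the README.

```swift
import SwiftUI
import CoreImage

// Sketch of a SwiftUI-facing coordinator; delegate signatures are assumed.
final class CameraCoordinator: NSObject, ObservableObject, TLCameraDelegate {
    let camera = TLCamera()

    // UI-facing state published to SwiftUI views.
    @Published var previewImage: CIImage?
    @Published var capturedImage: CIImage?

    override init() {
        super.init()
        camera.delegate = self
    }

    // Called for each preview frame (placeholder signature).
    func camera(_ camera: TLCamera, didReceivePreviewImage image: CIImage) {
        DispatchQueue.main.async { self.previewImage = image }
    }

    // Called once a still capture completes (placeholder signature).
    func camera(_ camera: TLCamera, didCaptureStillImage image: CIImage) {
        DispatchQueue.main.async { self.capturedImage = image }
    }
}
```

Dispatching back to the main queue matters here: delegate callbacks carrying camera frames typically arrive on a background queue, while `@Published` properties drive the UI.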

There is a sample app written in SwiftUI in the Framework’s repository; you should be able to build, sign, and run it on your own device.

What’s next?

I already have plans to grow the Framework as my camera apps grow:

  • Support for Live Photo capture, probably through a new delegate function

  • Support for video capture

If there’s anything you’d like to contribute to the Framework, feel free to drop me a line or open a pull request :)