Creating 3D Models from Photographs with RealityKit in Swift
RealityKit Object Capture is a feature introduced in Xcode 13 that allows you to create 3D models from photographs using a process called photogrammetry. Although it's aimed at retailers who want to enhance their online shopping experience by turning things like furniture into models for augmented reality experiences, Object Capture is incredibly easy to use for anything you may want, like 3D printing random objects from your house.
To use Object Capture, you must:
- Be running macOS 12 (this feature is not available in iOS!)
- Provide photographs of the object you want to capture
Capturing photographs of the object is not hard, although you may have to fine-tune your environment to get better results. The idea is to place your object in a well-lit spot and continuously take pictures as you move around it. If you'd like the bottom of the object to be modeled as well, flip it upside-down and take more pictures in the same environment.
Your pictures don't need any particular ordering or naming for Object Capture to work, and there's no minimum number of pictures you need to take, although you might get better results the more you take. I've found that around 30 pictures already give a nice result. Quite easy!
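Since there's no strict minimum, it can be handy to sanity-check how many pictures are actually in your input folder before kicking off a long processing run. Here's a minimal sketch using FileManager (the folder path and the helper are just placeholders for illustration):

import Foundation

// Hypothetical helper: counts image files in a folder so you can
// double-check your input before starting a photogrammetry session.
func imageCount(in folder: URL) throws -> Int {
    let imageExtensions: Set<String> = ["jpg", "jpeg", "png", "heic"]
    let contents = try FileManager.default.contentsOfDirectory(
        at: folder,
        includingPropertiesForKeys: nil
    )
    return contents.filter { imageExtensions.contains($0.pathExtension.lowercased()) }.count
}

let picturesFolder = URL(fileURLWithPath: "/Users/myUser/myPictures", isDirectory: true)
print("Found \(try imageCount(in: picturesFolder)) pictures to process.")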
Creating a Photogrammetry CLI App
To use Object Capture, start by creating a new Command Line Tool project in Xcode and adding a new class named Session:
import Foundation
import RealityKit
import Combine

final class Session {
    let inputFolder = URL(fileURLWithPath: "/Users/myUser/myPictures", isDirectory: true)
    let outputFile = URL(fileURLWithPath: "/Users/myUser/result.usdz")

    var subscriber: AnyCancellable?

    func run() throws {
    }
}
Object Capture works by creating a PhotogrammetrySession object, configuring it, passing it the folder that contains our pictures, and waiting for the result. The result comes asynchronously through Combine, so make sure to create a subscriber property like in the snippet above.
In the run() method, create a PhotogrammetrySession with the default configuration:
let configuration = PhotogrammetrySession.Configuration()

let session = try PhotogrammetrySession(
    input: inputFolder,
    configuration: configuration
)
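Note that PhotogrammetrySession's initializer throws (for instance, if it can't read the input folder). We're letting the error propagate out of run() here, but in a real app you might prefer to catch it and fail gracefully; a minimal sketch:

do {
    let session = try PhotogrammetrySession(
        input: inputFolder,
        configuration: configuration
    )
    // ...continue with the requests shown below.
} catch {
    print("Unable to create a PhotogrammetrySession: \(error)")
    exit(1)
}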
It's possible to fine-tune the final model, which we'll see further on. For now, let's use the default settings.
With a session, we should now create a request to fabricate a model from our photographs:
let request = PhotogrammetrySession.Request.modelFile(url: outputFile)
To wrap it up, we should observe the progress of the request and wait for it to end. As mentioned before, this process is done asynchronously with Combine, so we should attach a subscriber to the session's output property.
Additionally, since this is a CLI app, we need to make sure the app stays alive while the model is being created. For simplicity, I've decided to attach a semaphore to the operation:
let semaphore = DispatchSemaphore(value: 0)

subscriber = session.output.sink(receiveCompletion: { completion in
    print(completion)
    exit(0)
}, receiveValue: { output in
    switch output {
    case .processingComplete:
        print("Processing is complete.")
        semaphore.signal()
    case .requestComplete(let request, let result):
        print("Request complete.")
        print(request)
        print(result)
        semaphore.signal()
    case .requestProgress(_, let fractionComplete):
        print("Request in progress: \(fractionComplete)")
    default:
        print(output)
    }
})

try session.process(requests: [request])
semaphore.wait()
As you can see, many aspects of the process can be observed. In this case, I'm only interested in the actual progress.
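If you do want to handle more of these events explicitly instead of dumping them through the default case, you can add more cases to the switch. As a sketch, here are two more cases from PhotogrammetrySession.Output you could handle: requestError surfaces failures (and signals the semaphore so the CLI doesn't hang forever), and inputComplete tells you when all pictures have been ingested. These would go inside the same switch as above:

case .inputComplete:
    // All pictures were read; reconstruction is about to begin.
    print("Input processing complete.")
case .requestError(let request, let error):
    // The request failed; print the error and unblock the app.
    print("Request \(request) failed: \(error)")
    semaphore.signal()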
Before running the app, edit your main.swift file to call our run() method:
try! Session().run()
The process takes a while to start and may take several minutes to finish, so don't worry if you get a bunch of prints but no progress at first. Wait a bit and it will start!
Configuring PhotogrammetrySessions
To fine-tune your results, there are two aspects of Object Capture that can be configured. The first one is the detail of the output, which you can control to determine the number of polygons in the final model:
let request = PhotogrammetrySession.Request.modelFile(
    url: outputFile,
    detail: .preview
)
Detail levels range from the lower-quality preview up to the high-end full, with reduced and medium in between. Try playing with the lower-quality settings before generating a higher-quality model.
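Since process(requests:) takes an array, one convenient pattern is to ask for a quick preview and a higher-quality model in the same session. A sketch (the output paths are placeholders):

let previewRequest = PhotogrammetrySession.Request.modelFile(
    url: URL(fileURLWithPath: "/Users/myUser/preview.usdz"),
    detail: .preview
)
let fullRequest = PhotogrammetrySession.Request.modelFile(
    url: URL(fileURLWithPath: "/Users/myUser/full.usdz"),
    detail: .full
)

// Both models are reconstructed from the same pictures; each request
// produces its own .requestComplete output when it finishes.
try session.process(requests: [previewRequest, fullRequest])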
The second aspect that can be configured is the input itself:
var configuration = PhotogrammetrySession.Configuration()
configuration.sampleOverlap = .low
configuration.sampleOrdering = .unordered
configuration.featureSensitivity = .high
The configuration struct allows you to provide more information about your photographs, which may result in a better model. sampleOverlap describes how much overlap there is between neighboring photographs, sampleOrdering indicates whether or not your photographs are ordered (which can speed up the process), and featureSensitivity controls how hard RealityKit should search for features in your object, which is useful when the object doesn't have a lot of discernible structures, edges, or textures.
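Putting it together, you would build the configuration before creating the session and pass it to the initializer, exactly as in the run() method earlier. A sketch (the .sequential value assumes you shot your pictures in order around the object):

var configuration = PhotogrammetrySession.Configuration()
configuration.sampleOrdering = .sequential // Pictures were taken in order.
configuration.featureSensitivity = .high   // Object has few distinctive edges/textures.

let session = try PhotogrammetrySession(
    input: inputFolder,
    configuration: configuration
)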