
Deploy CoreML Models on the Server with Vapor



Drew Althage

9 min read · Nov 7, 2023

Get the benefits of Apple’s ML tools server-side.


SwiftUI client showing image classification results

Recently, at Sovrn, we had an AI Hackathon where we were encouraged to experiment with anything related to machine learning. The Hackathon yielded some fantastic projects from across the company: everything from SQL query generators to chatbots that can answer questions about our products, among other incredible work. I thought this would be a great opportunity to learn more about Apple’s ML tools and maybe even build something with real business value.

A few of my colleagues and I teamed up to play with CreateML and CoreML to see if we could integrate some ML functionality into our iOS app. We got a model trained and integrated into our app in several hours, which was pretty amazing. But we quickly realized that we had a few problems to solve before we could actually ship this thing.

- The model was hefty. It was about 50MB. That’s a lot of space to take up in our app bundle.
- We wanted to update the model without releasing a new app version.
- We wanted to use the model in the web browser as well.

We didn’t have time to solve all of these problems. But the other day I was exploring the Vapor web framework and the thought hit me, “Why not deploy CoreML models on the server?”

Apple provides a few pre-trained models, so today we’ll deploy an image classification model on the server behind a REST API with Vapor and create a SwiftUI client to consume it.

Foreword

This prototype is just that, a prototype. It’s not meant to be a production-ready solution. It’s meant to be a proof of concept. There will be warnings in the console, and the code won’t be very clean, but it will work and hopefully get your wheels turning.

If you want to skip all this, or if you do want to follow along, you can find the source code for this project on GitHub.

Okay, disclaimers over. Let’s get started!

Requirements

- Xcode 15
- macOS 14
- Homebrew
- Apple Developer Account + Physical Device for testing

Getting Started

First, create a new directory that will house our Xcode workspace. We’ll call it coreml-web-api.

`
cd ~/Desktop && mkdir coreml-web-api && cd coreml-web-api
`

Now let's install Vapor and bootstrap a brand new server. See the docs for more details.

`
brew install vapor
vapor new server -n
open Package.swift
`

We want our users to be able to upload images for classification, so add a new route called classify that supports this. In server/Sources/App/routes.swift, clear out all that generated boilerplate and add in the following:

`
import CoreImage
import Vapor

func routes(_ app: Application) throws {
    app.post("classify") { req -> [ClassifierResult] in
        let classificationReq = try req.content.decode(ClassificationRequest.self)
        let imageBuffer = classificationReq.file.data
        guard let fileData = imageBuffer.getData(at: imageBuffer.readerIndex, length: imageBuffer.readableBytes),
              let ciImage = CIImage(data: fileData)
        else {
            throw Errors.badImageData
        }

        let classifier = Classifier() // we'll add this in a sec

        return try classifier.classify(image: ciImage)
    }
}

enum Errors: Error {
    case badImageData // or whatever
}

struct ClassificationRequest: Content {
    var file: File
}
`

Also, bump up the maximum file size allowed for uploads in configure.swift:

`
import Vapor

// configures your application
public func configure(_ app: Application) async throws {
    app.routes.defaultMaxBodySize = "10mb"

    try routes(app)
}
`

Alright, now let's write up a Classifier API. First, head over to Apple’s ML page to download a pre-trained model of your choosing. In this demo, I’m using the Resnet50 model. We’ll add this to the package in just a moment.

Add a new file called Classifier and drop in the following:

`
import CoreImage
import Vapor
import Vision

struct Classifier {
    func classify(image: CIImage) throws -> [ClassifierResult] {
        let url = Bundle.module.url(forResource: "Resnet50", withExtension: "mlmodelc")!
        guard let model = try? VNCoreMLModel(for: Resnet50(contentsOf: url, configuration: MLModelConfiguration()).model) else {
            throw Errors.unableToLoadMLModel
        }

        let request = VNCoreMLRequest(model: model)
        let handler = VNImageRequestHandler(ciImage: image)

        try? handler.perform([request])

        guard let results = request.results as? [VNClassificationObservation] else {
            throw Errors.noResults
        }

        return results.map { ClassifierResult(label: $0.identifier, confidence: $0.confidence) }
    }

    enum Errors: Error {
        case unableToLoadMLModel
        case noResults
    }
}

struct ClassifierResult: Encodable, Content {
    var label: String
    var confidence: Float
}
`

Let’s break this down.

First, we load the model. Adding a CoreML model to a package is not super straightforward. We need to compile the .mlmodel ourselves and add some files to Sources/. We’ll go over that in a moment, but this wonkiness explains why loading the model looks slightly different from adding one to a standard Xcode project.

Once the model is loaded, we prepare the request and the request handler; then we do the classification. To send the results as JSON to the client, we need to remap the results to a structure that conforms to Encodable and Content.
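As a quick standalone illustration of the JSON shape those conformances produce, here’s a Foundation-only sketch using a mirror of the server’s ClassifierResult type (the sample label and confidence are made up):

```swift
import Foundation

// Foundation-only mirror of the server's ClassifierResult, just to
// show the JSON the /classify route sends back. Codable gives us the
// same Encodable conformance the server type uses.
struct ClassifierResult: Codable {
    var label: String
    var confidence: Float
}

// A hypothetical result like the ones Resnet50 might return
let results = [ClassifierResult(label: "golden retriever", confidence: 0.92)]

let encoder = JSONEncoder()
encoder.outputFormatting = [.sortedKeys]
let json = String(data: try! encoder.encode(results), encoding: .utf8)!
print(json)
```

The client we build later decodes exactly this array shape back into its own `ClassifierResult` structs.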

Adding the Model to the Package

This part definitely took me the longest to figure out. Unfortunately, this step is pretty manual; we can’t just drag and drop the model into the project. So, at the root of the server package, add a new folder called MLModelSource and add the Resnet50.mlmodel file there. Create another folder called Resources at server/Sources/App/Resources/.
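If you prefer the terminal, both folders can be created from the workspace root in one go (assuming the layout described above):

```shell
# From ~/Desktop/coreml-web-api/
mkdir -p server/MLModelSource
mkdir -p server/Sources/App/Resources
```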

Now, we need to compile the model, add the Swift class to sources, and include the .mlmodelc in the package bundle. The compilation steps are repetitive so we’ll place them in a Makefile target. In the project root, create a Makefile:

`
cd ~/Desktop/coreml-web-api/
touch Makefile
`

And add a compile_ml_model target:

`
compile_ml_model:
	cd server/MLModelSource && \
	xcrun coremlcompiler compile Resnet50.mlmodel ../Sources/App/Resources && \
	xcrun coremlcompiler generate Resnet50.mlmodel ../Sources/App/Resources --language Swift
`

Next, add this to the executable target in the Package.swift file:

`
resources: [
    .copy("Resources/Resnet50.mlmodelc"),
]
`

The target should look like this:

`
.executableTarget(
    name: "App",
    dependencies: [
        .product(name: "Vapor", package: "vapor"),
    ],
    resources: [
        .copy("Resources/Resnet50.mlmodelc"),
    ]
),
`

Okay, now from the project root, run the compile_ml_model target:

`
make compile_ml_model
`

Awesome!!! Now, we have an amazing server that supports classifying uploaded images using the Resnet50 model. Before we move on to creating the client, we need to adjust the App scheme to make the server available to a physical device on your network.

Open up the scheme editor, and add serve --hostname 0.0.0.0 to the run arguments.
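If you’d rather skip the scheme editor, the same flag can be passed on the command line (assuming the Vapor template’s default executable name, App):

```shell
# From the server/ directory; 0.0.0.0 binds Vapor to all interfaces
# so other devices on your network can reach it
swift run App serve --hostname 0.0.0.0
```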


Sweet. Now, we’ll create a client to do the uploading.

iOS Client

OK, in Xcode go to File -> New -> Project and add an iOS app to the workspace. We only need SwiftUI, no tests or SwiftData. I’m giving mine a really clever name of CoreMLWebClient … poetic.

Great. Now, let's do a little config work. Since we’re going to be using the camera, we need to update the Info.plist with the Privacy - Camera Usage Description key.


Nice! In our client, we want to give users the option of using the camera or selecting from the photo library. Create a new file called ImagePicker.swift and paste in the following:

`
import SwiftUI

struct ImagePicker: UIViewControllerRepresentable {
    @Binding var sourceType: UIImagePickerController.SourceType
    @Environment(\.presentationMode) private var presentationMode
    var completion: (UIImage) -> Void

    func makeUIViewController(context: Context) -> some UIViewController {
        let picker = UIImagePickerController()
        picker.sourceType = sourceType
        picker.delegate = context.coordinator
        return picker
    }

    func updateUIViewController(_: UIViewControllerType, context _: Context) {}

    func makeCoordinator() -> Coordinator {
        Coordinator(self)
    }

    class Coordinator: NSObject, UINavigationControllerDelegate, UIImagePickerControllerDelegate {
        var parent: ImagePicker

        init(_ parent: ImagePicker) {
            self.parent = parent
        }

        func imagePickerController(_: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
            if let image = info[.originalImage] as? UIImage {
                parent.completion(image)
            }
            parent.presentationMode.wrappedValue.dismiss()
        }
    }
}
`

We’ll use the sourceType binding to switch between the camera and the library.

Now, we’ll add a Classifier to handle the image uploading and return the classification results. I’m jumping around a little, but all this will come together in a moment. Create a new file called Classifier.swift and add this in:

`
import Foundation
import UIKit

struct Classifier {
    /// Replace this with your dev machine's IP address
    /// for testing with a physical device.
    private let host = "localhost"

    func classify(image: UIImage) async throws -> [ClassifierResult] {
        // Ensure the URL is valid
        guard let uploadURL = URL(string: "http://\(host):8080/classify") else {
            throw URLError(.badURL)
        }

        // Convert the image to JPEG data
        guard let imageData = image.jpegData(compressionQuality: 1.0) else {
            throw URLError(.unknown)
        }

        // Generate boundary string using a unique per-app string
        let boundary = "Boundary-\(UUID().uuidString)"

        // Create a URLRequest object
        var request = URLRequest(url: uploadURL)
        request.httpMethod = "POST"
        request.setValue("multipart/form-data; boundary=\(boundary)", forHTTPHeaderField: "Content-Type")

        // Create the multipart form body; upload(for:from:) sends it as the
        // request body, so there's no need to also set request.httpBody
        let body = createMultipartFormData(boundary: boundary, data: imageData, fileName: "photo.jpg")

        // Perform the upload task
        let (data, response) = try await URLSession.shared.upload(for: request, from: body)

        // Check the response and throw an error if it's not an HTTPURLResponse or the status code is not 200
        guard let httpResponse = response as? HTTPURLResponse, httpResponse.statusCode == 200 else {
            throw URLError(.badServerResponse)
        }

        // Decode the data into an array of ClassifierResult
        return try JSONDecoder().decode([ClassifierResult].self, from: data)
    }

    /// Creates a multipart/form-data body with the image data.
    /// - Parameters:
    ///   - boundary: The boundary string separating parts of the data.
    ///   - data: The image data to be included in the request.
    ///   - fileName: The filename for the image data in the form-data.
    /// - Returns: A Data object representing the multipart/form-data body.
    private func createMultipartFormData(boundary: String, data: Data, fileName: String) -> Data {
        var body = Data()

        // Add the image data to the raw HTTP request data
        body.append("--\(boundary)\r\n")
        body.append("Content-Disposition: form-data; name=\"file\"; filename=\"\(fileName)\"\r\n")
        body.append("Content-Type: image/jpeg\r\n\r\n")
        body.append(data)
        body.append("\r\n")

        // Add the closing boundary
        body.append("--\(boundary)--\r\n")
        return body
    }

    struct ClassifierResult: Decodable, Identifiable {
        let id = UUID()
        var label: String
        var confidence: Float
    }
}

// Helper to append string data to a Data object
private extension Data {
    mutating func append(_ string: String) {
        if let data = string.data(using: .utf8) {
            append(data)
        }
    }
}
`
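To get a feel for what createMultipartFormData actually puts on the wire, here’s a Foundation-only sketch of the body layout with a fixed boundary and placeholder bytes (everything here is illustrative; the part name "file" is what the server’s ClassificationRequest decodes):

```swift
import Foundation

// Illustrative multipart/form-data body with a fixed boundary and
// placeholder image bytes. The name="file" field must match the
// `file` property on the server's ClassificationRequest.
let boundary = "Boundary-Example"
var body = ""
body += "--\(boundary)\r\n"
body += "Content-Disposition: form-data; name=\"file\"; filename=\"photo.jpg\"\r\n"
body += "Content-Type: image/jpeg\r\n\r\n"
body += "<jpeg bytes>\r\n"   // the real code appends raw Data here
body += "--\(boundary)--\r\n"
print(body)
```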

Great! Now on to the UI. Back in ContentView, let's add an enum called RequestStatus to communicate to the user what is going on; this is an easy UX win.

`
enum RequestStatus {
    case loading, success, idle, error
}
`

Now, we’ll create a view model for ContentView that uses the newly created classifier to upload a photo to the server and share the results with the UI. This is also going to use the new Observation framework ⭐.

`
extension ContentView {
    @Observable
    class ViewModel {
        var requestStatus: RequestStatus = .idle
        var results: [Classifier.ClassifierResult] = []

        private var classifier = Classifier()

        func upload(_ image: UIImage) {
            Task { @MainActor in
                do {
                    requestStatus = .loading
                    results = try await classifier.classify(image: image)
                    requestStatus = .success
                } catch {
                    print(error.localizedDescription)
                    requestStatus = .error
                }
            }
        }
    }
}
`

Now we need to add some state. This stuff should probably go in the view model, but for now, I’m going to add these as member vars to ContentView:

`
// ContentView.swift
@State private var selectedImage: UIImage?
@State private var isImagePickerPresented = false
@State private var viewModel = ViewModel()
@State private var sourceType: UIImagePickerController.SourceType = .camera
`

Alright, now we’ll do some more UI building. Replace the body variable with this:

`
var body: some View {
    VStack(spacing: 20) {
        HStack(spacing: 20) {
            if let image = selectedImage {
                VStack {
                    Image(uiImage: image)
                        .resizable()
                }
            }
            ForEach(viewModel.results, id: \.id) { result in
                VStack(alignment: .leading) {
                    Text(result.label)
                        .font(.callout)
                    Text(formatAsPercentage(result.confidence))
                        .font(.caption2)
                    Divider()
                }
            }
        }
        HStack(spacing: 20) {
            if viewModel.requestStatus == .loading {
                ProgressView()
            }
            actionButton()
        }
    }
    .sheet(isPresented: $isImagePickerPresented) {
        ImagePicker(sourceType: $sourceType) { image in
            self.selectedImage = image
        }
    }
}
`

And to address those compiler errors, add two new functions:

`
// ContentView.swift
@ViewBuilder
private func actionButton() -> some View {
    if let image = selectedImage {
        Button("Upload Image") {
            viewModel.upload(image)
        }.buttonStyle(.borderedProminent)
    } else {
        HStack(spacing: 20) {
            Button("Camera") {
                sourceType = .camera
                isImagePickerPresented = true
            }
            Button("Photo Library") {
                sourceType = .photoLibrary
                isImagePickerPresented = true
            }
        }
    }
}

// and

private func formatAsPercentage(_ value: Float) -> String {
    String(format: "%.2f%%", value * 100)
}
`

Heck yeah, you guys. If everything has gone according to plan, you should now be able to create/select a picture, upload it to the server, classify the dominant object in the picture, and then display the classification results in the UI!

I hope this project inspires you and gets the gears turning for your next ML project.

Cheers!

---

[Original source](https://medium.com/better-programming/deploy-coreml-models-on-the-server-with-vapor-48809a853fae?source=rss----d0b105d10f0a---4)
