In part 1 we looked at a basic non-AR setup where we added an environment map and hooked into the render loop to create a simple camera animation.

This time we’ll look at two things that aren’t documented very well:

  • Creating a basic transform animation that loops.
  • Adding gestures to an entity and detecting when those gestures start and end.

Both are sketched in code right after the basic setup below.

Basic Setup

Let's start by creating a setup similar to the one in the first tutorial:

// Create a non-AR view and skip the automatic session
// configuration, just like in part 1.
arView = ARView(frame: view.frame,
                cameraMode: .nonAR,
                automaticallyConfigureSession: false)
view.addSubview(arView)

// Use the same environment map for image-based lighting
// and for the skybox background.
let skyboxResource = try! EnvironmentResource.load(named: "decor_shop_2k")
arView.environment.lighting.resource = skyboxResource
arView.environment.background = .skybox(skyboxResource)

let cubeMaterial = SimpleMaterial(color: .blue…
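To preview where we're headed, here's a rough sketch of both techniques as methods on the same view controller. The names cubeEntity, playMoveAnimation and handleGesture, as well as the anchor setup, are illustrative assumptions; the essential pieces are move(to:relativeTo:duration:), a subscription to AnimationEvents.PlaybackCompleted that restarts the move, and installGestures(_:for:):

import UIKit
import RealityKit
import Combine

// Continues inside the same UIViewController as the setup above.
var cubeEntity: ModelEntity!
var animationSubscription: Cancellable?   // keep the subscription alive
var movingUp = true

func setUpCubeAndGestures() {
    cubeEntity = ModelEntity(mesh: .generateBox(size: 0.5),
                             materials: [SimpleMaterial(color: .blue, isMetallic: false)])
    // The built-in gestures require collision shapes (HasCollision).
    cubeEntity.generateCollisionShapes(recursive: true)

    let anchor = AnchorEntity(world: .zero)
    anchor.addChild(cubeEntity)
    arView.scene.addAnchor(anchor)

    // Loop the animation: every time playback completes, start it again.
    animationSubscription = arView.scene.subscribe(
        to: AnimationEvents.PlaybackCompleted.self,
        on: cubeEntity
    ) { [weak self] _ in
        self?.playMoveAnimation()
    }
    playMoveAnimation()

    // Install the built-in gestures and observe their state changes.
    let recognizers = arView.installGestures([.translation, .rotation, .scale],
                                             for: cubeEntity)
    for recognizer in recognizers {
        recognizer.addTarget(self, action: #selector(handleGesture(_:)))
    }
}

func playMoveAnimation() {
    // Alternate between two target positions so each completed move
    // triggers the next one via the subscription above.
    var transform = cubeEntity.transform
    transform.translation.y = movingUp ? 0.3 : 0.0
    movingUp.toggle()
    cubeEntity.move(to: transform, relativeTo: cubeEntity.parent, duration: 2.0)
}

@objc func handleGesture(_ sender: UIGestureRecognizer) {
    switch sender.state {
    case .began:
        print("gesture started")
    case .ended, .cancelled:
        print("gesture ended")
    default:
        break
    }
}

Note that the subscription has to be stored in a property: subscribe(to:on:) returns a Cancellable, and the handler stops firing as soon as that value is deallocated.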


We can safely say that SceneKit is being abandoned in favor of RealityKit. Why? In the past two years no new features have been added, serious bugs haven't been fixed, and SceneKit hasn't been mentioned at WWDC.

RealityKit, on the other hand, has been at the forefront. It has great potential, but documentation is scarce and some important features are missing. As of this writing there's no way to set up custom geometry and there's no support for custom shaders.

Apple engineers are actively encouraging developers to start using RealityKit instead of SceneKit. …


In this post we'll be looking at how to detect, classify, segment and occlude objects in ARKit using Core ML and the Vision framework.

We'll use two machine learning models that are available from the Apple Developer website (loading them with Vision is sketched after this list):

  • YOLOv3 to locate and classify an object
  • DeeplabV3 to segment the detected object's pixels
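Here's a rough sketch of how both models plug into the Vision framework. It assumes the downloaded .mlmodel files have been added to the project, so Xcode generates model classes named after the files; adjust YOLOv3 and DeepLabV3 to match your file names:

import Vision
import CoreML

// Sketch: wrap both Core ML models in Vision requests.
func makeRequests() throws -> (detection: VNCoreMLRequest, segmentation: VNCoreMLRequest) {
    let config = MLModelConfiguration()

    // YOLOv3 produces VNRecognizedObjectObservations: a label plus a
    // normalized bounding box for every detected object.
    let yolo = try VNCoreMLModel(for: YOLOv3(configuration: config).model)
    let detection = VNCoreMLRequest(model: yolo) { request, _ in
        guard let objects = request.results as? [VNRecognizedObjectObservation] else { return }
        for object in objects {
            print(object.labels.first?.identifier ?? "?", object.boundingBox)
        }
    }

    // DeepLabV3 produces a VNCoreMLFeatureValueObservation whose
    // MLMultiArray holds one class index per pixel.
    let deeplab = try VNCoreMLModel(for: DeepLabV3(configuration: config).model)
    let segmentation = VNCoreMLRequest(model: deeplab) { request, _ in
        guard let result = request.results?.first as? VNCoreMLFeatureValueObservation,
              let mask = result.featureValue.multiArrayValue else { return }
        print("segmentation mask shape:", mask.shape)
    }

    return (detection, segmentation)
}

Both requests can then be run against the camera image of an ARFrame using VNImageRequestHandler(cvPixelBuffer:orientation:options:).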

This example will run on devices that don't have a LiDAR sensor, so we'll look at a way to 'fake' depth in a Metal fragment shader.

Take a look at this video to see what we’ll achieve:

Let’s get started.

Preparing The DeeplabV3 Model

We’ll be using segmentation data from Deeplab as a…


In this short tutorial we'll use the Vision framework to add object detection and classification capabilities to a bare-bones ARKit project. We'll use an open source Core ML model to detect a remote control, get its bounding box center, transform its 2D image coordinates to 3D, and then create an anchor which can be used for placing objects in an AR scene.
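The 2D-to-3D step can be sketched roughly as follows, assuming the RealityKit flavor of the template with its arView; the function and anchor names are illustrative:

import ARKit
import RealityKit
import Vision

// Sketch: turn a Vision detection into an ARAnchor.
func addAnchor(for observation: VNRecognizedObjectObservation, in arView: ARView) {
    // Vision bounding boxes are normalized with the origin at the
    // bottom-left; UIKit's origin is top-left, so flip the y axis.
    // (For precise results, map through ARFrame.displayTransform
    // to account for the camera image's aspect-fill cropping.)
    let box = observation.boundingBox
    let screenPoint = CGPoint(x: box.midX * arView.bounds.width,
                              y: (1 - box.midY) * arView.bounds.height)

    // Raycast from the 2D point into the scene and anchor the first hit.
    guard let result = arView.raycast(from: screenPoint,
                                      allowing: .estimatedPlane,
                                      alignment: .any).first else { return }
    let anchor = ARAnchor(name: "detectedObject", transform: result.worldTransform)
    arView.session.add(anchor: anchor)
}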

Here’s a preview of what we’ll create:

To get started you'll need to create a new project in Xcode: File > New > Project… and then choose the "Augmented Reality App" template.

Replace the code in ViewController.swift…


SceneKit is a powerful framework with many useful features. I’ve been using it on a daily basis for the past six months and I’ve grown to love it. However, one thing that Apple hasn’t done well is providing us with extensive documentation. Using SceneKit means digging through GitHub, StackOverflow and the very few meaningful results returned by Google.

One of the things that is hard to figure out is how to perform custom Metal drawing after SceneKit has rendered a scene. Apple’s documentation reveals the following:

… to perform custom Metal or OpenGL rendering before or after SceneKit renders the…
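In practice that hook is SCNSceneRendererDelegate: implement renderer(_:didRenderScene:atTime:) and encode extra draw calls into the renderer's currentRenderCommandEncoder. Here's a minimal sketch, with the pipeline state and vertex buffer assumed to be built during setup:

import SceneKit
import Metal

final class PostSceneRenderer: NSObject, SCNSceneRendererDelegate {
    // Assumed to be created elsewhere, from a Metal library containing
    // the custom vertex and fragment functions.
    var pipelineState: MTLRenderPipelineState!
    var vertexBuffer: MTLBuffer!

    // SceneKit calls this after the scene has been rendered. The
    // current render command encoder is still open, so anything we
    // encode here draws on top of SceneKit's output.
    func renderer(_ renderer: SCNSceneRenderer,
                  didRenderScene scene: SCNScene,
                  atTime time: TimeInterval) {
        guard let encoder = renderer.currentRenderCommandEncoder else { return }
        encoder.setRenderPipelineState(pipelineState)
        encoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
        encoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: 3)
    }
}

Assign an instance to the view's delegate property and keep a strong reference to it elsewhere, since SCNView holds its delegate weakly.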

Dennis Ippel

AR/VR Developer | http://www.rozengain.com
