Photogrammetric Techniques for 3D Reconstruction of Archeological Artifacts
02 Jun 2014

Using methods of Visual Structure from Motion (VSFM), we've started developing an automated 3D modeling setup within the MNSU Archeology lab for very little cost.
### Hardware and Software
- Nikon D5200
- Arduino Uno
- Stepper motor
- Motor Shield v2
- Linux
- Python 2.7
- MeshLab
- VisualSFM
- gphoto2
What we essentially needed was a system where all we had to do was place an object on a turntable, click run, and get a 3D model as output.
## The Automated Turntable

The turntable we created programmatically takes an image every time it turns. Using the Arduino Uno, a Motor Shield v2, and a stepper motor, we built a turntable that runs off a simple C sketch. The sketch turns the stepper motor in microsteps, then sends a signal from the Arduino to a Linux-based computer over the /dev/ttyACM0 serial port.
Below is the C sketch used to run the stepper motor; the includes and setup shown here assume the Adafruit Motor Shield v2 library.
```
#include <Wire.h>
#include <Adafruit_MotorShield.h>

Adafruit_MotorShield AFMS = Adafruit_MotorShield();
Adafruit_StepperMotor *myMotor = AFMS.getStepper(200, 2); // 200-step motor on port 2

void setup() {
  Serial.begin(9600);
  AFMS.begin();
  myMotor->setSpeed(10); // 10 rpm
}

void loop() {
  Serial.print("2");                    // tell the computer to capture a photo
  delay(2000);                          // give the camera time to fire
  myMotor->step(8, FORWARD, MICROSTEP); // advance the plate
  Serial.print("2");
}
```
Then, from the Linux machine, we can listen on the /dev/ttyACM0 serial port:
```
>>> import serial
>>> ser = serial.Serial('/dev/ttyACM0', 9600)
>>> print ser.read()
```
Using either the subprocess or the os module, we pass commands to the command-line terminal from within the Python script.
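For example, the gphoto2 trigger used throughout this post can be issued either way; a minimal sketch:

```
import os
import subprocess

# Two equivalent ways to shell out to gphoto2:
os.system('gphoto2 --capture-image-and-download')
subprocess.call(['gphoto2', '--capture-image-and-download'])
```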
Using a while loop, we tell the camera to capture a photo every time the script reads a specific value from the serial port.
```
import os
import serial

ser = serial.Serial('/dev/ttyACM0', 9600)
trigger = 'gphoto2 --capture-image-and-download'

while True:
    if ser.read() == "2":  # "2" is the Arduino's "plate has turned" signal
        os.system(trigger)
```
With control over the motor, and with Python reading the Arduino's signal to take an image, we set up a user prompt asking for a catalog number and how many images to take. With this setup, the best results have come from around 25-30 photos.
In the full script below, the user is prompted for a catalog number and the number of images to take. That input then controls the number of iterations through the while loop by counting down to zero.
```
import os
import serial

base_folder = "/artifacts"
catalog_number = str(raw_input('Please enter catalog number (e.g. 2013.2.5): '))
number_of_photos = int(raw_input('# of photos to take: '))
total_photos = number_of_photos

def trigger_capture(output_folder, filename, file_number, file_extension):
    output_filename = os.path.join(output_folder, filename + '_' + file_number + file_extension)
    trigger = 'gphoto2 --capture-image-and-download --filename %s' % output_filename
    os.system(trigger)
    print output_filename, 'captured!'

ser = serial.Serial('/dev/ttyACM0', 9600)
filename_number = 1

while number_of_photos > 0:
    print "waiting for serial input on port ACM0..."
    if ser.read() == "2":  # the Arduino signals "2" after each turn
        print "Serial input detected..."
        trigger_capture(base_folder, catalog_number, str(filename_number), '.jpg')
        filename_number += 1
        number_of_photos -= 1
        print "%s photos remaining of %s" % (number_of_photos, total_photos)
    else:
        print "unknown serial input"

ser.write("3")  # tell the Arduino the run is finished
```
## Visual Structure From Motion

The algorithm behind VSFM relies on matching identifiable features across images to estimate the three-dimensional distances between those features. To help VSFM recognize the motion of each artifact, I've made a mat for the turntable plate with unique symbols on it, helping the program detect the relative positions from each angle.
After inspecting the output, we make sure that VSFM has recognized only one complete model rather than two or more separate ones.
Since VSFM has a command-line utility, we added it to the Python script to automate the creation of a dense reconstruction using CMVS.
CMVS is a dense reconstruction method that builds on the structure-from-motion output to create a very detailed point cloud.
Using the same 25-30 images we took initially, CMVS is called from Python on the command line to create a point cloud.
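A minimal sketch of that call, assuming the VisualSFM binary is on the PATH (the sfm+pmvs switch runs matching, sparse reconstruction, and the CMVS/PMVS dense step in one pass; paths are illustrative):

```
import subprocess

image_folder = '/artifacts'            # folder of turntable photos
output_model = '/artifacts/model.nvm'  # sparse model; dense output lands alongside it

subprocess.call(['VisualSFM', 'sfm+pmvs', image_folder, output_model])
```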
Using this point cloud, we load the points into MeshLab, where they are treated as vertices. A surface is then created between the vertices through Poisson surface reconstruction.
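This step can also be scripted: MeshLab ships a command-line tool, meshlabserver, that applies a saved filter script to an input mesh. A sketch, assuming a poisson.mlx filter script exported from the MeshLab GUI:

```
import subprocess

# poisson.mlx is a filter script saved from MeshLab's filter-script
# dialog (hypothetical filename); paths are illustrative
subprocess.call(['meshlabserver',
                 '-i', '/artifacts/model.ply',  # dense point cloud from CMVS
                 '-o', '/artifacts/mesh.ply',   # reconstructed surface
                 '-s', 'poisson.mlx'])
```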
Applying a texture to this generated surface is also done through MeshLab, using parameterization from the original images we took.
## What's Next