Tag Archives: photogrammetry

Scan Project: Reproducing 19th‑Century African Artefacts Through Photogrammetry and 3D Printing

This is a project to replicate 19th-century African artefacts currently held by the Royal Greenwich Heritage Trust.

The workflow is to scan, 3D print, and then authentically finish these historical items: an East African carved wooden headrest, a series of wooden hair combs, and bronze horseman sculptures. Together, these objects offer a range of material and surface qualities, from worn hardwood to aged metal, making them ideal for testing the full photogrammetry‑to‑copy workflow.

Rather than aiming to produce pristine, as-new replicas, this project focuses on how digital and fabrication processes can retain – and even amplify – traces of age, handling, and material character present in the original objects.

Photogrammetric Capture

All artefacts are captured using DSLR‑based photogrammetry. This approach allows for flexible image capture that adapts well to differing shapes and surface finishes.

Each object is photographed from all angles using a consistent lighting setup, ensuring strong image overlap and even exposure. Particular care is taken with the bronze horseman, where reflective highlights can easily interfere with surface reconstruction. Diffuse lighting and careful camera positioning help minimise glare while still preserving subtle form and texture.

The carved wooden artefacts – the headrest and combs – are more straightforward, but the shallow relief and worn surfaces still require dense coverage to avoid soft or ambiguous geometry in the resulting models. In practice, capturing more images than strictly necessary consistently results in more reliable reconstructions.

Digital modelling and cleanup

The image sets are processed, via the photogrammetry software Reality Capture, into dense 3D meshes, followed by digital cleanup in the 3D editing software Blender. This stage is kept light – occlusion holes and obvious artefacts are addressed, but surface irregularities are largely preserved. This reflects an approach that treats photogrammetry not as a means of achieving visual perfection, but as a way of retaining material history. Minor distortions introduced through capture can echo the wear already present in physical artefacts. Excessive smoothing risks replacing these qualities with an artificial digital uniformity.

Fabrication: choosing different printing processes

Each artefact here is fabricated using a different 3D printing technology, selected based on form, scale, and desired surface qualities.

The bronze horseman is printed on a nylon SLS (powder) printer. Due to its size, the model is produced in multiple parts and later reassembled. The slightly grainy surface typical of SLS prints becomes an advantage rather than a problem here, closely resembling the texture of cast metal.

The wooden combs are produced using a resin printer. This process builds objects by curing layers of liquid resin from the bottom up, resulting in extremely smooth surfaces once the supports are removed. This makes it particularly well‑suited to the fine teeth and delicate detailing of the combs.

The headrest is printed on a more conventional FDM printer, which extrudes molten plastic in layers, building the object up gradually. It is printed in two halves and then fixed together. The visible layering produced by this method echoes carved tool marks, making it a surprisingly appropriate choice for a wooden object.

Surface preparation

Before finishing, all prints undergo a combination of sanding, cleaning, and under‑coating. Preparation varies depending on material, but the intention is never to eliminate surface traces entirely.

Light sanding removes sharp artefacts while leaving enough texture to contribute to realism. Some pieces receive a single coat of filler primer, which is then sanded back so that it remains primarily in deeper print lines. This softens the surface without erasing its character.

Painting and finishing: simulating aged wood

The headrest and combs are finished using layered paint techniques designed to suggest aged hardwood rather than freshly finished timber.

A dark, warm brown base coat provides an initial foundation. From here, multiple diluted washes are applied, allowing pigment to settle into recesses and carved details. Raised areas are selectively blotted back, reducing the “plastic” appearance of the prints. A nail is also used judiciously to physically carve some grooves deeper.

Very restrained dry‑brushing is used to pick out edges and ridges. The emphasis here is subtlety – highlights that are obvious quickly undermine the illusion of age.

To finish, a neutral shoe polish is applied to areas that would naturally be handled. Buffed gently, this creates a soft, uneven sheen associated with skin oils and long‑term use, rather than a uniform varnish.

Painting and finishing: the bronze horseman

The bronze horseman follows a different finishing approach. Rather than painting metallic colour directly, the model is first coated in matte black. This acts as a shadow base, ensuring that any paint missed in recesses reads as depth rather than exposed plastic.

Metallic colour is then introduced using Antique Gold Rub ’n Buff, applied through a dry‑brushing process so that pigment catches only on raised surfaces. This technique works particularly well with the subtle grain of the SLS print, creating broken highlights that resemble cast metal rather than smooth paint.

Age and wear are introduced through thin brown and ochre washes, which are allowed to pool in crevices before being partially wiped back from high points. This layering process avoids a uniformly “dirty” appearance and instead produces variation consistent with long‑term exposure and handling.

A final light touch of Rub ’n Buff is applied to the tips and sharp edges of the horseman to add a glint to these points.

Conclusion

The resulting prints are not intended to replace the artefacts they are based on. Instead, they offer a way to examine how historic objects can be translated into contemporary materials while still retaining evidence of use, age, and making.


Scan Project: Ottoman Tombstone Replica

This is an undertaking to replicate an 18th century Ottoman gravestone which is currently held by the Royal Greenwich Heritage Trust.

Michael Talbot, Associate Professor in the History of the Ottoman Empire and Modern Middle East at the University of Greenwich, was informed by the Royal Greenwich Heritage Trust about an “Arabic tablet”, which he identified as an Ottoman object; he subsequently transcribed and translated the inscriptions on it.

The original tombstone itself is a late-eighteenth-century artefact made of limestone or marble, featuring Ottoman Turkish inscriptions in the sülüs calligraphic style. The gravestone’s origin is not clear, but it was possibly brought back from Constantinople as a memento by a British officer in the 19th century. The inscription features a poetic composition reflecting on the youth and untimely death of its owner.

The tombstone itself measures 72 cm x 21.3 cm x 11 cm thick. The stone is broken at the bottom where it would have originally been set into the ground. It would also have been topped with a carved representation of the headgear associated with the deceased’s rank and profession, likely a turban, indicating a position in the religious-scholarly class.

Replicating the tombstone comprised the following steps:

  1. Scanning: photograph all sides and angles of the object and use photogrammetry to generate an accurate 3D digital model of it.
  2. 3D Printing: produce actual-size moulds of the tombstone from the scan model.
  3. Casting: pour Jesmonite (similar to plaster) into the moulds and allow it to set.

Scanning

Photogrammetry is a process of 3D scanning whereby many photographs of an object are used to create an accurate digital model. Common points in the overlapping photos are identified in order to align them and create a point cloud – a 3D representation of the model as dots in 3D space extracted from these aligned images. This is further refined into a mesh model – a network of triangles – which is lastly “wrapped” with the texture derived from the photographs to provide colour and detail to the 3D surfaces.

To get the best results for use with the photogrammetry software there should be many photos which are sharp, evenly lit – with as little shadow as possible – and which capture all sides and angles. Ideally the object would be photographed in a photo studio with controlled lights and a blank backdrop, but since this was not possible in this case some soft lighting was brought to the site to offset directional shadows from the windows.

The model was photographed with a Panasonic Lumix FZ82 (on a tripod), a mid-range bridge camera. Using manual settings and RAW format, approximately 500 photographs were taken, then post-processed in Adobe Lightroom to eliminate any blurred shots and batch-edited to further reduce shadows and bump up highlights.
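Culling blurred shots can also be scripted rather than done by eye; a common heuristic is the variance of the image's Laplacian, which drops sharply for out-of-focus frames. This is a minimal sketch with numpy, not part of the original Lightroom workflow, and the threshold value is purely illustrative:

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of the Laplacian: low values indicate a blurry image."""
    # 3x3 Laplacian applied via direct neighbour differences
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def is_sharp(gray: np.ndarray, threshold: float = 100.0) -> bool:
    # Threshold is illustrative; it should be tuned per camera and scene
    return laplacian_variance(gray) > threshold
```

Running this over greyscale versions of each shot and discarding the low-variance ones would approximate the manual culling step.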

The photogrammetry software used for this exercise is called Reality Capture. Since the tombstone was too heavy to stand up on its end, it had to be laid flat: horizontally for one set of photos, then flipped onto its front to capture the other side. Ideally Reality Capture would have automatically detected all the photos as a single object, but in this case it generated two separate components: a top and a bottom. To fix this, one half had to be flipped and manually aligned in the software in order to produce a single complete model.

With the model successfully generated it can be exported to a variety of formats for a variety of purposes. For viewing, zooming and spinning the model online it has been exported to Sketchfab; this version includes the texture for added realism.

For 3D printing purposes the model is exported to the common OBJ file format. The texture wrapping step is not important for 3D printing since these printers do not reproduce the model’s colour, so the version used there is effectively monochrome.
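The geometry-only OBJ format itself is simple plain text: `v` lines for vertex positions and `f` lines for one-indexed faces. A minimal sketch of a writer (the file name and triangle data are illustrative, not from the project):

```python
def write_obj(path, vertices, faces):
    """Write a minimal geometry-only OBJ file (no texture coordinates).

    vertices: list of (x, y, z) tuples
    faces: list of vertex-index triples, 0-based (OBJ itself is 1-based)
    """
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for a, b, c in faces:
            f.write(f"f {a + 1} {b + 1} {c + 1}\n")

# A single triangle as the simplest possible mesh
write_obj("triangle.obj", [(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
```

Real exports from Reality Capture carry millions of such lines, but the structure is the same.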

3D Printing

Commonly 3D prints are created using PLA or photopolymer resin. While these materials recreate accurate models, they can feel light and “plasticky”. For the tombstone it was important to recreate, as much as possible, the tactility of a stone or marble material and a weightiness that approaches something more authentic to the original in that respect.

For these reasons, instead of printing the model directly, moulds of the model – effectively an inverted version of the 3D scan – were produced, into which a plaster-like material was poured and left to set.

Material used to print the moulds: TPU 95A flexible filament. This means the resulting structure is supple and bendable – strong enough to hold the pour, but able to be peeled away from the cast once it has set.

3D Printer used: Bambu Lab P1P. The maximum printable area of this printer is much smaller than the size of the tombstone itself meaning the mould needed to be printed as 4 sections and then reassembled for casting. Each of the sections took 18 hours to print. (shorter test section illustrated above with red mould).

Since a fully dense model would use a lot of casting material – and also create a very heavy model – only an outer skin of about 10-15 mm needed to be cast. To achieve this an additional 3D printed core of the tombstone was placed inside the mould in order to cast around it. The final model is lighter and more economical with casting material, and retains the proper look and feel – but with a hidden, enclosed, non-dense core.
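The saving from casting only a skin can be estimated with a simplified box model using the tombstone's stated dimensions; the skin thickness below is purely illustrative and the real core geometry differs:

```python
# Simplified box model: how much Jesmonite does a hollow core save?
# Dimensions in cm from the article; skin thickness is illustrative.
length, width, thickness = 72.0, 21.3, 11.0
skin = 1.2  # cm

solid = length * width * thickness
inner = (length - 2 * skin) * (width - 2 * skin) * (thickness - 2 * skin)
skin_volume = solid - inner
print(f"solid: {solid:.0f} cm3, skin only: {skin_volume:.0f} cm3 "
      f"({skin_volume / solid:.0%} of the material)")
```

Even on this crude model, casting only the skin uses roughly a third of the material a solid pour would, which is consistent with the replica coming out far lighter than the original stone.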

Casting

The material used to cast the tombstone is Jesmonite – this is a water-based composite material that combines natural materials such as gypsum and resin with various other components, including water-based polymers. It is known for its flexibility, durability, and environmental friendliness compared to traditional resin-based materials. It is also less likely to chip or crack like regular plaster and can be mixed with a pigment to colour the material.

Images below show Jesmonite being mixed with pigments and some experimental, test colours.

The 4 separate 3D printed moulds were taped together to form a continuous mould to cast into – so although the moulds were printed as 4 sections, the casting itself came out as a single object which required no reassembly.

The photo above left shows the 3d printed, light core inside the mould while on the right the Jesmonite is poured in to fill the mould with the core embedded inside.

The cast itself sets fairly quickly, in about 20 minutes, and is then ready to be removed from the mould. The moulds themselves can be re-used to produce more replicas (though the core would require re-printing).

The weight of the replica is initially about a third of the weight of the original, though as the moisture evaporates over a few weeks it becomes somewhat lighter, settling at a still substantial weight of about 15 kg.

Creed Monument – Scan Techniques

The monument of the Creed family sits against the North wall of St. Alfege Church, Greenwich. Sir James Creed (1696 – 1792) was an MP and lead merchant and is buried with his wife at the church. This is a marble monument, about 4 metres high – with markings higher up that suggest a metal cross piece used to be fixed to it.

Photogrammetry Scan

By photographing an object from all sides and capturing many images – with enough overlap so they can be tied together – photogrammetry software can create an accurate 3d model of that object. The resulting mesh object can then be edited and used in CAD / 3D modelling software such as 3DS Max, Rhino, Maya etc. Processes could include replacing textures / materials or applying sun and light models to examine artificial shadow patterns.

This model was created with the software Zephyr Aerial 4.5 using 64 photographs taken with an Apple iPhone X in good daylight. The clarity of a high definition photograph enables the model to carry over very fine, close up detail. Zephyr allows for the mesh to be tidied up, cropped and then exported to the Sketchfab website / service which allows models to be zoomed, spun and examined via browser or app (embedded link below).

Photogrammetry lends itself particularly well to constructing museum-grade scans of smaller, closer objects. It can also handle larger projects, though these are likely to require extra equipment – drones, zoom lenses, etc. – to reach distant, high-up and otherwise hidden spots and sufficiently cover the entire subject.

Creed Monument: Photogrammetry

Laser Scan

LIDAR technology – radar with light – bounces many light rays off objects within a space to measure distances to those objects and build up a cloud of points with accurate spatial data representing the shapes found. Typically a tripod-mounted laser scanner will rotate the beam vertically and the scanner unit horizontally to capture a 360° sphere of data in a single scan. A number of scans are carried out to best capture the space from all points – and eliminate “blind spots”. These scans are combined together – or registered – to create a single unified point cloud.
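Each laser return is essentially a range plus two angles, and converting that to an x/y/z point is the basic step behind building the cloud. A minimal sketch (the axis convention is illustrative; real scanners also apply instrument calibration):

```python
import math

def spherical_to_cartesian(r, azimuth_deg, elevation_deg):
    """Convert one laser return (range, horizontal angle, vertical angle)
    into an x/y/z point, as a scanner does for every measured distance."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = r * math.cos(el) * math.cos(az)  # forward
    y = r * math.cos(el) * math.sin(az)  # sideways
    z = r * math.sin(el)                 # up
    return (x, y, z)
```

A return 10 m straight ahead maps to (10, 0, 0); one straight up maps to (0, 0, 10). Millions of such conversions per scan produce the point cloud.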

While the density – and size – of the points can give the impression of solid geometry, it is important to remember that this model is made of floating dots – not solids or meshes that can be edited in the same way as the final photogrammetry output. The size of these points can be adjusted to create revealing, x-ray style views through a building. More practically, a point cloud survey of a site can reside as a reference layer on a CAD site plan; the very fine accuracy of a laser scan, and the distance it can reach, are distinct advantages.

A Leica BLK360 scanner was used to carry out this scan, with three scans around the monument registered into a single point cloud. Each scan takes around 5 minutes, and with so few scans the registration process is straightforward – large projects with many scans can be very involved and time-consuming.
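Given matched point pairs between two scans, the rigid rotation and translation that best aligns them can be recovered with the classic Kabsch/SVD method, sketched below with numpy. This covers only the alignment step under known correspondences; registration software must also find those correspondences, which is the hard part on large projects.

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t so that R @ p + t maps
    points of scan P onto their counterparts in scan Q.
    P, Q: (N, 3) arrays of corresponding points from two scans."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t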

Relative to other laser scanners this model has a range of “only” around 50 m (the Faro scanners reach nearly three times this). The high concentration of light points sent out also means that – even with tree coverage around a building or landscape – enough of the beams will still get through to record the semi-hidden subject behind.

Creed Monument: Laser Scan

iPhone Polycam App

The Apple iPhone 12 Pro and iPad Pro include a Lidar sensor – a feature to enhance the accuracy of distance measurement for purposes of augmented reality and camera focusing. This feature has also been utilised by a number of developers to create Lidar scanning apps, which open up the opportunity for quick, on-the-go scans straight from the phone.

This app by Polycam is one of the earliest and best to exploit the hardware and point to the possibilities of this handset-based technique.

This is a lower resolution mesh, but the high resolution images wrapped around it still give a good impression of the model. Polycam and the Lidar sensor continuously try to correct themselves during the scan sweep to maintain alignment – but there are a few tears in this example where the registration has slipped. More careful movement when scanning would help to prevent this. This scan took about 5 minutes.

Creed Monument – Polycam Scan / iPhone

iPhone TrueDepth Apps

Recent Apple devices use the front-facing camera – with its “TrueDepth” sensor – to capture 3D information for use with Face ID authentication and Animoji. This technique involves projecting 30,000 infrared points and reading back a 3D map of the user’s face. Similar to the Lidar apps, developers have utilised this feature to author apps that can 3D scan with it.

Heges and Capture by Standard Cyborg are two good apps that leverage the power of TrueDepth to carry out 3D scans.

Although the capture resolution here is very high, the range is short, which makes it suitable only for smaller, close-up scans. The other big barrier is that since it uses the front-facing camera, the handset needs to be pointed at the subject with the screen facing away from the viewer. This can make it difficult to see which areas are being scanned – though the Heges app does include a screen-share feature where the scan view shows on another device. Where possible, constructing a rig that can rotate the camera smoothly around the model is another option to control speed and shake.