How do you unwrap a mummy? A new technique needs no human hands


Some scientific fields encourage destruction – like smashing atoms together at high speed, or drilling into the Earth’s crust for geological samples. But other sciences, like archaeology, require a much lighter touch.

So light, in fact, that scientists can unwrap a mummy without lifting a single piece of its wrappings, using algorithms and machine learning. Yet even with advanced technology, the process can be computationally slow and complex.

Now, a team of scientists from France and Malta has devised a technique that approaches the accuracy of existing methods with a fraction of the computing power.

Johann Briffa is Associate Professor of Communications and Computer Engineering at the University of Malta and lead author of the new study published Wednesday in PLOS ONE.

Briffa says their new technique could have applications far beyond archaeology, including paleontology, geology, and even medical imaging.

“In principle, our method can be used on any volumetric image,” Briffa tells Inverse.

Using machine learning, researchers can identify many different parts of a wrapped mummy – from bones to feathers to skin tissue. Above, the technique is demonstrated on an ibis. Tanti et al., 2021, PLOS ONE

What’s new – Gone are the days when mummified remains were desecrated for the good of science. Instead, archaeologists have for decades used X-ray scanners – like those used on human patients – and algorithms to study these delicate specimens. The name of the game is “segmentation,” Briffa says.

“Segmentation is the process of labeling the pixels in the image (or [3D] ‘voxels’), depending on the semantic context required,” explains Briffa. “In our case, we tag the voxels to identify the different materials that make up the sample, such as bone, soft tissue, and so on. This is an operation on the scanned image, so it is always performed virtually.”
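
To make the idea concrete, here is a minimal sketch of voxel labeling in Python, assuming a NumPy array stands in for the scan volume. The intensity thresholds and label values are purely illustrative – the study uses a trained machine learning model, not fixed cutoffs.

```python
import numpy as np

# Toy scan volume: one intensity value per voxel (illustrative random data).
volume = np.random.rand(64, 64, 64)

# Segmentation assigns a material label to every voxel.
AIR, SOFT_TISSUE, BONE = 0, 1, 2  # illustrative label scheme

# Simple intensity thresholds stand in here for the trained model.
labels = np.full(volume.shape, AIR, dtype=np.uint8)
labels[volume > 0.5] = SOFT_TISSUE
labels[volume > 0.8] = BONE
```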

Briffa explains that these labeled voxels can then be selectively displayed – for example, showing only the bone voxels – to virtually “unwrap” the specimen. A common technique for labeling voxels is deep learning, which uses a complex neural network, but Briffa says it has significant drawbacks as well.
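
Once every voxel carries a label, the “unwrapping” step itself is just a mask over the volume. Continuing the toy arrays from the sketch above:

```python
# Keep only the voxels labeled as bone, zeroing everything else,
# so the skeleton can be rendered on its own.
bone_only = np.where(labels == BONE, volume, 0.0)
```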

“The method we have developed uses classic machine learning with 3D features, as opposed to existing deep learning methods that work on the volume slice by slice,” Briffa explains. “This avoids the discontinuities that often occur in a slice-by-slice approach.”
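
Here is a hedged sketch of what per-voxel classification with 3D features can look like, using a random forest as a stand-in classic classifier. The specific features (local mean, gradient magnitude) and the choice of model are illustrative assumptions, not the exact pipeline from the paper.

```python
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def voxel_features(volume):
    """Stack simple features computed over 3D neighborhoods, one set per voxel."""
    return np.stack([
        volume,                                                   # raw intensity
        ndimage.uniform_filter(volume, size=3),                   # local 3D mean
        ndimage.gaussian_gradient_magnitude(volume, sigma=1.0),   # 3D edge strength
    ], axis=-1)

# Toy training data: a scan volume plus a manually segmented label mask.
volume = np.random.rand(32, 32, 32)
manual_labels = np.random.randint(0, 3, size=volume.shape)

X = voxel_features(volume).reshape(-1, 3)  # one feature row per voxel
y = manual_labels.reshape(-1)
clf = RandomForestClassifier(n_estimators=50).fit(X, y)
```

Because the features are computed over 3D neighborhoods, adjacent slices inform each other, which is what avoids the slice-to-slice discontinuities Briffa describes.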

Why it matters – One advantage of their technique is its ability to scale, Briffa says. That could make it more accessible to other kinds of projects, and its lower complexity could also come with a lower price tag.

This combination means that more people could perform this type of non-invasive analysis, on everything from living humans to ancient remains.

Using their machine learning technique, the researchers were able to virtually unwrap a mummified puppy from Roman times. Tanti et al., 2021, PLOS ONE, CC-BY 4.0

What they did – In this work, Briffa and his colleagues tested their new technique on four mummified specimens from the Natural History Museum of Grenoble in France, dated to the Ptolemaic and Roman periods (around the 3rd century BC to the 4th century AD). The specimens included:

  • A mummified puppy
  • Two mummified ibis birds
  • A mummified raptor

The team created volumetric images of the samples using a synchrotron – which emits high-powered X-rays – and then fed them to the machine learning algorithm. The algorithm then identified different types of materials in the images, differentiating soft tissue from bone, for example, with limited human interaction.
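
In code terms, and continuing the training sketch above, the automated step amounts to running the trained classifier over every voxel of a freshly scanned volume (again a toy stand-in, not the actual ASEMI tool):

```python
# Stand-in for a new synchrotron scan of another specimen.
new_volume = np.random.rand(32, 32, 32)

# Label every voxel with the classifier trained earlier; the human effort
# is limited to the manually segmented slices used for training.
X_new = voxel_features(new_volume).reshape(-1, 3)
predicted = clf.predict(X_new).reshape(new_volume.shape)
```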

Comparing the results of their technique to existing deep learning methods, Briffa and colleagues report that they achieved 94-98% accuracy in segmenting these voxels, compared to 97-99% accuracy for deep learning.
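
For reference, here is a minimal sketch of how such a voxel-wise accuracy score can be computed against manually segmented slices (the arrays are illustrative placeholders, not the study’s data):

```python
import numpy as np

manual = np.random.randint(0, 3, size=(64, 64))     # manually labeled slice
predicted = np.random.randint(0, 3, size=(64, 64))  # model's labels, same slice

# Fraction of voxels whose predicted label matches the manual one.
accuracy = (predicted == manual).mean()
print(f"voxel-wise accuracy: {accuracy:.1%}")
```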

What’s next – In addition to extending this technique to other disciplines, Briffa says the team is also interested in how their work can be applied to deep learning methods.

For example, a next step in this work is to develop a way to apply these same complexity-reduction techniques to deep learning models. That would make the approach even more accessible and allow deep learning to be used in a “full 3D context.”

Abstract: Propagation phase contrast synchrotron microtomography (PPC-SRμCT) is the benchmark for non-invasive and non-destructive access to internal structures of archaeological remains. In this analysis, the virtual sample must be segmented to separate different parts or materials, a process that normally requires considerable human effort. In the Automated SEgmentation of Microtomography Imaging (ASEMI) project, we developed a tool to automatically segment these volumetric images, using manually segmented samples to tune and train a machine learning model. For a set of four ancient Egyptian animal mummy specimens, we achieve an overall accuracy of 94-98% compared to manually segmented slices, approaching the results of standard commercial software using deep learning (97-99%) with much less complexity. A qualitative analysis of the segmented output shows that our results are close in terms of usability to those of deep learning, justifying the use of these techniques.

