How cutting-edge AI tools are remaking history

Sunset over the Acropolis of Athens in Greece

Rarely does a month pass without fresh reports of a forward-looking and autonomous system poised to change our future. What we don’t often hear about is the increasing use of artificial intelligence (AI) to examine our past.

Historians, archaeologists, musicians and data scientists are deploying AI to reimagine and recreate historical moments. Like so many tales from the evolution of modern computing, success with AI is grounded in the values of collaboration, opportunity and experimentation.

There are immense human challenges in getting the best results from AI, and there’s no magic-bullet computing at work. The challenges experts face require distinct solutions, while sharing a striking amount of commonality. The bias and ethics of restorative AI are also of widespread concern, as is how we should interpret and categorise such works.

Uncovering ancient Athenian secrets

The ancient Acropolis in Athens in daylight

Jonathan Prag, professor of ancient history at Oxford University, has always had a passion for computing. “I got into mapping and visual analysis, which led to trying to build a digital catalogue of all the inscriptions from ancient Sicily,” he says.

Prag is an epigraphist, specialising in the restoration of ancient Greek texts carved into stone. Over the centuries, many carvings have been smashed into fragments, some of which have never been recovered, leaving vast gaps between words. In 2018, Prag’s PhD student Thea Sommerschield and Google DeepMind’s Yannis Assael suggested using AI to speed up the laborious process of filling the gaps in ancient texts. “I just sat there and went, that’s cool! Can you do it?”

A successful AI project relies on high-quality source data to ‘learn’ from, and Prag’s team didn’t have any. “In the eighties, Hewlett Packard mobilised scholars to type up the published Greek inscriptions,” he says. “It’s horribly messy and because it went from Beta Code into Unicode at a certain point, it’s full of artefacts”.

Sommerschield painstakingly cleaned the data to give the team more than 100,000 texts, on which the team trained Pythia, its first AI model, to hypothesise the missing words in the Greek texts. However, Pythia’s successor was already in development. “Ithaca pays attention to patterns in the text, with reference to the region that each text came from, and the proposed date,” says Prag.
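Pythia and Ithaca are transformer networks trained on those 100,000-plus inscriptions, and their internals aren’t reproduced here. As a purely illustrative stand-in for the idea of hypothesising a missing word from its surroundings, the toy function below (all names invented for this sketch) guesses a gap by counting which word most often appears between the same neighbours in a small corpus:

```python
from collections import Counter

# Toy stand-in for Pythia/Ithaca-style gap filling: the real systems are
# neural networks, but the core idea -- predict the missing word from its
# context -- can be sketched with simple co-occurrence counts.

def fill_gap(corpus, left, right):
    """Return the word most often seen between `left` and `right`."""
    candidates = Counter()
    for text in corpus:
        words = text.split()
        for i in range(1, len(words) - 1):
            if words[i - 1] == left and words[i + 1] == right:
                candidates[words[i]] += 1
    return candidates.most_common(1)[0][0] if candidates else None

# Invented mini-corpus of formulaic phrases, standing in for inscriptions.
corpus = [
    "the council decreed honours",
    "the council decreed taxes",
    "the council decreed honours",
]
print(fill_gap(corpus, "the", "decreed"))  # "council"
```

Where this toy can only echo phrases it has already seen, the trained models generalise across dialect, region and date, which is what makes them useful on fragmentary stones.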

Ithaca has already resolved a point of conjecture in Athenian texts regarding the Greek letter S, sigma. “The state abandoned the three-bar form of the sigma, which enabled you to put a bunch of texts on one side or the other of 445 BC,” Prag notes. This date marker had been considered gospel within historical scholarship, but the few dissenting voices must have cheered when Ithaca was let loose on the data. “We ran Ithaca over the original data and it came out with new dates, moving them down by about 30 years”.

This shift alters the interpretation of a key period of Athenian imperialism, Prag adds, and makes a big difference to our reading of Greek history.

Using neural networks to seek lost tombs

Close-up of a Scythian tomb

Dr Gino Caspari, from the University of Sydney, is an archaeologist who’s studied the burial mounds of the Scythians, an ancient tribe of nomadic warriors who lived in parts of Asia more than 3,000 years ago.

Collaborating with a colleague, Caspari built a convolutional neural network (CNN) that used satellite imagery to identify circular structures, seeking lost Scythian tombs. “Arriving in the survey area a year later, I immediately saw how wrong I had been,” he says. “What I had assumed to be buried structures in fact stemmed from the locals corralling sheep overnight in circularly fenced areas”.

Caspari’s long journey to see someone else’s lambs was the result of poor data fuelling the AI. “The limitation is the availability of high-resolution satellite data, which is too expensive for archaeological projects to afford,” he adds.


His recent work tracking 3,000-year-old Native American settlements in the southern US was completed with the commercial package ArcGIS. “In archaeology, we are clearly not at the forefront of development in AI and the number of people actively working on AI is limited,” he says. “To find broader adoption, we will ultimately need a kind of intuitive GUI that allows you to train models without coding”.

It’s important to remember that despite what dystopian headlines may suggest, AI is simply a customisable tool. Dr Caspari combined AI, LiDAR images and multispectral data to find more settlements, including many situated further north than had ever been documented. Could this have been achieved without AI? “Yes, but it would have taken a lot more time,” he argues. “Due to small training datasets, we often have a high number of false positive detections and those still need to be weeded out by hand. We are not really reaching superhuman performance in most cases due to a lack of training data.”

Resurrecting lost films

Piles of old film reels gathering dust

In the garden of Oakwood Grange in Leeds, Yorkshire, in 1888, Louis Le Prince, the ‘father of cinematography’, is using his invention, the motion-picture camera, to film the Whitley family. Today, only 20 grainy frames remain of what we acknowledge as the oldest surviving film in existence, but this didn’t stop Denis Shiryaev from using AI to create something rather marvellous. “I’ve always been a history junkie, and I decided to apply my knowledge of AI,” he says.

Using the images posted on the Science Museum website, Shiryaev reanimated the stills, applying a CNN to add detail to the faces and upscale the resolution. In total, the tool generated 250 colourised and stable frames. “I am a creator for my own audience,” says Shiryaev. “AI colourisation is not real, and it’s not historically accurate. I generate faces based on old source photos or paintings, it’s also an approximation. I think it’s important to have this small disclaimer that this video is AI.”

His use of AI is not to establish historical accuracy, then, but to inject verisimilitude into old footage, refreshing it for a modern audience. Almost 65 million YouTube hits underline the interest. “I saw this popularity as an opportunity,” he says. “Some companies from Hollywood, huge brand names, contacted me”.

In 2020, Shiryaev launched an automatic, cloud-based AI service that makes media enhancement accessible to anyone. “We have this beautiful feature called Generate Portfolio,” he says. “You can upload really low-resolution old photos and generate high-quality portraits and people love it”.

Cleaning up old hits

David Bowie performing in LA as Ziggy Stardust

A popular use of AI within the music industry is to enhance old recordings by removing noise and artefacts, often present since the day they were captured. In May 1972, singer Mary Hopkin performed at London’s Royal Festival Hall and a venue engineer captured her sublime performance. By 2005, the technical quality of the recording needed some assistance, as Mary’s son Morgan Visconti explains. “I think it was probably done hastily; I think it was quarter-inch tape,” he says. “The noise floor is heavy, and it gets worse as you get towards the end of the gig; there’s more noise than signal.”

Visconti is a musician and producer with a passion for technology. “I was chatting with my mum about noise removal and suggested she let me have a whack at it, and it was stunning. Just to hear it without the noise, it felt like a refresh, like you cleaned your ears out to hear the nuances. She was very pleased with it”.
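The article doesn’t name the tool Visconti used on the Festival Hall tape, so the sketch below shows only the classic pre-AI baseline for tape hiss: spectral gating, which zeroes frequency bins whose magnitude sits below an estimated noise floor. The signal, threshold and function name are all invented for this example:

```python
import numpy as np

def spectral_gate(signal, noise_floor):
    """Zero FFT bins quieter than `noise_floor`, then resynthesise."""
    spec = np.fft.rfft(signal)
    spec[np.abs(spec) < noise_floor] = 0
    return np.fft.irfft(spec, n=len(signal))

# A 50Hz "performance" buried in synthetic hiss.
t = np.linspace(0, 1, 1000, endpoint=False)
tone = np.sin(2 * np.pi * 50 * t)
rng = np.random.default_rng(0)
noisy = tone + 0.05 * rng.standard_normal(1000)

clean = spectral_gate(noisy, noise_floor=20.0)  # tone bin survives, hiss doesn't
```

Gating like this struggles when, as on the Hopkin tape, there is “more noise than signal”: the gate then removes music along with hiss, which is the gap that learned denoisers aim to close.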

Once again, experimentation and passion honed Visconti’s skills, this time after a lightbulb moment with iZotope’s mastering suite. “This feature – music rebalance – was just an eye-opener, or an ear-opener,” says Visconti. “The ability to go into a mixed track and make stems for bass, vocals, drums, guitars and keyboards. It’s something I dreamt about as a kid. I went nuts, I took everything apart, like old Beatles recordings, and I was like, ‘What can I do with this thing?’”

Visconti sees AI as just another tool available to a music producer and having grown up in music studios, he’s seen them all. His father is the legendary music producer, Tony Visconti, who asked Morgan to sprinkle the technical stardust for the soundtrack of Moonage Daydream, a new film about the life and career of David Bowie.

“The director used a ton of found footage and one clip is of Bowie playing Rock And Roll With Me and the mix isn’t great. Like Festival Hall, this was recorded straight to mono. I was able to provide separate tracks of vocals, drums, keyboards and bass and Tony was able to remix the song for the film.”

Restoring decaying works of art

The Austrian artist Gustav Klimt

AI remains an experimental technology, even for the largest tech companies. Emil Wallner is part of Google’s Arts & Culture lab. “We experiment with the cutting edge of technology and try to find interesting areas to apply it to,” he says. “Sometimes it doesn’t work and we don’t publish things, sometimes we have interesting results and we share them”.

Recently the lab has experimented on the works of the world-renowned artist, Gustav Klimt. “We go to a lot of museums, to digitise collections so anyone can access them online. At the Belvedere Museum in Vienna, the topic of working with these Faculty paintings came up.”


Klimt’s controversial Faculty Paintings, Jurisprudenz, Philosophie and Medizin, were destroyed by fire in 1945 and could only be viewed via a handful of old monochromatic photos. To give Google’s AI some colour training data, Dr Franz Smola, curator at the Belvedere Museum and Klimt expert, stepped in. “Franz looked at what all the critics said about these paintings,” says Wallner.

Smola scoured written records and gathered paintings by similar period artists to reference their palettes and paint types. His months of painstaking research took Wallner’s team another six months to convert into usable machine data. “We would just enable it to add a few pixels and a few motifs that we knew the colours of. We add pixels in the machine learning model, and from there, it could colourise the entire painting.”
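Google’s colourisation model itself is a trained network and isn’t public, so the toy below illustrates only the hint-seeding idea Wallner describes: a handful of pixels with known colour propagate to the rest of the image. Here each pixel simply takes the colour of the hint whose grey level it most resembles; every name and value is invented for the sketch:

```python
import numpy as np

def colourise_from_hints(grey, hints):
    """grey: HxW floats in [0,1]; hints: {(y, x): (r, g, b)}.
    Give each pixel the colour of the hint with the closest grey value."""
    h, w = grey.shape
    out = np.zeros((h, w, 3))
    hint_grey = np.array([grey[p] for p in hints])
    hint_rgb = np.array(list(hints.values()), dtype=float)
    for y in range(h):
        for x in range(w):
            idx = np.argmin(np.abs(hint_grey - grey[y, x]))
            out[y, x] = hint_rgb[idx]
    return out

grey = np.array([[0.1, 0.1],
                 [0.9, 0.9]])               # mock monochrome "photo"
hints = {(0, 0): (0.8, 0.6, 0.1),           # a "gold" hint for the dark area
         (1, 1): (0.2, 0.3, 0.7)}           # a "blue" hint for the bright area
col = colourise_from_hints(grey, hints)
```

A real model uses learned texture and context rather than raw grey values, but the workflow is the same shape: months of scholarship distilled into a few trusted colour anchors, from which the machine fills in the rest.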

Wallner is clear that this extraordinary project is not a restoration, but a re-colourisation due to the lack of detail in the monochrome photos. “One area would be to recreate the pointillism,” he says. “He used a lot of gold which you can’t really see, so you need to somehow augment that in a 3D environment to capture the impression of the metallic elements in those paintings”.

Lee Grant

Lee’s career began in TV as a programme maker for the BBC, ITV and Channel 4, specialising in education, history and science. He was part of the early video-on-demand team for ITV, which led to technical roles within the fledgling online-shopping department for ASDA/Walmart.

In 2003, he and his wife escaped the corporate world and set up a computer repair business focussing on the consumer side of the market.

Lee is a contributing editor and podcaster for sister title PC Pro with a particular interest in the right-to-repair movement and the circular economy. He can usually be found running around the streets of Yorkshire at a ridiculous time of the morning, and he’s also trying to solve a mystery involving David Bowie.

You can reach Lee at or @tnargeel