The digital entries to this year's Creative Review Annual took many forms – from rich websites and interactive voice experiences to a running track that claims to focus the mind. But all of our selected projects combine a clever use of tech with creative thinking – showing how digital technology can be used to surprise, delight and entertain.

A brilliant example of a project that does all three is Draw to Art – an interactive easel created by Google Creative Lab for Google Arts & Culture. The easel uses machine learning to analyse doodles and serve up visually similar paintings, drawings and sculptures – offering a fun way to browse museum collections.

The mobile installation was unveiled at the Google Cultural Institute’s lab in Paris and has since travelled to several museums and events in Europe and the US. It was designed by Google Creative Lab as part of Google Arts & Culture’s initiative to make art more accessible.

Xavier Barrade, Creative Lead at Google Creative Lab, says the project was led by an idea rather than a set brief: “Draw to Art is a good example of the type of innovation projects that are part of the remit of Google Creative Lab,” he explains.

“We always start with a challenge for the brand. This time it was, ‘now that Google Arts & Culture has digitised millions of artworks, it can be hard to discover relevant ones – especially if you are not an art expert’. We wanted to create something that made discovering art accessible and creative,” adds Barrade. The team also wanted to explore new developments in AI, and decided to use machine learning to solve Google Arts & Culture’s search problem.

From there, the team began by sketching what the experience could look like and creating an animated mock-up of it. “We had a great response internally, so we started to test different technical approaches and created a few interactive prototypes, which kick-started the production,” explains Barrade.

The team initially planned to create a website (Draw to Art was built using HTML5), but found that using physical tools made for a more exciting experience. “[We] realised the user experience was so much more interesting when people were using a stylus on a touchscreen, with a scale that makes it easier to create detailed drawings,” says Barrade. “Google Arts & Culture also has a few great online experiments, but we saw an opportunity to bring a new type of art discovery experience that uses machine learning to the vast [number] of museums and events that they are involved with. When we had that idea of the ‘digital easel’ it made sense and felt fresh, so we decided to focus on this direction.”

Most of the experience was created internally by Creative Lab and Google Arts & Culture, which Barrade says helped keep costs down and ensure a more “fluid” process. The first month of the project was spent testing different machine learning models and prototyping interfaces, with Google Creative Lab training a deep neural network (a machine learning system) to recognise visual features such as shapes, lines and perspectives in hand-drawn doodles. The team also trained the network to recognise the same features in paintings, sketches and sculptures from museum collections, so that a doodle can be matched against the archive. The best matches are presented to the user, who can help ‘train’ the network further by rating them – improving the service for others. Once the design was finalised, the team worked with a product designer to create the interactive easels.
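To give a rough sense of how this kind of doodle-to-artwork matching can work, here is a minimal sketch in Python of an embedding-based retrieval step. It is an illustrative assumption rather than a description of Google’s actual system: the feature vectors stand in for whatever the trained neural network produces, and the cosine-similarity ranking is just one plausible way to surface the “best matches” described above.

```python
# Illustrative sketch only: embedding-based retrieval, not Google's actual implementation.
# Assumes a trained network has already turned doodles and artworks into feature vectors.
import numpy as np

def cosine_similarity(query: np.ndarray, corpus: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and every row of a corpus matrix."""
    query = query / np.linalg.norm(query)
    corpus = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    return corpus @ query

def top_matches(doodle_vec: np.ndarray,
                artwork_vecs: np.ndarray,
                artwork_ids: list[str],
                k: int = 5) -> list[tuple[str, float]]:
    """Return the k artworks whose feature vectors sit closest to the doodle's."""
    scores = cosine_similarity(doodle_vec, artwork_vecs)
    best = np.argsort(scores)[::-1][:k]          # indices of the k highest scores
    return [(artwork_ids[i], float(scores[i])) for i in best]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    artwork_vecs = rng.normal(size=(1000, 128))  # stand-ins for 1,000 artwork embeddings
    artwork_ids = [f"artwork_{i}" for i in range(1000)]
    doodle_vec = rng.normal(size=128)            # stand-in for the visitor's sketch embedding
    for artwork_id, score in top_matches(doodle_vec, artwork_vecs, artwork_ids):
        print(artwork_id, round(score, 3))
```

In a setup like this, the visitor ratings mentioned above could feed back into how the shared feature space is trained, though how Google does that in practice is not detailed here.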

The finished experience requires little explanation: users make a drawing on the left and similar artworks are served up on the right, making it accessible to museum and event visitors of all ages and nationalities.

Five easels have been created so far, travelling to venues around the world – including the Grand Palais in Paris and the Long Museum in Shanghai – and appearing in 20 exhibitions. Draw to Art is also used daily by visitors at the Google Cultural Institute, with people spending an average of 10 minutes interacting with it.

Barrade says the installation has allowed Google Arts & Culture to bring new technologies into partner institutions “in a tangible and relevant way”. It has also been used to demonstrate the creative potential of AI at Mobile World Congress and at Google I/O, Google’s annual developer conference, and Barrade says Google’s product teams have drawn on the project as an example of a novel approach to image search.

While Draw to Art makes use of some clever tech, its brilliance lies in its simplicity. Google Creative Lab have produced an intuitive experience that allows people to discover art using their imagination, offering a more tangible and engaging alternative to scrolling or swiping through a digital archive. Interactive installations can often be unnecessarily complex, leaving users confused or frustrated, but Draw to Art makes it simple and fun to engage with historic and contemporary collections.

“Great experiences always start with a strong and simple idea,” says Barrade. “I knew this project had the potential to be special from the start – even before designing any mock-ups – as the team was on board after hearing the one-sentence idea ‘Discover artworks by sketching’.”

“With interactive experiences, especially the ones based on bleeding-edge tech, things can get complicated quite quickly. This one worked as we always focused on keeping the experience and the interaction as simple and fun as possible. The tech is only an enabler and never gets in the way. These are general principles we try to apply to anything we create,” he adds.

Draw to Art is the latest in a series of brilliant projects from Google Creative Lab. Also featured in our Annual this year is NSynth Super – a touchscreen device which allows musicians to access and use sounds created using NSynth (an AI-powered algorithm that learns the characteristics of different instruments and combines them to create new sounds). The prototype was created to understand how NSynth could fit into musicians’ creative process, and Google has made the hardware open source, allowing musicians to download source files from GitHub to create their own version.
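As a loose illustration of that “combining instruments” idea, the sketch below blends two sounds by interpolating between learned representations and decoding the result. The encode and decode functions are hypothetical stand-ins for a trained model, not the real NSynth code (which, as noted, is available on GitHub).

```python
# Conceptual sketch of how an NSynth-style model blends instruments: encode each
# sound to a latent vector, interpolate between the vectors, then decode the mix.
# `encode` and `decode` are hypothetical stand-ins, not the real NSynth API.
import numpy as np

LATENT_DIM = 16
SAMPLE_RATE = 16000

def encode(audio: np.ndarray) -> np.ndarray:
    """Hypothetical encoder: a real model would be a trained neural network."""
    seed = int(np.abs(audio).sum() * 1000) % (2**32)
    return np.random.default_rng(seed).normal(size=LATENT_DIM)

def decode(latent: np.ndarray, seconds: float = 1.0) -> np.ndarray:
    """Hypothetical decoder: synthesises a placeholder waveform from the latent vector."""
    t = np.arange(int(seconds * SAMPLE_RATE)) / SAMPLE_RATE
    freqs = 110.0 * (1.0 + np.abs(latent[:4]))   # derive a few partials from the latent
    return sum(np.sin(2 * np.pi * f * t) for f in freqs) / len(freqs)

# Blend two source sounds by mixing their latent representations 50/50.
flute = np.random.default_rng(1).normal(size=SAMPLE_RATE)
bass = np.random.default_rng(2).normal(size=SAMPLE_RATE)
blended = decode(0.5 * encode(flute) + 0.5 * encode(bass))
print(blended.shape)   # (16000,) – one second of audio at 16 kHz
```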

Artificial intelligence is often viewed as a threat to creativity, but both NSynth Super and Draw to Art show (albeit in very different ways) how machine learning can support it, whether by encouraging the public to explore artworks through their own doodles, or by letting musicians experiment with AI-generated sounds. Both projects also show how tangible devices can help people understand AI – giving form to complex technologies and creating fun physical experiences in the process.

by Anniek Corporaal