GV Design Sprint
Gallery Pal
Upgrading the Art Museum Experience
The Scenario
You want to learn to appreciate art, so you go to the MOMA. You're looking at a painting. Why is it famous? You squint at the wall text, but a person is blocking it. You wait. You move around to readjust your view of the text. Still too small. You get bored and move on. Is this what art museums are?
The Questions
How might we provide a wide range of visitors with enough context to understand the art they are seeing?
How might we engage all visitors, inspiring reflection so they can form their own opinions?
The Solution
Gallery Pal is a mobile application that improves the experience of viewing art in a gallery or museum - paintings, sculptures, and installations alike. Museums and galleries want to increase visitor satisfaction, focusing on personal connections and customizable elements, and have decided to create a mobile app. To come up with an app that fits users' needs, I followed the Google Ventures (GV) design sprint process, a five-day process for answering critical business questions through design, prototyping, and testing ideas with customers. The result is a minimum viable product (MVP).
Below is a prototype video, produced after mapping the problem, a lightning demo, Crazy 8's, sketching, storyboarding, prototyping, and usability testing. The solution prioritizes location/proximity, concise information, prompts, visuals, and AI to provide a customizable art-viewing experience.
The Prototype
The Team
Just me!
The 5 Day Process
Monday
Mapping the problem, research, and the user flow
Day 1 of the modified Google Ventures design sprint began with user research to understand what visitors sought to learn and take away from an art museum. Quick research showed that the most frequent demographic of art museum visitors was upper-middle-class and highly educated people. To contrast this, users between the ages of 20 and 30, with varying educational backgrounds, provided the following quotes as the basis of the problem:
- "Long articles [on artwork]...are super overwhelming"
- "I feel like I'm missing out by not knowing the background information or context"
- "I don't enjoy group tours, I like to do my own thing"
- "Hard to form my own opinion when I don't know about the artist"
- "I would love to know more about their process or technique"
- "I may do a little research before"
Problem
Improving the audience's experience of art by providing context at varying levels of depth - informing visitors and inspiring personal reflection - while targeting people of all ages and education levels was an exciting problem for me to solve. I love art museums, but finding a companion to come with me was always next to impossible because of these same issues.
Long Term Goal
The long-term goal was identified as increasing museum traffic while also making art widely available to younger and less educated demographics; the ubiquity of personal smartphones would be key. Progress toward the goal could be measured by the number of downloads of the GalleryPal app and by user feedback.
How Might We Fail?
Given the short timeline, I compiled a list of questions to target and referred back to at the end of every day to keep myself focused:
- How can we accommodate the different depths to which people are willing to learn?
- How can we entice people to read paragraphs about an artist's background and an artwork's context?
- How can we interest visitors in the background as much as the visuals?
- How can we retain interest in the app's text-heavy portions?
- How many steps before the user closes the app out of frustration?
- How can "make the experience better" be correlated with "more information"?
Persona
Jumping back to the user research, I created the persona of Angela to carry into the user flows.
"It's hard to form my own opinion when I don't know about the artist" - "I feel like I'm missing out by not knowing the background information or context"
User Map
The user flows would have one major goal: the ability to pull up context on any piece of art, in real time, while in the gallery, on the visitor's handy-dandy pocket pal (their smartphone). With Angela in mind, the user would:
- launch the app
- pick the gallery at their current location
- scan a piece of art
- pull up an artwork information card with links to the artist, similar pieces, location, etc.
- be prompted with a reflection question
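The steps above can be encoded as a simple ordered flow. A minimal TypeScript sketch - the step names and the `stepsToPayoff` helper are my own illustrative choices, not anything from the actual prototype:

```typescript
// Hypothetical encoding of Angela's task flow as an ordered list of steps.
// Step names are illustrative, not taken from the real GalleryPal prototype.
const scanFlow: string[] = [
  "launch the app",
  "pick the gallery at the current location",
  "scan a piece of art",
  "view the artwork information card",
  "answer the reflection question",
];

// One of the sprint questions asked how many steps a user takes before
// frustration; counting the steps to the payoff screen makes that concrete.
function stepsToPayoff(flow: string[], payoff: string): number {
  return flow.indexOf(payoff) + 1; // 1-based step count; 0 if the step is absent
}
```

For example, `stepsToPayoff(scanFlow, "view the artwork information card")` returns 4, which is the kind of number to watch against the "how many steps before frustration?" question.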
Features in the information architecture that would be carried through into prototyping and testing:
- Spontaneous art scans
- Varied detail levels of context:
  - Artwork
  - Artist
  - Technique
  - Pop culture sightings
  - Similar pieces nearby
- Main takeaway prompt
- Feedback loop
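The information architecture above can also be read as a small data model. A minimal TypeScript sketch - the interface and field names are my own illustrative choices, not taken from the actual prototype:

```typescript
// Hypothetical shape of one artwork's information card. All names here
// are illustrative; the real GalleryPal data model is not public.
interface ArtworkCard {
  title: string;
  artist: string;
  year: number;
  // Varied detail levels of context: a short summary and an expert one.
  summaries: { short: string; expert: string };
  technique: string;
  popCultureSightings: string[];
  similarPiecesNearby: string[]; // featured works in the same gallery
  takeawayPrompt: string; // the main-takeaway reflection question
}

// Example card for the painting used throughout the sprint.
const starryNight: ArtworkCard = {
  title: "The Starry Night",
  artist: "Vincent van Gogh",
  year: 1889,
  summaries: {
    short: "A swirling night sky over a quiet village.",
    expert:
      "Painted in June 1889 from Van Gogh's asylum room at Saint-Rémy, blending direct observation with imagination.",
  },
  technique: "Oil on canvas",
  popCultureSightings: ["Doctor Who: 'Vincent and the Doctor'"],
  similarPiecesNearby: ["The Olive Trees"],
  takeawayPrompt: "What feeling does the swirling sky leave you with?",
};
```

Keeping the short and expert summaries as sibling fields is what lets the card surface concise context by default while still offering depth on demand.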
Tuesday
Sketching lightning demos, Crazy 8's, and a solution frame
Lightning Demo
Much as GV suggests, my first step was to peruse existing applications to get my creative gears moving. I looked into AI and AR apps of all types - Planta for plants, Hutch for home interiors, Wegmans Scan for groceries - before zooming in on apps that enhance the museum experience, such as Explorer, the official American Museum of Natural History app, or that tap into AI to recognize famous art pieces, such as Google Arts & Culture and Smartify.
SMARTIFY
AI scanning feature
Simple, clear interface
Context with option to expand if interested, without being overwhelming
What if this was linked to a gallery?
EXPLORER
Location sensitive updates
Also simple and clear interface, prevents overwhelm
Map of gallery with visuals of exhibits to personalize the experience
What if there was a scan feature?
Thus the idea to combine these two apps was born.
The Crazy 8's
Crazy 8's is a core design sprint method: sketch ideas quickly, spending one minute on each screen. The limited time forced me to draw a stream of consciousness, of sorts. I did two rounds, focusing on the two screens I thought were most critical to the app:
1. The "Scan Artwork" Screen
2. The "Results" or "Artwork Card" Screen
The Solution Sketch
I took the most successful sketch of each screen for the solution sketch and built out the most important features of the app: scanning artwork, pulling up the artwork card, and scrolling through background information for context, the artwork's location in the gallery, and similar pieces nearby, with the ability to read more in depth if desired.
Wednesday
Deciding on the most effective sketch solution
It was time to sketch the entire app, from the start to the finish of Angela's experience, which took a lot of thinking and multiple drafts.
Landing page, with logo
Angela picks her gallery - the MOMA, with the option to enable location services for smart picking.
She is taken to the explore screen, with featured guided art tours and art pieces in the MOMA
She wants more information on the piece she is looking at. She clicks the scan button on the bottom toolbar and is prompted to enable her camera.
The smart scan screen is pulled up. She frames Starry Night in it.
The app's AI recognizes the painting and pulls up the Starry Night artwork card, with the painting, artist, date, era, and options to read the "expert" or "short" summary.
There is a back button to rescan.
Angela scrolls to the bottom of the artwork card to see where Starry Night is referenced in pop culture, similar paintings, and related paraphernalia she can buy in the museum shop.
Once she hits the bottom of the artwork card, a prompt pops up, detailing the main reason Starry Night is famous, with a follow up reflection question. The task flow ends with an option to give feedback.
Scrolling further down the artwork card reveals a map of the MOMA, with expandable context on the artist, his intention, and his technique.
Thursday
Prototyping in Figma
The following screens were created after many rounds of adding detail and building graphics. Scroll to the top to see the prototype!
I can't believe this is where I ended on Thursday, when this is where I began:
Friday
User testing and Iteration!
Five users between the ages of 20 and 30 were gathered via word of mouth to test an earlier version of the prototype. I tested three tasks:
1. Scan "Starry Night" and tell me the one takeaway fact.
2. Tell me the pop culture references "Starry Night" has appeared in.
3. Locate it in the MOMA.
Within these four screens, the users were able to complete all three tasks successfully. The results revealed a few things:
- Most participants hesitated on the explore screen when looking for the scan button on the bottom toolbar - perhaps make it larger, or a floating button.
- Most participants asked "do I just wait?" after scanning "Starry Night," indicating a lack of written instruction.
- Some participants tapped through the "did you know?" reflection question in a hurry to get to the artwork card. Perhaps that screen could be moved to after the artwork card.
- All participants correctly named the pop culture references of "Starry Night" and found the information very helpful and fun.
- All participants correctly located the painting in the gallery and really enjoyed seeing featured paintings nearby to create a customizable gallery experience.
- All participants said they would use this app to improve their art-viewing experience, one emphatically claiming "I love this shit!"