A.R. in D.C. – at the Smithsonian
The Smithsonian’s Skin & Bones app was one of the first examples I encountered when researching the use of Augmented Reality in Museums over a year ago, and it looked promising. So, I was excited to get a chance to try it out in the context of the actual Smithsonian National Museum of Natural History on a recent trip to the nation’s capital.
What Went Right
Let’s start with what I liked – and there’s a lot to like here. This is not just a single-function A.R. app. It features extended information about the subject of each exhibit, including a section called Animal Life, where the user can watch a video about the species in question; another entitled Meet the Scientist, which introduces an expert on the subject of the exhibit; and other options that vary somewhat from exhibit to exhibit, including games and other activities.
This accomplishes two goals, in my opinion. It makes the app useful to users who aren’t actually at the museum, and for those who are, it creates context using media that the museum simply doesn’t have the space to include.
While some of the augmentations are simple 3D models of the actual animal overlaid onto the skeleton on display, which also serves as the target, others include animations, such as a woodpecker pecking, a rattlesnake jaw opening really, really wide, and a bat crawling on all fours. These add a lot of movement to an otherwise static display.
Instructions are provided for properly viewing each augmentation, and three About screens are included, providing access to the history of Bone Hall and the exhibit, information about the app itself, and credits, including source credits for the 3D models, animations, videos, and other media bundled in the app. Finally, the ability to link to social media puts the finishing touch on what comes across as a robust experience overall.
What Went Wrong
Unfortunately, the app features only thirteen items out of the museum’s estimated 145 million artifacts (a much smaller but still impressive subset of which are actually on display), and of those thirteen, only nine are represented in Augmented Reality. This light coverage makes the app feel like an experiment, and leaves one wishing for more.
However, the app was last updated two years ago, and it appears that the experiment has been abandoned, to some extent. There are several technical issues that could be improved with an update. Some links to the Encyclopedia of Life, an excellent reference, are broken, and the targeting seems to be based on photographs of the exhibits, not 3D scans of the skeletons. As a result, the 3D models look best when viewed perpendicular to and at eye level with each skeleton. See the mandrill at the top of this page, or the swordfish below. The largest skeleton supported by the app would not trigger its augmentation at all, probably because the target photo was shot perpendicular to the skeleton, which is mounted high overhead, and you can’t get your camera that high (it works fine on the images you can print out at home).
However, there is a bigger problem here. In order to get to the Augmented Reality content, one must first select the artifact either from the list, which is presented as a scrolling grid that fills several screens, or from the map, which fits on one screen. Of the two, the map is the better option.
This is where the interface starts to become cumbersome, and coordination with the physical exhibit falters. For starters, some (but not all) of the physical exhibits display the icon of the app overlaid with a number, which is coordinated with the map, but not with the list, which has no numbers. And since the augmented content is only reachable by first selecting an artifact from the list or the map, and then tapping the A.R. icon on the screen that follows, moving to the next exhibit means backing out of the A.R. camera to the artifact screen, then back to the list or map, and repeating the whole selection process. Those extra steps get in the way of enjoying the museum experience.
This organization of the content probably works fine for users at home, but for the museum-goer, there could have been a much better option, one that might have given the app more traction and kept it alive, if not led to its expansion throughout the museum.
A Missed Opportunity
If, in addition to the map and the list, the A.R. targets were used as a menu system, with all thirteen artifacts serving as targets, a museum visitor could simply scan any artifact marked with an icon to bring up the screen of options for that artifact. From there, videos, activities, and other information specifically related to the individual object could be accessed from the tool bar on the A.R. screen, leaving the camera as the default view. This would streamline on-site navigation and keep the visitor’s eyes and camera on the displays rather than on menus, lists, and maps, since the user would be free to move from one artifact to the next simply by scanning it.
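To make the idea concrete, here is a minimal sketch in TypeScript of how target recognition could double as navigation. All of the names here (the target IDs, ArtifactMenu, onTargetRecognized) are hypothetical and for illustration only; this is not how the Skin & Bones app is actually built, and a real implementation would hook the callback to whatever image-target event the chosen AR framework provides.

```typescript
// Hypothetical sketch: image targets as the menu system for an on-site A.R. app.
// Names and target IDs are illustrative, not taken from the actual app or any SDK.

interface ArtifactMenu {
  name: string;
  options: string[]; // e.g. "Animal Life" video, "Meet the Scientist", A.R. view
}

// One menu per target; in the exhibit, the target would be the skeleton itself
// (or its printed icon), and recognition would come from the AR framework.
const menusByTarget: Record<string, ArtifactMenu> = {
  "woodpecker-skeleton": {
    name: "Woodpecker",
    options: ["Animal Life", "Meet the Scientist", "A.R. view"],
  },
  "swordfish-skeleton": {
    name: "Swordfish",
    options: ["Animal Life", "A.R. view"],
  },
};

// Simulated recognition callback: the camera stays up as the default view,
// and recognizing a target brings up that artifact's toolbar directly,
// with no trip back through a list or map.
function onTargetRecognized(targetId: string): void {
  const menu = menusByTarget[targetId];
  if (!menu) {
    console.log(`Unrecognized target: ${targetId}`);
    return;
  }
  console.log(`Showing toolbar for ${menu.name}: ${menu.options.join(", ")}`);
}

// Moving from one display case to the next is just another scan.
onTargetRecognized("woodpecker-skeleton");
onTargetRecognized("swordfish-skeleton");
```

The point of the sketch is simply that the lookup runs in the direction the visitor already moves: from the physical object to its content, rather than from a list or map to the object.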