DISNEY INFINITY 3.0

OVERVIEW

Joining the team at Studio Gobo during its work on Disney Infinity 3.0 (Moana and Rise Against the Empire), I was the first full-time UX Designer on staff. In my role, I introduced and evangelised a player-centric approach to designing systems, levels and interfaces. I also worked to update and improve usability testing processes in the studio, helping deliver more value and more actionable results to make our games even more awesome.

DATE

September '15 - May '16

ROLE

UX Designer

TOOLS

UXPin, Visio, Photoshop, InDesign, Hansoft, OBS

TEAM MEMBERS (MINI-GAMES)

Game Designers (x2), Concept Artist, Programmers (x3)

The challenge

Introducing UX to the studio

Getting player feedback has been an important part of the design process since Studio Gobo first opened its doors. The studio has its own in-house usability lab and has been investing in frequent usability testing for several years. The culture of iterative design and testing is ingrained in the studio's DNA.

However, with no one dedicated to UX, there was a lack of overall ownership, meaning things could fall through the cracks. UI was driven largely by concept artists, usability testing suffered from a lack of standardisation, and the outcomes were often not followed up.

Having initially observed sessions and prioritised issues for usability tests toward the end of Rise Against the Empire, I joined the team during Moana as a UX Designer. I took ownership of testing in the studio and provided UX support across all features, from system and level design through to UI design for the mini-games.

Improving Usability Testing

Understanding the problems

When I first took over the usability testing, I met with various team members to understand what they felt was working with the current process and what could be improved. This exercise also helped me better understand how the team was running tests, and kept them involved in the discussion.

From these interviews, and by reviewing the existing documentation, I identified the following areas that could be improved:

  1. Small sample sizes. Tests were run infrequently and with small numbers of players. This meant the team was receiving feedback and acting on it without further testing to validate the changes. While this worked for identifying some of the larger usability issues, it failed to catch the smaller ones.
  2. Unrepresentative players. While the team had built a large database of players they could call on for sessions, often the only criterion used for recruitment was age. As a result, players were often recruited who had no experience of the product being tested, and some had never even used the controllers involved.
  3. Lack of standardisation for sessions. Sessions were run by different team members with no formal usability training, so few standards were in place. Team members would frequently interact with players, help them immediately when they became stuck, and ask leading questions.
  4. Lack of a test plan. Most tests conducted at the studio were broad, and often involved players simply playing through the game. With no formal test plan or research questions, results often failed to offer the detail the team wanted.
  5. No follow-up process or tracking for issues. With no formal process in place, fixes were not verified and often fell through the cracks. The lack of tracking also meant the wider team had no visibility of recurring issues.
  6. Issues not prioritised. As issues weren't tracked, there was also no triage process following sessions, and all issues were recorded as an unordered list in the team wiki. There was no indication of issue severity, and usability issues were mixed in with bugs and general player feedback. This meant that issues were frequently missed.
  7. Missing data. Data that would have been useful to the team, such as gamepad inputs, player ratings, task completion rates and time on task, was often not recorded. As a result, information that could have helped the team make better decisions was never collected or reported on.
  8. No summary. Finally, the biggest piece of feedback from the interviews concerned the lack of a high-level summary. Given the game team was often pushed for time, it was simply not practical to trawl through hours of footage or detailed notes to find relevant information.

Getting better data

The first step in getting better data was creating test plans for each usability session. I would formalise the goals of the team ("Do players understand how to use the Fish Hook mechanic?", "Is the difficulty curve for puzzles in Polatu too steep?" etc.), which would then inform which parts of the game would be tested, and how, so we could zero in on specific areas. These test plans meant we could maximise the return on investment for usability testing and capture data that was genuinely useful for the team.

Images of the usability test plan used on Moana

An example of a session test plan. By defining clear objectives up front, testing could be more focused, ensuring the right parts of the game were being looked at.
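For illustration, the rough shape of one of these plans could be captured as a simple structure like the sketch below. The fields, names and the example round are hypothetical, not taken from the actual documents, but the research questions match the goals described above.

```typescript
// Hypothetical shape of a usability test plan (illustrative only; the field
// names and example content are assumptions, not the studio's real documents).
interface ResearchQuestion {
  question: string;   // e.g. "Do players understand the Fish Hook mechanic?"
  gameArea: string;   // which level or system the session should focus on
  tasks: string[];    // what players will be asked to do
  metrics: string[];  // what gets recorded: completion, time on task, ratings...
}

interface TestPlan {
  milestone: string;     // which build or milestone the round is aligned to
  participants: number;  // at least five players (or pairs of players) per round
  questions: ResearchQuestion[];
}

// Illustrative example built from the goals mentioned above.
const exampleRound: TestPlan = {
  milestone: "Example milestone build",
  participants: 5,
  questions: [
    {
      question: "Do players understand how to use the Fish Hook mechanic?",
      gameArea: "Fish Hook introduction",
      tasks: ["Complete the first traversal section using the Fish Hook"],
      metrics: ["task completion", "time on task", "assists required"],
    },
    {
      question: "Is the difficulty curve for puzzles in Polatu too steep?",
      gameArea: "Polatu puzzles",
      tasks: ["Solve the first three puzzles unaided"],
      metrics: ["completion rate per puzzle", "time on task", "player ratings"],
    },
  ],
};
```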

I also wanted to tackle the problem of recruiting unrepresentative players. In addition to creating a simple screener survey via Google Forms to ensure players from our existing database fit the required profile, I created a dedicated page on the Studio Gobo website to recruit additional players and grow that database. I also discontinued the infrequent sessions with small sample sizes, and instead created a schedule of regular tests with at least five players (or pairs of players) per round. These were aligned with key milestones to test content and deliver feedback when it was most relevant.

Improving the lab

The studio already had a dedicated usability lab which functioned well. It was comfortably furnished, and audio and video recording were available. Even so, there were some quick wins I was able to make.

Firstly, I switched to OBS for recording sessions, replacing the Wirecast setup previously in use. OBS had more useful features, such as the ability to composite footage, and allowed me to stream sessions directly to the team. I also used a web plugin to record gamepad inputs, something the team had requested. The final footage then included the gameplay, gamepad inputs, and room feed.
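The overlay idea itself is straightforward to recreate: a browser source can read controller state through the standard Gamepad API and draw it over the stream. The sketch below is a minimal, assumed version of that idea, not the actual plugin we used.

```typescript
// Minimal sketch of a gamepad input overlay for an OBS browser source.
// Illustrative only: the plugin actually used in the studio isn't named here.
// It relies solely on the standard browser Gamepad API (navigator.getGamepads).

const overlay = document.createElement("pre");
overlay.style.cssText = "color: white; font: 16px monospace; background: transparent;";
document.body.appendChild(overlay);

function render(): void {
  const lines: string[] = [];
  for (const pad of navigator.getGamepads()) {
    if (!pad) continue; // empty controller slots are null
    const pressed = pad.buttons
      .map((button, index) => (button.pressed ? `B${index}` : null))
      .filter((label): label is string => label !== null);
    const sticks = pad.axes.map((axis) => axis.toFixed(2)).join(", ");
    lines.push(`Pad ${pad.index}: [${pressed.join(" ")}] axes: ${sticks}`);
  }
  overlay.textContent = lines.join("\n");
  requestAnimationFrame(render); // the API requires polling, so update every frame
}

requestAnimationFrame(render);
```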

Streaming the footage meant the team could watch in real time and as a group, which always got more traction than asking people to watch the videos afterwards. Involving the team in this way increased the visibility of sessions and meant they could react instantly (when necessary) if they identified an issue. We would also meet after each session to compare notes, keeping everyone engaged with the tests.

A capture from a typical usability session, featuring game footage, a live room camera, and gamepads displaying inputs from players

OBS allowed me to create more useful footage for testing sessions. This includes the game (top left), gamepad input (bottom left), and room feed (top right).

Providing better results

At the end of each round of testing I compiled the results into a presentation for the team. As part of this, I summarised each issue, described its possible causes, and provided clips of it occurring for the team to watch. Finally, I assigned a severity rating to each issue based on whether it affected core systems, its persistence, and its recurrence. This high-level summary was invaluable for the team, as they could get an overview of the issues and observe them happening without watching every session.
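The severity rubric itself isn't documented here, but as a rough sketch, a rating could be derived from those three factors along these lines (the weights and thresholds below are illustrative assumptions, not the studio's actual rubric):

```typescript
// Illustrative severity triage for usability issues. The inputs mirror the
// factors described above (core system impact, persistence, recurrence),
// but the scoring is assumed for the example.
interface UsabilityIssue {
  summary: string;
  affectsCoreSystem: boolean; // does it block or distort a core mechanic?
  persistent: boolean;        // does it keep happening for the same player?
  recurrence: number;         // how many players in the round hit it
}

type Severity = "Critical" | "High" | "Medium" | "Low";

function rateSeverity(issue: UsabilityIssue, playersInRound: number): Severity {
  const recurrenceRate = issue.recurrence / playersInRound;
  let score = 0;
  if (issue.affectsCoreSystem) score += 2;
  if (issue.persistent) score += 1;
  if (recurrenceRate >= 0.5) score += 2;
  else if (recurrenceRate >= 0.2) score += 1;

  if (score >= 4) return "Critical";
  if (score === 3) return "High";
  if (score === 2) return "Medium";
  return "Low";
}

// e.g. a persistent issue on a core mechanic seen by 4 of 5 players:
// rateSeverity({ summary: "...", affectsCoreSystem: true, persistent: true,
//                recurrence: 4 }, 5) === "Critical"
```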

To ensure issues could be included as part of sprint work and followed up on, I started tracking them via Hansoft (and Jira for later projects). Adding issues to project management software was an important step as it gave the wider team visibility, and meant I was able to check on the status of issues between sessions. I could then make sure they were re-tested to see if a given fix had worked.

As part of the report presentations, we would assign each issue to an owner who was then responsible for making sure a fix was applied, ideally before the next round of testing. This ownership ensured nothing slipped through the cracks.

A screenshot from Hansoft, showing the way in which usability issues could be recorded.

Issues were logged in Hansoft using a format similar to the above. A priority was added to each to help the team focus their work.

Introducing UX Design

Information Architecture and Flow

Finding the fun

During pre-production of Moana, the team wanted to gain a better understanding of the target audience's in-game behaviour and cognitive abilities in a similar game. The team identified The Legend of Zelda: The Wind Waker as a quality and design benchmark. However, while that game was aimed at players of all ages, we hypothesised that some aspects of it would pose problems for our target audience. By running formal usability sessions on Wind Waker, we aimed to uncover both the usability issues and the successes of its design.

Over the course of two weeks, ten pairs of players in our target demographic (young girls between the ages of 6 and 12) were invited to the studio. Each pair played the opening 4 hours of the game, taking turns with the controls. During this time, players learnt some of the core combat and tool mechanics for the game, and completed two of the dungeon levels which emphasised puzzle solving.

In the end we uncovered a number of issues specific to our audience. These included a persistent lack of experimentation when solving puzzles among the youngest children, difficulty extracting meaning from ambiguous instructions, and general issues with fine motor skills and navigation. We also found that young players often preferred exploring the environment rather than focusing on their active mission. Free gameplay, along with fun and unexpected interactions in the open world, was highlighted by players as the best part of the game.

These findings were presented to the studio and formed the basis of how the design for Moana would be implemented.

Screenshot of the usability lab setup for Wind Waker

Footage from a usability test of Wind Waker.

Case Study: Moana's Mini-games

Aside from general support with UX input and usability testing, I also owned the UX design of several mini-games within Moana. The goals of the mini-games were to add variety and replayability to the game, unlocking new content as the player progressed and rewarding exploration. The UI had to conceal the complexity of the underlying systems in order to create a rewarding experience for players, while not interfering with the gameplay.

To initiate these mini-games, I opted to have the interactions in-world, rather than simply popping up a UI. This made the experience feel more toylike and experimental, something which tested well with players. For example, the Music Fale mini-game had three difficulties and several levels; the proposal included a button the player could jump on to change the stage, and the game was initiated by walking up to an interaction point for the desired difficulty. This proved to be very intuitive, and during usability tests players would happily jump on the button over and over to watch the whole stage change in front of them.

Storyboard for the Music Fale mini-game. It displays eight panels following the flow of the gameplay, from initiation through to completion and scoring

The storyboard shows the proposed flow for the Music Fale mini-game. Visually blocking out how this would work helped sell the idea, especially the 3D interactions, without worrying too much about the UI design at this stage.

With the team happy with the flow, I worked on the UI design. As we had three mini-games, I wanted to share as many UI elements as possible across them so players didn't need to learn new ones for each game. This consistency was vital for reducing cognitive load, especially for our youngest players.

As scoring was less critical to the second-to-second gameplay, I aligned it with the existing Infinity HUD, leaning into players' habit of checking that section of the screen. The score multiplier was much more important, so rather than keeping it in the same place for every mini-game, I positioned it based on where the player's attention would be. This ensured they always had a clear view of it without needing to move their eyes away from the action.

Wireframe of the Music Fale mini-game. The avatar is displayed mid-screen, with notes cascading down from top to bottom on the left and right. A score is displayed.

For the Music Fale mini-game, notes cascade down the screen to target zones, at which point the player makes an input. The score is positioned in the top left alongside the existing HUD, while the more important multiplier is front and centre, right where the player's attention is.

Reflection

Conclusion

During my first months at Studio Gobo I was able to deliver quick wins, both to design in general and to the feedback collected during usability tests. In doing so, I demonstrated the need for UX in the studio and was hired as its first dedicated UX Designer.

Most importantly, these processes are still in place today. The whole team is involved in UX, of course, but I continue to be a driving force for a more player-centric approach in the design process. Additionally, the improvements I put in place for usability testing and the lab setup continue to deliver on their investment, ensuring we're able to test our games with representative players and make them even better.
