Usability Lab on a Budget for Start-ups


When I first joined RefME as a user researcher, one of my first challenges was to set up our in-house usability lab from scratch. Offering a great user experience has always been one of RefME’s top priorities, and conducting usability testing sessions to understand how our product is used has been regular practice since the company’s early days. In its first year RefME was much smaller, and the research sessions were conducted by our UX designer, who reported the findings to our Engineering team. This approach worked well initially, but by the time I started the company had grown considerably, and we quickly realised we needed a setup that would allow internal stakeholders to observe the sessions.

Coming from an agency background, I had some experience in managing a usability lab. However, the lab at the London-based UX agency where I used to work was quite advanced, and working in a start-up often means making do with whatever resources are available. An ad-hoc sound streaming system and professional usability testing software were out of the question. Eventually, with some research and tweaking, we found a setup that works for us. It is very cost effective yet flexible: it allows us to test multiple devices during the same session, and if a team wants to observe the research interviews as they happen, the lab makes that possible. In this article I’d like to take you through our one-of-a-kind setup and show you, step by step, how to get it ready to run a usability testing session.

What you will need:

  • 2 rooms (adjacent, ideally) – a testing room and an observation room
  • A TV for the observation room
  • A laptop to test on (running Windows)
  • HDMI cable
  • Virtual camera software
    ManyCam – £21.18 (Standard subscription)

Optional equipment

  • External webcam – for capturing the face of the participant when testing on mobile devices
    IPEVO Point 2 View USB Camera – £55
    This camera is not the cheapest option you can find, but since it’s a document camera, it can also be used to test on tablets when needed.

Although you don’t necessarily need two rooms to make this setup work (at the very minimum you would only need a laptop, ManyCam and Skype/Google Hangouts to run a session with remote observers), here at RefME we are fortunate enough to have a dedicated observation room adjacent to the testing room. If this is at all possible at your company, I strongly suggest choosing this approach over remote screencasting: apart from the lower streaming quality, programs like Skype/Google Hangouts tend to be very resource-intensive for the testing laptop, and if run in the background while participants are testing other software they can slow the machine down substantially. If you don’t have the luxury of an adjacent room for observing the sessions, don’t panic! Keep reading and you should still be able to figure out how to use this setup to stream the usability testing for remote viewing.

And now, let’s get started.


1. Connect the TV and all other peripherals to the PC

Connect the PC laptop (testing room) to the TV (observation room) via HDMI cable, to stream the sessions to the observation room. Make sure all the peripherals are connected to the PC (pic. 1).


2. Make the displays adjacent

Open the display settings window (right-click on the desktop > ‘Display settings’) and select “Extend these displays” (pic. 2). The TV and the PC’s monitor will now work as two adjacent displays.
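If you end up setting the lab up before every round of testing, this step can also be scripted. The sketch below is a minimal example in Python; it simply calls DisplaySwitch.exe, the small utility built into Windows for switching projection modes, and assumes you run it on the testing laptop itself:

    import subprocess

    # Switch Windows into "Extend" mode, the equivalent of choosing
    # "Extend these displays" in the Display settings window.
    # DisplaySwitch.exe ships with Windows (it lives in System32),
    # so it should already be on the PATH.
    subprocess.run(["DisplaySwitch.exe", "/extend"], check=True)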


3. Set up sound streaming

Open the “Sound” window from the Control Panel. Set the TV as the default audio playback device (pic. 3). Then, from the “Recording” tab, select the external microphone (if using one) as the default device (pic. 4), double-click it to access the microphone’s settings, and tick the “Listen to this device” box (pic. 5). Everything the external microphone picks up will now be played through the TV in the observation room. Adjust the TV and microphone volume as needed.

NOTE: Make sure the TV is selected as the output device before you tick “Listen to this device”: if the PC speakers (or any other device in the same room as the microphone) are selected as the output, you will get a loud feedback noise. Also, with this setup the participant won’t be able to hear any sound coming from the computer, including videos or system sounds.
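The default playback device can be scripted too, if you get tired of opening the Control Panel before every session. The sketch below is a rough example using NirSoft’s free NirCmd utility rather than the Sound window; nircmd.exe is not part of Windows, so this assumes you have downloaded it, put it on the PATH, and that your TV appears as a playback device named “TV” (check the exact name on your machine). The “Listen to this device” box still needs to be ticked by hand.

    import subprocess

    # Make the TV the default playback device. Assumes the TV appears
    # in the "Sound" window as a device called "TV" (the name is
    # machine-specific). The trailing "1" sets the "multimedia" role.
    subprocess.run(["nircmd.exe", "setdefaultsounddevice", "TV", "1"], check=True)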

4. Set up ManyCam for capturing the interaction 

The advantage of using ManyCam is that you can use up to 4 video inputs (e.g. desktop testing, mobile testing, etc.), and easily switch between them during a session. To set it up, launch ManyCam, right-click on one of the panels on the right and choose the main input (pic. 6): for example, for desktop testing this would be Desktop 1 (the PC desktop, as opposed to the TV’s – Desktop 2).

If needed, set the resolution to 1080p (bottom-left corner), then select the PiP option (i.e. Picture in Picture) and pick the PC webcam by clicking on the PiP box on the preview (pic. 7).



5. Add any additional setups

Repeat Step 4 for the mobile setup (selecting the mobile camera as the main input and the external webcam for the PiP), and for the interview setup (just the external webcam as the main input). You now have 3 streaming outputs which you can switch between during a session (pic. 8).

The external webcam is used to capture the participant’s face during the interview part of the session and during mobile testing (if needed). This is because in these phases the facilitator will be sitting in front of the PC, which lets them make sure the setup is working, change the streaming output when needed, and watch the mobile testing on screen without having to lean over the participant.


6. Streaming to the observation room

The video output from ManyCam needs to be streamed only to the observation room, so that observers get the PiP view without the participant feeling self-conscious about a camera pointed at their face.
Click on the “ManyCam” menu in the top-left corner, then “Full Screen Broadcast”. This should launch a separate window with the selected video output (pic. 9).

Since the two screens are adjacent, drag the window onto the TV’s display and snap it to the top to make it go full screen (this is a bit tricky, since you won’t see the TV’s screen, but you’ll get the hang of it). You can now control which video output the observers see by clicking on “Cut” on the different video panels within ManyCam.

NOTE: The addition of PiP is one of the reasons why we are using extended displays rather than mirroring the PC one (pic. 10).




7. Remote streaming (optional)

For remote streaming we usually use Google Hangouts, as it integrates well with Calendar. Open Google Calendar in a browser and click on the “Join meeting” link on the calendar event previously created for user testing. When in the conference call, click on the “Screenshare” icon on the left side and select Screen 2 (to share the TV screen with the PiP). If you are not streaming to an observation room, all you need to do is select the ManyCam virtual camera as your input camera, so the other attendees will see ManyCam’s output, including the face of the participant.

NOTE: Always check your microphone is on for the call and tell any remote observers to turn off their microphones, so that the attendees in the observation room won’t hear any background noise.


8. Recording the session (optional)

If you need to record the sessions, Debut Video Capture Software is a very effective way of doing this. For less than £25 it records mouse interactions as well as the screen, and can export to a number of different video formats. On the downside, it’s a bit fiddly to set up and, more annoyingly, its icon changes colour intermittently while recording, which is not ideal when the participant is carrying out their tasks.
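If the colour-changing icon bothers you, one alternative worth mentioning is recording the screen from the command line with FFmpeg, which is free. This is only a sketch, assuming ffmpeg is installed on the testing laptop and available on the PATH; it captures video only, so microphone audio would need to be recorded separately:

    import subprocess

    # Record the whole desktop at 30 fps until the process is stopped
    # (press q in the ffmpeg console, or Ctrl+C).
    subprocess.run([
        "ffmpeg",
        "-f", "gdigrab",        # Windows screen-capture input
        "-framerate", "30",
        "-i", "desktop",        # grab the entire desktop
        "-pix_fmt", "yuv420p",  # widely compatible pixel format
        "session.mp4",
    ])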


To wrap up, I want to leave you with my thoughts and observations on the pros and cons of this setup after using it for a few months.


Pros

  • Its main strength is that it’s much cheaper than most software solutions around
  • It’s flexible, especially when using the mobile kit: you can change testing devices during the session without changing the setup or stopping the recording. Furthermore, it allows for remote viewing as well as live streaming and recording, and the PiP view is available for all of these


Cons

  • It relies on a number of different programs to achieve the desired outcome, especially when streaming and recording the screen at the same time. This requires quite a lot of RAM, so make sure you use a powerful laptop
  • It uses 2 connected screens. This is probably the biggest downside for the actual testing, as the participant can end up moving the mouse onto the second screen and losing track of the pointer. The solution I found works best is managing expectations and reassuring the participant: before testing I always tell them not to “freak out if they lose the pointer”, which never fails to get a laugh and provides some comic relief and a nice introduction to the session.

Thanks to RefME for supporting the important work of user testing so we can continue to create accurate, automated citations for all students. :)
