limits the user to understanding the system in terms of the metaphor
poor designs can be used as the metaphor
relying on metaphors might hinder coming up with new conceptual models
may sometimes break conventional and cultural rules
interaction types
instructing
tell the system what to do
e.g. typing in commands
e.g. selecting from menus
conversing
have a dialog with the system
menu-based dialogues
text-based dialogues
virtual agents
manipulating
manipulating objects, drawing on users' experiences
based on users' experience with real objects (affordance)
exploring
moving through virtual or physical environments
e.g. vr
responding
system takes the initiative to alert, describe, or show the user something of interest
relevant to the time or context
interface types
types of input and output methods
interface used by the users to support the interaction
choose the most appropriate or a combination
envisionment
make ideas visible and externalize thoughts
represent design work
occurs throughout development
different representations
sketches
ideas and thoughts can be quickly visualized
quick, timely, inexpensive, disposable and plentiful
allow quick testing of new ideas during brainstorming
reduce attachment to the design
basic elements, people, objects - depends on the purpose
context, user view, snapshot
advantages
storyboards
sequence of actions or events
user journey
3-7 steps
each picture labelled with 1 short description
context of interaction is visible
correct level of details
wireframes (e.g. wireflow)
single screen or interaction page
plan the layout and interaction patterns
different level of details
prototypes
low-fidelity
medium unlike the final medium
capture early design thinking
quick and easy to produce
high-fidelity
similar in look and feel with anticipated final product
detailed evaluation of the main design elements
paper prototypes
produce quickly
enables non-technical people to interact easily with the design team
flexibility - 'redesigned'
advantages
faking interaction
wizard of oz (lo-fi)
a human produces the responses, rather than the system
video prototype
how the prototype is 'used' in real-life
early stage - fake interaction
later stage - communicate what product looks like and can do
focus on information to be conveyed
limited by imagination, time and materials
compromises
horizontal - wide range but little details
vertical - lots of details but a few functions
computer prototyping tools
fidelity of prototype
level of details and functionality built into a prototype
low-fidelity
limited functionality and interactivity
examples: paper prototype
high-fidelity
close resemblance to the final design
high functionality and interactivity
examples: digital prototypes
what are prototyping tools?
tools developed for the sole purpose of prototyping
code-based
code-free
software prototyping tools
certain degree of coding required
web uis
html5 with a lot of libraries
three.js
user interface builders
visual studio, xcode, visual basic
finished design can be used for final implementation
processing.org
a programming ide for prototyping
supports many libraries
video, audio, network, animation, vision, ml
based on java
comparison
designing tools
tools allow you to design within them or import designs from other software
different tools, different range of fidelity
software suitable to create
balsamiq
adobexd
linking to create clickthroughs
prototype that links multiple screens together via hotspots
hotspots - areas that the user can interact with
moving from paper to digital prototype
upload existing images
add hotspots
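The clickthrough idea above can be sketched as a tiny data structure: screens linked by named hotspots, where a tap on a hotspot jumps to another screen. Screen and hotspot names below are invented for illustration; real tools (e.g. adobe xd, balsamiq) do the same thing visually.

```python
# A clickthrough prototype modeled as screens linked by hotspots:
# each hotspot is a named region that jumps to another screen when
# tapped (screen and hotspot names are made up for illustration).
screens = {
    "home":      {"login_button": "login", "help_icon": "help"},
    "login":     {"submit": "dashboard", "back": "home"},
    "help":      {"back": "home"},
    "dashboard": {},
}

def click(screen, hotspot):
    """Follow a hotspot; taps outside any hotspot stay on the screen."""
    return screens[screen].get(hotspot, screen)

state = "home"
for tap in ["login_button", "submit"]:
    state = click(state, tap)
print(state)  # dashboard
```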
sharing the prototype
purpose
collaboration
add team members to project (cloud-based tools)
edit and comment on design
presentation and testing
user participants and stakeholders
view and use prototype
different types of shareable/export formats
web link
pdf file with hyperlinks
view in ios/android phones
html files
choosing a prototype tool
fidelity
layout and navigation design
visual design and micro-interactions
ease of collaboration (teamwork support)
ease to pick up tool
number of people on the same project
platform - mac/window/cloud
integration with workflow
import and export previous work
asset libraries
costs
free/trial
subscription
physical prototypes
what are the physical prototypes?
mostly focused on electronic products
wearable technology
tangible ui
same principle - test out ideas quickly
resources to support development
physical computing kits
build and code prototypes and devices using electronics
arduino
open-source electronics platform based on easy-to-use hardware and software
toolkit comprises two parts
arduino board
arduino ide - program sketch to board
sketch - unit of code
bbc micro:bit
similar to arduino
add external components at the edge connector
teach programming in schools - scratch, python
makey makey
rapid fabrication
computer aided production tools
3d printers (additive manufacturing)
laser cutters (subtractive)
helps to quickly fabricate high quality physical prototypes
easy to modify and change
introducing evaluation
ethics
inform participants about their rights during the study
protect participants during study
physical or emotional endangerment
privacy of participants
ethics approval must be obtained before study is conducted.
university human ethics policy
ethics approval helps to
protect the welfare, rights, dignity and safety of research participants
protect researchers' rights to conduct legitimate investigation
protect the university's reputation for research conducted and sponsored by it.
minimize the potential for claims of negligence made against individual researchers and the university
human research
research conducted with or about people, their biological materials or information.
it covers activities including:
taking part in surveys, interviews, or focus groups
undergoing psychological, physiological or medical testing or treatment
being observed by researchers
accessing personal documents or other materials
collection and use of biological materials
access to personal information as part of an existing published or unpublished source.
before the session
don't waste the user's time
make sure experiment is designed well
be prepared
make users feel comfortable
communicate that only the system is being tested, not the user
indicate that the software may have problems
inform that they can stop at any time
maintain privacy
tell the user that results will be anonymized (if applicable)
inform the user
explain what is being recorded (video, audio, data logging, etc.)
answer user's questions (but avoid bias)
do not coerce users
obtain informed consent
during the session
don't waste the user's time
do not ask to perform unnecessary tasks
make users feel comfortable
give early success experience (pre-trials)
keep a relaxed atmosphere
sufficient breaks (e.g. coffee breaks)
hand out test tasks one at a time
do not show displeasure
avoid disruptions
stop the test if the participant shows discomfort
maintain privacy
external people should not be present
after the session
make users feel comfortable
thank the user and inform they have helped
provide additional information if necessary
answer any other remaining questions user had
e.g. something that could have led to a bias
maintain privacy
report the data without compromising privacy
only share audio-visual data with express permission
store all the data in a secure location
university has a dedicated research data storage
research computing optimized storage (rcos)
main steps in evaluation
1. establish aims of evaluation
2. select evaluation methods. good to have combination of participant (with users) and non-participant methods (without users).
3. carry out non-participant methods first.
4. use results from non-participant methods to plan participant testing
5. plan session, recruit participants and setup equipment
6. carry out evaluation
7. analyze results, document and report
selecting and combining methods
use a combination of methods to obtain richer understanding of users and product
controlled - test hypothesis about specific features
uncontrolled - insight to people's experience of interacting with technology in the context of daily life
examples
combination of usability testing in labs combined with field studies
cognitive walkthrough to test run the prototype before actual usability testing in the lab
when to evaluate?
during iterative design - check if
design matches the requirements
problems with the design
before deployment - for acceptance testing
does the system meet expected performance
continuous evaluation after deploying
"performance beta"
continuous evaluation
in the wild, bug reports, field studies
where to evaluate?
usability lab
testing room constructed for usability testing
instrumented
camera, microphones, data recording, etc.
separate observation room
connected by one-way mirror
benefits
controlled situation
ideal to study one precise aspect
many equipment available
only option if real location is dangerous or remote
problems
does not represent a natural situation
hard to generalize results
research lab
naturalistic setting
observation occurs in realistic setting
real life
workplace / home
in-situ
benefits
more realistic (e.g. external effects)
situation and behavior more natural
better suited for long-term studies
well-suited for user experience studies
problems
hard to arrange and run
time consuming
task is difficult to control
environment is difficult to control (e.g. distractions)
remote study
what to evaluate?
conceptual model
focus is on (standard) usability issues
product is close to final / feature rich
comparative results
early and subsequent prototypes of a new system
get early feedback on a design
low-fidelity prototype
can fix design issues in advance
final product
how the product works for new markets / user groups
existing product, already evaluated for one market
why evaluate?
to judge system features / functionality
does it facilitate users' tasks and match their requirements?
does it offer the right features?
to judge effects on users
how easy is the system to learn and use?
how do users feel about the system?
to discover unforeseen problems
what unexpected / confusing situations come up?
to compare your solution against competitors
important for marketing / sales department
evaluations without users
inspections
heuristic evaluation
a review guided by a set of heuristics
small set of evaluators examine the interface and judge its compliance with recognized usability principles.
original heuristics - nielsen ten usability heuristics derived empirically from an analysis of 249 usability problems.
number of evaluators
on average five evaluators identify 75-80 percent of usability problems
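The "five evaluators" rule of thumb comes from a simple discovery model (Nielsen and Landauer): each independent evaluator finds some fixed proportion of the problems. A minimal sketch, assuming a per-evaluator discovery rate around 0.3 (a commonly cited estimate; real rates vary by study and product):

```python
# Problem-discovery model: fraction of usability problems found by n
# independent evaluators is 1 - (1 - L)**n, where L is the proportion
# a single evaluator finds (L ~ 0.3 is an assumed, commonly cited value).
def problems_found(n_evaluators, discovery_rate=0.3):
    """Expected fraction of all usability problems found."""
    return 1 - (1 - discovery_rate) ** n_evaluators

for n in (1, 3, 5, 10):
    print(n, round(problems_found(n), 2))  # 5 evaluators -> roughly 0.83
```

The curve flattens quickly, which is why adding evaluators beyond five or so yields diminishing returns.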
choice of heuristics
should depend on goals of the evaluations
it is suggested to use category-specific heuristics that apply to a specific class of product as a supplement to the general heuristics.
can tailor original heuristics with other design guidelines, market research and requirements documents for this purpose.
how to run a heuristic evaluation
briefing session to tell experts what to do
evaluation period of 1-2 hours in which
each expert works separately
take one pass to get a feel for the product
take a second pass to focus on specific features
debriefing session in which experts work together to prioritize problems.
benefits
few ethical and practical issues to consider because users not involved
best experts have knowledge of application domain and users
problems
can be difficult and expensive to find experts
many trivial problems are often identified, such as false alarms
experts have biases
cognitive walkthrough
involves stepping through a pre-planned scenario, noting potential problems
focus on ease of learning
designer presents an aspect of the design and usage scenarios
expert is told the assumptions about user population, context of use, task details.
one or more experts walk through the design prototype with the scenario
how to cognitive walkthrough
ux researchers walk through the action sequences for each task.
as they do this, answer the following questions:
will the correct action be sufficiently evident to the user?
will the user notice that the correct action is available?
will the user associate and interpret the response from the action correctly?
record problems.
benefits
can be done without users
considers users' task
quick and inexpensive to apply
problems
limited by skills of the evaluator
labor intensive - answering and discussing questions may take a long time.
analytics
web analytics
a form of interaction logging that analyzes users' activities on a website
total number of people
length of stay
content site visits
outcomes can be used to improve their design
when designs don't meet users' needs, users will not return to the site (one-time users)
example - sparkplus
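The metrics listed above fall out of a page-view log directly. A minimal sketch, with visitor ids, pages and timestamps invented for illustration:

```python
from collections import defaultdict

# Toy page-view log as (visitor_id, page, seconds_since_start) tuples;
# all values below are made up for illustration.
log = [
    ("u1", "/home", 0), ("u1", "/docs", 40), ("u1", "/home", 90),
    ("u2", "/home", 10), ("u2", "/pricing", 70),
    ("u3", "/home", 5),
]

visitors = {v for v, _, _ in log}        # total number of people
page_hits = defaultdict(int)             # content site visits
first_last = {}                          # for length of stay
for v, page, t in log:
    page_hits[page] += 1
    earliest, latest = first_last.get(v, (t, t))
    first_last[v] = (min(earliest, t), max(latest, t))

# length of stay: last event minus first event per visitor
stay = {v: latest - earliest for v, (earliest, latest) in first_last.items()}

print(len(visitors), page_hits["/home"], stay["u1"])  # 3 4 90
```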
learning analytics
web analytics applied to field of education
learners' activity in massive open online courses (moocs) and open education resources (oers).
a/b testing
a large-scale experiment
offers another way to evaluate a website, application, or app running on a mobile device
often used for evaluating changes in design on social media applications
compares how two groups of users perform on two versions of a design
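One standard way to compare the two groups is a two-proportion z-test on a conversion metric such as click-through. The counts below are invented for illustration; this is a sketch, not a full analysis (a real test would also consider sample size planning and multiple comparisons).

```python
import math

# Two-proportion z-test comparing click-through of two design variants
# (all counts below are made up for illustration).
def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for H0: both variants convert at the same rate."""
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (success_b / n_b - success_a / n_a) / se

z = two_proportion_z(200, 4000, 260, 4000)  # variant B converts more
print(round(z, 2))  # |z| > 1.96 -> significant at the 5% level
```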
model
predictive models evaluate a system without users being present.
fitts' law
time taken to hit a screen target depends on the distance to the target and the size of the target.
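In the Shannon formulation, movement time is MT = a + b * log2(D/W + 1), where D is the distance to the target and W its width. The constants a and b are fitted empirically per device; the values below are made up for illustration.

```python
import math

# Fitts' law (Shannon formulation): MT = a + b * log2(D / W + 1).
# a and b are device-specific constants fitted from data
# (the defaults here are assumed values, not real measurements).
def movement_time(distance, width, a=0.1, b=0.15):
    return a + b * math.log2(distance / width + 1)

print(round(movement_time(64, 64), 2))   # near, big target -> 0.25
print(round(movement_time(512, 32), 2))  # far, small target -> slower
```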
evaluation with users (1) - usability test
usability testing
involves recording performance of typical users doing typical tasks
users are observed and timed
data is recorded on video, and key presses are logged
user satisfaction is evaluated using questionnaires and interviews
team roles during testing
all members are encouraged to participate in the evaluations
facilitator
person in the lab together with participant
responsibilities
plan and execute session
set up lab for session
responsible for putting participant at ease during session
must have people skills
prototype executor
person to 'execute' the prototype and move it through its paces as users interact
only if you are using a low-fidelity prototype
must have thorough technical knowledge of how design works
must keep a poker face and should not speak a single word during the session.
quantitative data collector
records quantitative measures such as
time to complete task
number and type of errors per task
number of errors per unit of time
number of navigations to online help or manuals
record into spreadsheet directly
tools - stopwatch and counter.
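The measures above reduce to simple arithmetic once per-task records exist. A minimal sketch with invented task names, times and error counts:

```python
# Per-task records gathered by the quantitative data collector:
# (task, seconds to complete, error count) - values are made up.
records = [("task1", 120, 3), ("task2", 300, 2), ("task3", 90, 0)]

def errors_per_minute(seconds, errors):
    """Number of errors per unit of time (per minute)."""
    return errors / (seconds / 60)

for task, secs, errs in records:
    print(task, secs, errs, round(errors_per_minute(secs, errs), 2))

total_secs = sum(s for _, s, _ in records)
total_errs = sum(e for _, _, e in records)
print(round(errors_per_minute(total_secs, total_errs), 2))  # overall rate
```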
qualitative data collector
observation notes
critical incidents
think aloud comments
supporting actors
optional
if part of the setting or task requires participant to interact with someone.
manage the props needed in the evaluation (other than the prototype execution).
example: call client on the telephone.
tasks during session
representative, frequent and critical tasks that apply to the key work role and user class represented by each participant.
prepare corresponding task description and ux target metrics to guide data collection and compare observed results.
test conditions same for every participant.
task description
what to do, no hints about how to do
recruiting participants
find representative users (usually outside your team and outside the project organization)
recruitment methods and screening
people around you - spouses, children, friends.
post ads in public spaces
announcements at meetings of user groups and professional organizations if the group matches your user class needs
temporary employment agencies.
number of participants
schedule for testing
availability of participants
costs of running tests
famous rule of thumb
3 - 5 participants
typically 5-10 participants
or: test until no new insights are gained
planning the session
if it is in the lab, configure the lab to your needs.
computer / device
placement of participants, facilitator and executor
set up hardware, e.g. eye-trackers, timers, counters etc.
determine length of session for one participant
typical length: 30 mins to 120 mins
strategies to manage long sessions
warn participants in advance
schedule breaks between tasks - exercise, toilet break, refreshments.
prepare food and water in advance to keep participant at ease
prepare necessary paperwork
informed consent (important)
formal and signed permission given to ux professional by participants to use data gathered within stipulated limits.
preparation for informed consent begins with institutional review board (irb) / ethics approval committee
evaluators / project manager to prepare application.
usyd ethics application
participant information statement (pis)
participant consent forms (pcf)
advertisements, letters and emails seeking participants.
interview or focus group questions / themes
letters of support or permission from organizations assisting in the research in any way
external research declarations (for researchers not affiliated with the university)
participants must read both pis and pcf before session
allow participants / guardians to ask questions
participants / guardian must sign pcf before session
prepare two copies for the session
one for participant to keep
one for submission
other data collection forms
non-disclosure agreements (ndas)
if required by developer or customer organizations to protect intellectual property (ip) contained in design.
must be included during signing of pcf.
questionnaires
if your evaluation plan includes administration of one.
sus - system usability scale
usefulness, satisfaction and ease of use (use) questionnaire
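SUS has a fixed scoring procedure worth knowing: ten statements rated 1 to 5, odd items worded positively and even items negatively, with the sum rescaled to 0-100. A sketch of that procedure (the example ratings are made up):

```python
# SUS scoring: 10 statements, each rated 1 (strongly disagree) to 5
# (strongly agree). Odd items are positive (score = rating - 1),
# even items are negative (score = 5 - rating); the sum of the ten
# scores is multiplied by 2.5 to give a 0-100 scale.
def sus_score(ratings):
    assert len(ratings) == 10
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(ratings, start=1))
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # 85.0
```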
on the big day
before session
invite participant into the lab
offer refreshments and paperwork
explain details of study and check for questions
participant to complete requested forms
[optional] interview participant to check responses on questionnaire
during session
hand out task (one at a time)
encourage participant to think aloud
describe their actions and why
stop test if participant is in distress
to help or not to help participant during the task
depends on purpose of test
guide users if questions are asked
after session
post-session probes, if any
debrief participant - answer remaining questions that the participant has and what you will do next
thank the participant and give token of appreciation for their time
prepare for the next participant.
evaluation with users (2) - experiments
usability testing vs. experiments
usability testing is applied experimentation
developers check that the system is usable by the intended user population by collecting data about participants' performance on prescribed tasks
experiments test hypotheses to discover new knowledge by investigating the relationship between two or more variables.
experiments
basics
test hypothesis
predict the relationship between two or more variables
independent variable is manipulated by the researcher
dependent variable influenced by the independent variable
typical experimental designs have one or two independent variables
validated statistically and replicable
designs
when dealing with human subjects, need to determine which participants to involve in which conditions of an experiment.
experience of participating in one condition will affect the performance of those participants if asked to participate in another condition.
example hypothesis: using multimedia materials in class will improve students' learning.
different-participants design (between-subjects design)
participants are allocated randomly to the experimental conditions, so different participants perform in different conditions.
needs many participants to minimize individual differences (in experience and expertise); perform pre-testing to identify participants that differ strongly.
same-participants design (within-subjects design)
all participants perform in all conditions
must perform counter-balancing to reduce ordering effects.
ordering effects - learning from the previous task affects performance on subsequent tasks.
matched-participants design (pair-wise design)
participants are matched in pairs based on certain user characteristics such as expertise and gender.
each pair is then randomly allocated to each experimental condition.
matching may miss other important variables that influence the results.
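The allocation logic behind these designs can be sketched directly: random assignment for between-subjects, and counter-balanced orderings for within-subjects. Participant ids and conditions below are invented for illustration.

```python
import random
from itertools import permutations

conditions = ["A", "B"]                      # e.g. two interface designs
participants = [f"p{i}" for i in range(8)]   # made-up participant ids

# Between-subjects: split the pool randomly, one condition per person.
rng = random.Random(42)                      # fixed seed for repeatability
pool = participants[:]
rng.shuffle(pool)
between = {"A": pool[:4], "B": pool[4:]}

# Within-subjects: everyone does every condition; counter-balance the
# order of conditions across participants to cancel ordering effects.
orders = list(permutations(conditions))      # [('A','B'), ('B','A')]
within = {p: orders[i % len(orders)] for i, p in enumerate(participants)}

print(within["p0"], within["p1"])  # ('A', 'B') ('B', 'A')
```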
how to experiments
1. determine goals, explore the questions then formulate hypothesis
7. interpret results to accept or reject hypothesis
crowdsourcing
internet - source to recruit participants and run large-scale experiments
amazon mechanical turk (mturk)
turkers - workers paid to perform human intelligence tasks (hits)
evaluation with users (3) - field study
field studies
done in natural settings
"in the wild" is a term for prototypes being used freely in natural settings
seek to understand what users do naturally and how technology impacts them
field studies are used in product design to
identify opportunities for new technology
determine design requirements
design how best to introduce new technology
evaluate technology in use
range from a few minutes to longitudinal studies (few years)
data collection that is obtrusive but informative
self-reports of problems encountered when occur
interval logging triggered by smartphone notifications
logging software - monitor frequency / patterns of daily activities
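A logger for this kind of field-study data can be very small: record timestamped events, then summarize activity frequency. Timestamps and activity names below are invented.

```python
from collections import Counter

# Minimal activity logger sketch for field-study data collection:
# store (timestamp, activity) events, then summarize frequency.
log = []

def record(timestamp, activity):
    log.append((timestamp, activity))

# Simulated events over a study period (all values are made up).
for ts, act in [(0, "open_app"), (60, "check_messages"),
                (120, "check_messages"), (300, "open_app")]:
    record(ts, act)

frequency = Counter(act for _, act in log)   # patterns of daily activities
print(frequency["check_messages"])  # 2
```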
conundrums with field studies
informing participants that they are studied
knowledge of study would make people conscious about how they behave.
without informing people of their participation, how do you get their consent for participation?
privacy of participants
studies done in people's homes will always be intrusive
agreement between participant and researcher on the activities that can or cannot be recorded.
what to do with the prototype?
event of breakdown
security arrangements if deployed in public spaces
e.g. pain monitoring
requires special permission and raises privacy issues
experts use their knowledge of users and technology to review software usability; expert critiques can be formal or informal
a variety of user actions can be recorded automatically by software - key presses, time spent searching a web page, use of help systems.
unobtrusive, provided the system's performance is not affected.
large volumes can be logged automatically, then explored and analyzed.
ethical issues - observing users without their knowledge.