In July I attended the Microsoft Inspire 2018 conference in Las Vegas.
The event was pretty impressive, not so much because of its size (and, yes, it was huge), but more so because of the clear and consistent strategy that was evident in everything.
And that strategy has direct relevance to the trajectory of banking software: to the way product strategists should be envisioning and architecting banking customer experiences (CXs).
Continue reading “Lego Banking: A modular approach to banking customer experiences” →
I recently came across a story about an approach to UX testing being used by Wells Fargo which highlights the importance of experimenting with different ways to conduct UX testing.
At one of its downtown San Francisco branches, Wells Fargo has set up an area called ‘Digital Express’. This section of the branch provides customers with a series of tablets demonstrating proposed new digital banking features and functions. Customers can interact with the prototype solutions and provide quick and direct feedback to the bank, allowing the Wells Fargo product development team to ‘…test fast failures in a matter of weeks, rather than months or years’.
It’s an excellent example of the different ways in which UX testing can be conducted.
Continue reading “13 ways to conduct UX testing – and why it’s so important” →
These days we’re all under pressure to produce new software, new features and new interface improvements quickly. And the speed demanded by a market full of disruptors and startups is ever-increasing.
Within this context, techniques such as agile and lean startup can help immensely: they identify critical issues, bring people together in constructive ways and keep the focus on delivering software. However, in the rush to ideate, build an MVP and launch, we can still forget to validate our assumptions, and fail to incorporate the right kind of user input through well-chosen contextual research. When this happens, the results can sometimes be frustrating; other times they can be disastrous.
Two recent instances have highlighted this. One is well known: Microsoft’s now-infamous Tay AI bot fiasco. The other is virtually unknown but personally frustrating to me: the recent relaunch of the public website for my son’s school, Trinity Grammar.
Continue reading “How user testing can go wrong: two case studies” →