Speaking to a screen: Mobile UX London

#conference  #speaking  #ux  #design  #covid  #process  #leadership 

When I spoke at the recent MUX London 'Festival of UX & Design' in September, I knew things would be different thanks to Covid-19. I just don't think I realised how awkward it would be to hear my own voice so many times.

Love/hate

I'd assume that, like many, I have a love/hate relationship with public speaking. I love the genesis of an idea that turns into a talk, and I love the rush of exhilaration and relief once it's over. Equally, I hate the deleterious effects of the anticipation, imagining any number of terrible things happening on stage.

However, since delivering the 'talk' at MUX London, I've realised how much we take for granted the myriad of non-verbal cues we receive when speaking to other humans: coughs, smiles, nuanced seat-shifting, eyebrows raising or falling. All of these things - like it or not - are part of the experience of public speaking, and they form a positive (or in some cases negative) feedback loop for the speaker. I realised how much I subconsciously look for these cues.

When I gave my first rendition of this talk in Quebec City, I practised many times. First audio-only on my laptop, then I graduated to video. After many cringeworthy, self-reflective moments I felt confident enough to perform in front of a live audience, to (what I hope was) success.

The brief from the organisers at MUX London was very different: pre-record your talk, send it over, and you're done. But it was anything but a simple job.

iMovie?!

After paring down my talk to the allocated time (from 45 minutes to 30, no small task in and of itself), I sat down in front of my laptop and pressed record. And then pressed record again. And again.

After multiple attempts and much swearing, I shifted to recording on my phone at 720p. I quickly realised I was no single-take Kubrick, and turned to the only video editing software I had access to: iMovie.

After learning iMovie's many idiosyncrasies on the fly, with the deadline for submission looming, I realised that if there is one good thing about speaking at virtual conferences, it's the ability to edit your talk beforehand.

I must have removed hundreds of umms, aaahs, grunts, silly pauses and more, to the point where I think their removal is what took me under the 30-minute mark. Editing also made it easier to slice and dice sections of the talk after the fact, meaning I had more control over the narrative than if I had given the talk in one take.

In the end

My main learning from the whole experience is that, frankly, it sucks speaking to a screen. Like many, these days my screen is my only conduit to my colleagues, and giving virtual talks is an extension of the same. I miss the nuanced body language and cues of real-life audiences, and the buzz of live events.

However, giving a virtual talk has its plus points.

If you can get over the pain of hearing your own voice and editing out your worst real-life speaking traits, you have more creative control over the final product. What's more, your pre-recorded talk is now done and re-usable for other conferences, should you be so inclined.

Panels to augment

The saving grace for MUX London was a growing (and positive) trend I've noticed lately. Using the excellent Hopin.to platform, I joined four other 'speakers' on the day our talks were broadcast to take part in a real-time panel discussion.

We did the same at the recent Sofa Conf, and the mixture of pre-recorded, sit-back talks interspersed with real-time communication and feedback via a panel or Q&A is a good stop-gap until events come back to normal. Though it was obviously screen-based, the audience interaction (Q&A) and panel moderator helped bring a human touch to what would otherwise have been a cold and depressing, entirely virtual experience.

Watch my talk

UX Festival - Jonathan Aizlewood - Fail. Learn. Rinse. Repeat.



Let's work together. For fractional leadership queries, drop me a line.