ConFoo Wrap Up

Monday, March 27, 2017

Note: The slides from my presentations are available.

I was invited to speak at ConFoo Montreal this month and had a wonderful time. Not only did I have the opportunity to learn from some brilliant presenters in the field of software engineering, but I was able to build upon my public profile. Having delivered two presentations in Canada to an audience including one or more appreciative Canadians, I am now able to put the epithet "internationally renowned speaker" in front of my name.

The conference featured many simultaneous presentations, leading to an obvious dilemma for attendees: how can one maximize the value of the conference by identifying the best presentations ahead of time?

The vast majority of the attendees appeared to select presenters based upon their interest in the speakers' topics.

I didn't.

Instead, I looked for signals that might identify the "best" presentations, regardless of subject matter. I didn't much care whether I spent my time learning about artificial intelligence or language localization.

Using information available on the event's website, I evaluated speakers on the following:

I then attempted to gather some unpublished information. While doing so might be considered insider trading in financial markets, I was confident that the authorities would take little notice of my actions.

Throwing caution to the wind, I ignored possible Hawthorne effects and surreptitiously evaluated my fellow speakers by the following criteria:

Having thus reduced the issue to a simple spreadsheet, I created my list of all-star speakers. I then proceeded to ignore my list and attend presentations at random. As much as I wanted to attend only the best presentations, there was something I desired even more: I wanted to put my scientific methodology to the test.

Did the signaling data I selected lead me to identify the best presentations?

In short: no. The signaling data wasn't just useless; it was actively misleading.

Whether due to Dunning-Kruger or perhaps a bit of false modesty, many of the more interesting and insightful presenters fared the worst in my spreadsheet. The absolute best presentation that I attended started with an admission from the speaker that he was very nervous and hated public speaking. Within minutes, the entire audience was enthralled and hanging on every word that he uttered. In fact, many of the presentations that I would have skipped proved delightfully surprising. Not only were they highly educational, but they were very entertaining as well.

While many of my criteria proved unhelpful, my use of room sizing was the most poorly thought out. In retrospect, it likely reflected an estimated interest in each speaker's topic rather than a gauge of presentation quality. In a sense, the room assignments were an example of a Keynesian beauty contest and didn't necessarily reflect any particular or reasoned opinion on the quality of a given presentation itself.

The lessons from my experience shouldn't be limited to one's selections at conferences. They apply to all sorts of markets.

Unreliable signals are everywhere. Whether you're part of a hiring committee trying to ferret out the best potential staff members, or a customer looking to buy a new pair of shoes, you're going to be inundated with signals that, if followed, will lead to suboptimal outcomes.

As signal seekers (generally the buyers in a market), we need to stop relying on signals unless we're confident that they signal something important and that they are trustworthy in nature.

As signal providers (generally the sellers in a market), we must stop relying on simply offering an excellent product. Sometimes it can be crucial to display signals (even ones that are often misleading) because, for better or worse, signal seekers might be relying upon them.

Ironically, this analysis might make for an interesting presentation itself. Perhaps I'll submit it for the conference next year.