I’m actually not convinced that this is a good use case. As the article points out, these bots seem to get a lot of facts wrong in a right-ish looking sort of way. A whiteboard interview feels like it would easily trap the bot into pursuing an incorrect line of reasoning, like asking the subject to fix logic errors that weren’t actually there.
(Perhaps you were imagining a bot that just replies vaguely?)
I chose the cancelled flight example specifically to avoid having the bot “decide” the truth of the cancellation.