I have been very fortunate. In my PhD research, I have had the chance to venture through diverse, seemingly disconnected areas of applied mathematics. I have been combining operations research and game theory, as well as theoretical proofs and computer simulations. My bibliography spans papers from economics, computer science and mathematics, while connections with philosophy, biology and industry seem clearly within reach. Yet, lately, I have found myself doing even more unexpected things.

Yesterday, I gave a talk in which I invited 20 other students to join an experiment that tests the theoretical and computational outputs of my research. In brief, my research centers on a shift scheduling algorithm that takes employees’ preferences into account. One big question I raised concerns the incentive compatibility of this algorithm. More specifically, will employees have incentives to reveal their preferences truthfully?

Quite often, I’m asked: “Why would an employee not want to reveal his preferences truthfully? How could it not be in his interest to do so?” After all, the shift scheduling algorithm aims at optimizing employees’ satisfaction. So, how could it yield me better shifts if I report untruthful preferences rather than my actual ones? Well, let’s take a cake to figure it out. Suppose the cake is half vanilla, half chocolate, and that there are three contenders. Now, imagine you like vanilla and chocolate equally, while the two others have strong but opposite opinions about vanilla and chocolate. One loves vanilla. The other loves chocolate. A cake-cutting algorithm aiming at maximizing the sum of satisfactions would then yield all the vanilla to the vanilla lover, and all the chocolate to the chocolate lover. This leaves you, the vanilla-and-chocolate lover, with nothing. That’s because the cake-cutting algorithm doesn’t maximize every contender’s satisfaction (that wouldn’t even make sense); it merely maximizes the sum of the satisfactions.
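The cake story can be sketched in a few lines of code. This is only a toy model with illustrative numbers of my own choosing (it is not my actual algorithm): two halves, three contenders, and a mechanism that hands each half to whoever pushes the sum of satisfactions highest.

```python
from itertools import product

# Toy model (my own illustrative numbers, not the real algorithm):
# each contender assigns a value to each half of the cake.
values = {
    "you":             {"vanilla": 1.0, "chocolate": 1.0},  # indifferent
    "vanilla_lover":   {"vanilla": 2.0, "chocolate": 0.0},
    "chocolate_lover": {"vanilla": 0.0, "chocolate": 2.0},
}
contenders = list(values)
halves = ["vanilla", "chocolate"]

def total_satisfaction(assignment):
    """Sum of the contenders' values for the halves they receive."""
    return sum(values[who][half] for half, who in assignment.items())

# Brute-force the assignment of each half to one contender,
# maximizing the *sum* of satisfactions.
best = max(
    (dict(zip(halves, choice)) for choice in product(contenders, repeat=2)),
    key=total_satisfaction,
)
print(best)  # each half goes to its lover; "you" receive nothing
```

With these numbers, the sum-maximizer gives the vanilla to the vanilla lover and the chocolate to the chocolate lover, and the indifferent contender is left empty-handed, exactly as in the story above.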

With this insight, the more natural question that comes to mind is rather: “Among all preference revelations (and there are lots of them!), how on earth could it be that the best revelation is always the truthful one?” It seems unlikely. Worse, it seems impossible to design a shift scheduling algorithm that would guarantee truthful revelation to be optimal. In particular, the nicely optimized shift scheduling algorithm I have been developing over the last year seems unlikely to yield truthful revelations as employees’ optimal strategies. I want to make sure of that.
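The cake makes the incentive to lie concrete too. In the following self-contained toy sketch (again with illustrative numbers of my own, not my actual algorithm), the indifferent contender pretends to crave vanilla, and the sum-maximizing mechanism rewards the lie:

```python
from itertools import product

# Toy cake model (illustrative numbers of my own, not the real algorithm).
# The mechanism assigns each half to one contender so as to maximize
# the sum of *reported* values.
TRUE_VALUES = {
    "you":             {"vanilla": 1.0, "chocolate": 1.0},  # indifferent
    "vanilla_lover":   {"vanilla": 2.0, "chocolate": 0.0},
    "chocolate_lover": {"vanilla": 0.0, "chocolate": 2.0},
}
HALVES = ["vanilla", "chocolate"]

def allocate(reports):
    """Give each half to a contender, maximizing the sum of reported values."""
    contenders = list(reports)
    return max(
        (dict(zip(HALVES, combo)) for combo in product(contenders, repeat=2)),
        key=lambda a: sum(reports[who][h] for h, who in a.items()),
    )

def true_utility(assignment, who):
    """Value of the received halves under the contender's TRUE preferences."""
    return sum(TRUE_VALUES[who][h] for h, w in assignment.items() if w == who)

# Truthful revelation: "you" get nothing.
honest = allocate(TRUE_VALUES)

# Untruthful revelation: "you" pretend to love vanilla.
lie = {**TRUE_VALUES, "you": {"vanilla": 3.0, "chocolate": 0.0}}
dishonest = allocate(lie)

print(true_utility(honest, "you"), true_utility(dishonest, "you"))  # 0.0 1.0
```

By exaggerating a taste for vanilla, the indifferent contender goes from an empty plate to half the cake: truthful revelation is not the optimal strategy under this mechanism.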

Sure enough, I could have run computer simulations to search for optimal strategies (and, for cake-cutting procedures, I did!). But wouldn’t it be more convincing if, in addition to formal proofs, I had user feedback pointing to this weakness of my shift scheduling algorithm? This is why my professor convinced me to organize an experiment with human subjects playing the roles of strategic employees. Hopefully, they’ll be smart enough to figure out optimal untruthful strategies, thereby exposing a major flaw in my shift scheduling algorithm.

At this point, you might wonder why I’m so eager to build a well-founded case against my own shift scheduling algorithm. It’s because I have a way to fix it. This fix is based on heavy mathematics and computer code. I’ve been trying to sell it, but buyers have had trouble seeing why such an (intellectually) expensive fix should be worth purchasing. Probably because they don’t see what flaws the fix fixes.

But does the fix work? Interestingly, the second part of the experiment will help us gauge the efficiency of this fix, as I’ll add it to the shift scheduling algorithm. Hopefully, if my fix is good enough, truthful strategies will suddenly become optimal, or at least near-optimal. Hopefully.

To be perfectly honest, I have my own doubts about the performance of this fix (mainly due to computational limitations). It’s a huge challenge that my work and I will be facing here, and, frankly, I’m quite worried about it. I’ve gotten used to working silently and alone on my research ideas, testing them with my supervisors and myself as the only judges, and exposing them only once I consider them mature enough. Yet, in this experiment, I feel like I’m working in a very exposed manner. It adds pressure. It adds stress. It adds anxiety.

But it’s exciting.