This was a side finding from the Decision Fatigue experiment, and it might be the most useful thing we measured.
We were testing whether AI recommendations help or hurt decision speed. The answer was mixed — helpful for routine calls, actually slower for strategic ones. That's covered in the full experiment write-up.
But buried in the data was a time-of-day signal we weren't looking for. Decisions made at 4pm were 23% slower than decisions made at 10am. Regardless of whether AI was helping. Regardless of decision type. Regardless of the person.
Decision fatigue is real and it’s measurable. It’s not a self-help concept — it’s a design constraint.
This changed how we think about when systems surface recommendations. If you’re going to ask a human to make a call, the time of day matters. The number of decisions they’ve already made that day matters. The cognitive load of the interface matters.
Most AI tools treat humans as always-on decision machines. They’re not. They degrade over the course of a day, just like agents degrade past their complexity threshold (see: Agents don’t degrade gracefully). The difference is we can’t just restart the human.
We’re now looking at whether Lens should factor time-of-day into how it presents recommendations. Not what it recommends — but when and how much it asks of the human.
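As a rough illustration of what that gating might look like, here's a minimal sketch. Everything in it is hypothetical — `Recommendation`, `fatigue_score`, the weights, the 9am start, and the thresholds are illustrative assumptions, not Lens's actual API or values fitted to the experiment data. The idea is simply: estimate fatigue from time of day and decisions already made, and defer the asks that require human judgment when fatigue is high.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Recommendation:
    text: str
    requires_human_call: bool  # strategic calls need a person; routine ones don't

def fatigue_score(now: datetime, decisions_so_far: int) -> float:
    """Rough 0-1 fatigue estimate from time of day and decision count.
    Weights and saturation points are illustrative, not fitted to real data."""
    hours_into_day = max(0.0, (now.hour + now.minute / 60) - 9)  # assume a 9am start
    time_component = min(hours_into_day / 8, 1.0)      # saturates by 5pm
    count_component = min(decisions_so_far / 20, 1.0)  # saturates at 20 decisions
    return 0.6 * time_component + 0.4 * count_component

def surface(recs: list[Recommendation], now: datetime,
            decisions_so_far: int) -> list[Recommendation]:
    """Pass everything through when fresh; hold back human-judgment asks when fatigued."""
    if fatigue_score(now, decisions_so_far) < 0.7:  # threshold is a placeholder
        return recs
    return [r for r in recs if not r.requires_human_call]
```

The point isn't the specific formula — it's that the *what* of a recommendation stays untouched while the *when and how much* becomes a function of the human's state, which is exactly the distinction the experiment data argues for.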