One point from our recent post concerns explicit and implicit context in agent systems: the reasoning that connects information -> decision -> action is rarely captured, because the relevant data are scattered across different systems and in people's heads.
A recent article posted on X approached the same issue from another angle: context graphs as a way to represent the reasoning chain behind actions (decision traces, exceptions, overrides, cross-system nuance).
Though I doubt a graph is the best data representation for this agent era, the core questions remain open for exploration:
- How do we capture implicit context with low friction, as the work happens?
- How do we maintain it so it stays correct as systems and policies change?
- How do we let agents use it effectively?
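To make the first question concrete, here is a minimal sketch of what one captured decision trace might look like as a plain record, recorded inline as the work happens. All names and fields here are hypothetical illustrations, not a proposed standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionTrace:
    """One captured link in the information -> decision -> action chain.

    A hypothetical schema: the point is that the rationale and any
    overrides, normally left implicit, are written down at capture time.
    """
    information: list[str]   # records consulted (IDs, URLs, doc refs)
    decision: str            # what was decided
    rationale: str           # the implicit "why", usually lost
    action: str              # what was actually done
    overrides: list[str] = field(default_factory=list)  # policy exceptions
    captured_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Captured with low friction, at the moment of the decision
# (example values are invented for illustration):
trace = DecisionTrace(
    information=["crm/acct-112", "policy/refunds-v3"],
    decision="approve refund above the $500 policy cap",
    rationale="long-tenure customer; the shipping delay was our fault",
    action="issued refund",
    overrides=["refund-cap-$500"],
)
```

Whether such records live in a graph, a log, or something else is exactly the open representation question; the sketch only shows what would need to be captured either way.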