AI and ML techniques have made big leaps this year, from AlphaFold effectively solving the protein structure prediction problem to GPT-3 handling most text-related tasks with its language model. Computer vision techniques have improved as well. All this research has drivers behind it, and usually those drivers are a research question, a grant, or a niche. But AI research into understanding real-life context is hard to come by, because no one benefits from such a model right away. The Google Assistant comes close: it looks at multiple sources of data in different forms, such as previous searches, calendar appointments, and email, to make a better decision and provide a better reply. Models of this kind need to account for the great randomness that is humans. The same set of calendar appointments and search inputs can still carry a different meaning and level of importance for different people. Judging this correctly from second-order inputs, such as usage patterns and keystrokes, might be the next big step.
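To make the last point concrete, here is a toy sketch of what "the same signals, different importance" could look like. This is purely illustrative and not how the Google Assistant actually works; the signal sources, the per-user weights, and the scoring function are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ContextSignal:
    # One piece of user context, e.g. a search query or a calendar event.
    source: str       # hypothetical source label: "search", "calendar", "email"
    relevance: float  # 0..1, how related this signal is to the current request

def importance(signals, user_weights):
    """Combine heterogeneous context signals into a single importance score.

    user_weights stands in for whatever a real system would learn per user
    (e.g. from usage patterns): the same signals weighted differently.
    """
    if not signals:
        return 0.0
    total = sum(user_weights.get(s.source, 0.0) * s.relevance for s in signals)
    return total / len(signals)

# The exact same context for two different users.
signals = [
    ContextSignal("search", 0.9),
    ContextSignal("calendar", 0.6),
    ContextSignal("email", 0.2),
]

# Hypothetical learned weightings: one user lives in search, the other in calendar.
alice = {"search": 1.0, "calendar": 0.5, "email": 0.2}
bob = {"search": 0.2, "calendar": 1.0, "email": 0.9}

print(round(importance(signals, alice), 3))  # different score than Bob's
print(round(importance(signals, bob), 3))    # same inputs, different importance
```

The point of the sketch is only that the per-user weighting, not the raw signals, carries the meaning; learning those weights from second-order behavior is the hard part.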