Machine learning, on the other hand, typically takes a different path: It treats reasoning as a categorization task with a fixed set of predetermined labels, viewing the world as a closed space of possibilities to be enumerated and weighed. This approach has achieved notable successes in stable, well-defined situations such as chess or computer games. When those conditions are absent, however, machines struggle.

Virus epidemics are one such example. In 2008, Google launched Flu Trends, a web service that aimed to predict flu-related doctor visits using big data. The project failed to predict the 2009 swine flu pandemic, and after several unsuccessful tweaks to its algorithm, Google finally shuttered it in 2015.

In such unstable situations, the human brain behaves differently. Sometimes it simply forgets: Instead of getting bogged down in irrelevant data, it relies solely on the most recent information. This feature is called intelligent forgetting. An algorithm that adopted this approach and relied on a single data point, predicting that next week's flu-related doctor visits will equal those of the most recent week, would have reduced Google Flu Trends' prediction error by half (a short sketch of this heuristic appears below).

Intelligent forgetting is just one dimension of psychological AI, an approach to machine intelligence that also incorporates other features of human intelligence, such as causal reasoning, intuitive psychology, and intuitive physics. In 2023, this approach to AI will finally be recognized as fundamental for solving ill-defined problems. Exploring these marvelous features of the evolved human brain will at last allow us to make machine learning smart. Indeed, researchers at the Max Planck Institute, Microsoft, Stanford University, and the University of Southampton are already integrating psychology into algorithms to achieve better predictions of human behavior, from recidivism to consumer purchases.

One feature of psychological AI is that it is explainable. Until recently, researchers assumed that the more transparent an AI system was, the less accurate its predictions would be. This mirrored the widespread but incorrect belief that complex problems always require complex solutions. In 2023, that idea will be laid to rest. As the case of flu predictions illustrates, robust and simple psychological algorithms can often deliver more accurate predictions than complex ones. Psychological AI opens up a new vision for explainable AI: Instead of trying to explain opaque, complex systems, we can check first whether psychological AI offers a transparent and equally accurate solution.
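To make the intelligent-forgetting heuristic concrete, here is a minimal Python sketch. It is not Google's or any researcher's actual code, and the weekly visit counts are invented for illustration; it simply shows the one-data-point rule described above: forecast next week's flu-related doctor visits as equal to the most recent week's count.

```python
def recency_forecast(history):
    """Intelligent forgetting: predict the next value as the most recent one."""
    return history[-1]

# Hypothetical weekly counts of flu-related doctor visits.
weekly_visits = [120, 135, 160, 210, 198, 240]

# Walk forward through the series, predicting each week from the week before,
# and measure the mean absolute error of the single-data-point heuristic.
errors = [
    abs(recency_forecast(weekly_visits[:t]) - weekly_visits[t])
    for t in range(1, len(weekly_visits))
]
mae = sum(errors) / len(errors)

print(f"Forecast for next week: {recency_forecast(weekly_visits)}")
print(f"Mean absolute error of the recency heuristic: {mae:.1f}")
```

The point of the sketch is its radical simplicity: there is no model to train and nothing opaque to explain, which is exactly the transparency that psychological AI trades on.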