UK Research and Innovation (UKRI), which allocates over £8 billion annually for research, is experimenting with AI to streamline grant peer review, a process that has become increasingly burdensome: the number of funding applications has risen by 80% in the last seven years, while the number of grants funded has halved. This has prompted UKRI to explore innovative solutions, including generative AI.

A research team led by Mike Thelwall, a data scientist at the University of Sheffield, is investigating whether AI can predict peer review scores and funding decisions. The team will analyse up to 2,000 grant proposals, using large language models (LLMs) to see whether they can accurately replicate the peer review process. Although the team will not disclose the actual scores, they aim to determine whether AI can help speed up reviews or support human reviewers.

This is not UKRI's first foray into AI: Thelwall previously explored AI's role in refereeing research articles for the UK's Research Excellence Framework. That earlier study, however, found that AI systems needed more development before they could assist peer review, achieving only 72% accuracy in replicating human reviewer scores.

Experts such as Mohammad Hosseini of Northwestern University raise concerns that, because AI is trained on existing data, it may struggle with truly novel ideas. Transparency is also crucial: if the criteria an AI applies are unclear, researchers may feel misled. UKRI's potential uses for AI include breaking ties between reviewers and acting as an additional reviewer, but the key question remains: can AI truly replicate the nuanced judgment of human peers?