This is a weekly roundup of the important news in AI and emerging tech, and how it will impact the economy.
AI & Chinese Exam Season
With any new technology in the classroom we need to reflect on whether we are trying to get students to internalise the logic of a subject, or to become adept at using tools. Do we care more that a kid can understand Pythagoras, or that they enter the workforce knowing how to use a calculator? I guess ideally both, right?
A path many education systems took with calculators was to allow students to use them in class, but to block their use in exams, because exams are how we test that they have internalised the core concepts of the subject.
We can also use exams to test aptitude for other reasons, like college admission. It’s probably best to limit tech here too. So how do we do that with AI? One way is for teachers to collect phones in a bag at the start of an exam; another is for the country’s major AI firms to shut down their image recognition services for a few days. Guess which route China took! From Business Insider:
As millions of high school seniors began sitting for China's notoriously grueling "gaokao" college entrance exam from Saturday, the country's biggest tech firms quietly pulled the plug on their AI tools.
Apps from Tencent, ByteDance, and Moonshot AI disabled features like photo recognition and real-time question answering, a move aimed at preventing students from using chatbots to cheat during the high-stakes national exam.
Last month, China's education ministry warned students not to rely on AI-generated answers for assignments or tests, even while promoting AI education from a young age.
More Legal Slop
Lawyers in the UK are submitting AI-slop briefs so often that the courts are issuing warnings. From the NYT:
The High Court of England and Wales warned lawyers on Friday that they could face criminal prosecution for presenting false material generated by artificial intelligence, after a series of cases cited made-up quotes and rulings that did not exist.
In a rare intervention, one of the country’s most senior judges said that existing guidance to lawyers had proved “insufficient to address the misuse of artificial intelligence” and that further steps were urgently needed.
This is a dismal story full of professionals who should really know better. But it’s also an illustration of where we don’t need new “AI laws” to stop people doing bad things with AI. It doesn’t matter if these error-laden submissions are being made because they were written by a novice intern, a drunk boss or an AI hallucination. The UK already has laws against lawyers providing false information, which seem sufficient here.
AI Makes Civil Servants More Productive
A UK Government study gave 20,000 employees access to Microsoft Copilot, tracked their usage over a few months, and published the results. Some interesting findings:
User sentiment was overwhelmingly positive, with 82% expressing they would not want to return to their pre-Copilot working conditions. (This is shockingly high)
Civil servants *self-reported* saving an average of 26 minutes a day when using M365 Copilot.
33% of them used it in Outlook daily, and 43% used it in Word weekly. This was about double the rate of usage in PowerPoint and Excel.
Copilot was mostly used for “performing mundane tasks”, which “increased time spent on more strategic activities.”
Other Links
Google traffic to the Washington Post, HuffPost, and Business Insider has declined by about half in the past three years, according to SimilarWeb. AI overviews look set to dramatically reshape the economics of the web.
Sam Altman shares how much energy ChatGPT uses (with no citations, links or sources): "the average query uses about 0.34 watt-hours, about what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes. It also uses about 0.000085 gallons of water; roughly one fifteenth of a teaspoon."
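Altman’s comparisons check out as rough arithmetic, though it depends on the appliance wattages you assume. A quick sketch, assuming a ~1 kW oven element and a ~10 W LED bulb (typical values; neither figure appears in his post):

```python
# Sanity-check the claimed 0.34 Wh per ChatGPT query against
# Altman's appliance comparisons, using assumed typical wattages.
query_wh = 0.34

oven_watts = 1000  # assumed typical oven power draw
oven_seconds = query_wh / oven_watts * 3600
# ~1.2 seconds, matching "a little over one second"

led_watts = 10  # assumed high-efficiency (LED) bulb
led_minutes = query_wh / led_watts * 60
# ~2 minutes, matching "a couple of minutes"

print(round(oven_seconds, 1), round(led_minutes, 1))
```

Both comparisons land where he says, for whatever the unsourced 0.34 Wh figure itself is worth.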
Wikipedia is going to test "the presentation of machine-generated, but editor moderated, simple summaries for readers." You can read the full open debate amongst the moderators, but here are some excerpts: "Yuck" (x3), "I sincerely beg you not to test this," "as long as the feature is opt-in, I don't see any downsides", "We can read the lead, which is a convenient, short summary written by real people", "Have you seen our leads lately?"
UK Chancellor Rachel Reeves said the government was investing in “the biggest rollout of nuclear power for half a century”, with the Rolls-Royce group building the UK’s first small modular nuclear reactors.
The UK is planning to launch trials of driverless taxi services in spring 2026. Uber and local company Wayve will be the first providers.
Proxima Fusion holds the new record for the largest investment round in a European fusion company, but that's still far behind the US - "Bill Gates-backed Commonwealth Fusion Systems, which is building a demonstration tokamak device in Massachusetts, raised a record $1.8bn in 2021 while Sam Altman-backed Helion raised $425mn in January."
An alliance of industrialist think tanks just released the “Techno-Industrial Policy Playbook”.
On their way out the door, employees at the IRS (US tax collection) have open-sourced the Direct File software, presumably in anticipation of it being nerfed to benefit private tax-filing software.
Can Apple’s on-device AI processing be opened to app developers in a way that’s compatible with user privacy? EU privacy expert Mikołaj Barczentewicz argues that it can and explains the different tech approaches being taken by Apple & Meta.
Dario Amodei, CEO of Anthropic, said that there was about to be a bloodbath in the jobs market, where AI could wipe out half of all entry-level white-collar jobs. I’m honestly not sure if I’ll keep including AI CEO pronouncements in this newsletter, as it’s very hard to parse useful info from the hype they need to generate for fundraising and recruitment.
I suspect we’ll be reading horrifying tales from inside DOGE for years to come. Here’s one from ProPublica on the guys using AI to identify government contracts to be cancelled. “The code, using outdated and inexpensive AI models, produced results with glaring mistakes. For instance, it hallucinated the size of contracts, frequently misreading them and inflating their value. It concluded more than a thousand were each worth $34 million, when in fact some were for as little as $35,000.”
Data privacy and competition are always in tension. Privacy experts (and regulators) view data primarily as personal information that companies should lock down. Competition experts (and regulators) view data primarily as a corporate advantage, which should be accessible and transferable to competitors. This debate is carrying through into AI. Slack (owned by Salesforce) has announced moves to limit third parties’ ability to access and train on Slack messages. The companies that use Slack are not happy.
The Duolingo CEO is surprised at the level of anxiety people have about change. He told the Financial Times about the blowback he got on his LinkedIn post about the company going “AI first”. Another case of a CEO trying to balance excited pronouncements to investors about cost savings against public fears of AI-driven job losses. He ultimately walked back his automation predictions to “a small number of contractors who will probably be redeployed elsewhere”.