“Hello 911, Robot speaking, how can I help?”
Language Models are great at generating text that resembles the kind of thing an average person would say in any given context.
This means they will be used to automate and replace tasks where we currently pay humans to respond to contexts with generalised, repeatable or simple language (either through spoken voice or written text).
A prime candidate here is customer support. By combining a Language Model with the traditional tools of customer support - namely the FAQ and the decision tree - we should expect to see lots of low hanging fruit.
Salt Lake City is taking advantage of this in an interesting way, looking to use AI to answer 911 calls. From the Salt Lake Tribune:
Faced with chronic understaffing among the ranks of its dispatchers, the 911 Communications Bureau based in Utah’s capital city wants to explore using an AI-automated sorting system to reroute a share of its nonemergency calls as a way to help human responders focus on the urgent ones.
The over-stretched team thinks AI could handle non-emergency calls, which make up as much as 30% of the total. It would also increase the number of languages they can take calls in.
An interesting application space to watch - I hope it works well for them!
How LVMH Is Using AI for Incremental Gains
LVMH, the giant luxury goods company with brands like Louis Vuitton, Christian Dior and Hennessy, is rapidly rolling out AI-powered solutions throughout the company. From the WSJ:
Over the past four years, the conglomerate has worked with Google Cloud to […] apply predictive AI, generative AI and agents in areas like supply chain planning, pricing, product design, marketing and personalization, all with the goal of maintaining and growing market share and improving operational efficiency.
Here are some of the places they’re finding early success:
Language Models are helping them provide more personalised customer care.
Predictive models are helping them optimize transport logistics.
Generative AI is used by the design teams to prompt ideas, create mood-boards and overcome the cold-start problem in the creative process.
They have an internal chatbot that helps staff find info and write emails.
I like this story precisely because none of the above are transformative or revolutionary, but they are impactful. We see them repeated across many industries, so the gains are likely real.
The list above is a wide array of real, tangible productivity gains that a single set of technologies (Machine Learning) is delivering in a very big, very old, non-technical company.
This feels like we’re getting to productivity gains much faster than with previous tech shifts like the personal computer or the internet.
10 Billion Reasons Not to be Skeptical
AI skeptics face a difficult tightrope walk. On the one hand, they are rightfully trying to talk down predictions that AI will either kill us all, nuke every job or deliver infinite abundance. On the other hand, they risk downplaying the real and substantial impact this new epoch of technology is already having.
If your position is that AI is all just nonsense (merely “plagiarism machines” and “synthetic text extruders”, as one critic told the FT this week), then one big data point you have to contend with is how much ordinary people are willing to pay to use them.
For example, OpenAI has just hit $10 billion in annualised revenue (last month’s revenue × 12).
This is a stunningly impressive growth rate. It’s so large that I find it hard to draw parallels to put the growth in context. No Web 2.0 companies grew this fast because none of them earned real money for years. No B2B companies grew this fast. Figures like this make it very difficult to entertain skeptics, when people are clearly getting so much value that they’re willing to pay.
Maybe a good comparison is to say that, for ChatGPT alone, people are paying one third of what they pay to go to the movies. (Global box office revenue is about $30bn.)
That’s pretty real!
Other Links
AI
A Deloitte survey says that almost half of people in the UK have used Generative AI. The things people are using it for seem mostly to be the things that language models are good at: writing emails (44%), generating ideas (43%) and summarising articles (38%). The most (potentially) troublesome one is “Looking up information” (50%). Link
OpenAI got a $200m contract with the US Department of Defense. The DoD says it’s for “frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains”, but OpenAI describes it more like a chatbot for government employees. Link
Disney is suing MidJourney for copyright infringement. Link
Grok is an important player in the ecosystem, providing more choice and competition in the market for foundational models. There is, however, a risk that Elon Musk will impair it by trying to make it hurt his feelings less. Link
Anthropic made more PR waves with a report that AI models will “deceive, steal and cheat to reach its goals.” My initial reaction is to eye-roll, because of course a language model trained on the average of all human text would respond in a way similar to how the average of all humans would. Of course it wouldn’t be wise to put such a language model in charge of your nuclear codes or to decide when to disable a patient’s life support. That would be dumb! This is obvious! But maybe it’s important that we say these things out loud, repeatedly, because people have a history of doing obviously dumb things. Link
A UN study found that trust in AI is higher in lower and middle income countries. This is consistent with other similar studies and I think broadly equates to “people with more to gain and less to lose are more excited about tech-driven change”. Link
Not AI
Tiny Teams is a cool collection of new companies building big products with minimal (human) staff. Link
In another huge boost for Stablecoins, Shopify has started accepting USDC. Link
Walmart and Amazon are also exploring how to use Stablecoins to reduce their transaction fees. Link
Waymo rides cost on average 20% more than Uber or Lyft. At first glance this might seem to suggest something worrying about the true unit economics of driverless cars, but the study seems to suggest the price is higher because people are willing to pay more. Seems like a fair price to not have to chat to a taxi driver, or be pressured to tip. Link