In contrast to my lamentations about the lack of progress in technology, there is a paradoxical flip side: much of the technology in the world is technology we simply don't see.

A Backdrop of Technology

Technology that works has a tendency to seep into the background. If you have a reasonably modern smartphone, you use quite a lot of advanced AI every time you take a photo. The mechanics of pointing the camera at a subject (more often than not, a cat) and pushing the button remain the same, but the process underneath is completely different; not even the optics are really comparable to those in a film camera anymore. The results, especially in challenging conditions, are incomparable: it's almost impossible to take a photo that's out of focus, incorrectly lit, blurred by motion, or suffering from any of the other ailments the average holiday snaps of the film age suffered from. Few people think much about it; if anything, the challenge now is dealing with an archive of photos numbering in the thousands. It's just what cameras do.

Inevitably, a cat. Note the subtle hints of the Google Tensor chip.

Another example is driverless cars. Replacing drivers with AIs remains frustratingly over-promised and under-delivered, but new cars have adaptive cruise control, automatic braking, lane assist and parking sensors. All are the spoils, the "peace dividend", of decades of work towards driverless cars, and more likely than not they employ a fair bit of AI. Yet we don't really think of them as such; we don't marvel at driving an 80% self-driving car, and most drivers don't even reflect on how much better this is than just a decade ago. It's just what cars do.

Into the Shadows

In a recently leaked memo from Google, the author laments that the firm has no moat around its LLMs. Google had assumed, as had OpenAI, that being one of only a few organisations in the world controlling the immense data-centre resources required to train and run the largest LLMs, such as Bard and GPT, would allow it to control the market, at least for a while.

No such luck: Meta's LLaMA model leaked to the public, and in a matter of weeks it was modified, hacked and tuned to run on consumer-grade gear, and an explosion of applications followed.
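To make the "consumer-grade gear" point concrete, here is a minimal sketch of what that ecosystem looks like in practice, using the llama-cpp-python bindings to the llama.cpp project that emerged in this period. The model file name and prompt are placeholders; you would substitute whatever quantised weights you have locally.

```python
# A minimal sketch: running a quantised LLaMA-family model on an
# ordinary laptop via llama-cpp-python (pip install llama-cpp-python).
# The model path is a placeholder for locally downloaded 4-bit weights.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-7b.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=2048,    # context window
    n_threads=8,   # plain CPU threads; no data centre required
)

output = llm(
    "Q: Why do cats knock things off tables? A:",
    max_tokens=64,
    stop=["Q:"],
)
print(output["choices"][0]["text"])
```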

I'm quite sceptical that the outcome of the LLM revolution will look much like ChatGPT does today. Outside of a handful of specific situations, talking to a computer does not seem like a terribly useful interface.

However, the awesome power demonstrated by LLMs will make its way into an immense range of applications. This is another important point the memo makes: a relatively small model, trained for a specific purpose on a modest data set of high quality, is more powerful than a huge, general-purpose model applied to the same purpose.
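As an illustration of how cheap that specialisation has become, here is a sketch of the kind of low-rank adaptation (LoRA) fine-tuning the memo alludes to, using Hugging Face's transformers and peft libraries. The base model name and target modules are assumptions, chosen as a typical small causal language model; the point is only that a tiny fraction of the weights needs training.

```python
# A sketch of low-rank adaptation (LoRA): fine-tuning only a small
# fraction of a modest model's weights for one specific purpose.
# Requires: pip install transformers peft
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Base model is an assumption; any small causal LM works similarly.
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

config = LoraConfig(
    r=8,                # rank of the low-rank update matrices
    lora_alpha=16,      # scaling factor for the updates
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in OPT
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, config)
# Typically well under 1% of parameters end up trainable, which is why
# a modest, high-quality data set and a single GPU can be enough.
model.print_trainable_parameters()
```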

Expect to see a lot of interesting developments in the near future. But equally expect, ironically, to not see them.